69453 | Interstellar medium | Matter and radiation in the space between the star systems in a galaxy
The interstellar medium (ISM) is the matter and radiation that exists in the space between the star systems in a galaxy. This matter includes gas in ionic, atomic, and molecular form, as well as dust and cosmic rays. It fills interstellar space and blends smoothly into the surrounding intergalactic space. The energy that occupies the same volume, in the form of electromagnetic radiation, is the interstellar radiation field. Although the density of atoms in the ISM is usually far below that in the best laboratory vacuums, the mean free path between collisions is short compared to typical interstellar lengths, so on these scales the ISM behaves as a gas (more precisely, as a plasma: it is everywhere at least slightly ionized), responding to pressure forces, and not as a collection of non-interacting particles.
The interstellar medium is composed of multiple phases distinguished by whether matter is ionic, atomic, or molecular, and by the temperature and density of the matter. The interstellar medium is composed primarily of hydrogen, followed by helium with trace amounts of carbon, oxygen, and nitrogen. The thermal pressures of these phases are in rough equilibrium with one another. Magnetic fields and turbulent motions also provide pressure in the ISM, and are typically more important, dynamically, than the thermal pressure. In the coolest, densest regions of the interstellar medium, matter is primarily in molecular form and reaches number densities of 10^12 molecules per m^3 (1 trillion molecules per m^3). In hot, diffuse regions, gas is highly ionized, and the density may be as low as 100 ions per m^3. Compare this with a number density of roughly 10^25 molecules per m^3 for air at sea level, and 10^16 molecules per m^3 (10 quadrillion molecules per m^3) for a laboratory high-vacuum chamber. Within our galaxy, by mass, 99% of the ISM is gas in any form, and 1% is dust. Of the gas in the ISM, by number 91% of atoms are hydrogen and 8.9% are helium, with 0.1% being atoms of elements heavier than hydrogen or helium, known as "metals" in astronomical parlance. By mass this amounts to 70% hydrogen, 28% helium, and 1.5% heavier elements. The hydrogen and helium are primarily a result of primordial nucleosynthesis, while the heavier elements in the ISM are mostly a result of enrichment (due to stellar nucleosynthesis) in the process of stellar evolution.
The ISM plays a crucial role in astrophysics precisely because of its intermediate role between stellar and galactic scales. Stars form within the densest regions of the ISM, which ultimately contributes to molecular clouds and replenishes the ISM with matter and energy through planetary nebulae, stellar winds, and supernovae. This interplay between stars and the ISM helps determine the rate at which a galaxy depletes its gaseous content, and therefore its lifespan of active star formation.
"Voyager 1" reached the ISM on August 25, 2012, making it the first artificial object from Earth to do so. Interstellar plasma and dust will be studied until the estimated mission end date of 2025. Its twin "Voyager 2" entered the ISM on November 5, 2018.
Interstellar matter.
Table 1 shows a breakdown of the properties of the components of the ISM of the Milky Way.
The three-phase model.
Field, Goldsmith, and Habing (1969) put forward the static two-"phase" equilibrium model to explain the observed properties of the ISM. Their modeled ISM included a cold dense phase ("T" < 300 K), consisting of clouds of neutral and molecular hydrogen, and a warm intercloud phase ("T" ~ 10^4 K), consisting of rarefied neutral and ionized gas. McKee and Ostriker (1977) added a dynamic third phase that represented the very hot ("T" ~ 10^6 K) gas that had been shock heated by supernovae and constituted most of the volume of the ISM.
These phases correspond to the temperatures at which heating and cooling can reach a stable equilibrium. Their paper formed the basis for further study over the subsequent three decades. However, the relative proportions of the phases and their subdivisions are still not well understood.
The basic physics behind these phases can be understood through the behaviour of hydrogen, since this is by far the largest constituent of the ISM. The different phases are roughly in pressure balance over most of the Galactic disk, since regions of excess pressure will expand and cool, and likewise under-pressure regions will be compressed and heated. Therefore, since "P = n k T", hot regions (high "T") generally have low particle number density "n". Coronal gas has low enough density that collisions between particles are rare and so little radiation is produced, hence there is little loss of energy and the temperature can stay high for periods of hundreds of millions of years. In contrast, once the temperature falls to O(10^5 K) with correspondingly higher density, protons and electrons can recombine to form hydrogen atoms, emitting photons which take energy out of the gas, leading to runaway cooling. Left to itself this would produce the warm neutral medium. However, OB stars are so hot that some of their photons have energy greater than the Lyman limit, "E" > 13.6 eV, enough to ionize hydrogen. Such photons will be absorbed by, and ionize, any neutral hydrogen atom they encounter, setting up a dynamic equilibrium between ionization and recombination such that gas close enough to OB stars is almost entirely ionized, with temperature around 8000 K (unless already in the coronal phase), until the distance where all the ionizing photons are used up. This "ionization front" marks the boundary between the warm ionized and warm neutral medium.
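To make the pressure-balance argument concrete, here is a minimal Python sketch; the thermal pressure value "P"/"k" ~ 3000 K cm^-3 is an assumed typical figure used only for illustration, not a value given above.

# Pressure balance P = n k T: at a fixed thermal pressure, the number
# density of each phase scales as 1/T. The pressure below is an assumed
# typical value for the local ISM (hypothetical illustration only).
P_over_k = 3.0e3 * 1e6  # P/k in K per m^3 (3000 K cm^-3 converted)

for phase, T in [("coronal (hot ionized)", 1e6),
                 ("warm ionized/neutral", 8e3),
                 ("cold neutral", 8e1)]:
    n = P_over_k / T  # particles per m^3
    print(f"{phase:22s} T = {T:9.0f} K  ->  n = {n:12.1f} m^-3")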
OB stars, and also cooler ones, produce many more photons with energies below the Lyman limit, which pass through the ionized region almost unabsorbed. Some of these have high enough energy (> 11.3 eV) to ionize carbon atoms, creating a C II ("ionized carbon") region outside the (hydrogen) ionization front. In dense regions this may also be limited in size by the availability of photons, but often such photons can penetrate throughout the neutral phase and only get absorbed in the outer layers of molecular clouds. Photons with "E" > 4 eV or so can break up molecules such as H2 and CO, creating a photodissociation region (PDR) which is more or less equivalent to the Warm neutral medium. These processes contribute to the heating of the WNM. The distinction between Warm and Cold neutral medium is again due to a range of temperature/density in which runaway cooling occurs.
The densest molecular clouds have significantly higher pressure than the interstellar average, since they are bound together by their own gravity. When stars form in such clouds, especially OB stars, they convert the surrounding gas into the warm ionized phase, a temperature increase of several hundredfold. Initially the gas is still at molecular cloud densities, and so at vastly higher pressure than the ISM average: this is a classical H II region. The large overpressure causes the ionized gas to expand away from the remaining molecular gas (a champagne flow), and the flow will continue until either the molecular cloud is fully evaporated or the OB stars reach the end of their lives, after a few million years. At this point the OB stars explode as supernovae, creating blast waves in the warm gas that increase temperatures to the coronal phase (supernova remnants, SNR). These too expand and cool over several million years until they return to average ISM pressure.
The ISM in different kinds of galaxy.
Most discussion of the ISM concerns spiral galaxies like the Milky Way, in which nearly all the mass in the ISM is confined to a relatively thin disk, typically with scale height about 100 parsecs (300 light years), which can be compared to a typical disk diameter of 30,000 parsecs. Gas and stars in the disk orbit the galactic centre with typical orbital speeds of 200 km/s. This is much faster than the random motions of atoms in the ISM, but since the orbital motion of the gas is coherent, the average motion does not directly affect structure in the ISM. The vertical scale height of the ISM is set in roughly the same way as the Earth's atmosphere, as a balance between the local gravitation field (dominated by the stars in the disk) and the pressure. Further from the disk plane, the ISM is mainly in the low-density warm and coronal phases, which extend at least several thousand parsecs away from the disk plane. This galactic halo or 'corona' also contains significant magnetic field and cosmic ray energy density.
The rotation of galaxy disks influences ISM structures in several ways. Since the angular velocity declines with increasing distance from the centre, any ISM feature, such as a giant molecular cloud or a magnetic field line, that extends across a range of radii is sheared by differential rotation, and so tends to become stretched out in the tangential direction; this tendency is opposed by interstellar turbulence (see below), which tends to randomize the structures. Spiral arms are due to perturbations in the disk orbits (essentially ripples in the disk) that cause orbits to alternately converge and diverge, compressing and then expanding the local ISM. The visible spiral arms are the regions of maximum density, and the compression often triggers star formation in molecular clouds, leading to an abundance of H II regions along the arms. The Coriolis force also influences large ISM features.
Irregular galaxies such as the Magellanic Clouds have interstellar media similar to spirals, but less organized. In elliptical galaxies the ISM is almost entirely in the coronal phase, since there is no coherent disk motion to support cold gas far from the center: instead, the scale height of the ISM must be comparable to the radius of the galaxy. This is consistent with the observation that there is little sign of current star formation in ellipticals. Some elliptical galaxies do show evidence for a small disk component, with ISM similar to spirals, buried close to their centers. The ISM of lenticular galaxies, as with their other properties, appears intermediate between spirals and ellipticals.
Very close to the center of most galaxies (within a few hundred light years at most), the ISM is profoundly modified by the central supermassive black hole: see Galactic Center for the Milky Way, and Active galactic nucleus for extreme examples in other galaxies. The rest of this article will focus on the ISM in the disk plane of spirals, far from the galactic center.
Structures.
Astronomers describe the ISM as turbulent, meaning that the gas has quasi-random motions coherent over a large range of spatial scales. Unlike normal turbulence, in which the fluid motions are highly subsonic, the bulk motions of the ISM are usually larger than the sound speed. Supersonic collisions between gas clouds cause shock waves which compress and heat the gas, increasing the sound speed so that the flow is locally subsonic; thus supersonic turbulence has been described as 'a box of shocklets', and is inevitably associated with complex density and temperature structure. In the ISM this is further complicated by the magnetic field, which provides wave modes such as Alfvén waves which are often faster than pure sound waves: if turbulent speeds are supersonic but below the Alfvén wave speed, the behaviour is more like subsonic turbulence.
Stars are born deep inside large complexes of molecular clouds, typically a few parsecs in size. During their lives and deaths, stars interact physically with the ISM.
Stellar winds from young clusters of stars (often with giant or supergiant HII regions surrounding them) and shock waves created by supernovae inject enormous amounts of energy into their surroundings, which leads to hypersonic turbulence. The resultant structures – of varying sizes – can be observed, such as stellar wind bubbles and superbubbles of hot gas, seen by X-ray satellite telescopes or turbulent flows observed in radio telescope maps.
Stars and planets, once formed, are unaffected by pressure forces in the ISM, and so do not take part in the turbulent motions, although stars formed in molecular clouds in a galactic disk share their general orbital motion around the galaxy center. Thus stars are usually in motion relative to their surrounding ISM. The Sun is currently traveling through the Local Interstellar Cloud, an irregular clump of the warm neutral medium a few parsecs across, within the low-density Local Bubble, a 100-parsec radius region of coronal gas.
In October 2020, astronomers reported a significant unexpected increase in density in the space beyond the Solar System as detected by the "Voyager 1" and "Voyager 2" space probes. According to the researchers, this implies that "the density gradient is a large-scale feature of the VLISM (very local interstellar medium) in the general direction of the heliospheric nose".
Interaction with interplanetary medium.
The interstellar medium begins where the interplanetary medium of the Solar System ends. The solar wind slows to subsonic velocities at the termination shock, 90–100 astronomical units from the Sun. In the region beyond the termination shock, called the heliosheath, interstellar matter interacts with the solar wind. "Voyager 1", the farthest human-made object from the Earth (after 1998), crossed the termination shock on December 16, 2004, and later entered interstellar space when it crossed the heliopause on August 25, 2012, providing the first direct probe of conditions in the ISM.
Interstellar extinction.
Dust grains in the ISM are responsible for extinction and reddening, the decreasing light intensity and shift in the dominant observable wavelengths of light from a star. These effects are caused by scattering and absorption of photons and allow the ISM to be observed with the naked eye in a dark sky. The apparent rifts that can be seen in the band of the Milky Way – a uniform disk of stars – are caused by absorption of background starlight by dust in molecular clouds within a few thousand light years from Earth. This effect decreases rapidly with increasing wavelength ("reddening" is caused by greater absorption of blue than red light), and becomes almost negligible at mid-infrared wavelengths (> 5 μm).
Extinction provides one of the best ways of mapping the three-dimensional structure of the ISM, especially since the advent of accurate distances to millions of stars from the "Gaia" mission. The total amount of dust in front of each star is determined from its reddening, and the dust is then located along the line of sight by comparing the dust column density in front of stars projected close together on the sky, but at different distances. By 2022 it was possible to generate a map of ISM structures within 3 kpc (10,000 light years) of the Sun.
Far ultraviolet light is absorbed effectively by the neutral hydrogen gas in the ISM. Specifically, atomic hydrogen absorbs very strongly at about 121.5 nanometers, the Lyman-alpha transition, and also at the other Lyman series lines. Therefore, it is nearly impossible to see light emitted at those wavelengths from a star farther than a few hundred light years from Earth, because most of it is absorbed during the trip to Earth by intervening neutral hydrogen. All photons with wavelength < 91.6 nm, the Lyman limit, can ionize hydrogen and are also very strongly absorbed. The absorption gradually decreases with increasing photon energy, and the ISM begins to become transparent again in soft X-rays, with wavelengths shorter than about 1 nm.
Heating and cooling.
The ISM is usually far from thermodynamic equilibrium. Collisions establish a Maxwell–Boltzmann distribution of velocities, and the 'temperature' normally used to describe interstellar gas is the 'kinetic temperature', which describes the temperature at which the particles would have the observed Maxwell–Boltzmann velocity distribution in thermodynamic equilibrium. However, the interstellar radiation field is typically much weaker than that of a medium in thermodynamic equilibrium; it is most often roughly that of an A star (surface temperature of ~10,000 K) highly diluted. Therefore, bound levels within an atom or molecule in the ISM are rarely populated according to the Boltzmann formula.
Depending on the temperature, density, and ionization state of a portion of the ISM, different heating and cooling mechanisms determine the temperature of the gas.
Heating mechanisms.
Grain heating by thermal exchange is very important in supernova remnants where densities and temperatures are very high.
Gas heating via grain-gas collisions is dominant deep in giant molecular clouds (especially at high densities). Far infrared radiation penetrates deeply due to the low optical depth. Dust grains are heated via this radiation and can transfer thermal energy during collisions with the gas. A measure of efficiency in the heating is given by the accommodation coefficient:

α = (T_2 − T) / (T_d − T)

where "T" is the gas temperature, "T"_d the dust temperature, and "T"_2 the post-collision temperature of the gas atom or molecule. This coefficient has been measured as "α" = 0.35.
Other mechanisms that heat the interstellar gas include:
* Gravitational collapse of a cloud
* Supernova explosions
* Stellar winds
* Expansion of H II regions
* Magnetohydrodynamic waves created by supernova remnants
Observations of the ISM.
Despite its extremely low density, photons generated in the ISM are prominent in nearly all bands of the electromagnetic spectrum. In fact the optical band, on which astronomers relied until well into the 20th century, is the one in which the ISM is least obvious.
Radiowave propagation.
Radio waves are affected by the plasma properties of the ISM. The lowest frequency radio waves, below ≈ 0.1 MHz, cannot propagate through the ISM since they are below its plasma frequency. At higher frequencies, the plasma has a significant refractive index, decreasing with increasing frequency, and also dependent on the density of free electrons. Random variations in the electron density cause interstellar scintillation, which broadens the apparent size of distant radio sources seen through the ISM, with the broadening decreasing with frequency squared. The variation of refractive index with frequency causes the arrival times of pulses from pulsars and Fast radio bursts to be delayed at lower frequencies (dispersion). The amount of delay is proportional to the column density of free electrons (Dispersion measure, DM), which is useful for both mapping the distribution of ionized gas in the Galaxy and estimating distances to pulsars (more distant ones have larger DM).
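As a rough illustration of dispersion, the following sketch uses the standard dispersion constant (~4.149×10^3 s MHz^2 pc^-1 cm^3); the DM value in the example is hypothetical.

def dispersion_delay_s(dm, f_mhz):
    """Arrival delay (seconds) relative to infinite frequency of a pulse
    travelling through ionized gas with dispersion measure `dm` (pc cm^-3),
    observed at frequency `f_mhz` (MHz). Uses the standard dispersion
    constant ~4.149e3 s MHz^2 pc^-1 cm^3."""
    return 4.149e3 * dm / f_mhz**2

# Hypothetical pulsar with DM = 50 pc cm^-3: delay between two radio bands
print(dispersion_delay_s(50, 400) - dispersion_delay_s(50, 1400))  # ~1.19 s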
A second propagation effect is Faraday rotation, which affects linearly polarized radio waves, such as those produced by synchrotron radiation, one of the most common sources of radio emission in astrophysics. Faraday rotation depends on both the electron density and the magnetic field strength, and so is used as a probe of the interstellar magnetic field.
The ISM is generally very transparent to radio waves, allowing unimpeded observations right through the disk of the Galaxy. There are a few exceptions to this rule. The most intense spectral lines in the radio spectrum can become opaque, so that only the surface of the line-emitting cloud is visible. This mainly affects the carbon monoxide lines at millimetre wavelengths that are used to trace molecular clouds, but the 21-cm line from neutral hydrogen can become opaque in the cold neutral medium. Such absorption only affects photons at the line frequencies: the clouds are otherwise transparent. The other significant absorption process occurs in dense ionized regions. These emit photons, including radio waves, via thermal bremsstrahlung. At short wavelengths, typically microwaves, these are quite transparent, but their brightness approaches the black body limit as ∝ λ^2.1, and at wavelengths long enough that this limit is reached, they become opaque. Thus metre-wavelength observations show H II regions as cool spots blocking the bright background emission from Galactic synchrotron radiation, while at decametres the entire galactic plane is absorbed, and the longest radio waves observed, λ ~ 1 km, can only propagate 10–50 parsecs through the Local Bubble. The frequency at which a particular nebula becomes optically thick depends on its "emission measure"

EM = ∫ n_e^2 dl,

the column density of the squared electron number density. Exceptionally dense nebulae can become optically thick at centimetre wavelengths: these are just-formed, and so both rare and small ("ultra-compact H II regions").
The general transparency of the ISM to radio waves, especially microwaves, may seem surprising since radio waves at frequencies > 10 GHz are significantly attenuated by Earth's atmosphere (as seen in the figure). But the column density through the atmosphere is vastly larger than the column through the entire Galaxy, due to the extremely low density of the ISM.
History of knowledge of interstellar space.
The word 'interstellar' (between the stars) was coined by Francis Bacon in the context of the ancient theory of a literal sphere of fixed stars. Later in the 17th century, when the idea that stars were scattered through infinite space became popular, it was debated whether that space was a true vacuum or filled with a hypothetical fluid, sometimes called "aether", as in René Descartes' vortex theory of planetary motions. While vortex theory did not survive the success of Newtonian physics, an invisible luminiferous aether was re-introduced in the early 19th century as the medium to carry light waves; e.g., in 1862 a journalist wrote: "this efflux occasions a thrill, or vibratory motion, in the ether which fills the interstellar spaces."
In 1864, William Huggins used spectroscopy to determine that a nebula is made of gas. Huggins had a private observatory with an 8-inch telescope with a lens by Alvan Clark; it was also equipped for spectroscopy, which enabled his breakthrough observations.
From around 1889, Edward Barnard pioneered deep photography of the sky, finding many 'holes in the Milky Way'. At first he compared them to sunspots, but by 1899 was prepared to write: "One can scarcely conceive a vacancy with holes in it, unless there is nebulous matter covering these apparently vacant places in which holes might occur". These holes are now known as dark nebulae, dusty molecular clouds silhouetted against the background star field of the galaxy; the most prominent are listed in his Barnard Catalogue. The first direct detection of cold diffuse matter in interstellar space came in 1904, when Johannes Hartmann observed the binary star Mintaka (Delta Orionis) with the Potsdam Great Refractor. Hartmann reported that absorption from the "K" line of calcium appeared "extraordinarily weak, but almost perfectly sharp" and also reported the "quite surprising result that the calcium line at 393.4 nanometres does not share in the periodic displacements of the lines caused by the orbital motion of the spectroscopic binary star". The stationary nature of the line led Hartmann to conclude that the gas responsible for the absorption was not present in the atmosphere of the star, but was instead located within an isolated cloud of matter residing somewhere along the line of sight to this star. This discovery launched the study of the interstellar medium.
Interstellar gas was further confirmed by Slipher in 1909, and interstellar dust was then confirmed by Slipher by 1912. Interstellar sodium was detected by Mary Lea Heger in 1919 through the observation of stationary absorption from the atom's "D" lines at 589.0 and 589.6 nanometres towards Delta Orionis and Beta Scorpii.
In a series of investigations, Viktor Ambartsumian introduced the now commonly accepted notion that interstellar matter occurs in the form of clouds.
Subsequent observations of the "H" and "K" lines of calcium revealed double and asymmetric profiles in the spectra of Epsilon and Zeta Orionis. These were the first steps in the study of the very complex interstellar sightline towards Orion. Asymmetric absorption line profiles are the result of the superposition of multiple absorption lines, each corresponding to the same atomic transition (for example the "K" line of calcium), but occurring in interstellar clouds with different radial velocities. Because each cloud has a different velocity (either towards or away from the observer/Earth), the absorption lines occurring within each cloud are either blue-shifted or red-shifted (respectively) from the lines' rest wavelength through the Doppler effect. These observations confirming that matter is not distributed homogeneously were the first evidence of multiple discrete clouds within the ISM.
The growing evidence for interstellar material led to the comment: "While the interstellar absorbing medium may be simply the ether, yet the character of its selective absorption, as indicated by Kapteyn, is characteristic of a gas, and free gaseous molecules are certainly there, since they are probably constantly being expelled by the Sun and stars."
The same year, Victor Hess's discovery of cosmic rays, highly energetic charged particles that rain onto the Earth from space, led others to speculate whether they also pervaded interstellar space. The following year, the Norwegian explorer and physicist Kristian Birkeland wrote: "It seems to be a natural consequence of our points of view to assume that the whole of space is filled with electrons and flying electric ions of all kinds. We have assumed that each stellar system in evolutions throws off electric corpuscles into space. It does not seem unreasonable therefore to think that the greater part of the material masses in the universe is found, not in the solar systems or nebulae, but in 'empty' space".
It was also noted that "it could scarcely have been believed that the enormous gaps between the stars are completely void. Terrestrial aurorae are not improbably excited by charged particles emitted by the Sun. If the millions of other stars are also ejecting ions, as is undoubtedly true, no absolute vacuum can exist within the galaxy."
In September 2012, NASA scientists reported that polycyclic aromatic hydrocarbons (PAHs), subjected to "interstellar medium (ISM)" conditions, are transformed, through hydrogenation, oxygenation and hydroxylation, to more complex organics, "a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively". Further, as a result of these transformations, the PAHs lose their spectroscopic signature, which could be one of the reasons "for the lack of PAH detection in interstellar ice grains, particularly the outer regions of cold, dense clouds or the upper molecular layers of protoplanetary disks."
In February 2014, NASA announced a greatly upgraded database for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. According to scientists, more than 20% of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets.
In April 2019, scientists, working with the Hubble Space Telescope, reported the confirmed detection of the large and complex ionized molecules of buckminsterfullerene (C60) (also known as "buckyballs") in the interstellar medium spaces between the stars.
In September 2020, evidence was presented of solid-state water in the interstellar medium, and particularly, of water ice mixed with silicate grains in cosmic dust grains.
https://en.wikipedia.org/wiki?curid=69453
69457599 | High-multiplicity bin packing | High-multiplicity bin packing is a special case of the bin packing problem, in which the number of different item-sizes is small, while the number of items with each size is large. While the general bin-packing problem is NP-hard, the high-multiplicity setting can be solved in polynomial time, assuming that the number of different sizes is a fixed constant.
Problem definition.
The inputs to the problem are positive integers:
* "d" – the number of different item sizes;
* "s"1, ..., "s""d" – the item sizes;
* "n"1, ..., "n""d" – the multiplicities: "n""i" is the number of items of size "s""i" (so the total number of items is "n" = "n"1 + ... + "n""d");
* "B" – the bin capacity.
The output is a "packing" - an assignment of the items to bins, such that the total size of items in each bin is at most "B", and subject to this, the number of bins is as small as possible.
Example: suppose "d"=2, "s"1=30, "s"2=40, "n"1="n"2=5, "B"=120. So there are "n"=10 items with sizes: 30,30,30,30,30,40,40,40,40,40. Then, a possible packing is: {30,30,30,30}, {40,40,40}, {30,40,40}, which uses 3 bins.
Configurations.
A "configuration" is a set of items that can fit into a single bin. It can be represented by a vector of "d" integers, denoting the multiplicities of the different sizes in the configuration. Formally, for each configuration "c" we define an integer vector ac="a"c,1, ..., "ac,d" such that ac ≤ n and ac·s ≤ B.
In the above example, one of the configurations is "c"={30,40,40}, since 1*30+2*40 ≤ 120. Its corresponding vector is ac=(1,2). Other configuration vectors are (4,0), (3,0), (2,0), (2,1), (1,0), (1,1), (1,2), (0,1), (0,2), (0,3). If we had only three items of size 30, then we could not use the (4,0) configuration.
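The enumeration of feasible configurations can be sketched in a few lines of Python (the function name is ours); it reproduces the configuration vectors listed above:

from itertools import product

def configurations(sizes, counts, capacity):
    """Yield all non-empty configuration vectors a with a <= counts
    (componentwise) and a . sizes <= capacity."""
    ranges = [range(min(c, capacity // s) + 1) for s, c in zip(sizes, counts)]
    for a in product(*ranges):
        if any(a) and sum(x * s for x, s in zip(a, sizes)) <= capacity:
            yield a

print(sorted(configurations([30, 40], [5, 5], 120), reverse=True))
# [(4, 0), (3, 0), (2, 1), (2, 0), (1, 2), (1, 1), (1, 0), (0, 3), (0, 2), (0, 1)]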
It is possible to present the problem using the "configuration linear program": for each configuration "c", there is a variable "x""c", denoting the number of bins in which "c" is used. The total number of bins used is simply the sum of "x""c" over all configurations, denoted by 1·x. The total number of items used from each size is the sum of the vectors ac·xc over all configurations "c". Then, the problem is to minimize 1·x subject to the constraint that the sum of ac·xc, over all configurations "c", is at least n, so that all items are packed.
Algorithms.
Basic algorithms.
Suppose first that all items are large, that is, every "s""i" is at least "e"·"B" for some fraction "e" > 0. Then, the total number of items in each bin is at most 1/"e", so the total number of configurations is at most "d"^(1/"e"). Each configuration appears at most "n" times. Therefore, there are at most n^(d^(1/e)) combinations to check. For each combination, we have to check "d" constraints (one for each size), so the run-time is d·n^(d^(1/e)), which is polynomial in "n" when "d", "e" are constant.
The main problem with this algorithm (besides the fact that it works only when the items are large) is that its runtime is polynomial in "n", but the length of the input (in binary representation) is linear in log("V"), where "V" denotes the largest integer in the input, which is of the order of magnitude of log("n").
Run-time polynomial in the input size.
Filippi and Agnetis presented an algorithm that finds a solution with at most OPT+"d"-2 bins in time O(poly(log "V")). In particular, for "d"=2 different sizes, their algorithm finds an optimal solution in time O(log "V").
Goemans and Rothvoss presented an algorithm for any fixed "d", that finds the optimal solution when all numbers are given in binary encoding. Their algorithm solves the following problem: given two "d"-dimensional polytopes "P" and "Q", find the minimum number of integer points in "P" whose sum lies in "Q". Their algorithm runs in time (log V)^(2^O(d)). Their algorithm can be adapted to other problems, such as Identical-machines scheduling and unrelated-machines scheduling with various constraints.
Rounding a general instance to a high-multiplicity instance.
Several approximation algorithms for the general bin-packing problem use the following scheme:
* Separate the items into "small" items (of size at most "e"·"B", for some fraction "e") and "large" items.
* Round the sizes of the large items so that the number of different sizes is a small constant; this yields a high-multiplicity instance that can be solved efficiently.
* Un-round the solution back to the original item sizes, and add the small items greedily (for example, by first-fit) into the resulting bins.
The algorithms differ in how they round the instance.
Linear rounding.
de la Vega and Lueker invented the idea of "adaptive input rounding". Order the items by their size, and group them into 1/"e"^2 groups of cardinality "n"·"e"^2. In each group, round the sizes upwards to the maximum size in the group. Now, there are only "d" = 1/"e"^2 different sizes. The solution of the rounded instance is feasible for the original instance too, but the number of bins may be larger than necessary. To quantify the loss, consider the instance rounded "down" to the maximum size in the "previous" group (the first group is rounded down to 0). The rounded-down instance "D" is almost equal to the rounded-up instance "U", except that in "D" there are some "n"·"e"^2 zeros while in "U" there are some "n"·"e"^2 large items instead; but their size is at most "B". Therefore, "U" requires at most "n"·"e"^2 more bins than "D". Since "D" requires fewer bins than the optimum, we get that Bins("U") ≤ OPT + "n"·"e"^2, that is, we have an additive error that can be made as small as we like by choosing "e".
If all items are large (of size at least "e"·"B"), then each bin in OPT contains at most 1/"e" items (of size at least "e"·"B"), so OPT must be at least "e"·"n". Therefore, Bins("U") ≤ (1+"e")OPT. After handling the small items, we get at most (1+2"e")OPT + 1 bins.
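A minimal sketch of the adaptive rounding step, assuming a simple grouping rule (the helper name is our own choice):

def linear_round_up(sizes, e):
    """de la Vega-Lueker style rounding sketch: sort the items, split them
    into groups of cardinality about n*e^2, and round every size up to the
    maximum of its group. Returns (distinct_sizes, multiplicities)."""
    items = sorted(sizes)
    k = max(1, round(len(items) * e * e))  # group cardinality ~ n e^2
    rounded = []
    for i in range(0, len(items), k):
        group = items[i:i + k]
        rounded += [group[-1]] * len(group)  # round up to the group maximum
    distinct = sorted(set(rounded))
    return distinct, [rounded.count(s) for s in distinct]

print(linear_round_up([11, 12, 25, 27, 30, 34, 42, 49], 0.5))
# ([12, 27, 34, 49], [2, 2, 2, 2])  -> only 1/e^2 = 4 different sizes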
Geometric rounding.
Karmarkar and Karp present a more efficient rounding method which they call "geometric rounding" (in contrast to the linear rounding of de la Vega and Lueker). Based on these innovations, they present an algorithm with run-time polynomial in "n" and 1/"ε". Their algorithm finds a solution with size at most OPT + O(log^2(OPT)).
Improvements.
This technique was later improved by several authors:
* Rothvoss presented an algorithm that generates a solution with at most OPT + O(log(OPT)·log log(OPT)) bins.
* Hoberg and Rothvoss improved this algorithm to generate a solution with at most OPT + O(log(OPT)) bins.
https://en.wikipedia.org/wiki?curid=69457599
69458295 | 1 Samuel 9 | First Book of Samuel chapter
1 Samuel 9 is the ninth chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter describes the meeting between Saul and Samuel which led to Saul's first anointing as king (1 Samuel 10:1–16), within a section comprising 1 Samuel 7–15 which records the rise of the monarchy in Israel and the account of the first years of King Saul.
Text.
This chapter was originally written in the Hebrew language. It is divided into 27 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 6–8, 10–12, 16–24.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; 𝔊^B; 4th century) and Codex Alexandrinus (A; 𝔊^A; 5th century).
Analysis.
This chapter introduces Saul, who was to be the first king of Israel, as a resolution to the request for a king left unfinished in the previous chapter. The narrative bears some features of folk-tales: a young man setting out to find his father's missing donkeys ends up as the designated king. Saul's search led him to the prophet Samuel, who privately anointed Saul as king and provided three signs as confirmation of its legitimacy, all of which were fulfilled in 1 Samuel 10:2–7. Throughout the account, Saul appeared to be humble, but also showed lack of confidence and perhaps doubts about his calling to kingship.
Saul's genealogy (9:1–2).
The listing of Saul's ancestry in the beginning of this chapter recalls the opening of the Books of Samuel (1 Samuel 1:1) which delineates Samuel's genealogy. In both genealogies Samuel and Saul are listed in the sixth position. The connection of Samuel's name to the word "asked" (Hebrew: "shaul") in 1 Samuel 1:28 may also relate to the name of Saul (Hebrew: "shaul"). Saul's genealogy has two noteworthy features:
These may emphasize God's direct participation in the events that Saul, a youth belonging to the smallest of the Israel tribes and the humblest of families (9:21) was endowed with extraordinary characteristics (9:2) to be elected as the first king of Israel.
"Now there was a man of Benjamin, whose name was Kish, the son of Abiel, the son of Zeror, the son of Bechorath, the son of Aphiah, a Benjamite, a mighty man of power."
Verse 1.
Some ancestors seem to be omitted, among whom are Matri, mentioned in 1 Samuel 10:21, and Jehiel, mentioned in 1 Chronicles 9:35 (cf. 1 Chronicles 8:29), who was described as the first settler and coloniser of Gibeon, and as husband of Maachah, a daughter or granddaughter of Caleb. An ancestor of Saul could have been among the 600 men of Benjamin who escaped to the rock Rimmon during the slaughter of the whole tribe by the other tribes of Israel (Judges 20:47–21:1).
Samuel and Saul meet (9:3–27).
Saul was told by his father, Kish, to look for their stray donkeys, so he and a servant went through the hill country of Ephraim until they arrived in the land of Zuph (9:5). The servant persuaded Saul to visit a nameless seer (9:6–10), who was unfamiliar to them (cf. 9:18), and turned out to be Samuel (9:14, 19). A day before, Samuel had been told by YHWH that the chosen man would come to him (9:16). God commanded Samuel to anoint Saul not as "king" (Hebrew: "melek"), but as "ruler" (Hebrew: "nagid"; "prince"), in contrast to the instruction for Samuel to anoint David as "king" (1 Samuel 16). After God clearly pointed Saul out to Samuel ("Behold the man"), the prophet introduced himself to Saul as the seer and demonstrated his credentials by speaking accurately about Saul's donkeys. Saul was invited by Samuel to a meal and given a choice of meat which had been set aside for Saul beforehand, again indicating that the meeting was not coincidental. This "pre-coronation meal" was similar to the one organized later when Samuel anointed David (a meal and invited guests; 9:22). Samuel did not use the occasion of the dinner to anoint Saul, but waited instead until the next morning (as described in 1 Samuel 10).
" Now the donkeys of Kish, the father of Saul, were lost. And Kish said to his son Saul, "Take now one of the servants with you, and arise, go find the donkeys.""
Verse 3.
The Syriac Peshitta version has additional words: ""So Saul arose and went out. He took with him one of the boys and went out to look for his father’s donkeys"."
"When they had come to the land of Zuph, Saul said to his servant who was with him, "Come, let us return, lest my father cease caring about the donkeys and become worried about us."
"As they were going down to the outskirts of the city, Samuel said to Saul, “Tell the servant to go on ahead of us.” And he went on. “But you stand here awhile, that I may announce to you the word of God.”"
https://en.wikipedia.org/wiki?curid=69458295
6946171 | Canonical Huffman code | In computer science and information theory, a canonical Huffman code is a particular type of Huffman code with unique properties which allow it to be described in a very compact manner. Rather than storing the structure of the code tree explicitly, canonical Huffman codes are ordered in such a way that it suffices to only store the lengths of the codewords, which reduces the overhead of the codebook.
Motivation.
Data compressors generally work in one of two ways. Either the decompressor can infer what codebook the compressor has used from previous context, or the compressor must tell the decompressor what the codebook is. Since a canonical Huffman codebook can be stored especially efficiently, most compressors start by generating a "normal" Huffman codebook, and then convert it to canonical Huffman before using it.
In order for a symbol code scheme such as the Huffman code to be decompressed, the same model that the encoding algorithm used to compress the source data must be provided to the decoding algorithm so that it can use it to decompress the encoded data. In standard Huffman coding this model takes the form of a tree of variable-length codes, with the most frequent symbols located at the top of the structure and being represented by the fewest bits.
However, this code tree introduces two critical inefficiencies into an implementation of the coding scheme. Firstly, each node of the tree must store either references to its child nodes or the symbol that it represents. This is expensive in memory usage and if there is a high proportion of unique symbols in the source data then the size of the code tree can account for a significant amount of the overall encoded data. Secondly, traversing the tree is computationally costly, since it requires the algorithm to jump randomly through the structure in memory as each bit in the encoded data is read in.
Canonical Huffman codes address these two issues by generating the codes in a clear standardized format; all the codes for a given length are assigned their values sequentially. This means that instead of storing the structure of the code tree for decompression only the lengths of the codes are required, reducing the size of the encoded data. Additionally, because the codes are sequential, the decoding algorithm can be dramatically simplified so that it is computationally efficient.
Algorithm.
The normal Huffman coding algorithm assigns a variable length code to every symbol in the alphabet. More frequently used symbols will be assigned a shorter code. For example, suppose we have the following "non"-canonical codebook:
A = 11
B = 0
C = 101
D = 100
Here the letter A has been assigned 2 bits, B has 1 bit, and C and D both have 3 bits. To make the code a "canonical" Huffman code, the codes are renumbered. The bit lengths stay the same with the code book being sorted "first" by codeword length and "secondly" by alphabetical value of the letter:
B = 0
A = 11
C = 101
D = 100
Each of the existing codes is replaced with a new one of the same length, using the following algorithm:
* The "first" symbol in the list gets assigned a codeword which is the same length as the symbol's original codeword but all zeros.
* Each subsequent symbol is assigned the next binary number in sequence, ensuring that later codes are always higher in value.
* When a longer codeword is reached, then "after" incrementing the code, zeros are appended until the length of the new codeword matches the symbol's bit length (that is, the increment is followed by a left shift).
By following these three rules, the "canonical" version of the code book produced will be:
B = 0
A = 10
C = 110
D = 111
As a fractional binary number.
Another perspective on the canonical codewords is that they are the digits past the radix point (binary decimal point) in a binary representation of a certain series. Specifically, suppose the lengths of the codewords are "l"_1 ... "l"_n. Then the canonical codeword for symbol "i" is the first "l"_i binary digits past the radix point in the binary representation of

Σ_{j=1}^{i−1} 2^(−l_j).
This perspective is particularly useful in light of Kraft's inequality, which says that the sum above will always be less than or equal to 1 (since the lengths come from a prefix free code). This shows that adding one in the algorithm above never overflows and creates a codeword that is longer than intended.
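This view can be checked numerically with exact fractions; the following sketch (ours, not from a reference implementation) reproduces the canonical codebook from the bit lengths alone:

from fractions import Fraction

def canonical_from_series(lengths):
    """Codeword for symbol i = the first l_i binary digits past the radix
    point of sum_{j<i} 2^(-l_j), computed with exact arithmetic."""
    partial = Fraction(0)
    codewords = []
    for l in lengths:
        codewords.append(format(int(partial * 2**l), f"0{l}b"))
        partial += Fraction(1, 2**l)
    return codewords

print(canonical_from_series([1, 2, 3, 3]))  # ['0', '10', '110', '111']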
Encoding the codebook.
The advantage of a canonical Huffman tree is that it can be encoded in fewer bits than an arbitrary tree.
Let us take our original Huffman codebook:
A = 11
B = 0
C = 101
D = 100
There are several ways we could encode this Huffman tree. For example, we could write each symbol followed by the number of bits and code:
('A',2,11), ('B',1,0), ('C',3,101), ('D',3,100)
Since we are listing the symbols in sequential alphabetical order, we can omit the symbols themselves, listing just the number of bits and code:
(2,11), (1,0), (3,101), (3,100)
With our "canonical" version we have the knowledge that the symbols are in sequential alphabetical order "and" that a later code will always be higher in value than an earlier one. The only parts left to transmit are the bit-lengths (number of bits) for each symbol. Note that our canonical Huffman tree always has higher values for longer bit lengths and that any symbols of the same bit length ("C" and "D") have higher code values for higher symbols:
A = 10 (code value: 2 decimal, bits: 2)
B = 0 (code value: 0 decimal, bits: 1)
C = 110 (code value: 6 decimal, bits: 3)
D = 111 (code value: 7 decimal, bits: 3)
Since two-thirds of the constraints are known, only the number of bits for each symbol need be transmitted:
2, 1, 3, 3
With knowledge of the canonical Huffman algorithm, it is then possible to recreate the entire table (symbol and code values) from just the bit-lengths. Unused symbols are normally transmitted as having zero bit length.
Another efficient way representing the codebook is to list all symbols in increasing order by their bit-lengths, and record the number of symbols for each bit-length. For the example mentioned above, the encoding becomes:
(1,1,2), ('B','A','C','D')
This means that the first symbol "B" is of length 1, then "A" of length 2, and the remaining two symbols ("C" and "D") of length 3. Since the symbols are sorted by bit-length, we can efficiently reconstruct the codebook. Pseudocode describing the reconstruction is introduced in the next section.
This type of encoding is advantageous when only a few symbols in the alphabet are being compressed. For example, suppose the codebook contains only 4 letters "C", "O", "D" and "E", each of length 2. To represent the letter "O" using the previous method, we need to either add a lot of zeros:
0, 0, 2, 2, 2, 0, ... , 2, ...
or record which 4 letters we have used. Each way makes the description longer than:
(0,4), ('C','O','D','E')
The JPEG File Interchange Format uses this method of encoding, because at most only 162 symbols out of the 8-bit alphabet, which has size 256, will be in the codebook.
Pseudocode.
Given a list of symbols sorted by bit-length, the following pseudocode will print a canonical Huffman code book:
"code" := 0
while more symbols do
print symbol, "code"
"code" := ("code" + 1) « ((bit length of the next symbol) − (current bit length))
algorithm compute huffman code is
input: message ensemble (set of (message, probability)).
base "D".
output: code ensemble (set of (message, code)).
1- sort the message ensemble by decreasing probability.
2- "N" is the cardinal of the message ensemble (number of different
messages).
3- compute the integer "t" such that 2 ≤ "t" ≤ "D" and ("N" − "t") / ("D" − 1) is an integer.
4- select the "t" least probable messages, and assign them each a
digit code.
5- substitute the selected messages by a composite message summing
their probability, and re-order it.
6- while there remains more than one message, do steps 7 thru 8.
7- select "D" least probable messages, and assign them each a
digit code.
8- substitute the selected messages by a composite message
summing their probability, and re-order it.
9- the code of each message is given by the concatenation of the
code digits of the aggregate they've been put in.
https://en.wikipedia.org/wiki?curid=6946171
69462019 | Simplex tree | Data structure used in topological data analysis
In topological data analysis, a simplex tree is a type of trie used to represent efficiently any general simplicial complex. Through its nodes, this data structure notably explicitly represents all the simplices. Its flexible structure allows implementation of many basic operations useful to computing persistent homology. This data structure was invented by Jean-Daniel Boissonnat and Clément Maria in 2014, in the article "The Simplex Tree: An Efficient Data Structure for General Simplicial Complexes". This data structure offers efficient operations on sparse simplicial complexes. For dense or maximal simplices, Skeleton-Blocker representations or Toplex Map representations are used.
Definitions.
Many researchers in topological data analysis consider the simplex tree to be the most compact simplex-based data structure for simplicial complexes, and a data structure allowing an intuitive understanding of simplicial complexes due to integrated usage of their mathematical properties.
Heuristic definition.
Consider any simplicial complex: a set composed of points (0 dimensions), line segments (1 dimension), triangles (2 dimensions), and their "n"-dimensional counterparts, called "n"-simplexes, within a topological space. By the mathematical properties of simplexes, any "n"-simplex is composed of multiple ("n"−1)-simplexes. Thus, lines are composed of points, triangles of lines, and tetrahedra of triangles. Notice that each higher level adds 1 vertex to the vertices of the "n"-simplex. The data structure is simplex-based; therefore, it should represent all simplexes uniquely by the points defining each simplex. A simple way to achieve this is to define each simplex by its points in sorted order.
Let "K" be a simplicial complex of dimension "k", and "V" its vertex set, where vertices are labeled from 1 to |"V"| and ordered accordingly. Now, construct a dictionary of size |"V"| containing all vertex labels in order. This represents the 0-dimensional simplexes. Then, for each entry of label "l" in the current dictionary, add as a child dictionary all vertices that are fully connected to the current set of vertices and have a label greater than "l". Repeat this step down through "k" levels. Clearly, considering the first dictionary as depth 0, any entry at depth "τ" of any dictionary in this data structure uniquely represents a "τ"-simplex within "K". For completeness, the pointer to the initial dictionary is considered the representation of the empty simplex. For practicality of the operations, labels that are repeated at the same depth are linked together, forming a looped linked list. Finally, child dictionaries also have pointers to their parent dictionary, for fast ancestor access.
Constructive definition.
Let "K" be a simplicial complex of dimension "k". We begin by decomposing the simplicial complex into mutually exclusive simplexes. This can be achieved in a greedy way by iteratively removing from the simplicial complex the highest-order simplexes until the simplicial complex is empty. We then need to label each vertex from 1 to |"V"| and associate each simplex with its corresponding "word", that is, the ordered list of its vertices by label. Ordering the labels ensures there is no repetition in the simplex tree, as there is only one way to describe each simplex. We start with a null root, representing the null simplex. Then, we iterate through all simplexes, and through each label of each simplex word. If the label is available as a child of the current root, make that child the temporary root of the insertion process; otherwise, create a new node for the child, make it the new temporary root, and continue with the rest of the word. During this process, "k" dictionaries are maintained with all the labels, and the address of the node for the corresponding label is inserted into them. If an address is already at that space in the dictionary, a pointer is created from the old node to the new node. Once the process is finished, all children of each node are entered into a dictionary, and all pointers are looped to make circular linked lists. A wide range of dictionaries could be applied here, like hash tables, but some operations assume the possibility of an ordered traversal of the entries, leading most implementations to use red-black trees as dictionaries.
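The construction can be sketched as follows in Python (class and method names are ours); the per-label circular linked lists and parent dictionaries described above are omitted for brevity:

from itertools import combinations

class Node:
    def __init__(self, label, parent=None):
        self.label = label        # vertex label, None for the root
        self.parent = parent      # pointer to the parent node
        self.children = {}        # dictionary: label -> child Node

class SimplexTree:
    def __init__(self):
        self.root = Node(None)    # the root represents the empty simplex

    def insert(self, simplex):
        """Insert one simplex: follow or create the path of its sorted word."""
        node = self.root
        for v in sorted(simplex):
            node = node.children.setdefault(v, Node(v, node))
        return node

    def insert_with_faces(self, simplex):
        """A complex is closed under faces, so insert all 2^j subfaces."""
        s = sorted(simplex)
        for r in range(1, len(s) + 1):
            for face in combinations(s, r):
                self.insert(face)

    def contains(self, simplex):
        node = self.root
        for v in sorted(simplex):
            node = node.children.get(v)
            if node is None:
                return False
        return True

tree = SimplexTree()
tree.insert_with_faces([1, 2, 3])  # a filled triangle on vertices 1, 2, 3
print(tree.contains([1, 3]), tree.contains([2, 4]))  # True False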
Operations.
While simplex trees are not the most space efficient data structures for simplicial complex representation, their operations on sparse data are considered state-of-art. Here, we give the bounds of different useful operations possible through this representation. Many implementations of these operations are available.
We first introduce the notation. Consider that "s" is a given simplex, "σ" is the node corresponding to the last vertex of "s", "l" is the label associated with that node, "j" is the depth of that node, "k" is the dimension of the simplicial complex, and D_σ is the maximal number of operations needed to access "σ" in a dictionary (if the dictionary is a red-black tree, the complexity is D_σ = O(log(deg("σ")))). Consider also that C_s is the number of cofaces of "s", and N_l^{>j} is the number of nodes of the simplex tree ending with the label "l" at depth greater than "j". Notice that N_l^{>j} ≤ C_s.
Several basic operations admit good bounds in this notation:
* Searching for, or inserting, a single simplex "s", by following its sorted word label by label, takes O("j"·D_σ) dictionary operations.
* Inserting "s" together with all of its 2^"j" subfaces takes O(2^"j"·D_σ).
* Computing the boundary of "s", i.e. locating each of its "j"+1 facets by a word of length "j", takes O("j"^2·D_σ).
* Locating the candidate nodes for the cofaces of "s" (all nodes with label "l" at depth greater than "j") takes O("k"·N_l^{>j}), so locating all the cofaces of "s" is achieved in O("k"·N_l^{>j} + C_s·D_σ).
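For intuition, the following brute-force coface search works against the SimplexTree sketch above; the real data structure avoids walking the whole tree by jumping directly to the nodes carrying the query's last label via the linked lists:

def cofaces(tree, simplex):
    """Return every stored simplex whose vertex set contains `simplex`
    (brute-force walk of the whole tree, for illustration only)."""
    query = set(simplex)
    found = []
    def walk(node, word):
        if node.label is not None:
            word = word + (node.label,)
            if query <= set(word):
                found.append(word)
        for child in node.children.values():
            walk(child, word)
    walk(tree.root, ())
    return found

t = SimplexTree()
t.insert_with_faces([1, 2, 3])
print(cofaces(t, [1, 3]))  # [(1, 2, 3), (1, 3)]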
As for construction, as seen in the constructive definition, the cost is proportional to the number and complexity of the simplexes in the simplicial complex. This can be especially expensive if the simplicial complex is dense. However, optimized construction procedures exist for particular simplicial complexes, including flag complexes, Rips complexes, and witness complexes.
Applications.
Simplex trees are efficient on sparse simplicial complexes. For this reason, many persistent homology algorithms focusing on high-dimensional real data (which are often sparse) use simplex trees. While simplex trees are not as efficient as incidence matrices, their simplex-based structure makes them useful for simplicial complex storage within persistent homology algorithms.
https://en.wikipedia.org/wiki?curid=69462019
69480 | Very high frequency | Electromagnetic wave range of 30–300 MHz
Very high frequency (VHF) is the ITU designation for the range of radio frequency electromagnetic waves (radio waves) from 30 to 300 megahertz (MHz), with corresponding wavelengths of ten meters to one meter.
Frequencies immediately below VHF are denoted high frequency (HF), and the next higher frequencies are known as ultra high frequency (UHF).
VHF radio waves propagate mainly by line-of-sight, so they are blocked by hills and mountains, although due to refraction they can travel somewhat beyond the visual horizon out to about 160 km (100 miles). Common uses for radio waves in the VHF band are Digital Audio Broadcasting (DAB) and FM radio broadcasting, television broadcasting, two-way land mobile radio systems (emergency, business, private use and military), long range data communication up to several tens of kilometers with radio modems, amateur radio, and marine communications. Air traffic control communications and air navigation systems (e.g. VOR and ILS) work at distances of 100 kilometres (60 miles) or more to aircraft at cruising altitude.
In the Americas and many other parts of the world, VHF Band I was used for the transmission of analog television. As part of the worldwide transition to digital terrestrial television, most countries require broadcasters to air television in the VHF range using digital, rather than analog, encoding.
Propagation characteristics.
Radio waves in the VHF band propagate mainly by line-of-sight and ground-bounce paths; unlike in the HF band there is only some reflection at lower frequencies from the ionosphere (skywave propagation). They do not follow the contour of the Earth as ground waves and so are blocked by hills and mountains, although because they are weakly refracted (bent) by the atmosphere they can travel somewhat beyond the visual horizon out to about 160 km (100 miles). They can penetrate building walls and be received indoors, although in urban areas reflections from buildings cause multipath propagation, which can interfere with television reception. Atmospheric radio noise and interference (RFI) from electrical equipment is less of a problem in this and higher frequency bands than at lower frequencies. The VHF band is the first band at which efficient transmitting antennas are small enough that they can be mounted on vehicles and portable devices, so the band is used for two-way land mobile radio systems, such as walkie-talkies, and two way radio communication with aircraft (Airband) and ships (marine radio). Occasionally, when conditions are right, VHF waves can travel long distances by tropospheric ducting due to refraction by temperature gradients in the atmosphere.
Line-of-sight calculation.
VHF transmission range is a function of transmitter power, receiver sensitivity, and distance to the horizon, since VHF signals propagate under normal conditions as a near line-of-sight phenomenon. The distance to the radio horizon is slightly extended over the geometric line of sight to the horizon, as radio waves are weakly bent back toward the Earth by the atmosphere.
An approximation to calculate the line-of-sight horizon distance (on Earth) is: distance in nautical miles = formula_0, where formula_1 is the height of the antenna in feet, or distance in kilometres = formula_2, where formula_3 is the height of the antenna in metres.
These approximations are only valid for antennas at heights that are small compared to the radius of the Earth. They may not necessarily be accurate in mountainous areas, since the landscape may not be transparent enough for radio waves.
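These rules of thumb translate directly into code. A minimal sketch in Python (the function names are illustrative; the feet/nautical-mile coefficient includes standard atmospheric refraction, while the metre/kilometre version is essentially the geometric horizon):

```python
from math import sqrt

def horizon_km(height_m):
    """Approximate horizon distance in kilometres for an antenna
    height in metres (close to the geometric horizon sqrt(2*R*h))."""
    return sqrt(12.746 * height_m)

def horizon_nmi(height_ft):
    """Approximate radio horizon in nautical miles for an antenna
    height in feet (the 1.23 factor includes standard refraction)."""
    return 1.23 * sqrt(height_ft)

# Example: a 100 m (about 328 ft) mast.
print(f"{horizon_km(100):.0f} km")    # ~36 km
print(f"{horizon_nmi(328):.0f} nmi")  # ~22 nautical miles
```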
In engineered communications systems, more complex calculations are required to assess the probable coverage area of a proposed transmitter station.
Antennas.
VHF is the first band at which wavelengths are small enough that efficient transmitting antennas are short enough to mount on vehicles and handheld devices; a quarter-wave whip antenna at VHF frequencies is 25 cm to 2.5 m (10 inches to 8 feet) long. So the VHF and UHF wavelengths are used for two-way radios in vehicles, aircraft, and handheld transceivers and walkie-talkies. Portable radios usually use whips or rubber ducky antennas, while base stations usually use larger fiberglass whips or collinear arrays of vertical dipoles.
For directional antennas, the Yagi antenna is the most widely used as a high gain or "beam" antenna. For television reception, the Yagi is used, as well as the log-periodic antenna due to its wider bandwidth. Helical and turnstile antennas are used for satellite communication since they employ circular polarization. For even higher gain, multiple Yagis or helicals can be mounted together to make array antennas. Vertical collinear arrays of dipoles can be used to make high gain omnidirectional antennas, in which more of the antenna's power is radiated in horizontal directions. Television and FM broadcasting stations use collinear arrays of specialized dipole antennas such as batwing antennas.
Universal use.
Certain subparts of the VHF band have the same use around the world. Some national uses are detailed below.
By country.
Australia.
The VHF TV band in Australia was originally allocated channels 1 to 10, with channels 2, 7 and 9 assigned for the initial services in Sydney and Melbourne; the same channels were later assigned in Brisbane, Adelaide and Perth. Other capital cities and regional areas used a combination of these and other frequencies as available. The initial commercial services in Hobart and Darwin were allocated channels 6 and 8 respectively, rather than 7 or 9.
By the early 1960s it became apparent that the 10 VHF channels were insufficient to support the growth of television services. This was rectified by the addition of three frequencies: channels 0, 5A and 11. Older television sets using rotary dial tuners were not equipped to receive these new channels, and so had to be modified at the owners' expense or replaced.
Several TV stations were allocated to VHF channels 3, 4 and 5, which were within the FM radio bands although not yet used for that purpose. Notable examples were NBN-3 Newcastle, WIN-4 Wollongong and ABC Newcastle on channel 5. Some channel 5 stations were moved to 5A in the 1970s and 80s, and beginning in the 1990s the Australian Broadcasting Authority began a process to move these stations to UHF bands to free up valuable VHF spectrum for its original purpose of FM radio. In addition, by 1985 the federal government had decided that new TV stations were to be broadcast on the UHF band.
Two new VHF channels, 9A and 12, have since been made available and are being used primarily for digital services (e.g. ABC in capital cities) but also for some new analogue services in regional areas. Because channel 9A is not used for television services in or near Sydney, Melbourne, Brisbane, Adelaide or Perth, digital radio in those cities is broadcast on DAB frequency blocks 9A, 9B and 9C.
VHF radio is also used for marine radio, owing to its longer range compared with UHF frequencies.
Example allocation of VHF–UHF frequencies:
New Zealand.
Until 2013, the four main free-to-air TV stations in New Zealand used the VHF television bands (Band I and Band III) to transmit to New Zealand households. Other stations, including a variety of pay and regional free-to-air stations, were forced to broadcast in the UHF band, since the VHF band was overloaded: four stations shared a very small frequency range, so crowded that one or more channels would be unavailable in some smaller towns.
However, at the end of 2013, all television channels stopped broadcasting on the VHF bands, as New Zealand moved to digital television broadcasting, requiring all stations to either broadcast on UHF or satellite (where UHF was unavailable) utilising the Freeview service.
Refer to Australasian television frequencies for more information.
United Kingdom.
British television originally used VHF band I and band III. Television on VHF was in black and white with the 405-line format, although there were experiments with all three colour systems (NTSC, PAL and SECAM) adapted for the 405-line system in the late 1950s and early 1960s.
British colour television was broadcast on UHF (channels 21–69), beginning in the late 1960s. From then on, TV was broadcast on both VHF and UHF (VHF being a monochromatic downconversion from the 625-line colour signal), with the exception of BBC2 (which had always broadcast solely on UHF). The last British VHF TV transmitters closed down on January 3, 1985. VHF band III is now used in the UK for digital audio broadcasting, and VHF band II is used for FM radio, as it is in most of the world.
Unusually, the UK has an amateur radio allocation at 4 metres, 70–70.5 MHz.
United States and Canada.
Frequency assignments between US and Canadian users are closely coordinated since much of the Canadian population is within VHF radio range of the US border. Certain discrete frequencies are reserved for radio astronomy.
The general services in the VHF band are:
Cable television, though not transmitted aerially, uses a spectrum of frequencies overlapping VHF.
VHF television.
The U.S. FCC allocated television broadcasting to a channelized roster as early as 1938, with 19 channels. That allocation changed three times: in 1940, when Channel 19 was deleted and several channels changed frequencies; in 1946, with television going from 18 channels to 13 channels, again with different frequencies; and in 1948, with the removal of Channel 1 (analog channels 2–13 remain as they were, even on cable television). Channels 14–19 later appeared on the UHF band, while channel 1 remains unused.
87.5–87.9 MHz.
87.5–87.9 MHz is a radio band which, in most of the world, is used for FM broadcasting. In North America, however, this bandwidth is allocated to VHF television channel 6 (82–88 MHz). The analog audio for TV channel 6 is broadcast at 87.75 MHz (adjustable down to 87.74). Several stations, known as Frankenstations, most notably those joining the Pulse 87 franchise, have operated on this frequency as radio stations, though they use television licenses. As a result, in North America FM radio receivers designed to tune into this frequency range, such as those found in automobiles, could receive the audio for analog-mode programming on the local TV channel 6. The practice largely ended with the DTV transition in 2009, although some such stations still exist.
The FM broadcast channel at 87.9 MHz is normally off-limits for FM audio broadcasting; it is reserved for displaced class D stations which have no other frequencies in the normal 88.1–107.9 MHz subband to move to. So far, only two stations have qualified to operate on 87.9 MHz: 10-watt KSFH in Mountain View, California and 34-watt translator K200AA in Sun Valley, Nevada.
Unlicensed operation.
In some countries, particularly the United States and Canada, limited low-power license-free operation is available in the FM broadcast band for purposes such as micro-broadcasting and sending output from CD or digital media players to radios without auxiliary-in jacks, though this is illegal in some other countries. This practice was legalised in the United Kingdom on 8 December 2006.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "1.23\\times\\sqrt{ A_\\textrm{ft}}"
},
{
"math_id": 1,
"text": "A_\\textrm{ft}"
},
{
"math_id": 2,
"text": "\\sqrt{12.746 \\times A_\\textrm{m}}"
},
{
"math_id": 3,
"text": "A_\\textrm{m}"
}
]
| https://en.wikipedia.org/wiki?curid=69480 |
69480905 | 1 Samuel 10 | First Book of Samuel chapter
1 Samuel 10 is the tenth chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter describes the anointing of Saul as the first king of Israel, within a section comprising 1 Samuel 7–15 which records the rise of the monarchy in Israel and the account of the first years of King Saul.
Text.
This chapter was originally written in the Hebrew language. It is divided into 27 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 3–12, 14, 16, 18, 24–27.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Samuel anoints Saul (10:1–16).
The anointing of Saul, as performed by Samuel under God's direction, set the king apart from the rest of the people as "anointed of the Lord" (cf. 1 Samuel 12:3, 1 Samuel 12:5, etc.), and sanctified as "nagid", that is, "prince" or "ruler" (also "captain", "leader" or "commander").
"Then Samuel took a flask of oil and poured it on his head, and kissed him and said: “Is it not because the Lord has anointed you commander over His inheritance?""
Saul proclaimed king of Israel (10:17–27).
This section can be considered the continuation of the narrative in 1 Samuel 8:1–22: the previously dismissed assembly was at this time reconvened to appoint a king. Samuel started by saying a judgement oracle (verses 17–19) that the people chose to reject God and elect a king, despite God's continuous protection and ability to deliver them. The election by lot was used elsewhere to find a hidden offender (Joshua 7; 1 Samuel 14:38–44), but this time it is to confirm that Saul was God's choice, which was also acclaimed because of Saul's stature (verses 21b–27; cf. 1 Samuel 9:2). YHWH's displeasure with the people's request to have a king did not make Saul's election invalid. The public acclamation of Saul (verse 24), an important element in a king's installation (cf. 1 Kings 1:25, 34, 39; 2 Kings 11:12), was followed by the reading of the rights and duties of the kingship (cf. 1 Samuel 8:11–18; Deuteronomy 17:18–20), establishing the 'subjugation of the monarchy to prophetic authority'.
"And Samuel said to all the people, "Do you see him whom the Lord has chosen, that there is no one like him among all the people?""
"So all the people shouted and said, "Long live the king!""
"Then Samuel told the people the manner of the kingdom, and wrote it in a book, and laid it up before the Lord. And Samuel sent all the people away, every man to his house."
"But some worthless fellows said, “How can this man save us?” And they despised him and brought him no present. But he held his peace."
Extended version of 10:27.
There is a textual variation between verse 27 and chapter 11 found in the Samuel Scroll, one of the Dead Sea Scrolls. The Masoretic Text has a somewhat abrupt transition between 10:27 and 11:1, which changes topic to the actions of Nahash of Ammon. The Dead Sea Scrolls version includes four sentences introducing Nahash as oppressing the nearby Jewish tribes. Additionally, the historian Josephus's work "Jewish Antiquities" features a very similar sentence that seems to be a paraphrase of the same material. It is unclear whether older forms of the text lacked the section and a scribe added it to the Dead Sea Scrolls version as a bridge, or whether the reverse occurred and a scribal error dropped the section from the copies that would later become the basis for the Masoretic Text. If such an error did occur, it apparently happened early enough to affect the Greek translation of Samuel in the Septuagint as well.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
Commentaries on Samuel.
<templatestyles src="Refbegin/styles.css" />
General.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=69480905 |
6948409 | Binary erasure channel | In coding theory and information theory, a binary erasure channel (BEC) is a communications channel model. A transmitter sends a bit (a zero or a one), and the receiver either receives the bit correctly, or with some probability formula_0 receives a message that the bit was not received ("erased") .
Definition.
A binary erasure channel with erasure probability formula_0 is a channel with binary input, ternary output, and probability of erasure formula_0. That is, let formula_1 be the transmitted random variable with alphabet formula_2. Let formula_3 be the received variable with alphabet formula_4, where formula_5 is the erasure symbol. Then, the channel is characterized by the conditional probabilities:
formula_6
Capacity.
The channel capacity of a BEC is formula_7, attained with a uniform distribution for formula_1 (i.e. half of the inputs should be 0 and half should be 1).
If the sender is notified when a bit is erased, they can repeatedly transmit each bit until it is correctly received, attaining the capacity formula_7. However, by the noisy-channel coding theorem, the capacity of formula_7 can be obtained even without such feedback.
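The capacity statement is easy to check empirically. A minimal Monte Carlo sketch in Python with NumPy (representing the erasure symbol by −1 is an arbitrary encoding choice for illustration): the fraction of bits that arrive unerased concentrates around formula_7.

```python
import numpy as np

def bec(bits, p_e, rng):
    """Pass bits through a binary erasure channel: each bit is
    independently replaced by the erasure symbol -1 with probability p_e."""
    out = bits.copy()
    out[rng.random(bits.shape) < p_e] = -1
    return out

rng = np.random.default_rng(0)
p_e = 0.3
bits = rng.integers(0, 2, size=100_000)
received = bec(bits, p_e, rng)

# The unerased fraction approaches the capacity 1 - p_e.
print(np.mean(received != -1), 1 - p_e)
```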
Related channels.
If bits are flipped rather than erased, the channel is a binary symmetric channel (BSC), which has capacity formula_9 (for the binary entropy function formula_8), which is less than the capacity of the BEC for formula_10. If bits are erased but the receiver is not notified (i.e. does not receive the output formula_11) then the channel is a deletion channel, and its capacity is an open problem.
History.
The BEC was introduced by Peter Elias of MIT in 1955 as a toy example.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P_e"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "\\{0,1\\}"
},
{
"math_id": 3,
"text": "Y"
},
{
"math_id": 4,
"text": "\\{0,1,\\text{e} \\}"
},
{
"math_id": 5,
"text": "\\text{e}"
},
{
"math_id": 6,
"text": "\\begin{align}\n\\operatorname {Pr} [ Y = 0 | X = 0 ] &= 1 - P_e \\\\\n\\operatorname {Pr} [ Y = 0 | X = 1 ] &= 0 \\\\\n\\operatorname {Pr} [ Y = 1 | X = 0 ] &= 0 \\\\\n\\operatorname {Pr} [ Y = 1 | X = 1 ] &= 1 - P_e \\\\\n\\operatorname {Pr} [ Y = e | X = 0 ] &= P_e \\\\\n\\operatorname {Pr} [ Y = e | X = 1 ] &= P_e\n\\end{align}"
},
{
"math_id": 7,
"text": "1-P_e"
},
{
"math_id": 8,
"text": "\\operatorname{H}_\\text{b}"
},
{
"math_id": 9,
"text": "1 - \\operatorname H_\\text{b}(P_e)"
},
{
"math_id": 10,
"text": "0<P_e<1/2"
},
{
"math_id": 11,
"text": "e"
}
]
| https://en.wikipedia.org/wiki?curid=6948409 |
694843 | Alexander–Spanier cohomology | Cohomology theory for topological spaces
In mathematics, particularly in algebraic topology, Alexander–Spanier cohomology is a cohomology theory for topological spaces.
History.
It was introduced by James W. Alexander (1935) for the special case of compact metric spaces, and by Edwin H. Spanier (1948) for all topological spaces, based on a suggestion of Alexander D. Wallace.
Definition.
If "X" is a topological space and "G" is an "R" module where "R" is a ring with unity, then there is a cochain complex "C" whose "p"-th term formula_0 is the set of all functions from formula_1 to "G" with differential formula_2 given by
formula_3
The defined cochain complex formula_4 does not rely on the topology of formula_5. In fact, if formula_5 is a nonempty space, formula_6 where formula_7 is a graded module whose only nontrivial module is formula_7 at degree 0.
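A small sketch in Python (the "space" here is just a finite set, chosen purely for illustration) implementing the differential above and checking the identity d ∘ d = 0 pointwise:

```python
from itertools import product

def d(f, p):
    """Coboundary of a p-cochain f (a function of p+1 points): the
    (p+1)-cochain given by the alternating sum omitting one argument."""
    def df(*xs):                      # xs has length p + 2
        return sum((-1) ** i * f(*(xs[:i] + xs[i + 1:]))
                   for i in range(p + 2))
    return df

X = (0, 1, 2)                          # a finite "space", for illustration
f = lambda x0, x1: x0 * x1 + x1        # an arbitrary integer-valued 1-cochain

ddf = d(d(f, 1), 2)
assert all(ddf(*xs) == 0 for xs in product(X, repeat=4))
```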
An element formula_8 is said to be "locally zero" if there is a covering formula_9 of formula_5 by open sets such that formula_10 vanishes on any formula_11-tuple of formula_5 which lies in some element of formula_9 (i.e. formula_10 vanishes on formula_12).
The subset of formula_13 consisting of locally zero functions is a submodule, denoted by formula_14.
formula_15 is a cochain subcomplex of formula_16 so we define a quotient cochain complex formula_17.
The Alexander–Spanier cohomology groups formula_18 are defined to be the cohomology groups of formula_19.
Induced homomorphism.
Given a function formula_20 which is not necessarily continuous, there is an induced cochain map
formula_21
defined by formula_22
If formula_23 is continuous, there is an induced cochain map
formula_24
Relative cohomology module.
If formula_25 is a subspace of formula_5 and formula_26 is an inclusion map, then there is an induced epimorphism formula_27. The kernel of formula_28 is a cochain subcomplex of formula_29 which is denoted by formula_30. If formula_31 denotes the subcomplex of formula_16 of functions formula_10 that are locally zero on formula_25, then formula_32.
The "relative module" is formula_33 is defined to be the cohomology module of formula_30.
formula_34 is called the "Alexander cohomology module of formula_35 of degree formula_36 with coefficients formula_7" and this module satisfies all cohomology axioms. The resulting cohomology theory is called the "Alexander (or Alexander-Spanier) cohomology theory"
Alexander cohomology with compact supports.
A subset formula_45 is said to be "cobounded" if formula_46 is bounded, i.e. its closure is compact.
Similar to the definition of Alexander cohomology module, one can define Alexander cohomology module with "compact supports" of a pair formula_35 by adding the property that formula_47 is locally zero on some cobounded subset of formula_5.
Formally, one can define it as follows: for a given topological pair formula_35, the submodule formula_48 of formula_49 consists of formula_47 such that formula_10 is locally zero on some cobounded subset of formula_5.
Similar to the Alexander cohomology module, one can get a cochain complex formula_50 and a cochain complex formula_51.
The cohomology module induced from the cochain complex formula_52 is called the "Alexander cohomology of formula_35 with compact supports" and denoted by formula_53. Induced homomorphism of this cohomology is defined as the Alexander cohomology theory.
Under this definition, we can modify "homotopy axiom" for cohomology to a "proper homotopy axiom" if we define a coboundary homomorphism formula_54 only when formula_55 is a "closed" subset. Similarly, "excision axiom" can be modified to "proper excision axiom" i.e. the excision map is a proper map.
Property.
One of the most important properties of this Alexander cohomology module with compact supports is the following theorem: if formula_5 is a locally compact Hausdorff space and formula_56 is its one-point compactification, then there is an isomorphism
formula_57
Example.
formula_58
as formula_59. Hence if formula_60, formula_61 and formula_62 are not of the same "proper" homotopy type.
Relation with tautness.
Using this tautness property, one can show the following two facts.
First (strong excision): let formula_68 be a closed continuous map of pairs such that formula_23 maps formula_69 homeomorphically onto formula_70; then the induced map formula_71 is an isomorphism.
Second (continuity): let formula_72 be a family of compact Hausdorff pairs, directed downward by inclusion, with formula_73, and let formula_74 be the inclusion maps; then these induce an isomorphism formula_75.
Difference from singular cohomology theory.
Recall that the singular cohomology module of a space is the direct product of the singular cohomology modules of its path components.
A nonempty space formula_5 is connected if and only if formula_76. Hence for any connected space which is not path connected, singular cohomology and Alexander cohomology differ in degree 0.
If formula_77 is an open covering of formula_5 by pairwise disjoint sets, then there is a natural isomorphism formula_78. In particular, if formula_79 is the collection of components of a locally connected space formula_5, there is a natural isomorphism formula_80.
Variants.
It is also possible to define Alexander–Spanier homology and Alexander–Spanier cohomology with compact supports.
Connection to other cohomologies.
The Alexander–Spanier cohomology groups coincide with Čech cohomology groups for compact Hausdorff spaces, and coincide with singular cohomology groups for locally finite complexes.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C^p"
},
{
"math_id": 1,
"text": "X^{p+1}"
},
{
"math_id": 2,
"text": "d\\colon C^{p-1} \\to C^{p}"
},
{
"math_id": 3,
"text": "df(x_0,\\ldots,x_p)= \\sum_i(-1)^if(x_0,\\ldots,x_{i-1},x_{i+1},\\ldots,x_p)."
},
{
"math_id": 4,
"text": "C^*(X;G)"
},
{
"math_id": 5,
"text": "X"
},
{
"math_id": 6,
"text": "G\\simeq H^*(C^*(X;G))"
},
{
"math_id": 7,
"text": "G"
},
{
"math_id": 8,
"text": "\\varphi\\in C^p(X)"
},
{
"math_id": 9,
"text": "\\{U\\}"
},
{
"math_id": 10,
"text": "\\varphi"
},
{
"math_id": 11,
"text": "(p+1)"
},
{
"math_id": 12,
"text": "\\bigcup_{U\\in\\{U\\}}U^{p+1}"
},
{
"math_id": 13,
"text": "C^p(X)"
},
{
"math_id": 14,
"text": "C_0^p(X)"
},
{
"math_id": 15,
"text": "C^*_0(X) = \\{C_0^p(X),d\\}"
},
{
"math_id": 16,
"text": "C^*(X)"
},
{
"math_id": 17,
"text": "\\bar{C}^*(X)=C^*(X)/C_0^*(X)"
},
{
"math_id": 18,
"text": "\\bar{H}^p(X,G)"
},
{
"math_id": 19,
"text": "\\bar{C}^*(X)"
},
{
"math_id": 20,
"text": "f:X\\to Y"
},
{
"math_id": 21,
"text": "f^\\sharp:C^*(Y;G)\\to C^*(X;G)"
},
{
"math_id": 22,
"text": "(f^\\sharp\\varphi)(x_0,...,x_p) = (\\varphi f)(x_0,...,x_p),\\ \\varphi\\in C^p(Y);\\ x_0,...,x_p\\in X"
},
{
"math_id": 23,
"text": "f"
},
{
"math_id": 24,
"text": "f^\\sharp:\\bar{C}^*(Y;G)\\to\\bar{C}^*(X;G)"
},
{
"math_id": 25,
"text": "A"
},
{
"math_id": 26,
"text": "i:A\\hookrightarrow X"
},
{
"math_id": 27,
"text": "i^\\sharp:\\bar{C}^*(X;G)\\to \\bar{C}^*(A;G)"
},
{
"math_id": 28,
"text": "i^\\sharp"
},
{
"math_id": 29,
"text": "\\bar{C}^*(X;G)"
},
{
"math_id": 30,
"text": "\\bar{C}^*(X,A;G)"
},
{
"math_id": 31,
"text": "C^*(X,A)"
},
{
"math_id": 32,
"text": "\\bar{C}^*(X,A) = C^*(X,A)/C^*_0(X)"
},
{
"math_id": 33,
"text": "\\bar{H}^*(X,A;G)"
},
{
"math_id": 34,
"text": "\\bar{H}^q(X,A;G)"
},
{
"math_id": 35,
"text": "(X,A)"
},
{
"math_id": 36,
"text": "q"
},
{
"math_id": 37,
"text": "G\\simeq \\bar{H}^*(X;G)"
},
{
"math_id": 38,
"text": "j:X\\hookrightarrow (X,A)"
},
{
"math_id": 39,
"text": "\\cdots\\to\\bar{H}^q(X,A;G) \\xrightarrow{j^*} \\bar{H}^q(X;G)\\xrightarrow{i^*}\\bar{H}^q(A;G)\\xrightarrow{\\delta^*}\\bar{H}^{q+1}(X,A;G)\\to\\cdots"
},
{
"math_id": 40,
"text": "U"
},
{
"math_id": 41,
"text": "\\bar{U}\\subset\\operatorname{int}A"
},
{
"math_id": 42,
"text": "\\bar{C}^*(X,A)\\simeq \\bar{C}^*(X-U,A-U)"
},
{
"math_id": 43,
"text": "f_0,f_1:(X,A)\\to(Y,B)"
},
{
"math_id": 44,
"text": "f_0^* = f_1^*:H^*(Y,B;G)\\to H^*(X,A;G)"
},
{
"math_id": 45,
"text": "B\\subset X"
},
{
"math_id": 46,
"text": "X-B"
},
{
"math_id": 47,
"text": "\\varphi\\in C^q(X,A;G)"
},
{
"math_id": 48,
"text": "C^q_c(X,A;G)"
},
{
"math_id": 49,
"text": "C^q(X,A;G)"
},
{
"math_id": 50,
"text": "C^*_c(X,A;G) = \\{C^q_c(X,A;G),\\delta\\}"
},
{
"math_id": 51,
"text": "\\bar{C}^*_c(X,A;G) = C^*_c(X,A;G)/C_0^*(X;G)"
},
{
"math_id": 52,
"text": "\\bar{C}^*_c"
},
{
"math_id": 53,
"text": "\\bar{H}^*_c(X,A;G)"
},
{
"math_id": 54,
"text": "\\delta^*:\\bar{H}^q_c(A;G)\\to \\bar{H}^{q+1}_c(X,A;G)"
},
{
"math_id": 55,
"text": "A\\subset X"
},
{
"math_id": 56,
"text": "X^+"
},
{
"math_id": 57,
"text": "\\bar{H}^q_c(X;G)\\simeq \\tilde{\\bar{H}}^q(X^+;G)."
},
{
"math_id": 58,
"text": "\\bar{H}^q_c(\\R^n;G)\\simeq\\begin{cases} 0 & q\\neq n\\\\ G & q = n\\end{cases}"
},
{
"math_id": 59,
"text": "(\\R^n)^+\\cong S^n"
},
{
"math_id": 60,
"text": "n\\neq m"
},
{
"math_id": 61,
"text": "\\R^n"
},
{
"math_id": 62,
"text": "\\R^m"
},
{
"math_id": 63,
"text": "B\\subset A\\subset X"
},
{
"math_id": 64,
"text": "B"
},
{
"math_id": 65,
"text": "(A,B)"
},
{
"math_id": 66,
"text": "(Y,B)"
},
{
"math_id": 67,
"text": "Y"
},
{
"math_id": 68,
"text": "f:(X,A)\\to(Y,B)"
},
{
"math_id": 69,
"text": "X-A"
},
{
"math_id": 70,
"text": "Y-B"
},
{
"math_id": 71,
"text": "f^*:\\bar{H}^q(Y,B;G)\\xrightarrow{\\sim}\\bar{H}^q(X,A;G)"
},
{
"math_id": 72,
"text": "\\{(X_\\alpha,A_\\alpha)\\}_\\alpha"
},
{
"math_id": 73,
"text": "(X,A) =(\\bigcap X_\\alpha,\\bigcap A_\\alpha)"
},
{
"math_id": 74,
"text": "i_\\alpha:(X,A)\\to (X_\\alpha,A_\\alpha)"
},
{
"math_id": 75,
"text": "\\{i^*_\\alpha\\}:\\varinjlim\\bar{H}^q(X_\\alpha,A_\\alpha;M)\\xrightarrow{\\sim}\\bar{H}^q(X,A;M)"
},
{
"math_id": 76,
"text": "G\\simeq \\bar{H}^0(X;G)"
},
{
"math_id": 77,
"text": "\\{U_j\\}"
},
{
"math_id": 78,
"text": "\\bar{H}^q(X;G)\\simeq \\prod_j\\bar{H}^q(U_j;G)"
},
{
"math_id": 79,
"text": "\\{C_j\\}"
},
{
"math_id": 80,
"text": "\\bar{H}^q(X;G)\\simeq \\prod_j\\bar{H}^q(C_j;G)"
}
]
| https://en.wikipedia.org/wiki?curid=694843 |
69488173 | Korkine–Zolotarev lattice basis reduction algorithm | The Korkine–Zolotarev (KZ) lattice basis reduction algorithm or Hermite–Korkine–Zolotarev (HKZ) algorithm is a lattice reduction algorithm.
For lattices in formula_0 it yields a lattice basis with orthogonality defect at most formula_1, unlike the formula_2 bound of the LLL reduction. KZ has exponential complexity, versus the polynomial complexity of the LLL reduction algorithm; however, it may still be preferred for solving multiple closest vector problems (CVPs) in the same lattice, where it can be more efficient.
History.
The definition of a KZ-reduced basis was given by Aleksandr Korkin and Yegor Ivanovich Zolotarev in 1877, a strengthened version of Hermite reduction. The first algorithm for constructing a KZ-reduced basis was given in 1983 by Kannan.
The block Korkine-Zolotarev (BKZ) algorithm was introduced in 1987.
Definition.
A KZ-reduced basis for a lattice is defined as follows:
Given a basis
formula_3
define its Gram–Schmidt process orthogonal basis
formula_4
and the Gram-Schmidt coefficients
formula_5, for any formula_6.
Also define projection functions
formula_7
which project formula_8 orthogonally onto the span of formula_9.
Then the basis formula_10 is KZ-reduced if the following holds:
Note that the first condition can be reformulated recursively as stating that formula_15 is a shortest vector in the lattice, and formula_16 is a KZ-reduced basis for the lattice formula_17.
Also note that the second condition guarantees that the reduced basis is length-reduced (adding an integer multiple of one basis vector to another will not decrease its length); the same condition is used in the LLL reduction.
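The second condition is straightforward to test numerically; the first condition (each formula_11 being a shortest vector of the projected lattice) is what makes computing a KZ-reduced basis expensive. A sketch of the Gram–Schmidt data and the size-reduction check in Python with NumPy (helper names are ours):

```python
import numpy as np

def gram_schmidt(B):
    """Gram-Schmidt orthogonalisation of the rows of B: returns the
    orthogonal vectors B* and the coefficients mu[i, j] for j < i."""
    n = B.shape[0]
    Bs = np.zeros(B.shape)
    mu = np.zeros((n, n))
    for i in range(n):
        Bs[i] = B[i]
        for j in range(i):
            mu[i, j] = (B[i] @ Bs[j]) / (Bs[j] @ Bs[j])
            Bs[i] -= mu[i, j] * Bs[j]
    return Bs, mu

def is_size_reduced(B):
    """Check the second KZ condition, |mu_{i,j}| <= 1/2 for all j < i
    (the same length-reduction condition used in LLL)."""
    _, mu = gram_schmidt(B)
    n = B.shape[0]
    return all(abs(mu[i, j]) <= 0.5 + 1e-12
               for i in range(n) for j in range(i))

B = np.array([[1.0, 0.0], [0.7, 1.0]])
print(is_size_reduced(B))   # False: mu_{2,1} = 0.7 > 1/2
```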
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{R}^n"
},
{
"math_id": 1,
"text": "n^n"
},
{
"math_id": 2,
"text": "2^{n^2/2}"
},
{
"math_id": 3,
"text": "\\mathbf{B}=\\{ \\mathbf{b}_1,\\mathbf{b}_2, \\dots, \\mathbf{b}_n \\},"
},
{
"math_id": 4,
"text": "\\mathbf{B}^*=\\{ \\mathbf{b}^*_1, \\mathbf{b}^*_2, \\dots, \\mathbf{b}^*_n \\},"
},
{
"math_id": 5,
"text": "\\mu_{i,j}=\\frac{\\langle\\mathbf{b}_i,\\mathbf{b}^*_j\\rangle}{\\langle\\mathbf{b}^*_j,\\mathbf{b}^*_j\\rangle}"
},
{
"math_id": 6,
"text": "1 \\le j < i \\le n"
},
{
"math_id": 7,
"text": "\\pi_i(\\mathbf{x}) = \\sum_{j \\geq i} \\frac{\\langle\\mathbf{x},\\mathbf{b}^*_j\\rangle}{\\langle\\mathbf{b}^*_j,\\mathbf{b}^*_j\\rangle} \\mathbf{b}^*_j"
},
{
"math_id": 8,
"text": "\\mathbf{x}"
},
{
"math_id": 9,
"text": "\\mathbf{b}^*_i, \\cdots, \\mathbf{b}^*_n"
},
{
"math_id": 10,
"text": "B"
},
{
"math_id": 11,
"text": "\\mathbf{b}^*_i"
},
{
"math_id": 12,
"text": "\\pi_i(\\mathcal{L}(\\mathbf{B}))"
},
{
"math_id": 13,
"text": "j < i"
},
{
"math_id": 14,
"text": "\\left|\\mu_{i,j}\\right| \\leq 1/2"
},
{
"math_id": 15,
"text": "\\mathbf{b}_1"
},
{
"math_id": 16,
"text": "\\{\\pi_2(\\mathbf{b}_2), \\cdots \\pi_2(\\mathbf{b}_n)\\}"
},
{
"math_id": 17,
"text": "\\pi_2(\\mathcal{L}(\\mathbf{B}))"
}
]
| https://en.wikipedia.org/wiki?curid=69488173 |
694952 | Min-max theorem | Variational characterization of eigenvalues of compact Hermitian operators on Hilbert spaces
In linear algebra and functional analysis, the min-max theorem, or variational theorem, or Courant–Fischer–Weyl min-max principle, is a result that gives a variational characterization of eigenvalues of compact Hermitian operators on Hilbert spaces. It can be viewed as the starting point of many results of similar nature.
This article first discusses the finite-dimensional case and its applications before considering compact operators on infinite-dimensional Hilbert spaces.
We will see that for compact operators, the proof of the main theorem uses essentially the same idea as the finite-dimensional argument.
In the case that the operator is non-Hermitian, the theorem provides an equivalent characterization of the associated singular values.
The min-max theorem can be extended to self-adjoint operators that are bounded below.
Matrices.
Let A be a "n" × "n" Hermitian matrix. As with many other variational results on eigenvalues, one considers the Rayleigh–Ritz quotient "RA" : C"n" \ {0} → R defined by
formula_0
where (⋅, ⋅) denotes the Euclidean inner product on C"n".
Clearly, the Rayleigh quotient of an eigenvector is its associated eigenvalue. Equivalently, the Rayleigh–Ritz quotient can be replaced by
formula_1
For Hermitian matrices "A", the range of the continuous function "RA"("x"), or "f"("x"), is a compact interval ["a", "b"] of the real line. The maximum "b" and the minimum "a" are the largest and smallest eigenvalue of "A", respectively. The min-max theorem is a refinement of this fact.
Min-max theorem.
Let formula_2 be a Hermitian operator on an inner product space formula_3 of dimension formula_4, with spectrum listed in descending order, formula_5.
Let formula_6 be the corresponding unit-length orthogonal eigenvectors.
Reverse the spectrum ordering, so that formula_7.
<templatestyles src="Math_theorem/styles.css" />
(Poincaré’s inequality) — Let formula_8 be a subspace of formula_3 of dimension formula_9. Then there exist unit vectors formula_10 such that
formula_11, and formula_12.
<templatestyles src="Math_proof/styles.css" />Proof
Part 2 is a corollary, using formula_13.
formula_8 is a formula_9-dimensional subspace, so if we pick any list of formula_14 vectors, their span formula_15 must intersect formula_8 in at least a line.
Take a unit vector formula_16 in this intersection; it is the vector we need.
Write formula_17, which is possible since formula_18.
Since formula_19, we find formula_20.
<templatestyles src="Math_theorem/styles.css" />
min-max theorem — formula_21
<templatestyles src="Math_proof/styles.css" />Proof
Part 2 is a corollary of part 1, by using formula_13.
By Poincaré’s inequality, formula_22 is an upper bound for the right-hand side.
By setting formula_23, the upper bound is achieved.
Counterexample in the non-Hermitian case.
Let "N" be the nilpotent matrix
formula_24
Define the Rayleigh quotient formula_25 exactly as above in the Hermitian case. Then it is easy to see that the only eigenvalue of "N" is zero, while the maximum value of the Rayleigh quotient is 1/2. That is, the maximum value of the Rayleigh quotient is larger than the maximum eigenvalue.
Applications.
Min-max principle for singular values.
The singular values {"σk"} of a square matrix "M" are the square roots of the eigenvalues of "M"*"M" (equivalently "MM*"). An immediate consequence of the first equality in the min-max theorem is:
formula_26
Similarly,
formula_27
Here formula_28 denotes the "k"th entry in the increasing sequence of σ's, so that formula_29.
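A quick numerical sanity check of the relationship between singular values and eigenvalues, as a sketch with NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))

# Singular values of M are the square roots of the eigenvalues of M^T M.
sv = np.linalg.svd(M, compute_uv=False)    # descending
ev = np.linalg.eigvalsh(M.T @ M)           # ascending
assert np.allclose(np.sort(sv), np.sqrt(ev))
```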
Cauchy interlacing theorem.
Let A be a symmetric "n" × "n" matrix. The "m" × "m" matrix "B", where "m" ≤ "n", is called a compression of A if there exists an orthogonal projection "P" onto a subspace of dimension "m" such that "PAP*" = "B". The Cauchy interlacing theorem states:
Theorem. If the eigenvalues of A are "α"1 ≤ ... ≤ "αn", and those of "B" are "β"1 ≤ ... ≤ "βj" ≤ ... ≤ "βm", then for all "j" ≤ "m",
formula_30
This can be proven using the min-max principle. Let "βi" have corresponding eigenvector "bi" and "Sj" be the "j"-dimensional subspace "Sj" = span{"b"1, ..., "bj"}, then
formula_31
According to the first part of min-max, "αj" ≤ "βj". On the other hand, if we define "S""m"−"j"+1 = span{"bj", ..., "bm"}, then
formula_32
where the last inequality is given by the second part of min-max.
When "n" − "m"
1, we have "αj" ≤ "βj" ≤ "α""j"+1, hence the name "interlacing" theorem.
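The interlacing inequalities are easy to test numerically, taking the compression to be a leading principal submatrix (a sketch in Python with NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 5

A = rng.standard_normal((n, n))
A = (A + A.T) / 2                 # random real symmetric matrix
B = A[:m, :m]                     # compression onto the first m coordinates

alpha = np.linalg.eigvalsh(A)     # ascending: alpha_1 <= ... <= alpha_n
beta = np.linalg.eigvalsh(B)

# Interlacing: alpha_j <= beta_j <= alpha_{j + n - m} for all j <= m.
for j in range(m):
    assert alpha[j] - 1e-12 <= beta[j] <= alpha[j + n - m] + 1e-12
```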
Compact operators.
Let A be a compact, Hermitian operator on a Hilbert space "H". Recall that the spectrum of such an operator (the set of eigenvalues) is a set of real numbers whose only possible cluster point is zero.
It is thus convenient to list the positive eigenvalues of A as
formula_33
where entries are repeated with multiplicity, as in the matrix case. (To emphasize that the sequence is decreasing, we may write formula_34.)
When "H" is infinite-dimensional, the above sequence of eigenvalues is necessarily infinite.
We now apply the same reasoning as in the matrix case. Letting "Sk" ⊂ "H" be a "k" dimensional subspace, we can obtain the following theorem.
Theorem (Min-Max). Let A be a compact, self-adjoint operator on a Hilbert space H, whose positive eigenvalues are listed in decreasing order ... ≤ "λk" ≤ ... ≤ "λ"1. Then:
formula_35
A similar pair of equalities hold for negative eigenvalues.
<templatestyles src="Math_proof/styles.css" />Proof
Let "S' " be the closure of the linear span formula_36.
The subspace "S' " has codimension "k" − 1. By the same dimension count argument as in the matrix case, "S' " ∩ "Sk" has positive dimension. So there exists "x" ∈ "S' " ∩ "Sk" with formula_37. Since it is an element of "S' ", such an "x" necessarily satisfy
formula_38
Therefore, for all "Sk"
formula_39
But A is compact, therefore the function "f"("x") = ("Ax", "x") is weakly continuous. Furthermore, any bounded set in "H" is weakly compact. This lets us replace the infimum by minimum:
formula_40
So
formula_41
Because equality is achieved when formula_42,
formula_43
This is the first part of min-max theorem for compact self-adjoint operators.
Analogously, consider now a ("k" − 1)-dimensional subspace "S""k"−1, whose orthogonal complement is denoted by "S""k"−1⊥. If "S' " = span{"u"1..."uk"},
formula_44
So
formula_45
This implies
formula_46
where the compactness of "A" was applied. Taking the infimum over the collection of ("k" − 1)-dimensional subspaces gives
formula_47
Pick "S""k"−1 = span{"u"1, ..., "u""k"−1} and we deduce
formula_48
Self-adjoint operators.
The min-max theorem also applies to (possibly unbounded) self-adjoint operators. Recall that the essential spectrum is the spectrum without isolated eigenvalues of finite multiplicity.
Sometimes we have some eigenvalues below the essential spectrum, and we would like to approximate the eigenvalues and eigenfunctions.
Theorem (Min-Max). Let "A" be self-adjoint, and let formula_49 be the eigenvalues of "A" below the essential spectrum. Then
formula_50.
If we only have "N" eigenvalues and hence run out of eigenvalues, then we let formula_51 (the bottom of the essential spectrum) for "n>N", and the above statement holds after replacing min-max with inf-sup.
Theorem (Max-Min). Let "A" be self-adjoint, and let formula_49 be the eigenvalues of "A" below the essential spectrum. Then
formula_52.
If we only have "N" eigenvalues and hence run out of eigenvalues, then we let formula_51 (the bottom of the essential spectrum) for "n > N", and the above statement holds after replacing max-min with sup-inf.
The proofs use the following results about self-adjoint operators:
Theorem. Let "A" be self-adjoint. Then formula_53 for formula_54 if and only if formula_55.
Theorem. If "A" is self-adjoint, then
formula_56
and
formula_57.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R_A(x) = \\frac{(Ax, x)}{(x,x)}"
},
{
"math_id": 1,
"text": "f(x) = (Ax, x), \\; \\|x\\| = 1."
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "V"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "\\lambda_1 \\geq ... \\geq \\lambda_n"
},
{
"math_id": 6,
"text": "v_1, ..., v_n"
},
{
"math_id": 7,
"text": "\\xi_1 = \\lambda_n, ..., \\xi_n = \\lambda_1"
},
{
"math_id": 8,
"text": "M"
},
{
"math_id": 9,
"text": "k"
},
{
"math_id": 10,
"text": "x, y\\in M"
},
{
"math_id": 11,
"text": "\\langle x, Ax\\rangle\\leq \\lambda_k"
},
{
"math_id": 12,
"text": "\\langle y, Ay\\rangle \\geq \\xi_k"
},
{
"math_id": 13,
"text": "-A"
},
{
"math_id": 14,
"text": "n-k+1"
},
{
"math_id": 15,
"text": "N := span(v_k, ... v_n)"
},
{
"math_id": 16,
"text": "x \\in M\\cap N"
},
{
"math_id": 17,
"text": "x = \\sum_{i=k}^n a_i v_i"
},
{
"math_id": 18,
"text": "x\\in N"
},
{
"math_id": 19,
"text": "\\sum_{i=k}^n |a_i|^2 = 1"
},
{
"math_id": 20,
"text": "\\langle x,Ax \\rangle = \\sum_{i=k}^n |a_i|^2\\lambda_i \\leq \\lambda_k"
},
{
"math_id": 21,
"text": "\\begin{aligned}\n\\lambda_k &=\\max _{\\begin{array}{c} \\mathcal{M} \\subset V \\\\ \\operatorname{dim}(\\mathcal{M})=k \\end{array}} \\min _{\\begin{array}{c} x \\in \\mathcal{M} \\\\ \\|x\\|=1 \\end{array}}\\langle x, A x\\rangle\\\\\n&=\\min _{\\begin{array}{c} \\mathcal{M} \\subset V \\\\ \\operatorname{dim}(\\mathcal{M})=n-k+1 \\end{array}} \\max _{\\begin{array}{c} x \\in \\mathcal{M} \\\\ \\|x\\|=1 \\end{array}}\\langle x, A x\\rangle \\text{. }\n\\end{aligned}"
},
{
"math_id": 22,
"text": "\\lambda_k"
},
{
"math_id": 23,
"text": "\\mathcal M = span(v_1, ... v_k)"
},
{
"math_id": 24,
"text": "\\begin{bmatrix} 0 & 1 \\\\ 0 & 0 \\end{bmatrix}."
},
{
"math_id": 25,
"text": " R_N(x) "
},
{
"math_id": 26,
"text": "\\sigma_k^{\\uparrow} = \\min_{S:\\dim(S)=k} \\max_{x \\in S, \\|x\\| = 1} (M^* Mx, x)^{\\frac{1}{2}}=\\min_{S:\\dim(S)=k} \\max_{x \\in S, \\|x\\| = 1} \\| Mx \\|."
},
{
"math_id": 27,
"text": "\\sigma_k^{\\uparrow} = \\max_{S:\\dim(S)=n-k+1} \\min_{x \\in S, \\|x\\| = 1} \\| Mx \\|."
},
{
"math_id": 28,
"text": "\\sigma_k=\\sigma_k^\\uparrow"
},
{
"math_id": 29,
"text": "\\sigma_1\\leq\\sigma_2\\leq\\cdots "
},
{
"math_id": 30,
"text": "\\alpha_j \\leq \\beta_j \\leq \\alpha_{n-m+j}."
},
{
"math_id": 31,
"text": "\\beta_j = \\max_{x \\in S_j, \\|x\\| = 1} (Bx, x) = \\max_{x \\in S_j, \\|x\\| = 1} (PAP^*x, x) \\geq \\min_{S_j} \\max_{x \\in \nS_j, \\|x\\| = 1} (A(P^*x), P^*x) = \\alpha_j."
},
{
"math_id": 32,
"text": "\\beta_j = \\min_{x \\in S_{m-j+1}, \\|x\\| = 1} (Bx, x) = \\min_{x \\in S_{m-j+1}, \\|x\\| = 1} (PAP^*x, x)= \\min_{x \\in S_{m-j+1}, \\|x\\| = 1} (A(P^*x), P^*x) \\leq \\alpha_{n-m+j},"
},
{
"math_id": 33,
"text": "\\cdots \\le \\lambda_k \\le \\cdots \\le \\lambda_1,"
},
{
"math_id": 34,
"text": "\\lambda_k = \\lambda_k^\\downarrow"
},
{
"math_id": 35,
"text": "\\begin{align}\n\\max_{S_k} \\min_{x \\in S_k, \\|x\\| = 1} (Ax,x) &= \\lambda_k ^{\\downarrow}, \\\\\n\\min_{S_{k-1}} \\max_{x \\in S_{k-1}^{\\perp}, \\|x\\|=1} (Ax, x) &= \\lambda_k^{\\downarrow}.\n\\end{align}"
},
{
"math_id": 36,
"text": "S' =\\operatorname{span}\\{u_k,u_{k+1},\\ldots\\}"
},
{
"math_id": 37,
"text": "\\|x\\|=1"
},
{
"math_id": 38,
"text": "(Ax, x) \\le \\lambda_k."
},
{
"math_id": 39,
"text": "\\inf_{x \\in S_k, \\|x\\| = 1}(Ax,x) \\le \\lambda_k"
},
{
"math_id": 40,
"text": "\\min_{x \\in S_k, \\|x\\| = 1}(Ax,x) \\le \\lambda_k."
},
{
"math_id": 41,
"text": "\\sup_{S_k} \\min_{x \\in S_k, \\|x\\| = 1}(Ax,x) \\le \\lambda_k."
},
{
"math_id": 42,
"text": "S_k=\\operatorname{span}\\{u_1,\\ldots,u_k\\}"
},
{
"math_id": 43,
"text": "\\max_{S_k} \\min_{x \\in S_k, \\|x\\| = 1}(Ax,x) = \\lambda_k."
},
{
"math_id": 44,
"text": "S' \\cap S_{k-1}^{\\perp} \\ne {0}."
},
{
"math_id": 45,
"text": "\\exists x \\in S_{k-1}^{\\perp} \\, \\|x\\| = 1, (Ax, x) \\ge \\lambda_k."
},
{
"math_id": 46,
"text": "\\max_{x \\in S_{k-1}^{\\perp}, \\|x\\| = 1} (Ax, x) \\ge \\lambda_k"
},
{
"math_id": 47,
"text": "\\inf_{S_{k-1}} \\max_{x \\in S_{k-1}^{\\perp}, \\|x\\|=1} (Ax, x) \\ge \\lambda_k."
},
{
"math_id": 48,
"text": "\\min_{S_{k-1}} \\max_{x \\in S_{k-1}^{\\perp}, \\|x\\|=1} (Ax, x) = \\lambda_k."
},
{
"math_id": 49,
"text": "E_1\\le E_2\\le E_3\\le\\cdots"
},
{
"math_id": 50,
"text": "E_n=\\min_{\\psi_1,\\ldots,\\psi_{n}}\\max\\{\\langle\\psi,A\\psi\\rangle:\\psi\\in\\operatorname{span}(\\psi_1,\\ldots,\\psi_{n}), \\, \\| \\psi \\| = 1\\}"
},
{
"math_id": 51,
"text": "E_n:=\\inf\\sigma_{ess}(A)"
},
{
"math_id": 52,
"text": "E_n=\\max_{\\psi_1,\\ldots,\\psi_{n-1}}\\min\\{\\langle\\psi,A\\psi\\rangle:\\psi\\perp\\psi_1,\\ldots,\\psi_{n-1}, \\, \\| \\psi \\| = 1\\}"
},
{
"math_id": 53,
"text": "(A-E)\\ge0"
},
{
"math_id": 54,
"text": "E\\in\\mathbb{R}"
},
{
"math_id": 55,
"text": "\\sigma(A)\\subseteq[E,\\infty)"
},
{
"math_id": 56,
"text": "\\inf\\sigma(A)=\\inf_{\\psi\\in\\mathfrak{D}(A),\\|\\psi\\|=1}\\langle\\psi,A\\psi\\rangle"
},
{
"math_id": 57,
"text": "\\sup\\sigma(A)=\\sup_{\\psi\\in\\mathfrak{D}(A),\\|\\psi\\|=1}\\langle\\psi,A\\psi\\rangle"
}
]
| https://en.wikipedia.org/wiki?curid=694952 |
695026 | Noncrossing partition | In combinatorial mathematics, the topic of noncrossing partitions has assumed some importance because of (among other things) its application to the theory of free probability. The number of noncrossing partitions of a set of "n" elements is the "n"th Catalan number. The number of noncrossing partitions of an "n"-element set with "k" blocks is found in the Narayana number triangle.
Definition.
A partition of a set "S" is a set of non-empty, pairwise disjoint subsets of "S", called "parts" or "blocks", whose union is all of "S". Consider a finite set that is linearly ordered, or (equivalently, for purposes of this definition) arranged in a cyclic order like the vertices of a regular "n"-gon. No generality is lost by taking this set to be "S" = { 1, ..., "n" }. A noncrossing partition of "S" is a partition in which no two blocks "cross" each other, i.e., if "a" and "b" belong to one block and "x" and "y" to another, they are not arranged in the order "a x b y". If one draws an arch based at "a" and "b", and another arch based at "x" and "y", then the two arches cross each other if the order is "a x b y" but not if it is "a x y b" or "a b x y". In the latter two orders the partition { { "a", "b" }, { "x", "y" } } is noncrossing.
Equivalently, if we label the vertices of a regular "n"-gon with the numbers 1 through "n", the convex hulls of different blocks of the partition are disjoint from each other, i.e., they also do not "cross" each other.
The set of all non-crossing partitions of "S" is denoted formula_0. There is an obvious order isomorphism between formula_1 and formula_2 for two finite sets formula_3 with the same size. That is, formula_0 depends essentially only on the size of formula_4 and we denote by formula_5 the non-crossing partitions on "any" set of size "n".
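For small "n" the definition can be checked by brute force. A sketch in Python (helper names are ours) that enumerates all set partitions, filters the noncrossing ones, and confirms the Catalan count:

```python
from itertools import combinations
from math import comb

def partitions(s):
    """Yield all set partitions of the list s (as lists of blocks)."""
    if not s:
        yield []
        return
    first, rest = s[0], s[1:]
    for smaller in partitions(rest):
        for i, block in enumerate(smaller):
            yield smaller[:i] + [[first] + block] + smaller[i + 1:]
        yield [[first]] + smaller

def is_noncrossing(p):
    """No pattern a < x < b < y with a, b in one block, x, y in another."""
    for b1, b2 in combinations(p, 2):
        for a, b in combinations(sorted(b1), 2):
            for x, y in combinations(sorted(b2), 2):
                if a < x < b < y or x < a < y < b:
                    return False
    return True

n = 5
nc = [p for p in partitions(list(range(n))) if is_noncrossing(p)]
assert len(nc) == comb(2 * n, n) // (n + 1)   # Catalan number: 42
```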
Lattice structure.
Like the set of all partitions of the set { 1, ..., "n" }, the set of all noncrossing partitions is a lattice when partially ordered by saying that a finer partition is "less than" a coarser partition. However, although it is a subset of the lattice of all set partitions, it is "not" a sublattice, because the subset is not closed under the join operation in the larger lattice. In other words, the finest partition that is coarser than both of two noncrossing partitions is not always the finest "noncrossing" partition that is coarser than both of them.
Unlike the lattice of all partitions of the set, the lattice of all noncrossing partitions is self-dual, i.e., it is order-isomorphic to the lattice that results from inverting the partial order ("turning it upside-down"). This can be seen by observing that each noncrossing partition has a non-crossing complement. Indeed, every interval within this lattice is self-dual.
Role in free probability theory.
The lattice of noncrossing partitions plays the same role in defining free cumulants in free probability theory that is played by the lattice of "all" partitions in defining joint cumulants in classical probability theory. To be more precise, let formula_6 be a non-commutative probability space (see free probability for terminology), and let formula_7 be a non-commutative random variable with free cumulants formula_8. Then
formula_9
where formula_10 denotes the number of blocks of length formula_11 in the non-crossing partition formula_12.
That is, the moments of a non-commutative random variable can be expressed as a sum over non-crossing partitions of products of free cumulants. This is the free analogue of the moment-cumulant formula in classical probability.
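A direct transcription of this formula, reusing the partition helpers from the sketch above: taking formula_8 with the second free cumulant equal to 1 and all others zero (a standard semicircle element), the even moments come out as Catalan numbers.

```python
from math import prod

def moment(n, k):
    """n-th moment from the free cumulants k[1], k[2], ... (k[0] unused)."""
    return sum(prod(k[len(block)] for block in p)
               for p in partitions(list(range(n)))
               if is_noncrossing(p))

k = [0, 0, 1] + [0] * 6          # semicircle: k_2 = 1, all others 0
print([moment(n, k) for n in range(1, 9)])
# -> [0, 1, 0, 2, 0, 5, 0, 14]: the even moments are Catalan numbers
```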
See also Wigner semicircle distribution. | [
{
"math_id": 0,
"text": "\\text{NC}(S)"
},
{
"math_id": 1,
"text": "\\text{NC}(S_1)"
},
{
"math_id": 2,
"text": "\\text{NC}(S_2)"
},
{
"math_id": 3,
"text": " S_1,S_2"
},
{
"math_id": 4,
"text": " S"
},
{
"math_id": 5,
"text": "\\text{NC}(n)"
},
{
"math_id": 6,
"text": "(\\mathcal{A},\\phi)"
},
{
"math_id": 7,
"text": "a\\in\\mathcal{A}"
},
{
"math_id": 8,
"text": "(k_n)_{n\\in\\mathbb{N}}"
},
{
"math_id": 9,
"text": "\\phi(a^n) = \\sum_{\\pi\\in\\text{NC}(n)} \\prod_{j} k_j^{N_j(\\pi)}"
},
{
"math_id": 10,
"text": "N_j(\\pi)"
},
{
"math_id": 11,
"text": " j"
},
{
"math_id": 12,
"text": "\\pi"
}
]
| https://en.wikipedia.org/wiki?curid=695026 |
69504592 | BD+60 1417b | Exoplanet
BD+60 1417b is a confirmed exoplanet discovered in 2021 using the imaging method. BD+60 1417b is the only known exoplanet in the system BD+60 1417, around 45 parsecs from Earth. BD+60 1417 is a young K0 star, while BD+60 1417b has a late-L spectral type. The planet might be the first directly imaged exoplanet found by a citizen scientist. Exoplanet discoveries involving amateurs usually concern transiting exoplanets; discoveries with other methods are rare. Another example of a non-transiting exoplanet discovery by an amateur is the microlensing exoplanet Kojima-1Lb.
Discovery.
Previous direct imaging planet-searching surveys with Gemini, Keck and Palomar failed to detect an exoplanet around BD+60 1417. A co-moving source around the star was first spotted with the WiseView Tool by the Backyard Worlds citizen scientist Jörg Schümann. WiseView uses data from the Wide-Field Infrared Survey Explorer (WISE). Additional observations with optical spectroscopy of the star at the Lick Observatory and infrared spectroscopy at NASA IRTF confirmed the presence of a young star with a planetary-mass companion around BD+60 1417.
BD+60 1417b is the second directly imaged exoplanet the WISE-telescope was able to discover, after COCONUTS-2b.
Host Star.
The host star BD+60 1417 is a young K0 star with a mass of 1 M☉ and a radius of 0.797 ± 0.051 R☉. It has a brightness of 9.37 magnitude. The star shows typical signs of youth, such as x-ray detection with ROSAT and lithium absorption lines. Its age is estimated at 50–150 million years. The star rotates with a period of 7.50 ± 0.86 days, which is seen due to evolving starspots in the TESS light curve. The host star was observed with the Large Binocular Telescope PEPSI instrument in 2023. This constrained several chemical abundances of the star, such as [Fe/H] = 0.27 ± 0.03 dex, C/O = 0.23 ± 0.12 and Mg/Si = 1.41 ± 0.19.
BD+60 1417 is the only main sequence star with about one solar mass that is orbited by a planetary-mass object at a separation larger than 1000 astronomical units. All other systems with a separation >1000 au have a primary with <0.5 solar masses or have the stellar remnant WD0806 as their primary.
Physical properties.
The infrared spectrum of the planet shows a red L8γ-type object with water vapor, carbon monoxide, iron(I) hydride and atomic potassium (K I) in its atmosphere. In the near-infrared it is one of the reddest substellar objects discovered to date, with formula_0 mag. The spectrum of the exoplanet closely resembles objects with a suspected low surface gravity. A low surface gravity is a sign of youth for substellar objects. The researchers also found similarities with an archived SINFONI spectrum of the exoplanet 2M1207b and spectra of the HR 8799 exoplanets. The object was studied again in 2024, and that team concluded that the data strongly favour a cloudy model over a cloudless model. Clouds of forsterite and enstatite should form on BD+60 1417b if it has chemical abundances similar to those of the host star; quartz clouds should not form on this object. The researchers find that BD+60 1417b is spectroscopically very similar to WISEP J004701.06+680352.1, to the point that they call them spectroscopic twins.
Orbit.
The planet has an orbital period of about 95,000 years. BD+60 1417b has a large separation of 1662 astronomical units from its host star. If this exoplanet formed in this wide orbit, it is likely to have formed similarly to isolated brown dwarfs. It could also have formed in a closer orbit around the star via core accretion or disk instability and later been dynamically scattered into a wider orbit, for example by a planet–planet fly-by.
Status as an exoplanet.
According to the NASA Exoplanet Archive, BD+60 1417b is an exoplanet and falls within their definition: an object that is not free-floating, has a minimum mass lower than 30 Jupiter masses, and has sufficient follow-up observations. The official working definition by the International Astronomical Union allows only exoplanets with a maximum mass of 13 Jupiter masses, and according to current knowledge BD+60 1417b could be more massive than this limit.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "J-Ks = 2.72"
}
]
| https://en.wikipedia.org/wiki?curid=69504592 |
695046 | Quaternionic representation | Representation of a group or algebra in terms of an algebra with quaternionic structure
In mathematical field of representation theory, a quaternionic representation is a representation on a complex vector space "V" with an invariant quaternionic structure, i.e., an antilinear equivariant map
formula_0
which satisfies
formula_1
Together with the imaginary unit "i" and the antilinear map "k" := "ij", "j" equips "V" with the structure of a quaternionic vector space (i.e., "V" becomes a module over the division algebra of quaternions). From this point of view, a quaternionic representation of a group "G" is a group homomorphism "φ": "G" → GL("V", H), the group of invertible quaternion-linear transformations of "V". In particular, a quaternionic matrix representation of "G" assigns a square matrix of quaternions "ρ"("g") to each element "g" of "G" such that "ρ"(e) is the identity matrix and
formula_2
Quaternionic representations of associative and Lie algebras can be defined in a similar way.
Properties and related concepts.
If "V" is a unitary representation and the quaternionic structure "j" is a unitary operator, then "V" admits an invariant complex symplectic form "ω", and hence is a symplectic representation. This always holds if "V" is a representation of a compact group (e.g. a finite group) and in this case quaternionic representations are also known as symplectic representations. Such representations, amongst irreducible representations, can be picked out by the Frobenius-Schur indicator.
Quaternionic representations are similar to real representations in that they are isomorphic to their complex conjugate representation. Here a real representation is taken to be a complex representation with an invariant real structure, i.e., an antilinear equivariant map
formula_0
which satisfies
formula_3
A representation which is isomorphic to its complex conjugate, but which is not a real representation, is sometimes called a pseudoreal representation.
Real and pseudoreal representations of a group "G" can be understood by viewing them as representations of the real group algebra R["G"]. Such a representation will be a direct sum of central simple R-algebras, which, by the Artin-Wedderburn theorem, must be matrix algebras over the real numbers or the quaternions. Thus a real or pseudoreal representation is a direct sum of irreducible real representations and irreducible quaternionic representations. It is real if no quaternionic representations occur in the decomposition.
Examples.
A common example involves the quaternionic representation of rotations in three dimensions. Each (proper) rotation is represented by a quaternion with unit norm. There is an obvious one-dimensional quaternionic vector space, namely the space H of quaternions themselves under left multiplication. By restricting this to the unit quaternions, we obtain a quaternionic representation of the spinor group Spin(3).
This representation "ρ": Spin(3) → GL(1,H) also happens to be a unitary quaternionic representation because
formula_4
for all "g" in Spin(3).
Another unitary example is the spin representation of Spin(5). An example of a non-unitary quaternionic representation would be the two dimensional irreducible representation of Spin(5,1).
More generally, the spin representations of Spin("d") are quaternionic when "d" equals 3 + 8"k", 4 + 8"k", and 5 + 8"k" dimensions, where "k" is an integer. In physics, one often encounters the spinors of Spin("d", 1). These representations have the same type of real or quaternionic structure as the spinors of Spin("d" − 1).
Among the compact real forms of the simple Lie groups, irreducible quaternionic representations only exist for the Lie groups of type "A"4"k"+1, "B"4"k"+1, "B"4"k"+2, "C""k", "D"4"k"+2, and "E"7. | [
{
"math_id": 0,
"text": "j\\colon V\\to V"
},
{
"math_id": 1,
"text": "j^2=-1."
},
{
"math_id": 2,
"text": "\\rho(gh)=\\rho(g)\\rho(h)\\text{ for all }g, h \\in G."
},
{
"math_id": 3,
"text": "j^2=+1."
},
{
"math_id": 4,
"text": "\\rho(g)^\\dagger \\rho(g)=\\mathbf{1}"
}
]
| https://en.wikipedia.org/wiki?curid=695046 |
6950659 | Arf invariant | In mathematics, the Arf invariant of a nonsingular quadratic form over a field of characteristic 2 was defined by Turkish mathematician Cahit Arf (1941) when he started the systematic study of quadratic forms over arbitrary fields of characteristic 2. The Arf invariant is the substitute, in characteristic 2, for the discriminant for quadratic forms in characteristic not 2. Arf used his invariant, among others, in his endeavor to classify quadratic forms in characteristic 2.
In the special case of the 2-element field F2 the Arf invariant can be described as the element of F2 that occurs most often among the values of the form. Two nonsingular quadratic forms over F2 are isomorphic if and only if they have the same dimension and the same Arf invariant. This fact was essentially known to Leonard Dickson (1901), even for any finite field of characteristic 2, and Arf proved it for an arbitrary perfect field.
The Arf invariant is particularly applied in geometric topology, where it is primarily used to define an invariant of (4"k" + 2)-dimensional manifolds (singly even-dimensional manifolds: surfaces (2-manifolds), 6-manifolds, 10-manifolds, etc.) with certain additional structure called a framing, and thus the Arf–Kervaire invariant and the Arf invariant of a knot. The Arf invariant is analogous to the signature of a manifold, which is defined for 4"k"-dimensional manifolds (doubly even-dimensional); this 4-fold periodicity corresponds to the 4-fold periodicity of L-theory. The Arf invariant can also be defined more generally for certain 2"k"-dimensional manifolds.
Definitions.
The Arf invariant is defined for a quadratic form "q" over a field "K" of characteristic 2 such that "q" is nonsingular, in the sense that the associated bilinear form formula_0 is nondegenerate. The form formula_1 is alternating since "K" has characteristic 2; it follows that a nonsingular quadratic form in characteristic 2 must have even dimension. Any binary (2-dimensional) nonsingular quadratic form over "K" is equivalent to a form formula_2 with formula_3 in "K". The Arf invariant is defined to be the product formula_4. If the form formula_5 is equivalent to formula_6, then the products formula_4 and formula_7 differ by an element of the form formula_8 with formula_9 in "K". These elements form an additive subgroup "U" of "K". Hence the coset of formula_4 modulo "U" is an invariant of formula_10, which means that it is not changed when formula_10 is replaced by an equivalent form.
Every nonsingular quadratic form formula_10 over "K" is equivalent to a direct sum formula_11 of nonsingular binary forms. This was shown by Arf, but it had been earlier observed by Dickson in the case of finite fields of characteristic 2. The Arf invariant Arf(formula_10) is defined to be the sum of the Arf invariants of the formula_12. By definition, this is a coset of "K" modulo "U". Arf showed that indeed formula_13 does not change if formula_10 is replaced by an equivalent quadratic form, which is to say that it is an invariant of formula_10.
The Arf invariant is additive; in other words, the Arf invariant of an orthogonal sum of two quadratic forms is the sum of their Arf invariants.
For a field "K" of characteristic 2, Artin–Schreier theory identifies the quotient group of "K" by the subgroup "U" above with the Galois cohomology group "H"1("K", F2). In other words, the nonzero elements of "K"/"U" are in one-to-one correspondence with the separable quadratic extension fields of "K". So the Arf invariant of a nonsingular quadratic form over "K" is either zero or it describes a separable quadratic extension field of "K". This is analogous to the discriminant of a nonsingular quadratic form over a field "F" of characteristic not 2. In that case, the discriminant takes values in "F"*/("F"*)2, which can be identified with "H"1("F", F2) by Kummer theory.
Arf's main results.
If the field "K" is perfect, then every nonsingular quadratic form over "K" is uniquely determined (up to equivalence) by its dimension and its Arf invariant. In particular, this holds over the field F2. In this case, the subgroup "U" above is zero, and hence the Arf invariant is an element of the base field F2; it is either 0 or 1.
If the field "K" of characteristic 2 is not perfect (that is, "K" is different from its subfield "K"2 of squares), then the Clifford algebra is another important invariant of a quadratic form. A corrected version of Arf's original statement is that if the degree ["K": "K"2] is at most 2, then every quadratic form over "K" is completely characterized by its dimension, its Arf invariant and its Clifford algebra. Examples of such fields are function fields (or power series fields) of one variable over perfect base fields.
Quadratic forms over F2.
Over F2, the Arf invariant is 0 if the quadratic form is equivalent to a direct sum of copies of the binary form formula_14, and it is 1 if the form is a direct sum of formula_15 with a number of copies of formula_14.
William Browder has called the Arf invariant the "democratic invariant" because it is the value which is assumed most often by the quadratic form. Another characterization: "q" has Arf invariant 0 if and only if the underlying 2"k"-dimensional vector space over the field F2 has a "k"-dimensional subspace on which "q" is identically 0 – that is, a totally isotropic subspace of half the dimension. In other words, a nonsingular quadratic form of dimension 2"k" has Arf invariant 0 if and only if its isotropy index is "k" (this is the maximum dimension of a totally isotropic subspace of a nonsingular form).
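As a concrete illustration (a minimal sketch of our own, not part of the standard literature), the democratic description can be checked by brute force over F2; note that x² reduces to x on F2 points:

# Brute-force check of the "democratic invariant" over F2 (illustrative sketch).
from itertools import product

def arf_by_majority(q, dim):
    values = [q(v) % 2 for v in product([0, 1], repeat=dim)]
    return max((0, 1), key=values.count)  # the value occurring most often

print(arf_by_majority(lambda v: v[0] * v[1], 2))              # 0, for the form xy
print(arf_by_majority(lambda v: v[0] + v[0]*v[1] + v[1], 2))  # 1, for x^2 + xy + y^2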
The Arf invariant in topology.
Let "M" be a compact, connected 2"k"-dimensional manifold with a boundary formula_16
such that the induced morphisms in formula_17-coefficient homology
formula_18
are both zero (e.g. if formula_19 is closed). The intersection form
formula_20
is non-singular. (Topologists usually write F2 as formula_17.) A quadratic refinement for formula_21 is a function formula_22 which satisfies
formula_23
Let formula_24 be any 2-dimensional subspace of formula_25, such that formula_26. Then there are two possibilities. Either all of formula_27 are 1, or else just one of them is 1, and the other two are 0. Call the first case formula_28, and the second case formula_29. Since every form is equivalent to a symplectic form, we can always find subspaces formula_24 with "x" and "y" being formula_30-dual. We can therefore split formula_25 into a direct sum of subspaces isomorphic to either formula_29 or formula_28. Furthermore, by a clever change of basis, formula_31 We therefore define the Arf invariant
formula_32
formula_47
Note that formula_48, so we had to stabilise, taking formula_49 to be at least 4, in order to get an element of formula_17. The case formula_50 is also admissible as long as we take the residue modulo 2 of the framing.
formula_75
refining the homological intersection form formula_30. The Arf invariant of this form is the Kervaire invariant of ("f","b"). In the special case formula_76 this is the Kervaire invariant of "M". The Kervaire invariant features in the classification of exotic spheres by Michel Kervaire and John Milnor, and more generally in the classification of manifolds by surgery theory. William Browder defined formula_66 using functional Steenrod squares, and C. T. C. Wall defined formula_66 using framed immersions. The quadratic enhancement formula_61 crucially provides more information than formula_77: it is possible to kill "x" by surgery if and only if formula_78. The corresponding Kervaire invariant detects the surgery obstruction of formula_79 in the L-group formula_80.
| [
{
"math_id": 0,
"text": "b(u,v)=q(u+v)-q(u)-q(v)"
},
{
"math_id": 1,
"text": "b"
},
{
"math_id": 2,
"text": "q(x,y)= ax^2 + xy +by^2"
},
{
"math_id": 3,
"text": "a, b"
},
{
"math_id": 4,
"text": "ab"
},
{
"math_id": 5,
"text": "q'(x,y)=a'x^2 + xy+b'y^2"
},
{
"math_id": 6,
"text": "q(x,y)"
},
{
"math_id": 7,
"text": "a'b'"
},
{
"math_id": 8,
"text": "u^2+u "
},
{
"math_id": 9,
"text": "u"
},
{
"math_id": 10,
"text": "q"
},
{
"math_id": 11,
"text": "q = q_1 + \\cdots + q_r"
},
{
"math_id": 12,
"text": "q_i"
},
{
"math_id": 13,
"text": "\\operatorname{Arf}(q)"
},
{
"math_id": 14,
"text": "xy"
},
{
"math_id": 15,
"text": "x^2+xy+y^2"
},
{
"math_id": 16,
"text": "\\partial M"
},
{
"math_id": 17,
"text": "\\Z_2"
},
{
"math_id": 18,
"text": "H_k(M,\\partial M;\\Z_2) \\to H_{k-1}(\\partial M;\\Z_2), \\quad H_k(\\partial M;\\Z_2) \\to H_k(M;\\Z_2)"
},
{
"math_id": 19,
"text": "M"
},
{
"math_id": 20,
"text": "\\lambda : H_k(M;\\Z_2)\\times H_k(M;\\Z_2)\\to \\Z_2"
},
{
"math_id": 21,
"text": " \\lambda"
},
{
"math_id": 22,
"text": "\\mu : H_k(M;\\Z_2) \\to \\Z_2"
},
{
"math_id": 23,
"text": "\\mu(x+y) + \\mu(x) + \\mu(y) \\equiv \\lambda(x,y) \\pmod 2 \\; \\forall \\,x,y \\in H_k(M;\\Z_2)"
},
{
"math_id": 24,
"text": "\\{x,y\\}"
},
{
"math_id": 25,
"text": "H_k(M;\\Z_2)"
},
{
"math_id": 26,
"text": "\\lambda(x,y) = 1"
},
{
"math_id": 27,
"text": "\\mu(x+y), \\mu(x), \\mu(y)"
},
{
"math_id": 28,
"text": "H^{1,1}"
},
{
"math_id": 29,
"text": "H^{0,0}"
},
{
"math_id": 30,
"text": "\\lambda"
},
{
"math_id": 31,
"text": "H^{0,0} \\oplus H^{0,0} \\cong H^{1,1} \\oplus H^{1,1}."
},
{
"math_id": 32,
"text": "\\operatorname{Arf}(H_k(M;\\Z_2);\\mu) = (\\text{number of copies of } H^{1,1} \\text{ in a decomposition mod 2}) \\in \\Z_2. "
},
{
"math_id": 33,
"text": "g"
},
{
"math_id": 34,
"text": "S^m"
},
{
"math_id": 35,
"text": "m \\geq 4"
},
{
"math_id": 36,
"text": "m =3"
},
{
"math_id": 37,
"text": "x_1, x_2, \\ldots, x_{2g-1},x_{2g}"
},
{
"math_id": 38,
"text": "H_1(M)=\\Z^{2g}"
},
{
"math_id": 39,
"text": "x_i:S^1 \\subset M"
},
{
"math_id": 40,
"text": "S^1 \\subset M \\subset S^m"
},
{
"math_id": 41,
"text": "S^1 \\subset S^m"
},
{
"math_id": 42,
"text": "S^1 \\to SO(m-1)"
},
{
"math_id": 43,
"text": "\\pi_1(SO(m-1)) \\cong \\Z_2"
},
{
"math_id": 44,
"text": "S^1"
},
{
"math_id": 45,
"text": "\\Omega^\\text{framed}_1 \\cong \\pi_m(S^{m-1}) \\, (m \\geq 4) \\cong \\Z_2"
},
{
"math_id": 46,
"text": "\\mu(x)\\in \\Z_2"
},
{
"math_id": 47,
"text": " \\Phi(M) = \\operatorname{Arf}(H_1(M,\\partial M;\\Z_2);\\mu) \\in \\Z_2 "
},
{
"math_id": 48,
"text": "\\pi_1(SO(2)) \\cong \\Z,"
},
{
"math_id": 49,
"text": "m"
},
{
"math_id": 50,
"text": "m=3"
},
{
"math_id": 51,
"text": "\\Phi(M)"
},
{
"math_id": 52,
"text": "T^2"
},
{
"math_id": 53,
"text": "H_1(T^2;\\Z_2)"
},
{
"math_id": 54,
"text": "\\pi_1(SO(3))"
},
{
"math_id": 55,
"text": "\\Omega^\\text{framed}_2 \\cong \\pi_m(S^{m-2}) \\, (m \\geq 4) \\cong \\Z_2"
},
{
"math_id": 56,
"text": "(M^2,\\partial M) \\subset S^3"
},
{
"math_id": 57,
"text": "\\partial M = K : S^1 \\hookrightarrow S^3"
},
{
"math_id": 58,
"text": "D^2"
},
{
"math_id": 59,
"text": "x \\in H_1(M;\\Z_2)"
},
{
"math_id": 60,
"text": "x"
},
{
"math_id": 61,
"text": "\\mu(x)"
},
{
"math_id": 62,
"text": "S^3"
},
{
"math_id": 63,
"text": "D^4"
},
{
"math_id": 64,
"text": "x \\in H_1(M,\\partial M)"
},
{
"math_id": 65,
"text": "M \\hookrightarrow D^4"
},
{
"math_id": 66,
"text": "\\mu"
},
{
"math_id": 67,
"text": "H_{2k+1}(M;\\Z_2)"
},
{
"math_id": 68,
"text": "k \\neq 0,1,3"
},
{
"math_id": 69,
"text": "x \\in H_{2k+1}(M;\\Z_2)"
},
{
"math_id": 70,
"text": "x\\colon S^{2k+1}\\subset M"
},
{
"math_id": 71,
"text": "\\pi_{4k+2}^S \\to \\Z_2"
},
{
"math_id": 72,
"text": "4k+2"
},
{
"math_id": 73,
"text": "(f,b):M \\to X"
},
{
"math_id": 74,
"text": "(K_{2k+1}(M;\\Z_2),\\mu)"
},
{
"math_id": 75,
"text": "K_{2k+1}(M;\\Z_2)=ker(f_*:H_{2k+1}(M;\\Z_2)\\to H_{2k+1}(X;\\Z_2))"
},
{
"math_id": 76,
"text": "X=S^{4k+2}"
},
{
"math_id": 77,
"text": "\\lambda(x,x)"
},
{
"math_id": 78,
"text": "\\mu(x)=0"
},
{
"math_id": 79,
"text": "(f,b)"
},
{
"math_id": 80,
"text": "L_{4k+2}(\\Z)=\\Z_2"
},
{
"math_id": 81,
"text": "(4k + 1)"
}
]
| https://en.wikipedia.org/wiki?curid=6950659 |
69507899 | 1 Samuel 11 | First Book of Samuel chapter
1 Samuel 11 is the eleventh chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter describes Saul destroying the army of Nahash, king of Ammon, and liberating Jabesh-Gilead, thereby convincing the people of his ability to lead and prompting them to appoint him king. It falls within a section comprising 1 Samuel 7–15 which records the rise of the monarchy in Israel and the account of the first years of King Saul.
Text.
This chapter was originally written in the Hebrew language. It is divided into 15 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 1–2, 7–12.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
The threat of the Ammonites (11:1–3).
For this narrative, 4QSama (among the Dead Sea Scrolls, from the first century BCE) and the writings of Josephus from the first century CE provide background information: Nahash, king of the Ammonites, had subdued Israel's Transjordanian tribes (the Gadites and Reubenites) and gouged out the right eye of his captives (cf. 11:2 for explanation), but 7,000 Israelites escaped and hid in Jabesh-Gilead, so Nahash now came to threaten the city. Significantly, Jabesh-Gilead was the only town that had refused the call to arms in earlier times (Judges 21), so its chance of receiving help from the other tribes of Israel was slim, which is probably why Nahash allowed the city seven days to send messengers and ask. Because of that earlier refusal, the people of Jabesh-Gilead had been slaughtered by the other tribes, except for 400 virgin girls, who were spared and given as wives to the 600 surviving men of the tribe of Benjamin after that tribe's own near-annihilation in a separate slaughter by the tribes of Israel; thus, were it not for the inhabitants of Jabesh-Gilead, the tribe of Benjamin would have been wiped out.
"Then Nahash the Ammonite came up and encamped against Jabesh Gilead; and all the men of Jabesh said to Nahash, "Make a covenant with us, and we will serve you."""
Verse 1.
Prior to the first word "Then..." 4QSama and Greek Septuagint texts have a phrase: "about a month later".
Prior to the whole verse, 4QSama and Josephus ("Antiquities" 6.5.1. [68–71]) attest to an addition which explains Nahash's practice of mutilating his enemies, and by so doing provides a smoother transition to the following paragraph than is found in the Masoretic Text or Greek Septuagint manuscripts. NRSV renders it as verse 10:27b as follows: "Now Nahash, king of the Ammonites, had been grievously oppressing the Gadites and the Reubenites. He would gouge out the right eye of each of them and would not grant Israel a deliverer. No one was left of the Israelites across the Jordan whose right eye Nahash, king of the Ammonites, had not gouged out. But there were 7,000 men who had escaped from the Ammonites and had entered Jabesh Gilead. About a month later, Nahash the Ammonite went up and besieged Jabesh Gilead." The variations may be explained as scribal errors due to homeoteleuton, in which the scribe's eye jumps from one word to another word with a similar ending later in the text. Comparing with the reading in 4QSama, the NET Bible suggests that the scribe of the MT may have skipped from a phrase at the end of 1 Samuel 10:27 to a similar phrase later in the text, picking up again with "it happened about a month later...". 4QSama also contains a case of homeoteleuton in this passage: the scribe first skipped from one occurrence of "Gilead" to another, then inserted the missing 10 words between the lines of the 4QSama text. The fact that the scribe made this type of mistake and was able to make corrections indicates that the person was copying from a source that had these verses in it. Moreover, the 4QSama text first introduces Nahash with his full title, as the king of the Ammonites, which is considered the usual style.
And Nahash the Ammonite answered them, "On this condition I will make a covenant with you, that I may put out all your right eyes, and bring reproach on all Israel."
Saul defeated the Ammonites and rescued Jabesh Gilead (11:4–15).
When the messengers from Jabesh-Gilead reached Saul's hometown, Gibeah, Saul was working as a farmer and only heard about the situation second-hand, after witnessing the townspeople publicly weeping over the news. Unlike the others, Saul became angry on hearing the message, and it was God's spirit that brought on his anger (11:6; cf. Judges 3:10; 6:34; 11:29; 13:25; especially Samson in 14:6, 19; 15:14). Saul called the people to arms by dismembering a pair of his oxen ("a yoke of oxen") and sending the pieces to all parts of the territory of Israel (cf. Judges 19:29–30), with a message that those who refused to respond would suffer the fate of the oxen. Saul's strategy and eventual victory were similar to those of the former judges: dividing the forces (cf. Judges 7) to surround the enemy camp and attacking in the early morning, although the victory was attributed to YHWH (verse 12). The victory proved Saul's worthiness of the kingship, contrary to the words of his opponents (10:26), but those critics were spared in accordance with Saul's own wish, and Saul was acclaimed king once more at Gilgal.
"And all the people went to Gilgal; and there they made Saul king before the LORD in Gilgal; and there they sacrificed sacrifices of peace offerings before the LORD; and there Saul and all the men of Israel rejoiced greatly."
| [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=69507899 |
69509 | Vigenère cipher | Simple type of polyalphabetic encryption system
The Vigenère cipher () is a method of encrypting alphabetic text where each letter of the plaintext is encoded with a different Caesar cipher, whose increment is determined by the corresponding letter of another text, the key.
For example, if the plaintext is codice_0 and the key is codice_1, then
and so on, yielding the message codice_11. If the recipient of the message knows the key, they can recover the plaintext by reversing this process.
The Vigenère cipher is therefore a special case of a polyalphabetic substitution.
First described by Giovan Battista Bellaso in 1553, the cipher is easy to understand and implement, but it resisted all attempts to break it until 1863, three centuries later. This earned it the description le chiffrage indéchiffrable (French for 'the indecipherable cipher'). Many people have tried to implement encryption schemes that are essentially Vigenère ciphers. In 1863, Friedrich Kasiski was the first to publish a general method of deciphering Vigenère ciphers.
In the 19th century, the scheme was misattributed to Blaise de Vigenère (1523–1596) and so acquired its present name.
History.
The very first well-documented description of a polyalphabetic cipher was by Leon Battista Alberti around 1467 and used a metal cipher disk to switch between cipher alphabets. Alberti's system only switched alphabets after several words, and switches were indicated by writing the letter of the corresponding alphabet in the ciphertext. Later, Johannes Trithemius, in his work "Polygraphiae" (which was completed in manuscript form in 1508 but first published in 1518), invented the tabula recta, a critical component of the Vigenère cipher. The Trithemius cipher, however, provided a progressive, rather rigid and predictable system for switching between cipher alphabets.
In 1586 Blaise de Vigenère published a type of polyalphabetic cipher called an autokey cipher – because its key is based on the original plaintext – before the court of Henry III of France. The cipher now known as the Vigenère cipher, however, is based on that originally described by Giovan Battista Bellaso in his 1553 book "La cifra del Sig. Giovan Battista Bellaso". He built upon the tabula recta of Trithemius but added a repeating "countersign" (a key) to switch cipher alphabets every letter.
Whereas Alberti and Trithemius used a fixed pattern of substitutions, Bellaso's scheme meant the pattern of substitutions could be easily changed, simply by selecting a new key. Keys were typically single words or short phrases, known to both parties in advance, or transmitted "out of band" along with the message. Bellaso's method thus required strong security for only the key. As it is relatively easy to secure a short key phrase, such as by a previous private conversation, Bellaso's system was considerably more secure.
Note, however, that as opposed to the modern Vigenère cipher, Bellaso's cipher did not have 26 different "shifts" (different Caesar ciphers) for every letter, instead having 13 shifts for pairs of letters. In the 19th century, the invention of this cipher, essentially designed by Bellaso, was misattributed to Vigenère. David Kahn, in his book "The Codebreakers", lamented this misattribution, saying that history had "ignored this important contribution and instead named a regressive and elementary cipher for him [Vigenère] though he had nothing to do with it".
The Vigenère cipher gained a reputation for being exceptionally strong. Noted author and mathematician Charles Lutwidge Dodgson (Lewis Carroll) called the Vigenère cipher unbreakable in his 1868 piece "The Alphabet Cipher" in a children's magazine. In 1917, "Scientific American" described the Vigenère cipher as "impossible of translation". That reputation was not deserved. Charles Babbage is known to have broken a variant of the cipher as early as 1854 but did not publish his work. Kasiski entirely broke the cipher and published the technique in the 19th century, but even in the 16th century, some skilled cryptanalysts could occasionally break the cipher.
The Vigenère cipher is simple enough to be a field cipher if it is used in conjunction with cipher disks. The Confederate States of America, for example, used a brass cipher disk to implement the Vigenère cipher during the American Civil War. The Confederacy's messages were far from secret, and the Union regularly cracked its messages. Throughout the war, the Confederate leadership primarily relied upon three key phrases: "Manchester Bluff", "Complete Victory" and, as the war came to a close, "Come Retribution".
A Vigenère cipher with a completely random (and non-reusable) key which is as long as the message becomes a one-time pad, a theoretically unbreakable cipher. Gilbert Vernam tried to repair the broken cipher (creating the Vernam–Vigenère cipher in 1918), but the technology he used was so cumbersome as to be impracticable.
Description.
In a Caesar cipher, each letter of the alphabet is shifted along some number of places. For example, in a Caesar cipher of shift 3, codice_2 would become codice_13, codice_14 would become codice_15, codice_16 would become codice_17 and so on. The Vigenère cipher has several Caesar ciphers in sequence with different shift values.
To encrypt, a table of alphabets can be used, termed a "tabula recta", "Vigenère square" or "Vigenère table". It has the alphabet written out 26 times in different rows, each alphabet shifted cyclically to the left compared to the previous alphabet, corresponding to the 26 possible Caesar ciphers. At different points in the encryption process, the cipher uses a different alphabet from one of the rows. The alphabet used at each point depends on a repeating keyword.
For example, suppose that the plaintext to be encrypted is
codice_18.
The person sending the message chooses a keyword and repeats it until it matches the length of the plaintext, for example, the keyword "LEMON":
codice_19
Each row starts with a key letter. The rest of the row holds the letters A to Z (in shifted order). Although there are 26 key rows shown, a code will use only as many keys (different alphabets) as there are unique letters in the key string, here just 5 keys: {L, E, M, O, N}. For successive letters of the message, successive letters of the key string are taken and each message letter enciphered by using its corresponding key row. When a new character of the message is selected, the next letter of the key is chosen, and the row corresponding to that key letter is followed along to find the column heading that matches the message character. The letter at the intersection of [key-row, msg-col] is the enciphered letter.
For example, the first letter of the plaintext, codice_2, is paired with codice_21, the first letter of the key. Therefore, row codice_21 and column codice_23 of the Vigenère square are used, namely codice_21. Similarly, for the second letter of the plaintext, the second letter of the key is used. The letter at row codice_15 and column codice_26 is codice_27. The rest of the plaintext is enciphered in a similar fashion:
Decryption is performed by going to the row in the table corresponding to the key, finding the position of the ciphertext letter in that row and then using the column's label as the plaintext. For example, in row codice_21 (from codice_29), the ciphertext codice_21 appears in column codice_23, so codice_2 is the first plaintext letter. Next, in row codice_15 (from codice_34), the ciphertext codice_27 is located in column codice_26. Thus codice_5 is the second plaintext letter.
Algebraic description.
Vigenère can also be described algebraically. If the letters codice_23–codice_39 are taken to be the numbers 0–25 (formula_0, formula_1, etc.), and addition is performed modulo 26, Vigenère encryption formula_2 using the key formula_3 can be written as
formula_4
and decryption formula_5 using the key formula_3 as
formula_6
in which formula_7 is the message, formula_8 is the ciphertext, and formula_9 is the key obtained by repeating the keyword formula_10 times, where formula_11 is the keyword length.
Thus, by using the previous example, to encrypt formula_0 with key letter formula_12 the calculation would result in formula_13.
formula_14
Therefore, to decrypt formula_15 with key letter formula_16, the calculation would result in formula_17.
formula_18
In general, if formula_19 is the alphabet of length formula_20, and formula_11 is the length of key, Vigenère encryption and decryption can be written:
formula_21
formula_22
formula_23 denotes the offset of the "i"-th character of the plaintext formula_24 in the alphabet formula_19. For example, by taking the 26 English characters as the alphabet formula_25, the offset of A is 0, the offset of B is 1 etc. formula_26 and formula_27 are similar.
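The algebraic description translates directly into code. The following Python sketch is our own illustration, assuming the 26-letter alphabet; the plaintext "attackatdawn" is a stand-in for the running example, paired with the keyword "LEMON" from above:

# Minimal sketch of Vigenère encryption/decryption over A-Z,
# following C_i = (M_i + K_(i mod m)) mod 26 and its inverse.
def vigenere(text, key, decrypt=False):
    sign = -1 if decrypt else 1
    shifts = [ord(k) - ord('A') for k in key.upper()]
    return ''.join(chr((ord(c) - ord('A') + sign * shifts[i % len(shifts)]) % 26 + ord('A'))
                   for i, c in enumerate(text.upper()))

print(vigenere('attackatdawn', 'LEMON'))                 # LXFOPVEFRNHR
print(vigenere('LXFOPVEFRNHR', 'LEMON', decrypt=True))   # ATTACKATDAWN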
Cryptanalysis.
The idea behind the Vigenère cipher, like all other polyalphabetic ciphers, is to disguise the plaintext letter frequency to interfere with a straightforward application of frequency analysis. For instance, if codice_40 is the most frequent letter in a ciphertext whose plaintext is in English, one might suspect that codice_40 corresponds to codice_42 since codice_42 is the most frequently used letter in English. However, by using the Vigenère cipher, codice_42 can be enciphered as different ciphertext letters at different points in the message, which defeats simple frequency analysis.
The primary weakness of the Vigenère cipher is the repeating nature of its key. If a cryptanalyst correctly guesses the key's length "n", the cipher text can be treated as "n" interleaved Caesar ciphers, which can easily be broken individually. The key length may be discovered by brute force testing each possible value of "n", or Kasiski examination and the Friedman test can help to determine the key length (see below: and ).
Kasiski examination.
In 1863, Friedrich Kasiski was the first to publish a successful general attack on the Vigenère cipher. Earlier attacks relied on knowledge of the plaintext or the use of a recognizable word as a key. Kasiski's method had no such dependencies. Although Kasiski was the first to publish an account of the attack, it is clear that others had been aware of it. In 1854, Charles Babbage was goaded into breaking the Vigenère cipher when John Hall Brock Thwaites submitted a "new" cipher to the "Journal of the Society of the Arts". When Babbage showed that Thwaites' cipher was essentially just another recreation of the Vigenère cipher, Thwaites presented a challenge to Babbage: given an original text (from Shakespeare's "The Tempest": Act 1, Scene 2) and its enciphered version, he was to find the key words that Thwaites had used to encipher the original text. Babbage soon found the key words: "two" and "combined". Babbage then enciphered the same passage from Shakespeare using different key words and challenged Thwaites to find Babbage's key words. Babbage never explained the method that he used. Studies of Babbage's notes reveal that he had used the method later published by Kasiski and suggest that he had been using the method as early as 1846.
The Kasiski examination, also called the Kasiski test, takes advantage of the fact that repeated words are, by chance, sometimes encrypted using the same key letters, leading to repeated groups in the ciphertext. For example, consider the following encryption using the keyword codice_45:
Key: ABCDABCDABCDABCDABCDABCDABCD
Plaintext: cryptoisshortforcryptography
Ciphertext: CSASTPKVSIQUTGQUCSASTPIUAQJB
There is an easily noticed repetition in the ciphertext, and so the Kasiski test will be effective.
The distance between the repetitions of codice_46 is 16. If it is assumed that the repeated segments represent the same plaintext segments, that implies that the key is 16, 8, 4, 2, or 1 characters long. (All factors of the distance are possible key lengths; a key of length one is just a simple Caesar cipher, and its cryptanalysis is much easier.) Since key lengths 2 and 1 are unrealistically short, one needs to try only lengths 16, 8, and 4. Longer messages make the test more accurate because they usually contain more repeated ciphertext segments. The following ciphertext has two segments that are repeated:
Ciphertext: VHVSSPQUCEMRVBVBBBVHVSURQGIBDUGRNICJQUCERVUAXSSR
The distance between the repetitions of codice_47 is 18. If it is assumed that the repeated segments represent the same plaintext segments, that implies that the key is 18, 9, 6, 3, 2, or 1 characters long. The distance between the repetitions of codice_48 is 30 characters. That means that the key length could be 30, 15, 10, 6, 5, 3, 2, or 1 characters long. By taking the intersection of those sets, one could safely conclude that the most likely key length is 6 since 3, 2, and 1 are unrealistically short.
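The repetition search is easy to mechanize. A Python sketch (our own illustration) collects the distances between repeated trigrams; their common factors are the candidate key lengths:

# Kasiski examination sketch: distances between repeated trigrams are,
# with high probability, multiples of the key length.
from collections import defaultdict
from functools import reduce
from math import gcd

def kasiski_distances(ciphertext, length=3):
    positions = defaultdict(list)
    for i in range(len(ciphertext) - length + 1):
        positions[ciphertext[i:i + length]].append(i)
    return [b - a for occ in positions.values() if len(occ) > 1
            for a, b in zip(occ, occ[1:])]

d = kasiski_distances('CSASTPKVSIQUTGQUCSASTPIUAQJB')
print(d, reduce(gcd, d))   # all distances are 16 here; the key length 4 divides it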
Friedman test.
The Friedman test (sometimes known as the kappa test) was invented during the 1920s by William F. Friedman, who used the index of coincidence, which measures the unevenness of the cipher letter frequencies, to break the cipher. By knowing the probability formula_28 that any two randomly chosen source-language letters are the same (around 0.067 for case-insensitive English) and the probability formula_29 of a coincidence for a uniform random selection from the alphabet (1⁄26 ≈ 0.0385 for English), the key length can be estimated as the following:
formula_30
from the observed coincidence rate
formula_31
in which "c" is the size of the alphabet (26 for English), "N" is the length of the text and "n"1 to "n""c" are the observed ciphertext letter frequencies, as integers.
That is, however, only an approximation; its accuracy increases with the length of the text. It would, in practice, be necessary to try various key lengths that are close to the estimate. A better approach for repeating-key ciphers is to copy the ciphertext into rows of a matrix with as many columns as an assumed key length and then to compute the average index of coincidence with each column considered separately. When that is done for each possible key length, the highest average index of coincidence then corresponds to the most-likely key length. Such tests may be supplemented by information from the Kasiski examination.
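A Python sketch of the computation (our own illustration; as noted, the result is only a rough first estimate):

# Observed index of coincidence and the resulting Friedman key-length estimate.
def index_of_coincidence(text):
    n = len(text)
    return sum(text.count(c) * (text.count(c) - 1) for c in set(text)) / (n * (n - 1))

def friedman_key_length(text, kappa_p=0.067, kappa_r=1 / 26):
    # kappa_p: same-letter probability for English; kappa_r: uniform alphabet.
    return (kappa_p - kappa_r) / (index_of_coincidence(text) - kappa_r)

For the column-averaged refinement described above, one applies index_of_coincidence to every formula_11-th letter (text[i::m] for each offset i) and averages the results across the columns.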
Frequency analysis.
Once the length of the key is known, the ciphertext can be rewritten into that many columns, with each column corresponding to a single letter of the key. Each column consists of plaintext that has been encrypted by a single Caesar cipher. The Caesar key (shift) is just the letter of the Vigenère key that was used for that column. Using methods similar to those used to break the Caesar cipher, the letters in the ciphertext can be discovered.
An improvement to the Kasiski examination, known as Kerckhoffs' method, matches each column's letter frequencies to shifted plaintext frequencies to discover the key letter (Caesar shift) for that column. Once every letter in the key is known, all the cryptanalyst has to do is to decrypt the ciphertext and reveal the plaintext. Kerckhoffs' method is not applicable if the Vigenère table has been scrambled, rather than using normal alphabetic sequences, but Kasiski examination and coincidence tests can still be used to determine key length.
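A deliberately crude sketch of the column-by-column recovery (our own simplification: instead of matching the full shifted frequency profile, it assumes the most frequent ciphertext letter in each column decrypts to E):

# Guess each Caesar shift by mapping the most frequent column letter to 'E'.
from collections import Counter

def guess_key(ciphertext, key_length):
    key = []
    for col in range(key_length):
        top = Counter(ciphertext[col::key_length]).most_common(1)[0][0]
        key.append(chr((ord(top) - ord('E')) % 26 + ord('A')))
    return ''.join(key)

Matching whole shifted frequency distributions, as in Kerckhoffs' method proper, is more robust on short columns.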
Key elimination.
The Vigenère cipher, with normal alphabets, essentially uses modulo arithmetic, which is commutative. Therefore, if the key length is known (or guessed), subtracting the cipher text from itself, offset by the key length, will produce the plain text subtracted from itself, also offset by the key length. If any "probable word" in the plain text is known or can be guessed, its self-subtraction can be recognized, which allows recovery of the key by subtracting the known plaintext from the cipher text. Key elimination is especially useful against short messages. For example, using codice_49 as the key below:
Then subtract the ciphertext from itself with a shift of the key length 4 for codice_49.
Which is nearly equivalent to subtracting the plaintext from itself by the same shift.
Which is algebraically represented for formula_32 as:
formula_33
In this example, the words codice_51 are known.
This result codice_52 corresponds with the 9th through 12th letters in the result of the larger examples above. The known section and its location is verified.
Subtract codice_53 from that range of the ciphertext.
This produces the final result, the reveal of the key codice_49.
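A sketch of the self-subtraction step (our own illustration, assuming uppercase A–Z text):

# C_i - C_(i+m) is congruent to M_i - M_(i+m) (mod 26) when m is the key
# length, so the residue below is identical for ciphertext and plaintext.
def self_subtract(text, m):
    return ''.join(chr((ord(text[i]) - ord(text[i + m])) % 26 + ord('A'))
                   for i in range(len(text) - m))

A guessed probable word can then be self-subtracted in the same way and searched for in this key-free residue.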
Variants.
Running key.
The running key variant of the Vigenère cipher was also considered unbreakable at one time. For the key, this version uses a block of text as long as the plaintext. Since the key is as long as the message, the Friedman and Kasiski tests no longer work, as the key is not repeated.
If multiple keys are used, the effective key length is the least common multiple of the lengths of the individual keys. For example, using the two keys codice_55 and codice_56, whose lengths are 2 and 3, one obtains an effective key length of 6 (the least common multiple of 2 and 3). This can be understood as the point where both keys line up.
Encrypting twice, first with the key codice_55 and then with the key codice_56 is the same as encrypting once with a key produced by encrypting one key with the other.
This is demonstrated by encrypting codice_18 with codice_60, to produce the same ciphertext as in the original example.
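A small sketch (our own, with the hypothetical keys "GO" and "CAT" standing in for the two keys in the text) verifies that composing the two encryptions equals a single encryption with the composed key:

# Keys of lengths 2 and 3 compose to an effective key of length lcm(2, 3) = 6,
# obtained by encrypting one key (repeated to length 6) with the other.
enc = lambda t, k: ''.join(chr((ord(c) + ord(k[i % len(k)]) - 2 * ord('A')) % 26 + ord('A'))
                           for i, c in enumerate(t))
composite = enc('GOGOGO', 'CAT')
print(composite)   # IOZQGH
print(enc(enc('ATTACKATDAWN', 'GO'), 'CAT') == enc('ATTACKATDAWN', composite))   # True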
If key lengths are relatively prime, the effective key length grows exponentially as the individual key lengths are increased. For example, while the effective length of keys 10, 12, and 15 characters is only 60, that of keys of 8, 11, and 15 characters is 1320. If this effective key length is longer than the ciphertext, it achieves the same immunity to the Friedman and Kasiski tests as the running key variant.
If one uses a key that is truly random, is at least as long as the encrypted message, and is used only once, the Vigenère cipher is theoretically unbreakable. However, in that case, the key, not the cipher, provides cryptographic strength, and such systems are properly referred to collectively as one-time pad systems, irrespective of the ciphers employed.
Variant Beaufort.
A simple variant is to encrypt by using the Vigenère decryption method and to decrypt by using Vigenère encryption. That method is sometimes referred to as "Variant Beaufort". It is different from the Beaufort cipher, created by Francis Beaufort, which is similar to Vigenère but uses a slightly modified enciphering mechanism and tableau. The Beaufort cipher is a reciprocal cipher.
Gronsfeld cipher.
Despite the Vigenère cipher's apparent strength, it never became widely used throughout Europe. The Gronsfeld cipher is a variant attributed by Gaspar Schott to Count Gronsfeld (Josse Maximilaan van Gronsveld né van Bronckhorst) but was actually used much earlier by an ambassador of the Duke of Mantua in the 1560s–1570s. It is identical to the Vigenère cipher except that it uses just 10 different cipher alphabets, corresponding to the digits 0 to 9: a Gronsfeld key of 0123 is the same as a Vigenère key of ABCD. The Gronsfeld cipher is strengthened because its key is not a word, but it is weakened because it has just 10 cipher alphabets. It is Gronsfeld's cipher that became widely used throughout Germany and Europe, despite its weaknesses.
Vigenèreʼs autokey cipher.
Vigenère actually invented a stronger cipher, an autokey cipher. The name "Vigenère cipher" became associated with a simpler polyalphabetic cipher instead. In fact, the two ciphers were often confused, and both were sometimes called "le chiffre indéchiffrable". Babbage actually broke the much-stronger autokey cipher, but Kasiski is generally credited with the first published solution to the fixed-key polyalphabetic ciphers.
| [
{
"math_id": 0,
"text": "A \\,\\widehat{=}\\, 0"
},
{
"math_id": 1,
"text": "B \\,\\widehat{=}\\, 1"
},
{
"math_id": 2,
"text": "E"
},
{
"math_id": 3,
"text": "K"
},
{
"math_id": 4,
"text": "C_i = E_K(M_i) = (M_i+K_i) \\bmod 26"
},
{
"math_id": 5,
"text": "D"
},
{
"math_id": 6,
"text": "M_i = D_K(C_i) = (C_i-K_i) \\bmod 26,"
},
{
"math_id": 7,
"text": "M = M_1 \\dots M_n"
},
{
"math_id": 8,
"text": "C = C_1 \\dots C_n"
},
{
"math_id": 9,
"text": "K = K_1 \\dots K_n"
},
{
"math_id": 10,
"text": "\\lceil n / m \\rceil"
},
{
"math_id": 11,
"text": "m"
},
{
"math_id": 12,
"text": "L \\,\\widehat{=}\\, 11"
},
{
"math_id": 13,
"text": "11 \\,\\widehat{=}\\, L"
},
{
"math_id": 14,
"text": "11 = (0+11) \\bmod 26"
},
{
"math_id": 15,
"text": "R \\,\\widehat{=}\\, 17"
},
{
"math_id": 16,
"text": "E \\,\\widehat{=}\\, 4"
},
{
"math_id": 17,
"text": "13 \\,\\widehat{=}\\, N"
},
{
"math_id": 18,
"text": "13 = (17-4) \\bmod 26"
},
{
"math_id": 19,
"text": "\\Sigma"
},
{
"math_id": 20,
"text": "\\ell"
},
{
"math_id": 21,
"text": "C_i = E_K(M_i) = (M_i+K_{(i \\bmod m)}) \\bmod \\ell,"
},
{
"math_id": 22,
"text": "M_i = D_K(C_i) = (C_i-K_{(i \\bmod m)}) \\bmod \\ell."
},
{
"math_id": 23,
"text": "M_i"
},
{
"math_id": 24,
"text": "M"
},
{
"math_id": 25,
"text": "\\Sigma = (A,B,C,\\ldots,X,Y,Z)"
},
{
"math_id": 26,
"text": "C_i"
},
{
"math_id": 27,
"text": "K_i"
},
{
"math_id": 28,
"text": "\\kappa_\\text{p}"
},
{
"math_id": 29,
"text": "\\kappa_\\text{r}"
},
{
"math_id": 30,
"text": "\\frac{\\kappa_\\text{p}-\\kappa_\\text{r}}{\\kappa_\\text{o}-\\kappa_\\text{r}}"
},
{
"math_id": 31,
"text": "\\kappa_\\text{o}=\\frac{\\sum_{i=1}^{c}n_i(n_i -1)}{N(N-1)}"
},
{
"math_id": 32,
"text": "i \\in [1, n - m]"
},
{
"math_id": 33,
"text": "\n\\begin{align}\n(C_i - C_{(i + m)}) \\bmod \\ell &= (E_K(M_i) - E_K(M_{(i + m)})) \\bmod \\ell \\\\\n &= ((M_i + K_{(i \\bmod m)}) \\bmod \\ell - (M_{(i + m)} + K_{((i + m) \\bmod m)}) \\bmod \\ell) \\bmod \\ell \\\\\n &= ((M_i + K_{(i \\bmod m)}) - (M_{(i + m)} + K_{((i + m) \\bmod m)})) \\bmod \\ell \\\\\n &= (M_i + K_{(i \\bmod m)} - M_{(i + m)} - K_{((i + m) \\bmod m)}) \\bmod \\ell \\\\\n &= (M_i - M_{(i + m)} + K_{(i \\bmod m)} - K_{((i + m) \\bmod m)}) \\bmod \\ell \\\\\n &= (M_i - M_{(i + m)} + K_{(i \\bmod m)} - K_{(i \\bmod m)}) \\bmod \\ell \\\\\n &= (M_i - M_{(i + m)}) \\bmod \\ell \\\\\n\\end{align}"
}
]
| https://en.wikipedia.org/wiki?curid=69509 |
695101 | Real representation | Type of representation in representation theory
In the mathematical field of representation theory a real representation is usually a representation on a real vector space "U", but it can also mean a representation on a complex vector space "V" with an invariant real structure, i.e., an antilinear equivariant map
formula_0
which satisfies
formula_1
The two viewpoints are equivalent because if "U" is a real vector space acted on by a group "G" (say), then "V" = "U"⊗C is a representation on a complex vector space with an antilinear equivariant map given by complex conjugation. Conversely, if "V" is such a complex representation, then "U" can be recovered as the fixed point set of "j" (the eigenspace with eigenvalue 1).
In physics, where representations are often viewed concretely in terms of matrices, a real representation is one in which the entries of the matrices representing the group elements are real numbers. These matrices can act either on real or complex column vectors.
A real representation on a complex vector space is isomorphic to its complex conjugate representation, but the converse is not true: a representation which is isomorphic to its complex conjugate but which is not real is called a pseudoreal representation. An irreducible pseudoreal representation "V" is necessarily a quaternionic representation: it admits an invariant quaternionic structure, i.e., an antilinear equivariant map
formula_0
which satisfies
formula_2
A direct sum of real and quaternionic representations is neither real nor quaternionic in general.
A representation on a complex vector space can also be isomorphic to the dual representation of its complex conjugate. This happens precisely when the representation admits a nondegenerate invariant sesquilinear form, e.g. a hermitian form. Such representations are sometimes said to be complex or (pseudo-)hermitian.
Frobenius-Schur indicator.
A criterion (for compact groups "G") for reality of irreducible representations in terms of character theory is based on the Frobenius-Schur indicator defined by
formula_3
where "χ" is the character of the representation and "μ" is the Haar measure with μ("G") = 1. For a finite group, this is given by
formula_4
The indicator may take the values 1, 0 or −1. If the indicator is 1, then the representation is real. If the indicator is zero, the representation is complex (hermitian), and if the indicator is −1, the representation is quaternionic.
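As an illustration (a self-contained sketch of our own; the element encoding and hand-tabulated values are ad hoc), the finite-group formula above can be evaluated for the two-dimensional irreducible representation of the quaternion group Q8:

# Frobenius-Schur indicator of the 2-dimensional irrep of Q8 (sketch).
# Squares and character values are tabulated by hand: chi(1) = 2,
# chi(-1) = -2, and chi vanishes on the six elements of order 4.
elements = ['1', '-1', 'i', '-i', 'j', '-j', 'k', '-k']
square = {'1': '1', '-1': '1', 'i': '-1', '-i': '-1',
          'j': '-1', '-j': '-1', 'k': '-1', '-k': '-1'}
chi = {'1': 2, '-1': -2, 'i': 0, '-i': 0, 'j': 0, '-j': 0, 'k': 0, '-k': 0}
print(sum(chi[square[g]] for g in elements) / len(elements))   # -1.0: quaternionic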
Examples.
All representations of the symmetric groups are real (and in fact rational), since we can build a complete set of irreducible representations using Young tableaux.
All representations of the rotation groups on odd-dimensional spaces are real, since they all appear as subrepresentations of tensor products of copies of the fundamental representation, which is real.
Further examples of real representations are the spinor representations of the spin groups in 8"k"−1, 8"k", and 8"k"+1 dimensions for "k" = 1, 2, 3 ... This periodicity "modulo" 8 is known in mathematics not only in the theory of Clifford algebras, but also in algebraic topology, in KO-theory; see spin representation and Bott periodicity.
| [
{
"math_id": 0,
"text": "j\\colon V\\to V"
},
{
"math_id": 1,
"text": "j^2=+1."
},
{
"math_id": 2,
"text": "j^2=-1."
},
{
"math_id": 3,
"text": "\\int_{g\\in G}\\chi(g^2)\\,d\\mu"
},
{
"math_id": 4,
"text": "{1\\over |G|}\\sum_{g\\in G}\\chi(g^2)."
}
]
| https://en.wikipedia.org/wiki?curid=695101 |
69513035 | Container method | Method in combinatorics
The method of (hypergraph) containers is a powerful tool that can help characterize the typical structure and/or answer extremal questions about families of discrete objects with a prescribed set of local constraints. Such questions arise naturally in extremal graph theory, additive combinatorics, discrete geometry, coding theory, and Ramsey theory; they include some of the most classical problems in the associated fields.
These problems can be expressed as questions of the following form: given a hypergraph "H" on finite vertex set "V" with edge set "E" (i.e. a collection of subsets of "V" with some size constraints), what can we say about the independent sets of "H" (i.e. those subsets of "V" that contain no element of "E")? The hypergraph container lemma provides a method for tackling such questions.
History.
One of the foundational problems of extremal graph theory, dating to work of Mantel in 1907 and Turán from the 1940s, asks to characterize those graphs that do not contain a copy of some fixed forbidden "H" as a subgraph. In a different domain, one of the motivating questions in additive combinatorics is understanding how large a set of integers can be without containing a "k"-term arithmetic progression, with upper bounds on this size given by Roth (formula_0) and Szemerédi (general "k").
The method of containers (in graphs) was initially pioneered by Kleitman and Winston in 1980, who bounded the number of lattices and graphs without 4-cycles. Container-style lemmas were independently developed by multiple mathematicians in different contexts, notably including Sapozhenko, who initially used this approach in 2002–2003 to enumerate independent sets in regular graphs and sum-free sets in abelian groups, and to study a variety of other enumeration problems.
A generalization of these ideas to a hypergraph container lemma was devised independently by Saxton and Thomason and Balogh, Morris, and Samotij in 2015, inspired by a variety of previous related work.
Main idea and informal statement.
Many problems in combinatorics can be recast as questions about independent sets in graphs and hypergraphs. For example, suppose we wish to understand subsets of integers "1" to "n", which we denote by formula_1 that lack a "k"-term arithmetic progression. These sets are exactly the independent sets in the "k"-uniform hypergraph formula_2, where "E" is the collection of all "k"-term arithmetic progressions in formula_3.
In the above (and many other) instances, there are usually two natural classes of problems posed about a hypergraph "H": extremal questions, asking how large an independent set of "H" can be, and enumerative questions, asking how many independent sets "H" has and what a typical one looks like.
These problems are connected by a simple observation. Let formula_4 be the size of a largest independent set of "H" and suppose formula_5 has formula_6 independent sets. Then,
formula_7
where the lower bound follows by taking all subsets of a maximum independent set. These bounds are relatively far away from each other unless formula_4 is very large, close to the number of vertices of the hypergraph. However, in many hypergraphs that naturally arise in combinatorial problems, we have reason to believe that the lower bound is closer to the true value; thus the primary goal is to improve the upper bounds on "i(H)".
The hypergraph container lemma provides a powerful approach to understanding the structure and size of the family of independent sets in a hypergraph. At its core, the hypergraph container method enables us to extract from a hypergraph a collection of "containers", subsets of vertices that satisfy the following properties: there are relatively few containers; each container is bounded away from the full vertex set in size (or spans few edges); and every independent set of the hypergraph is contained in some container.
The name container alludes to this last condition. Such containers often provide an effective approach to characterizing the family of independent sets (subsets of the containers) and to enumerating the independent sets of a hypergraph (by simply considering all possible subsets of a container).
The hypergraph container lemma achieves the above container decomposition in two pieces. It constructs a deterministic function "f". Then, it provides an algorithm that extracts from each independent set "I" in hypergraph "H", a relatively small collection of vertices formula_8, called a "fingerprint," with the property that formula_9. Then, the containers are the collection of sets formula_10 that arise in the above process, and the small size of the fingerprints provides good control on the number of such container sets.
Graph container algorithm.
We first describe a method for showing strong upper bounds on the number of independent sets in a graph; this exposition is adapted from a survey of Samotij about the graph container method, originally employed by Kleitman-Winston and Sapozhenko.
Notation.
We use the following notation in the below section. We consider a graph formula_11 on formula_12 vertices with a fixed vertex ordering formula_13. We write formula_14 for the family of independent sets of "G", formula_15 for their number, and formula_16 for the number of independent sets with exactly formula_42 vertices. Finally, for a vertex subset formula_17, we write formula_18 for the subgraph of "G" induced on "A".
Kleitman-Winston algorithm.
The following algorithm gives a small "fingerprint" for every independent set in a graph and a deterministic function of the fingerprint to construct a not-too-large subset that contains the entire independent set
Fix graph "G", independent set formula_19 and positive integer formula_20.
Analysis.
By construction, the output of the above algorithm has property that formula_31, noting that formula_30 is a vertex subset that is completely determined by formula_32 and not otherwise a function of formula_33. To emphasize this we will write formula_34. We also observe that we can reconstruct the set formula_35 in the above algorithm just from the vector formula_29.
This suggests that formula_36 might be a good choice of a "fingerprint" and formula_37 a good choice for a "container". More precisely, we can bound the number of independent sets of formula_38 of some size formula_39 as a sum over output sequences formula_40
formula_41,
where we can sum across formula_42 to get a bound on the total number of independent sets of the graph:
formula_43.
When trying to minimize this upper bound, we want to pick formula_44 that balances/minimizes these two terms. This result illustrates the value of ordering vertices by maximum degree (to minimize formula_45).
Lemmas.
The above inequalities and observations can be stated in a more general setting, divorced from an explicit sum over vectors formula_46.
Lemma 1: Let formula_38 be a graph on formula_47 vertices and assume that an integer formula_44 and real numbers formula_48 satisfy formula_49.
Suppose that every induced subgraph on at least formula_50 vertices has edge density at least formula_51. Then for every integer formula_39,
formula_52
Lemma 2: Let formula_38 be a graph on formula_47 vertices and assume that an integer formula_44 and reals formula_53 are chosen such that formula_54. If all subsets formula_55 of at least formula_50 vertices have at least formula_56 edges, then there is a collection formula_57 of subsets of formula_44 vertices ("fingerprints") and a deterministic function formula_58, so that for every independent set formula_59, there is formula_60 such that formula_61.
Hypergraph container lemma.
Informally, the hypergraph container lemma tells us that we can assign a small "fingerprint" formula_8 to each independent set, so that all independent sets with the same fingerprint belong to the same larger set, formula_62, the associated "container," that has size bounded away from the number of vertices of the hypergraph. Further, these fingerprints are small (and thus there are few containers), and we can upper bound their size in an essentially optimal way using some simple properties of the hypergraph.
We recall the following notation associated to a formula_63-uniform hypergraph formula_64. For formula_66, we define formula_65, where formula_67 is the number of edges of formula_64 containing the vertex subset "A". We write formula_68 for the collection of independent sets of formula_64.
Statement.
We state the version of this lemma found in a work of Balogh, Morris, Samotij, and Saxton.
Let formula_64 be a formula_63-uniform hypergraph and suppose that for every formula_69 and some formula_70, we have that formula_71. Then, there is a collection formula_72 and a function formula_73 such that for every independent set formula_74 there exists a fingerprint formula_8 with formula_75 and formula_76, and such that formula_77 for every formula_78, where formula_79.
Example applications.
Regular graphs.
Upper bound on the number of independent sets.
We will show that there is an absolute constant "C" such that every formula_47-vertex formula_80-regular graph formula_38 satisfies formula_81.
We can bound the number of independent sets of each size formula_42 by using the trivial bound formula_82 for formula_83.
For larger formula_42, take formula_84 With these parameters, "d"-regular graph formula_38 satisfies the conditions of Lemma 1 and thus,
formula_85
Summing over all formula_86 gives
formula_87,
which yields the desired result when we plug in formula_88
Sum-free sets.
A set formula_89 of elements of an abelian group is called "sum-free" if there are no formula_90 satisfying formula_91. We will show that there are at most formula_92 sum-free subsets of formula_93.
This will follow from our above bounds on the number of independent sets in a regular graph. To see this, we will need to construct an auxiliary graph. We first observe that up to lower order terms, we can restrict our focus to sum-free sets with at least formula_94 elements smaller than formula_95 (since the number of subsets in the complement of this is at most formula_96).
Given some subset formula_97, we define an auxiliary graph formula_98 with vertex set formula_1 and edge set formula_99, and observe that our auxiliary graph is formula_100-regular, since each element of "S" is smaller than formula_95. If formula_101 are the smallest formula_94 elements of the subset formula_102, then the set formula_103 is an independent set in the graph formula_104. Hence, by our previous bound, we see that the number of sum-free subsets of formula_1 is at most
formula_105
Triangle-free graphs.
We give an illustration of using the hypergraph container lemma to answer an enumerative question by giving an asymptotically tight upper bound on the number of triangle-free graphs with formula_47 vertices.
Informal statement.
Since bipartite graphs are triangle-free, the number of triangle-free graphs with formula_47 vertices is at least formula_106, obtained by enumerating all possible subgraphs of the balanced complete bipartite graph formula_107.
We can construct an auxiliary "3"-uniform hypergraph "H" with vertex set formula_108 and edge set formula_109. This hypergraph "encodes" triangles in the sense that the family of triangle-free graphs on formula_47 vertices is exactly the collection of independent sets of this hypergraph, formula_110.
The above hypergraph has a nice degree distribution: each edge of formula_111, and thus each vertex in formula_112, is contained in exactly formula_113 triangles, and each pair of elements in formula_112 is contained in at most 1 triangle. Therefore, applying the hypergraph container lemma (iteratively), we are able to show that there is a family of formula_114 containers, each containing few triangles, that together contain every triangle-free graph/independent set of the hypergraph.
Upper bound on the number of triangle-free graphs.
We first specialize the generic hypergraph container lemma to 3-uniform hypergraphs as follows:
Lemma: For every formula_115, there exists formula_116 such that the following holds. Let formula_5 be a 3-uniform hypergraph with average degree formula_117 and suppose that formula_118. Then there exists a collection formula_72 of at most formula_119 containers such that
Applying this lemma iteratively will give the following theorem (as proved below):
Theorem: For all formula_122, there exists formula_123 such that the following holds. For each positive integer "n", there exists a collection formula_124 of graphs on "n" vertices with formula_125 such that
Proof: Consider the hypergraph formula_5 defined above. As observed informally earlier, the hypergraph satisfies formula_128 for every formula_129. Therefore, we can apply the above Lemma to formula_5 with formula_130 to find some collection formula_131 of formula_114 subsets of formula_132 (i.e. graphs on formula_47 vertices) such that
This is not quite as strong as the result we want to show, so we will iteratively apply the container lemma. Suppose we have some container formula_78 with at least formula_127 triangles. We can apply the container lemma to the induced sub-hypergraph formula_134. The average degree of formula_134 is at least formula_135, since every triangle in formula_136 is an edge in formula_134, and this induced subgraph has at most formula_137 vertices. Thus, we can apply the Lemma with parameter formula_138, remove formula_136 from our set of containers, and replace it by this new set of containers covering formula_139.
We can keep iterating until we have a final collection of containers formula_131 that each contain fewer than formula_127 triangles. We observe that this collection cannot be too big; all of our induced subgraphs have at most formula_137 vertices and average degree at least formula_135, meaning that each iteration results in at most formula_114 new containers. Further, the container size shrinks by a factor of formula_140 each time, so after a bounded (depending on formula_141) number of iterations, the iterative process will terminate.
See also.
Independent set (graph theory)
Szemerédi's theorem
Szemerédi regularity lemma
| [
{
"math_id": 0,
"text": "k=3"
},
{
"math_id": 1,
"text": "[n]"
},
{
"math_id": 2,
"text": " H = (\\{1,2,\\ldots,n\\}, E) "
},
{
"math_id": 3,
"text": " \\{1,2,\\ldots,n\\} "
},
{
"math_id": 4,
"text": "\\alpha(H)"
},
{
"math_id": 5,
"text": "H"
},
{
"math_id": 6,
"text": "i(H)"
},
{
"math_id": 7,
"text": "2^{\\alpha(H)} \\le i(H) \\le \\sum_{r = 0}^{\\alpha(H)} {|V(H)| \\choose r},"
},
{
"math_id": 8,
"text": "S \\subset I"
},
{
"math_id": 9,
"text": " S \\subset I \\subset S \\cup f(S)"
},
{
"math_id": 10,
"text": "S \\cup f(S)"
},
{
"math_id": 11,
"text": " G = (V, E) "
},
{
"math_id": 12,
"text": "|V| = n"
},
{
"math_id": 13,
"text": " \\{v_1, \\ldots , v_n \\} "
},
{
"math_id": 14,
"text": " \\ell(G) "
},
{
"math_id": 15,
"text": " i(G) := |\\ell(G)| "
},
{
"math_id": 16,
"text": "i(G, r)"
},
{
"math_id": 17,
"text": " A \\subset V"
},
{
"math_id": 18,
"text": "G[A]"
},
{
"math_id": 19,
"text": "I \\in \\ell(G)"
},
{
"math_id": 20,
"text": "q \\le |I|"
},
{
"math_id": 21,
"text": "A=V(G), S=\\emptyset"
},
{
"math_id": 22,
"text": "s = 1,2,\\ldots, q"
},
{
"math_id": 23,
"text": "A,\\,(v_1, \\ldots v_{|A|})"
},
{
"math_id": 24,
"text": "j_s"
},
{
"math_id": 25,
"text": "v_{j_s} \\in I"
},
{
"math_id": 26,
"text": "S \\leftarrow S \\cup \\{v_{j_s}\\},\\,A \\leftarrow A \\backslash (\\{v_1, \\ldots, v_{j_s}\\} \\cup N(v_{j_s}))"
},
{
"math_id": 27,
"text": "N(v)"
},
{
"math_id": 28,
"text": "v"
},
{
"math_id": 29,
"text": "(j_1, \\ldots, j_q)"
},
{
"math_id": 30,
"text": "A \\cap I"
},
{
"math_id": 31,
"text": "\\{v_{j_1}, \\ldots, v_{j_q}\\} \\subset I \\subset \\{v_{j_1}, \\ldots, v_{j_q}\\} \\cup (A \\cap I)"
},
{
"math_id": 32,
"text": "\\{j_1, \\ldots, j_q\\}"
},
{
"math_id": 33,
"text": "I"
},
{
"math_id": 34,
"text": "A = A(j_1, \\ldots, j_q)"
},
{
"math_id": 35,
"text": "S = \\{v_{j_1}, \\ldots, v_{j_q}\\} = S(j_1, \\ldots j_q)"
},
{
"math_id": 36,
"text": "S"
},
{
"math_id": 37,
"text": "S(j_1, \\ldots j_q) \\cup A(j_1, \\ldots, j_q)"
},
{
"math_id": 38,
"text": "G"
},
{
"math_id": 39,
"text": "r \\ge q"
},
{
"math_id": 40,
"text": "(j_1, \\ldots j_q)"
},
{
"math_id": 41,
"text": "i(G, r) = \\sum_{(j_s)_{s = 1}^q} i(G[A(j_1, \\ldots j_q)], r-q) \\le \\sum_{(j_s)} {A(j_1, \\ldots j_q)\\choose r- q}"
},
{
"math_id": 42,
"text": "r"
},
{
"math_id": 43,
"text": "i(G) = \\sum_{r = 0}^{q-1} {n \\choose r} + \\sum_{(j_s)_{s = 1}^q} i(G[A(j_1, \\ldots j_q)]) \\le \\sum_{r = 0}^{q-1} {n \\choose r} + \\sum_{(j_s)} 2^{|A(j_1, \\ldots j_q)|}"
},
{
"math_id": 44,
"text": "q"
},
{
"math_id": 45,
"text": "|A(j_1, \\ldots j_q)|"
},
{
"math_id": 46,
"text": "(j_s)"
},
{
"math_id": 47,
"text": "n"
},
{
"math_id": 48,
"text": "R, \\beta \\in [0, 1]"
},
{
"math_id": 49,
"text": "R \\ge e^{-\\beta q}n"
},
{
"math_id": 50,
"text": "R"
},
{
"math_id": 51,
"text": "\\beta"
},
{
"math_id": 52,
"text": "i(G, r) \\le {n \\choose q}{R \\choose r - q}."
},
{
"math_id": 53,
"text": "R, D"
},
{
"math_id": 54,
"text": "n \\le R+ qD"
},
{
"math_id": 55,
"text": "U"
},
{
"math_id": 56,
"text": "D|U|/2"
},
{
"math_id": 57,
"text": "\\mathcal{F}"
},
{
"math_id": 58,
"text": "f\\colon \\mathcal{C} \\rightarrow \\mathcal{P}(V(G))"
},
{
"math_id": 59,
"text": "I \\subset V(G)"
},
{
"math_id": 60,
"text": "S \\in \\mathcal{F}"
},
{
"math_id": 61,
"text": "S \\subset I \\subset f(S) \\cup S"
},
{
"math_id": 62,
"text": "C = f(S)"
},
{
"math_id": 63,
"text": "k"
},
{
"math_id": 64,
"text": "\\mathcal{H}"
},
{
"math_id": 65,
"text": "\\Delta_l(\\mathcal{H}) := \\max\\{d_{H}(A) \\mid A \\subset V(\\mathcal{H}), |A| = l\\}"
},
{
"math_id": 66,
"text": "1\\le l \\le k"
},
{
"math_id": 67,
"text": "d_{\\mathcal{H}}(A) = |\\{e \\in E(\\mathcal{H}) \\mid A \\subset e\\}|"
},
{
"math_id": 68,
"text": "\\mathcal{I}(\\mathcal{H})"
},
{
"math_id": 69,
"text": "l \\in \\{1,2,\\ldots, k\\}"
},
{
"math_id": 70,
"text": "b, r \\in \\mathbb{N}"
},
{
"math_id": 71,
"text": "\\Delta_l(H) \\le \\left( \\frac{b}{|V(H)|} \\right)^{l-1} \\frac{|E(H)|}{r}"
},
{
"math_id": 72,
"text": "\\mathcal{C} \\subset \\mathcal{P}(V(H))"
},
{
"math_id": 73,
"text": "f\\colon \\mathcal{P}(V(H)) \\rightarrow \\mathcal{C}"
},
{
"math_id": 74,
"text": "I \\in \\mathcal{I}(H)"
},
{
"math_id": 75,
"text": "|S|\\le(k-1)b"
},
{
"math_id": 76,
"text": "I\\subset f(S)"
},
{
"math_id": 77,
"text": "|C| \\le |V(H)| - \\delta r"
},
{
"math_id": 78,
"text": "C \\in \\mathcal{C}"
},
{
"math_id": 79,
"text": "\\delta = 2^{-k(k+1)}"
},
{
"math_id": 80,
"text": "d"
},
{
"math_id": 81,
"text": "i(G) \\le 2^{\\left(1 + C \\sqrt{\\frac{\\log d}{d}}\\right)\\frac{n}{2}}"
},
{
"math_id": 82,
"text": "i(G, r) \\le {n \\choose r} \\le {n \\choose n/10} \\le 2^{0.48n}"
},
{
"math_id": 83,
"text": "r \\le n/10"
},
{
"math_id": 84,
"text": "\\beta > 10/n, q = \\lfloor 1/\\beta \\rfloor, R = \\frac{n}{2} + \\frac{\\beta n^2}{2d}."
},
{
"math_id": 85,
"text": " i(G, r) \\le {n \\choose q}{R \\choose r - q} \\le {n \\choose q}{\\frac{n}{2} + \\frac{\\beta n^2}{2d} \\choose r - q} \\le \\left(\\frac{en}{q} \\right)^q {\\frac{n}{2} + \\frac{\\beta n^2}{2d} \\choose r - q} \\le (e\\beta n)^{\\lfloor 1/\\beta \\rfloor} {\\frac{n}{2} + \\frac{\\beta n^2}{2d} \\choose r - q}."
},
{
"math_id": 86,
"text": "0\\le r \\le n"
},
{
"math_id": 87,
"text": "i(G) \\le 2^{0.49n} + 2^{\\frac{n}{2} + \\frac{\\beta n^2}{2d} + \\lfloor 1/\\beta \\rfloor \\log_2(e \\beta n)}"
},
{
"math_id": 88,
"text": "\\beta = \\sqrt{d \\log d}/n."
},
{
"math_id": 89,
"text": "A"
},
{
"math_id": 90,
"text": "x,y,z\\in A"
},
{
"math_id": 91,
"text": "x+y=z"
},
{
"math_id": 92,
"text": "2^{(1/2+o(1))n}"
},
{
"math_id": 93,
"text": "[n]:=\\{1,2,\\ldots , n\\}"
},
{
"math_id": 94,
"text": "n^{2/3}"
},
{
"math_id": 95,
"text": "n/2"
},
{
"math_id": 96,
"text": " (n/2)^{n^{2/3}} 2^{n/2 + 1}"
},
{
"math_id": 97,
"text": " S \\subset \\{1,2,\\ldots, \\lceil n/2 \\rceil - 1\\}"
},
{
"math_id": 98,
"text": "G_S"
},
{
"math_id": 99,
"text": "\\{\\{x, y\\} \\mid x + s \\equiv y \\pmod n \\text{ for some } s \\in S \\cup (-S)\\}"
},
{
"math_id": 100,
"text": "2|S|"
},
{
"math_id": 101,
"text": "S_A"
},
{
"math_id": 102,
"text": "A \\subset [n]"
},
{
"math_id": 103,
"text": "A \\backslash S_A"
},
{
"math_id": 104,
"text": "G_{S_A}"
},
{
"math_id": 105,
"text": "(n/2)^{n^{2/3}} 2^{n/2+} + {n/2 \\choose n^{2/3}} 2^{(1 + O(n^{-1/3} \\sqrt{\\log n}))\\frac{n}{2}} \\le 2^{(1/2 + O(n^{-1/3}\\log n))n}."
},
{
"math_id": 106,
"text": "2^{\\lfloor n^2/4 \\rfloor}"
},
{
"math_id": 107,
"text": "K_{\\lfloor n/2\\rfloor,\\lceil n/2 \\rceil}"
},
{
"math_id": 108,
"text": "V(H) = E(K_n)"
},
{
"math_id": 109,
"text": "E(H) = \\{\\{e_1, e_2, e_3\\} \\subset E(K_n) = V(H) \\mid e_1, e_2, e_3 \\text{ form a triangle}\\}"
},
{
"math_id": 110,
"text": "\\mathcal{I}(H)"
},
{
"math_id": 111,
"text": "K_n"
},
{
"math_id": 112,
"text": "V(H)"
},
{
"math_id": 113,
"text": "n-2"
},
{
"math_id": 114,
"text": "n^{O(n^{3/2})}"
},
{
"math_id": 115,
"text": "c > 0"
},
{
"math_id": 116,
"text": "\\delta > 0"
},
{
"math_id": 117,
"text": "d \\ge 1/\\delta"
},
{
"math_id": 118,
"text": "\\Delta_1(H) \\le c d, \\Delta_2(H) \\le c\\sqrt{d}"
},
{
"math_id": 119,
"text": "|\\mathcal{C}| \\le {|V(H)| \\choose |V(H)|/\\sqrt{d}}"
},
{
"math_id": 120,
"text": "I \\subset C \\in \\mathcal{C}"
},
{
"math_id": 121,
"text": "|C| \\le (1 - \\delta)|V(H)|"
},
{
"math_id": 122,
"text": "\\epsilon > 0"
},
{
"math_id": 123,
"text": "C>0"
},
{
"math_id": 124,
"text": "\\mathcal{G}"
},
{
"math_id": 125,
"text": "|\\mathcal{G}| \\le n^{Cn^{3/2}}"
},
{
"math_id": 126,
"text": "G \\in \\mathcal{G}"
},
{
"math_id": 127,
"text": "\\epsilon n^3"
},
{
"math_id": 128,
"text": "|V(H)| = {n \\choose 2}, \\Delta_2(H) = 1, d(v) = n - 2"
},
{
"math_id": 129,
"text": "v \\in V(H)"
},
{
"math_id": 130,
"text": "c=1"
},
{
"math_id": 131,
"text": "\\mathcal{C}"
},
{
"math_id": 132,
"text": "E(K_n)"
},
{
"math_id": 133,
"text": "(1 - \\delta) {n \\choose 2}"
},
{
"math_id": 134,
"text": "H[C]"
},
{
"math_id": 135,
"text": "6 \\epsilon n"
},
{
"math_id": 136,
"text": "C"
},
{
"math_id": 137,
"text": "{n \\choose 2}"
},
{
"math_id": 138,
"text": "c = 1/\\epsilon"
},
{
"math_id": 139,
"text": "\\mathcal{I}(H[C])"
},
{
"math_id": 140,
"text": "1-\\delta"
},
{
"math_id": 141,
"text": "\\epsilon"
}
]
| https://en.wikipedia.org/wiki?curid=69513035 |
695167 | Cost of capital | Cost of a company's funds
In economics and accounting, the cost of capital is the cost of a company's funds (both debt and equity), or from an investor's point of view is "the required rate of return on a portfolio company's existing securities". It is used to evaluate new projects of a company. It is the minimum return that investors expect for providing capital to the company, thus setting a benchmark that a new project has to meet.
Basic concept.
For an investment to be worthwhile, the expected return on capital has to be higher than the cost of capital. Given a number of competing investment opportunities, investors are expected to put their capital to work in order to maximize the return. In other words, the cost of capital is the rate of return that capital could be expected to earn in the best alternative investment of equivalent risk; this is the opportunity cost of capital. If a project is of similar risk to a company's average business activities, it is reasonable to use the company's average cost of capital as a basis for the evaluation; in this sense, the cost of capital is a firm's cost of raising funds. However, for projects outside the core business of the company, the current cost of capital may not be the appropriate yardstick to use, as the risks of the businesses are not the same.
A company's securities typically include both debt and equity; one must therefore calculate both the cost of debt and the cost of equity to determine a company's cost of capital. Importantly, both cost of debt and equity must be forward looking, and reflect the expectations of risk and return in the future. This means, for instance, that the past cost of debt is not a good indicator of the actual forward looking cost of debt.
Once cost of debt and cost of equity have been determined, their blend, the weighted average cost of capital (WACC), can be calculated. This WACC can then be used as a discount rate for a project's projected free cash flows to the firm.
Example.
Suppose a company considers taking on a project or investment of some kind, for example installing a new piece of machinery in one of their factories. Installing this new machinery will cost money; paying the technicians to install the machinery, transporting the machinery, buying the parts and so on. This new machinery is also expected to generate new profit (otherwise, assuming the company is interested in profit, the company would not consider the project in the first place). So the company will finance the project with two broad categories of finance: issuing debt, by taking out a loan or other debt instrument such as a bond; and issuing equity, usually by issuing new shares.
The new debt-holders and shareholders who have decided to invest in the company to fund this new machinery will expect a return on their investment: debt-holders require interest payments and shareholders require dividends (or capital gain from selling the shares after their value increases). The idea is that some of the profit generated by this new project will be used to repay the debt and satisfy the new shareholders.
Suppose that one of the sources of finance for this new project was a bond (issued at par value) of $200,000 with an interest rate of 5%. This means that the company would issue the bond to some willing investor, who would give the $200,000 to the company, which it could then use, for a specified period of time (the term of the bond), to finance its project. The company would also make regular payments to the investor of 5% of the original amount they invested ($10,000), at a yearly or monthly rate depending on the specifics of the bond (these are called coupon payments). At the end of the lifetime of the bond (when the bond matures), the company would return the $200,000 it borrowed.
Suppose the bond had a lifetime of ten years and coupon payments were made yearly. This means that the investor would receive $10,000 every year for ten years, and then finally their $200,000 back at the end of the ten years. From the investor's point of view, their investment of $200,000 would be regained at the end of the ten years (entailing zero gain or loss), but they would have "also" gained from the coupon payments; the $10,000 per year for ten years would amount to a net gain of $100,000 to the investor. This is the amount that compensates the investor for taking the risk of investing in the company (since, if it happens that the project fails completely and the company goes bankrupt, there is a chance that the investor does not get their money back).
This net gain of $100,000 was paid by the company to the investor as a reward for investing their money in the company. In essence, this is how much the company paid to borrow $200,000. It was the "cost" of raising $200,000 of new capital. So to raise $200,000 the company had to pay $100,000 out of their profits; thus we say that the "cost of debt" in this case was 50%.
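As a quick check of the arithmetic, the short Python snippet below reproduces the figures of this example (it assumes the amounts stated above; the treatment is deliberately naive and, like the text, ignores discounting of the coupon payments):

```python
# Reproduce the bond example above (no discounting, as in the text).
principal = 200_000       # par value of the bond ($)
coupon_rate = 0.05        # 5% annual interest
years = 10                # lifetime of the bond

annual_coupon = coupon_rate * principal          # $10,000 per year
investor_net_gain = annual_coupon * years        # $100,000 over ten years

cost_of_debt = investor_net_gain / principal
print(f"Annual coupon:     ${annual_coupon:,.0f}")
print(f"Investor net gain: ${investor_net_gain:,.0f}")
print(f"Cost of debt:      {cost_of_debt:.0%}")  # 50%
```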
Theoretically, if the company were to raise further capital by issuing more of the same bonds, the new investors would also expect a 50% return on their investment (although in practice the required return varies depending on the size of the investment, the lifetime of the loan, the risk of the project and so on).
The cost of equity follows the same principle: the investors expect a certain return from their investment, and the company must pay this amount in order for the investors to be willing to invest in the company. (Although the cost of equity is calculated differently since dividends, unlike interest payments, are not necessarily a fixed payment or a legal requirement.)
Cost of debt.
When companies borrow funds from outside lenders, the interest paid on these funds is called the cost of debt. The cost of debt is computed by taking the rate on a risk-free bond whose duration matches the term structure of the corporate debt, then adding a default premium. This default premium will rise as the amount of debt increases (since, all other things being equal, the risk rises as the amount of debt rises). Since in most cases debt expense is a deductible expense, the cost of debt is computed on an after-tax basis to make it comparable with the cost of equity (earnings are taxed as well). Thus, for profitable firms, debt is discounted by the tax rate. The formula can be written as
formula_0,
where formula_1 is the corporate tax rate and formula_2 is the risk free rate.
Cost of equity.
The cost of equity is "inferred" by comparing the investment to other investments (comparable) with similar risk profiles. It is commonly computed using the capital asset pricing model formula:
Cost of equity = Risk free rate of return + Premium expected for risk
Cost of equity = Risk free rate of return + Beta × (market rate of return – risk free rate of return)
where Beta = sensitivity to movements in the relevant market. Thus in symbols we have
formula_3
where:
"Es" is the expected return for a security;
"Rf" is the expected risk-free return in that market (government bond yield);
"βs" is the sensitivity to market risk for the security;
"Rm" is the historical return of the stock market; and
"(Rm – Rf)" is the risk premium of market assets over risk free assets.
The risk free rate is the yield on long term bonds in the particular market, such as government bonds.
An alternative to the estimation of the required return by the capital asset pricing model as above, is the use of the Fama–French three-factor model.
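As an illustration, the CAPM formula above can be evaluated directly; the figures below are hypothetical inputs chosen only to show the calculation:

```python
# Cost of equity via CAPM: Es = Rf + beta * (Rm - Rf).
risk_free_rate = 0.03   # e.g. long-term government bond yield (hypothetical)
market_return = 0.08    # historical market return (hypothetical)
beta = 1.2              # security's sensitivity to market movements (hypothetical)

cost_of_equity = risk_free_rate + beta * (market_return - risk_free_rate)
print(f"Cost of equity: {cost_of_equity:.1%}")  # 9.0%
```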
Expected return.
The expected return (or required rate of return for investors) can be calculated with the "dividend capitalization model", which is
formula_4.
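Likewise, a minimal evaluation of the dividend capitalization model, again with hypothetical inputs:

```python
# Expected return via the dividend capitalization model:
# K = D * (1 + g) / P + g
dividend_per_share = 2.00   # current annual dividend per share (hypothetical)
price_per_share = 40.00     # current market price per share (hypothetical)
growth_rate = 0.04          # expected dividend growth rate (hypothetical)

expected_return = dividend_per_share * (1 + growth_rate) / price_per_share + growth_rate
print(f"Expected return: {expected_return:.2%}")  # 9.20%
```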
Comments.
The models state that investors will expect a return that is the risk-free return plus the security's sensitivity to market risk (β) times the market risk premium.
The risk premium varies over time and place, but in some developed countries during the twentieth century it has averaged around 5%, whereas in the emerging markets it can be as high as 7%. The equity market real capital gain return has been about the same as annual real GDP growth. The capital gains on the Dow Jones Industrial Average have been 1.6% per year over the period 1910–2005. The dividends have roughly doubled the total "real" return on average equity, to about 3.2%.
The sensitivity to market risk (β) is unique for each firm and depends on everything from management to its business and capital structure. This value cannot be known "ex ante" (beforehand), but can be estimated from "ex post" (past) returns and past experience with similar firms.
Cost of retained earnings/cost of internal equity.
Note that retained earnings are a component of equity, and, therefore, the cost of retained earnings (internal equity) is equal to the cost of equity as explained above. Dividends (earnings that are paid to investors and not retained) are a component of the return on capital to equity holders, and influence the cost of capital through that mechanism.
Cost of internal equity = [next year's dividend per share / (current market price per share − flotation costs)] + growth rate of dividends
Weighted average cost of capital.
The weighted average cost of capital (WACC) is used in finance to measure a firm's cost of capital. WACC is not dictated by management. Rather, it represents the minimum return that a company must earn on an existing asset base to satisfy its creditors, owners, and other providers of capital, or they will invest elsewhere.
The total capital for a firm is the value of its equity (for a firm without outstanding warrants and options, this is the same as the company's market capitalization) plus the cost of its debt (the cost of debt should be continually updated as the cost of debt changes as a result of interest rate changes). Notice that the "equity" in the debt to equity ratio is the market value of all equity, not the shareholders' equity on the balance sheet. To calculate the firm's weighted cost of capital, we must first calculate the costs of the individual financing sources: Cost of Debt, Cost of Preference Capital, and Cost of Equity Capital.
Calculation of WACC is an iterative procedure which requires estimation of the fair market value of equity capital if the company is not listed. The Adjusted Present Value method (APV) is much easier to use in this case as it separates the value of the project from the value of its financing program.
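For a listed company whose market values are already known, the textbook (non-iterative) blend can be computed directly. The sketch below uses hypothetical inputs and the after-tax cost of debt described earlier:

```python
# WACC = E/(D+E) * cost_of_equity + D/(D+E) * cost_of_debt * (1 - tax_rate)
equity_value = 600_000   # market value of equity (hypothetical)
debt_value = 400_000     # market value of debt (hypothetical)
cost_of_equity = 0.09
cost_of_debt = 0.05      # pre-tax
tax_rate = 0.25

total_capital = equity_value + debt_value
wacc = (equity_value / total_capital) * cost_of_equity \
     + (debt_value / total_capital) * cost_of_debt * (1 - tax_rate)
print(f"WACC: {wacc:.2%}")  # 6.90%
```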
Factors that can affect cost of capital.
Below is a list of factors that might affect the cost of capital.
Capital structure.
Because of tax advantages on debt issuance, it will be cheaper to issue debt rather than new equity (this is only true for profitable firms, tax breaks are available only to profitable firms). At some point, however, the cost of issuing new debt will be greater than the cost of issuing new equity. This is because adding debt increases the default risk – and thus the interest rate that the company must pay in order to borrow money. By utilizing too much debt in its capital structure, this increased default risk can also drive up the costs for other sources (such as retained earnings and preferred stock) as well. Management must identify the "optimal mix" of financing – the capital structure where the cost of capital is minimized so that the firm's value can be maximized.
The Thomson Financial league tables show that global debt issuance exceeds equity issuance by a 90 to 10 margin.
The structure of capital should be determined considering the weighted average cost of capital.
Current dividend policy.
Accounting information.
Lambert, Leuz and Verrecchia (2007) have found that the quality of accounting information can affect a firm's cost of capital, both directly and indirectly.
Modigliani–Miller theorem.
If there were no tax advantages for issuing debt, and equity could be freely issued, Miller and Modigliani showed that, under certain assumptions (no tax, no possibility of bankruptcy), the value of a levered firm and the value of an unlevered firm should be the same.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K_D = (R_f + \\text{credit risk rate})(1-T)"
},
{
"math_id": 1,
"text": "T"
},
{
"math_id": 2,
"text": "R_f"
},
{
"math_id": 3,
"text": "E_s = R_f + \\beta_s(R_m - R_f)"
},
{
"math_id": 4,
"text": "K_{cs} = \\frac{\\text{Dividend}_{\\text{Payment/Share}}(1+\\text{Growth})} {\\text{Price}_\\text{Market}} + \\text{Growth}_\\text{rate}"
}
]
| https://en.wikipedia.org/wiki?curid=695167 |
Transpositions matrix (Tr matrix) is a square formula_0 matrix, formula_1, formula_2, whose elements are obtained from the elements of a given n-dimensional vector formula_3 as follows: formula_4, where formula_5 denotes the operation "bitwise exclusive or" (XOR). Each row and each column of a transpositions matrix is a permutation of the elements of the vector X, and there are "n"/2 transpositions between every two rows or columns of the matrix.
Example.
The figure below shows the transpositions matrix formula_6 of order 8, created from an arbitrary vector formula_7:
formula_8
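The defining rule formula_4 translates directly into code. The following Python sketch (0-indexed, so the ±1 offsets in the definition disappear) reproduces the matrix above:

```python
import numpy as np

def transpositions_matrix(x):
    """Tr matrix of a vector x of length n = 2**m: Tr[i][j] = x[i XOR j]
    (the 0-indexed form of the defining rule)."""
    n = len(x)
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    return np.array([[x[i ^ j] for j in range(n)] for i in range(n)])

x = np.arange(1, 9)   # stands in for (x1, ..., x8)
print(transpositions_matrix(x))
```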
Properties.
The figure on the right shows some "fours" of elements in a formula_9 matrix. For every two elements formula_12 and formula_13 lying in one column of a formula_9 matrix, there is a column v such that the four of elements formula_14, formula_18, satisfies formula_15 and formula_16, i.e. the elements at diagonally opposite positions of the four are equal.
Transpositions matrix with mutually orthogonal rows (Trs matrix).
The fours property of formula_9 matrices makes it possible to create a matrix with mutually orthogonal rows and columns (a formula_17 matrix) by changing the sign of an odd number of elements in every four formula_14, formula_18. In [5] an algorithm is offered for creating a formula_17 matrix using the Hadamard product (denoted by formula_19) of a Tr matrix and an n-dimensional Hadamard matrix whose rows (except the first one) are rearranged relative to the rows of the Sylvester-Hadamard matrix in an order formula_20 for which the rows of the resulting Trs matrix are mutually orthogonal.
formula_21
formula_22
where:
The orderings R of the Hadamard matrix's rows were obtained experimentally for formula_17 matrices of sizes 2, 4 and 8. It is important to note that the ordering R of the Hadamard matrix's rows (relative to the Sylvester-Hadamard matrix) does not depend on the vector formula_10. It has been proven [5] that if formula_10 is a unit vector (i.e. formula_26), then the formula_17 matrix (obtained as described above) is a reflection matrix.
Example of obtaining Trs matrix.
The transpositions matrix with mutually orthogonal rows (formula_17 matrix) of order 4 for the vector formula_27 is obtained as:
formula_28
where formula_6 is the formula_9 matrix obtained from the vector formula_10, "formula_23" denotes the Hadamard product, and formula_25 is a Hadamard matrix whose rows are interchanged in a given order formula_29 for which the rows of the resulting formula_17 matrix are mutually orthogonal.
As can be seen from the matrix above, the first row of the resulting formula_17 matrix contains the elements of the vector formula_10 without transpositions or sign changes. Taking into consideration that the rows of the formula_17 matrix are mutually orthogonal, we get
formula_30
which means that the formula_17 matrix rotates the vector formula_10, from which it is derived, onto the direction of the coordinate axis formula_31.
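The order-4 construction above can be checked numerically. The sketch below builds formula_6 from an arbitrary test vector, applies the sign pattern of formula_25 shown in the example (which corresponds to taking the rows of the Sylvester-Hadamard matrix in the order R = [1, 2, 4, 3]), and verifies both the orthogonality relation formula_22 and the rotation property:

```python
import numpy as np

def transpositions_matrix(x):
    n = len(x)
    return np.array([[x[i ^ j] for j in range(n)] for i in range(n)], dtype=float)

# Sign pattern H(R) of order 4, read off from the example above.
H_R = np.array([[ 1,  1,  1,  1],
                [ 1, -1,  1, -1],
                [ 1, -1, -1,  1],
                [ 1,  1, -1, -1]], dtype=float)

x = np.array([3.0, 1.0, 4.0, 2.0])      # arbitrary test vector
Trs = H_R * transpositions_matrix(x)    # Hadamard (elementwise) product

# Rows are mutually orthogonal: Trs @ Trs.T == ||x||^2 * I
assert np.allclose(Trs @ Trs.T, np.dot(x, x) * np.eye(4))
# Trs rotates x onto the first coordinate axis: Trs @ x == ||x||^2 * e1
assert np.allclose(Trs @ x, np.array([np.dot(x, x), 0.0, 0.0, 0.0]))
print("orthogonality and rotation checks passed")
```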
In [5], example Matlab functions are given that create formula_9 and formula_17 matrices for a vector formula_10 of size "n" = 2, 4, or 8. It remains an open question whether formula_17 matrices of size greater than 8 can be created. | [
{
"math_id": 0,
"text": "n \\times n"
},
{
"math_id": 1,
"text": "n=2^{m}"
},
{
"math_id": 2,
"text": "m \\in N "
},
{
"math_id": 3,
"text": "X=(x_i)_{\\begin{smallmatrix} i={1,n} \\end{smallmatrix}}"
},
{
"math_id": 4,
"text": "Tr_{i,j} = x_{(i-1) \\oplus (j-1)+1}"
},
{
"math_id": 5,
"text": "\\oplus"
},
{
"math_id": 6,
"text": "Tr(X)"
},
{
"math_id": 7,
"text": "X=\\begin{pmatrix}x_1,x_2,x_3,x_4,x_5,x_6,x_7,x_8 \\\\\\end{pmatrix}"
},
{
"math_id": 8,
"text": "Tr(X) = \n\\left[\\begin{array} {cccc|ccccc}\nx_1 & x_2 & x_3 & x_4 & x_5 & x_6 & x_7 & x_8 \\\\\nx_2 & x_1 & x_4 & x_3 & x_6 & x_5 & x_8 & x_7 \\\\\nx_3 & x_4 & x_1 & x_2 & x_7 & x_8 & x_5 & x_6 \\\\\nx_4 & x_3 & x_2 & x_1 & x_8 & x_7 & x_6 & x_5 \\\\\n\\hline\nx_5 & x_6 & x_7 & x_8 & x_1 & x_2 & x_3 & x_4 \\\\\nx_6 & x_5 & x_8 & x_7 & x_2 & x_1 & x_4 & x_3 \\\\\nx_7 & x_8 & x_5 & x_6 & x_3 & x_4 & x_1 & x_2 \\\\\nx_8 & x_7 & x_6 & x_5 & x_4 & x_3 & x_2 & x_1\n\\end{array}\\right]\n"
},
{
"math_id": 9,
"text": "Tr"
},
{
"math_id": 10,
"text": "X"
},
{
"math_id": 11,
"text": "n/2"
},
{
"math_id": 12,
"text": " Tr_{p,q}"
},
{
"math_id": 13,
"text": " Tr_{u,q}"
},
{
"math_id": 14,
"text": "( Tr_{p,q}, Tr_{u,q}, Tr_{p,v}, Tr_{u,v})"
},
{
"math_id": 15,
"text": " Tr_{p,q}=Tr_{u,v}"
},
{
"math_id": 16,
"text": " Tr_{u,q} = Tr_{p,v}"
},
{
"math_id": 17,
"text": "Trs"
},
{
"math_id": 18,
"text": "p,q,u,v \\in [1,n] "
},
{
"math_id": 19,
"text": " \\circ "
},
{
"math_id": 20,
"text": "R=[1, r_2, \\dots, r_n]^T , r_2, \\dots, r_n \\in [2,n]"
},
{
"math_id": 21,
"text": "Trs(X) = Tr(X)\\circ H(R) "
},
{
"math_id": 22,
"text": "Trs.{Trs}^T=\\parallel X\\parallel^2.I_n "
},
{
"math_id": 23,
"text": "\\circ"
},
{
"math_id": 24,
"text": "I_n"
},
{
"math_id": 25,
"text": "H(R)"
},
{
"math_id": 26,
"text": "\\parallel X\\parallel=1"
},
{
"math_id": 27,
"text": "X = \\begin{pmatrix} x_1, x_2, x_3, x_4 \\end{pmatrix}^T"
},
{
"math_id": 28,
"text": "Trs(X) = H(R) \\circ Tr(X) = \n\\begin{pmatrix} \n1 & 1 & 1 & 1 \\\\\n1 &-1 & 1 &-1 \\\\\n1 &-1 &-1 & 1 \\\\\n1 & 1 &-1 &-1 \\\\\n\\end{pmatrix}\\circ\n\\begin{pmatrix} \nx_1 & x_2 & x_3 & x_4 \\\\\nx_2 & x_1 & x_4 & x_3 \\\\\nx_3 & x_4 & x_1 & x_2 \\\\\nx_4 & x_3 & x_2 & x_1 \\\\\n\\end{pmatrix}=\n\\begin{pmatrix} \nx_1 & x_2 & x_3 & x_4 \\\\\nx_2 &-x_1 & x_4 &-x_3 \\\\\nx_3 &-x_4 &-x_1 & x_2 \\\\\nx_4 & x_3 &-x_2 &-x_1 \\\\\n\\end{pmatrix}\n"
},
{
"math_id": 29,
"text": "R"
},
{
"math_id": 30,
"text": "Trs(X).X = \\left\\| X \\right\\|^2 \\begin{bmatrix}1 \\\\ 0 \\\\ 0 \\\\ 0\\end{bmatrix}"
},
{
"math_id": 31,
"text": "x_1"
}
]
| https://en.wikipedia.org/wiki?curid=69520270 |
69520552 | Polarization gradient cooling | Laser cooling technique
Polarization gradient cooling (PG cooling) is a technique in laser cooling of atoms. It was proposed to explain the experimental observation of cooling below the doppler limit. Shortly after the theory was introduced experiments were performed that verified the theoretical predictions. While Doppler cooling allows atoms to be cooled to hundreds of microkelvin, PG cooling allows atoms to be cooled to a few microkelvin or less.
The superposition of two counterpropagating beams of light with orthogonal polarizations creates a gradient where the polarization varies in space. The gradient depends on which type of polarization is used. Orthogonal linear polarizations (the lin⊥lin configuration) result in the polarization varying between linear and circular polarization over half a wavelength. However, if orthogonal circular polarizations (the σ+σ− configuration) are used, the result is a linear polarization that rotates along the axis of propagation. Both configurations can be used for cooling and yield similar results; however, the physical mechanisms involved are very different. For the lin⊥lin case, the polarization gradient causes periodic light shifts in the Zeeman sublevels of the atomic ground state that allow a Sisyphus effect to occur. In the σ+σ− configuration, the rotating polarization creates a motion-induced population imbalance in the Zeeman sublevels of the atomic ground state, resulting in an imbalance in the radiation pressure that opposes the motion of the atom. Both configurations surpass the Doppler limit and are instead limited by the recoil limit. While the limit of PG cooling is lower than that of Doppler cooling, the capture range of PG cooling is lower, and thus an atomic gas must be pre-cooled before PG cooling.
Observation of Cooling Below the Doppler Limit.
When laser cooling of atoms was first proposed in 1975, the only cooling mechanism considered was Doppler cooling. As such the limit on the temperature was predicted to be the Doppler limit:
formula_0
Here kb is the Boltzmann constant, T is the temperature of the atoms, and Γ is the inverse of the excited state's radiative lifetime.
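For a sense of scale, the snippet below evaluates this limit for the cesium D2 line, taking a natural linewidth of Γ/2π ≈ 5.2 MHz (the transition used in the experiments discussed later):

```python
import math

hbar = 1.054_571_8e-34   # reduced Planck constant (J s)
k_B = 1.380_649e-23      # Boltzmann constant (J/K)

gamma = 2 * math.pi * 5.2e6          # Cs D2 natural linewidth (rad/s)
T_doppler = hbar * gamma / (2 * k_B)
print(f"Doppler limit: {T_doppler * 1e6:.0f} microkelvin")  # ~125 uK
```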
Early experiments seemed to be in agreement with this limit. However, in 1988 experiments began to report temperatures below the Doppler limit. These observations would take the theory of PG cooling to explain.
Theory.
There are two different configurations that form polarization gradients: lin⊥lin and σ+σ−. Both configurations provide cooling, however, the type of polarization gradient and the physical mechanism for cooling are different between the two.
The lin⊥lin Configuration.
In the lin⊥lin configuration cooling is achieved via a Sisyphus effect. Consider two counterpropagating electromagnetic waves with equal amplitude and orthogonal linear polarizations formula_1 and formula_2, where k is the wavenumber formula_3. The superposition of formula_4 and formula_5 is given as:
formula_6
Introducing a new pair of coordinates formula_7 and formula_8 the field can be written as:
formula_9
The polarization of the total field changes with z. For example: we see that at formula_10 the field is linearly polarized along formula_11, at formula_12 the field has left circular polarization, at formula_13 the field is linearly polarized along formula_14, at formula_15 the field has right circular polarization, and at formula_16 the field is again linearly polarized along formula_11.
Consider an atom interacting with the field, detuned below the transition between the atomic states formula_18 and formula_19 (formula_20). The variation of the polarization along z results in a variation in the light shifts of the atomic Zeeman sublevels with z. The Clebsch-Gordan coefficient connecting the formula_21 state to the formula_22 state is 3 times larger than that connecting the formula_23 state to the formula_24 state. Thus for formula_25 polarization the light shift is three times larger for the formula_21 state than for the formula_26 state. The situation is reversed for formula_27 polarization, with the light shift being three times larger for the formula_23 state than for the formula_24 state. When the polarization is linear, there is no difference in the light shifts between the two states. Thus the energies of the states will oscillate in z with period formula_17.
As an atom moves along z, it will be optically pumped to the state with the largest negative light shift. However, the optical pumping process takes some finite time formula_28. For field wavenumber k and atomic velocity v such that formula_29, the atom will travel mostly uphill as it moves along z before being pumped back down to the lowest state. In this velocity range, the atom travels more uphill than downhill and gradually loses kinetic energy, lowering its temperature. This is called the Sisyphus effect after the mythological Greek character. Note that this initial condition for velocity requires the atom to be cooled already, for example through Doppler cooling.
The σ+σ− Configuration.
For the case of counterpropagating waves with orthogonal circular polarizations the resulting polarization is linear everywhere, but rotates about formula_30 at an angle formula_31. As a result, there is no Sisyphus effect. The rotating polarization instead leads to motion-induced population imbalances in the Zeeman levels that cause imbalances in radiation pressure leading to a damping of the atomic motion. These population imbalances are only present for states with formula_32 or higher.
Consider two EM waves detuned from an atomic transition formula_33 with equal amplitudes: formula_34 and formula_35. The superposition of these two waves is:
formula_36
As previously stated, the polarization of the total field is linear, but rotated around formula_30 by an angle formula_31 with respect to formula_37.
Consider an atom moving along z with some velocity v. The atom sees the polarization rotating with a frequency of formula_38. In the rotating frame, the polarization is fixed, however, there is an inertial field due to the frame rotating. This inertial term appears in the Hamiltonian as follows.
formula_39
Here we see that the inertial term looks like a magnetic field along formula_30 with an amplitude such that the Larmor precession frequency equals the rotation frequency in the lab frame. For small v, this term in the Hamiltonian can be treated using perturbation theory.
Choosing the polarization in the rotating frame to be fixed along formula_37, the unperturbed atomic eigenstates are the eigenstates of formula_40. The rotating term in the Hamiltonian causes perturbations in the atomic eigenstates such that the Zeeman sublevels become contaminated by each other. For formula_41, the formula_42 state is light shifted more than the formula_43 states. Thus the steady state population of formula_42 is higher than that of the other states, while the populations of the formula_43 states are equal to each other. The populations are therefore balanced, with formula_44. However, when we change basis we see that the populations are not balanced in the z-basis, and there is a non-zero value of formula_45 proportional to the atom's velocity:
formula_46
Where formula_47 is the light shift for the formula_48 state. There is a motion-induced population imbalance in the Zeeman sublevels in the z basis. For red-detuned light, formula_47 is negative, and thus there will be a higher population in the formula_49 state when the atom is moving to the right (positive velocity) and a higher population in the formula_50 state when the atom is moving to the left (negative velocity). From the Clebsch-Gordan coefficients, we see that the formula_49 state has a six times greater probability of absorbing a formula_25 photon moving to the left than a formula_27 photon moving to the right. The opposite is true for the formula_51 state. When the atom moves to the right, it is more likely to absorb a photon moving to the left, and likewise, when the atom moves to the left, it is more likely to absorb a photon moving to the right. Thus there is an unbalanced radiation pressure when the atom moves, which damps the motion of the atom, lowering its velocity and therefore its temperature.
Note the similarity to Doppler cooling in the unbalanced radiation pressures due to the atomic motion. The unbalanced pressure in PG cooling is not due to a Doppler shift but an induced population imbalance. Doppler cooling depends on the parameter formula_52 where formula_53 is the scattering rate, whereas PG cooling depends on formula_54. At low intensity formula_55 and thus PG cooling works at lower atomic velocities (temperatures) than Doppler Cooling.
Limits and Scaling.
Both methods of PG cooling surpass the Doppler limit and instead are limited by the one-photon recoil limit:
formula_56
Where M is the atomic mass.
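Evaluating this for the cesium D2 line (λ ≈ 852 nm, M ≈ 133 u) gives roughly 0.1 μK. Note that the recoil temperature of 0.198 μK quoted in the experiment section below corresponds to the common convention T = ħ²k²/(Mk_B), twice the value of the expression as written here:

```python
import math

hbar = 1.054_571_8e-34   # reduced Planck constant (J s)
k_B = 1.380_649e-23      # Boltzmann constant (J/K)
u = 1.660_539e-27        # atomic mass unit (kg)

lam = 852e-9             # Cs D2 wavelength (m)
M = 133 * u              # cesium atomic mass (kg)
k = 2 * math.pi / lam    # wavenumber (1/m)

T = hbar**2 * k**2 / (2 * M * k_B)
print(f"hbar^2 k^2 / (2 M k_B) = {T * 1e6:.3f} microkelvin")  # ~0.099 uK
```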
For a given detuning formula_57 and Rabi frequency formula_58, dependent on the light intensity, both configurations display a similar scaling at low intensity (formula_59) and large detuning (formula_60):
formula_61
Where formula_62 is a dimensionless constant dependent on the configuration and atomic species. See ref for a full derivation of these results.
Experiment.
PG cooling is typically performed using a 3D optical setup with three pairs of perpendicular laser beams and an atomic ensemble in the center. Each beam is prepared with a polarization orthogonal to that of its counterpropagating partner. The laser frequency is detuned from a selected transition between the ground and excited states of the atom. Since the cooling process relies on repeated transitions between the ground and excited states, care must be taken that the atom does not fall out of these two states. This is done by using a second, "repumping", laser to pump any atoms that fall out back into the ground state of the transition. For example: in cesium cooling experiments, the cooling laser is typically chosen to be detuned from the formula_63 to formula_64 transition, and a repumping laser tuned to the formula_65 to formula_66 transition is also used to prevent the Cs atoms from being pumped into the formula_65 state.
The atoms must be cooled before PG cooling; this can be done using the same setup via Doppler cooling. If the atoms are precooled with Doppler cooling, the laser intensity must be lowered and the detuning increased for PG cooling to be achieved.
The atomic temperature can be measured using the time of flight (ToF) technique. In this technique, the laser beams are suddenly turned off and the atomic ensemble is allowed to expand. After a set time delay t, a probe beam is turned on to image the ensemble and obtain the spatial extent of the ensemble at time t. By imaging the ensemble at several time delays, the rate of expansion is found. By measuring the rate of expansion of the ensemble the velocity distribution is measured and from this, the temperature is inferred.
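A minimal sketch of the temperature extraction: for a thermal cloud in free expansion the width grows as σ²(t) = σ₀² + (k_BT/M)t², so the temperature is read off from a linear fit of σ² against t². The data below are synthetic, generated from an assumed temperature purely for illustration:

```python
import numpy as np

k_B = 1.380_649e-23
M = 133 * 1.660_539e-27          # cesium atomic mass (kg)

# Synthetic ToF data: cloud width sigma (m) at several delays t (s),
# generated from an assumed temperature of 2.5 uK for illustration.
T_true = 2.5e-6
t = np.linspace(1e-3, 20e-3, 8)
sigma0 = 0.5e-3
sigma = np.sqrt(sigma0**2 + (k_B * T_true / M) * t**2)

# Linear fit of sigma^2 vs t^2; the slope is k_B * T / M.
slope, _ = np.polyfit(t**2, sigma**2, 1)
print(f"Fitted temperature: {slope * M / k_B * 1e6:.2f} microkelvin")
```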
An important theoretical result is that, in the regime where PG cooling functions, the temperature depends only on the ratio of formula_67 to formula_68, and the cooling approaches the recoil limit. These predictions were confirmed experimentally in 1990, when W. D. Phillips et al. observed such scaling in their cesium atoms as well as a temperature of 2.5 formula_69K, about 12 times the recoil temperature of 0.198 formula_69K for the D2 line of cesium used in the experiment.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " k_BT= \\frac {\\hbar\\Gamma}{2} "
},
{
"math_id": 1,
"text": " \\vec{E_1} = E_0e^{ikz}\\hat{x} "
},
{
"math_id": 2,
"text": " \\vec{E_2} = E_0e^{-ikz}\\hat{y} "
},
{
"math_id": 3,
"text": " k = \\textstyle\\frac{2\\pi}{\\lambda} "
},
{
"math_id": 4,
"text": " \\vec{E_1} "
},
{
"math_id": 5,
"text": " \\vec{E_2} "
},
{
"math_id": 6,
"text": " \\vec{E}_{tot} = \\frac{E_0}{\\sqrt2}\\left(\\cos(kz)\\frac{\\hat{x}+\\hat{y}}{\\sqrt2} - i\\sin(kz)\\frac{-\\hat{x}+\\hat{y}}{\\sqrt2}\\right) "
},
{
"math_id": 7,
"text": " \\hat{x}'=\\textstyle\\frac{\\hat{x}+\\hat{y}}{\\sqrt2} "
},
{
"math_id": 8,
"text": " \\hat{y}'=\\textstyle\\frac{-\\hat{x}+\\hat{y}}{\\sqrt2} "
},
{
"math_id": 9,
"text": " \\vec{E}_{tot} = \\frac{E_0}{\\sqrt2}\\left(\\cos(kz)\\hat{x}' - i\\sin(kz)\\hat{y}'\\right) "
},
{
"math_id": 10,
"text": " z=0 "
},
{
"math_id": 11,
"text": " \\hat{x}' "
},
{
"math_id": 12,
"text": " z=\\textstyle\\frac{\\lambda}{8} "
},
{
"math_id": 13,
"text": " z=\\textstyle\\frac{\\lambda}{4} "
},
{
"math_id": 14,
"text": " \\hat{y}' "
},
{
"math_id": 15,
"text": " z=\\textstyle\\frac{3\\lambda}{8} "
},
{
"math_id": 16,
"text": " z=\\textstyle\\frac{\\lambda}{2} "
},
{
"math_id": 17,
"text": " \\textstyle\\frac{\\lambda}{2} "
},
{
"math_id": 18,
"text": "F_g = \\textstyle\\frac{1}{2} "
},
{
"math_id": 19,
"text": " F_e = \\textstyle\\frac{3}{2} "
},
{
"math_id": 20,
"text": "\\hbar{}\\omega{}_{field} < E_{eg}"
},
{
"math_id": 21,
"text": " |g,m_F=-\\textstyle\\frac{1}{2}\\rangle "
},
{
"math_id": 22,
"text": " |e,m_F=-\\textstyle\\frac{3}{2}\\rangle "
},
{
"math_id": 23,
"text": " |g,m_F=\\textstyle\\frac{1}{2}\\rangle "
},
{
"math_id": 24,
"text": " |e,m_F=-\\textstyle\\frac{1}{2}\\rangle "
},
{
"math_id": 25,
"text": " \\sigma^- "
},
{
"math_id": 26,
"text": " |e,m_F=\\textstyle\\frac{1}{2}\\rangle "
},
{
"math_id": 27,
"text": " \\sigma^+ "
},
{
"math_id": 28,
"text": " \\tau "
},
{
"math_id": 29,
"text": " kv \\approx \\tau{}^{-1} "
},
{
"math_id": 30,
"text": " \\hat{z} "
},
{
"math_id": 31,
"text": " -kz "
},
{
"math_id": 32,
"text": " F=1 "
},
{
"math_id": 33,
"text": " F_g=1 \\rightarrow F_e=2 "
},
{
"math_id": 34,
"text": " \\vec{E_1} = E_0e^{ikz}\\textstyle\\frac{-\\hat{x}-i\\hat{y}}{\\sqrt{2}} "
},
{
"math_id": 35,
"text": " \\vec{E_2} = E_0e^{-ikz}\\textstyle\\frac{\\hat{x}-i\\hat{y}}{\\sqrt{2}} "
},
{
"math_id": 36,
"text": " \\vec{E_{tot}} = -i\\sqrt{2}E_0(\\sin(kz)\\hat{x}+\\cos(kz)\\hat{y}) "
},
{
"math_id": 37,
"text": " \\hat{y} "
},
{
"math_id": 38,
"text": " kv "
},
{
"math_id": 39,
"text": " \\hat{H}_{rot} = kvF_z "
},
{
"math_id": 40,
"text": " \\hat{F}_y "
},
{
"math_id": 41,
"text": " F_g=1 "
},
{
"math_id": 42,
"text": " |g,m_f=0\\rangle_y "
},
{
"math_id": 43,
"text": " |g,m_f=\\pm{1}\\rangle_y "
},
{
"math_id": 44,
"text": " \\langle\\hat{F}_y\\rangle = 0 "
},
{
"math_id": 45,
"text": " \\langle\\hat{F}_z\\rangle "
},
{
"math_id": 46,
"text": " \\langle\\hat{F}_z\\rangle = \\frac{40\\hbar{}kv}{17\\Delta_0^'} "
},
{
"math_id": 47,
"text": " \\Delta_0^' "
},
{
"math_id": 48,
"text": " m_F=0 "
},
{
"math_id": 49,
"text": " |g,m_f=-1\\rangle "
},
{
"math_id": 50,
"text": " |g,m_f=1\\rangle "
},
{
"math_id": 51,
"text": " |g,m_f=1> "
},
{
"math_id": 52,
"text": " \\textstyle\\frac{kv}{\\Gamma} "
},
{
"math_id": 53,
"text": " \\Gamma "
},
{
"math_id": 54,
"text": " \\textstyle\\frac{kv}{\\Delta_0^'} "
},
{
"math_id": 55,
"text": " \\Delta_0^' \\ll \\Gamma "
},
{
"math_id": 56,
"text": " kT_{recoil} = \\frac{\\hbar{}^2k^2}{2M} "
},
{
"math_id": 57,
"text": " \\delta "
},
{
"math_id": 58,
"text": " \\Omega "
},
{
"math_id": 59,
"text": " \\Omega \\ll |\\delta| "
},
{
"math_id": 60,
"text": " \\delta \\gg \\Gamma "
},
{
"math_id": 61,
"text": " kT = \\alpha{}\\frac{\\hbar{}\\Omega^2}{|\\delta|} "
},
{
"math_id": 62,
"text": " \\alpha "
},
{
"math_id": 63,
"text": " |6^2S_{1/2} F=4 \\rangle "
},
{
"math_id": 64,
"text": " |6^2P_{3/2} F^'=5 \\rangle "
},
{
"math_id": 65,
"text": " |6^2S_{1/2} F=3 \\rangle "
},
{
"math_id": 66,
"text": " |6^2P_{3/2} F^'=4\\rangle "
},
{
"math_id": 67,
"text": " \\Omega^2 "
},
{
"math_id": 68,
"text": " |\\gamma| "
},
{
"math_id": 69,
"text": "\\mu"
}
]
| https://en.wikipedia.org/wiki?curid=69520552 |
69521086 | Dedekind–Kummer theorem | Theorem in algebraic number theory
In algebraic number theory, the Dedekind–Kummer theorem describes how a prime ideal in a Dedekind domain factors over the domain's integral closure.
Statement for number fields.
Let formula_0 be a number field such that formula_1 for formula_2 and let formula_3 be the minimal polynomial for formula_4 over formula_5. For any prime formula_6 not dividing formula_7, write
formula_8
where formula_9 are monic irreducible polynomials in formula_10. Then formula_11 factors into prime ideals as
formula_12 such that formula_13.
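As an illustration, take formula_0 = Q(i) with α = i, so that the minimal polynomial is x² + 1 and the ring of integers is Z[i] (here the index in the theorem is 1, so every prime qualifies). Factoring x² + 1 modulo p then predicts the splitting of (p) in Z[i]; the sketch below does this with SymPy:

```python
from sympy import Poly
from sympy.abc import x

# Factor the minimal polynomial of i over F_p for several primes p.
for p in [2, 5, 7, 13]:
    _, factors = Poly(x**2 + 1, x, modulus=p).factor_list()
    print(p, [(g.as_expr(), e) for g, e in factors])

# p = 2 : (x + 1)^2       -> (2) = p^2       (ramified)
# p = 5 : (x - 2)(x + 2)  -> (5) = p1 * p2   (split)
# p = 7 : irreducible     -> (7) stays prime (inert)
# p = 13: (x - 5)(x + 5)  -> (13) = p1 * p2  (split)
```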
Statement for Dedekind Domains.
The Dedekind–Kummer theorem holds more generally than in the situation of number fields: Let formula_14 be a Dedekind domain contained in its quotient field formula_15, formula_16 a finite, separable field extension with formula_17 for a suitable generator formula_18, and formula_19 the integral closure of formula_14. (The above situation is just a special case, as one can choose formula_20.)
Let formula_21 be a prime ideal coprime to the conductor formula_22 (i.e. their sum is formula_19), and consider the minimal polynomial formula_23 of formula_18. The polynomial formula_24 has the decomposition
formula_25
with pairwise distinct irreducible polynomials formula_26.
The factorization of formula_27 into prime ideals over formula_19 is then given by formula_28 where formula_29 and the formula_30 are the polynomials formula_26 lifted to formula_31. | [
{
"math_id": 0,
"text": "K\n"
},
{
"math_id": 1,
"text": "K = \\Q(\\alpha)"
},
{
"math_id": 2,
"text": "\\alpha \\in \\mathcal O_K"
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "\\alpha"
},
{
"math_id": 5,
"text": "\\Z[x]"
},
{
"math_id": 6,
"text": "p"
},
{
"math_id": 7,
"text": "[\\mathcal O_K : \\Z[\\alpha]]"
},
{
"math_id": 8,
"text": "f(x) \\equiv \\pi_1 (x)^{e_1} \\cdots \\pi_g(x)^{e_g} \\mod p"
},
{
"math_id": 9,
"text": "\\pi_i (x)"
},
{
"math_id": 10,
"text": "\\mathbb F_p[x]"
},
{
"math_id": 11,
"text": "(p) = p \\mathcal O_K"
},
{
"math_id": 12,
"text": "(p) = \\mathfrak p_1^{e_1} \\cdots \\mathfrak p_g^{e_g}"
},
{
"math_id": 13,
"text": "N(\\mathfrak p_i) = p^{\\deg \\pi_i}"
},
{
"math_id": 14,
"text": "\\mathcal o"
},
{
"math_id": 15,
"text": "K"
},
{
"math_id": 16,
"text": "L/K"
},
{
"math_id": 17,
"text": "L=K[\\theta]"
},
{
"math_id": 18,
"text": "\\theta"
},
{
"math_id": 19,
"text": "\\mathcal O"
},
{
"math_id": 20,
"text": "\\mathcal o = \\Z, K=\\Q, \\mathcal O = \\mathcal O_L"
},
{
"math_id": 21,
"text": "(0)\\neq\\mathfrak p\\subseteq\\mathcal o"
},
{
"math_id": 22,
"text": "\\mathfrak F=\\{a\\in \\mathcal O\\mid a\\mathcal O\\subseteq\\mathcal o[\\theta]\\}"
},
{
"math_id": 23,
"text": "f\\in \\mathcal o[x]"
},
{
"math_id": 24,
"text": "\\overline f\\in(\\mathcal o / \\mathfrak p)[x]"
},
{
"math_id": 25,
"text": "\\overline f=\\overline{f_1}^{e_1}\\cdots \\overline{f_r}^{e_r}"
},
{
"math_id": 26,
"text": "\\overline{f_i}"
},
{
"math_id": 27,
"text": "\\mathfrak p"
},
{
"math_id": 28,
"text": "\\mathfrak p=\\mathfrak P_1^{e_1}\\cdots \\mathfrak P_r^{e_r}"
},
{
"math_id": 29,
"text": "\\mathfrak P_i=\\mathfrak p\\mathcal O+(f_i(\\theta)\\mathcal O)"
},
{
"math_id": 30,
"text": "f_i"
},
{
"math_id": 31,
"text": "\\mathcal o[x]"
}
]
| https://en.wikipedia.org/wiki?curid=69521086 |
695215 | Horizon problem | Cosmological fine-tuning problem
The horizon problem (also known as the homogeneity problem) is a cosmological fine-tuning problem within the Big Bang model of the universe. It arises due to the difficulty in explaining the observed homogeneity of causally disconnected regions of space in the absence of a mechanism that sets the same initial conditions everywhere. It was first pointed out by Wolfgang Rindler in 1956.
The most commonly accepted solution is cosmic inflation; alternative solutions propose a cyclic universe or a variable speed of light.
Background.
Astronomical distances and particle horizons.
The distances of observable objects in the night sky correspond to times in the past. We use the light-year (the distance light can travel in the time of one Earth year) to describe these cosmological distances. A galaxy measured at ten billion light-years appears to us as it was ten billion years ago, because the light has taken that long to travel to the observer. If one were to look at a galaxy ten billion light-years away in one direction and another in the opposite direction, the total distance between them is twenty billion light-years. This means that the light from the first has not yet reached the second because the universe is only about 13.8 billion years old. In a more general sense, there are portions of the universe that are visible to us, but invisible to each other, outside each other's respective particle horizons.
Causal information propagation.
In accepted relativistic physical theories, no information can travel faster than the speed of light. In this context, "information" means "any sort of physical interaction". For instance, heat will naturally flow from a hotter area to a cooler one, and in physics terms, this is one example of information exchange. Given the example above, the two galaxies in question cannot have shared any sort of information; they are not in causal contact. In the absence of common initial conditions, one would expect, then, that their physical properties would be different, and more generally, that the universe as a whole would have varying properties in causally disconnected regions.
Horizon problem.
Contrary to this expectation, the observations of the cosmic microwave background (CMB) and galaxy surveys show that the observable universe is nearly isotropic, which, through the Copernican principle, also implies homogeneity. CMB sky surveys show that the temperatures of the CMB are uniform to a level of formula_0, where formula_1 is the difference between the observed temperature in a region of the sky and the average temperature of the sky formula_2. This uniformity implies that the entire sky, and thus the entire observable universe, must have been causally connected long enough for the universe to come into thermal equilibrium.
According to the Big Bang model, as the density of the expanding universe dropped, it eventually reached a temperature where photons fell out of thermal equilibrium with matter; they decoupled from the electron-proton plasma and began free-streaming across the universe. This moment in time is referred to as the epoch of Recombination, when electrons and protons became bound to form electrically neutral hydrogen; without free electrons to scatter the photons, the photons began free-streaming. This epoch is observed through the CMB. Since we observe the CMB as a background to objects at a smaller redshift, we describe this epoch as the transition of the universe from opaque to transparent. The CMB physically describes the ‘surface of last scattering’ as it appears to us as a surface, or a background, as shown in the figure below.
Note we use conformal time in the following diagrams. Conformal time describes the amount of time it would take a photon to travel from the location of the observer to the farthest observable distance (if the universe stopped expanding right now).
The decoupling, or the last scattering, is thought to have occurred about 300,000 years after the Big Bang, or at a redshift of about formula_3. We can determine both the approximate angular diameter of the universe and the physical size of the particle horizon that had existed at this time.
The angular diameter distance, in terms of redshift formula_4, is described by formula_5. If we assume a flat cosmology then,
formula_6
The epoch of recombination occurred during a matter dominated era of the universe, so we can approximate formula_7 as formula_8. Putting these together, we see that the angular diameter distance, or the size of the observable universe for a redshift formula_3 is
formula_9
Since formula_10, we can approximate the above equation as
formula_11
Substituting this into our definition of angular diameter distance, we obtain
formula_12
From this formula, we obtain the angular diameter distance of the cosmic microwave background as formula_13.
The particle horizon describes the maximum distance light particles could have traveled to the observer given the age of the universe. We can determine the comoving distance for the age of the universe at the time of recombination using formula_14 from earlier,
formula_15
To get the physical size of the particle horizon formula_16,
formula_17
formula_18
We would expect any region of the CMB within 2 degrees of angular separation to have been in causal contact, but at any scale larger than 2° there should have been no exchange of information.
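The ≈2° figure can be checked directly: in the matter-dominated approximation used above, the prefactor 2/(√Ω_m H₀) appears in both the particle horizon and the angular diameter distance, so it cancels in the ratio, leaving the angle (1 + z)^(−1/2):

```python
import math

z_rec = 1100  # redshift of recombination

# theta = D(z) / d_A(z) = (1 + z)**-0.5 in the matter-dominated approximation;
# the common prefactor 2 / (sqrt(Omega_m) * H0) cancels.
theta = (1 + z_rec) ** -0.5
print(f"{theta:.3f} rad = {math.degrees(theta):.1f} degrees")  # ~0.030 rad ~ 2 deg
```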
CMB regions that are separated by more than 2° lie outside one another's particle horizons and are causally disconnected. The horizon problem describes the fact that we see isotropy in the CMB temperature across the entire sky, despite the entire sky not being in causal contact to establish thermal equilibrium. Refer to the timespace diagram to the right for a visualization of this problem.
If the universe started with even slightly different temperatures in different places, the CMB should not be isotropic unless there is a mechanism that evens out the temperature by the time of decoupling. In reality, the CMB has the same temperature in the entire sky, 2.726 ± 0.001 K.
Inflationary model.
The theory of cosmic inflation has attempted to address the problem by positing a 10−32-second period of exponential expansion in the first second of the history of the universe due to a scalar field interaction. According to the inflationary model, the universe increased in size by a factor of more than 1022, from a small and causally connected region in near equilibrium. Inflation then expanded the universe rapidly, isolating nearby regions of spacetime by growing them beyond the limits of causal contact, effectively "locking in" the uniformity at large distances. Essentially, the inflationary model suggests that the universe was entirely in causal contact in the very early universe. Inflation then expands this universe by approximately 60 e-foldings (the scale factor increases by factor formula_19). We observe the CMB after inflation has occurred at a very large scale. It maintained thermal equilibrium to this large size because of the rapid expansion from inflation.
One consequence of cosmic inflation is that the anisotropies in the Big Bang due to quantum fluctuations are reduced but not eliminated. Differences in the temperature of the cosmic background are smoothed by cosmic inflation, but they still exist. The theory predicts a spectrum for the anisotropies in the microwave background which is mostly consistent with observations from WMAP and COBE.
However, gravity alone may be sufficient to explain this homogeneity.
Variable-speed-of-light theories.
Cosmological models employing a variable speed of light have been proposed to resolve the horizon problem and provide an alternative to cosmic inflation. In the VSL models, the fundamental constant "c", denoting the speed of light in vacuum, is greater in the early universe than its present value, effectively increasing the particle horizon at the time of decoupling sufficiently to account for the observed isotropy of the CMB.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Delta T/T \\approx 10^{-5},"
},
{
"math_id": 1,
"text": "\\Delta T"
},
{
"math_id": 2,
"text": "T"
},
{
"math_id": 3,
"text": "z_{rec} \\approx 1100"
},
{
"math_id": 4,
"text": "z"
},
{
"math_id": 5,
"text": "d_{A}(z)=r(z) / (1+z)"
},
{
"math_id": 6,
"text": "r(z) = \\int_{t_{em}}^{t_0} \\frac{dt}{a(t)}\n = \\int_{a_{em}}^{1} \\frac{da}{a^2 H(a)} = \\int_{0}^{z} \\frac{dz}{H(z)}."
},
{
"math_id": 7,
"text": "H(z)"
},
{
"math_id": 8,
"text": "H^2(z) \\approx \\Omega_m H_0^2 (1+z)^3"
},
{
"math_id": 9,
"text": "r(z)=\\int_{0}^{z} \\frac{dz}{H(z)} = \\frac{1}{\\sqrt{\\Omega_m} H_0}\n\\int_{0}^{z} \\frac{dz}{(1+z)^{3/2}} =\n\\frac{2}{\\sqrt{\\Omega_m} H_0}\\left(1-\\frac{1}{\\sqrt{1+z}}\\right)."
},
{
"math_id": 10,
"text": "z \\gg 1"
},
{
"math_id": 11,
"text": "r(z) \\approx \\frac{2}{\\sqrt{\\Omega_m}H_0}."
},
{
"math_id": 12,
"text": "d_A(z) \\approx \\frac{2}{\\sqrt{\\Omega_m}H_0}\\frac{1}{1+z}."
},
{
"math_id": 13,
"text": "d_A(1100) \\approx 14\\ \\mathrm{Mpc}"
},
{
"math_id": 14,
"text": "r(z)"
},
{
"math_id": 15,
"text": "d_\\text{hor,rec}(z)=\\int_{0}^{t(z)} \\frac{dt}{a(t)}= \\int_{z}^{\\infin} \\frac{dz}{H(z)}\n\\approx \\frac{2}{\\sqrt{\\Omega_m}H_0}\\left [ \\frac{1}{\\sqrt{1+z}} \\right ]_z^\\infin \n\\approx \\frac{2}{\\sqrt{\\Omega_m}H_0}\\frac{1}{\\sqrt{1+z}}\n"
},
{
"math_id": 16,
"text": "D"
},
{
"math_id": 17,
"text": "D(z)=a(z)d_\\text{hor,rec}= \\frac{d_\\text{hor,rec}(z)}{1+z}\n"
},
{
"math_id": 18,
"text": "D(1100) \\approx 0.03~\\text{radians} \\approx 2^\\circ\n"
},
{
"math_id": 19,
"text": "e^{60}"
}
]
| https://en.wikipedia.org/wiki?curid=695215 |
69522235 | 1 Samuel 12 | First Book of Samuel chapter
1 Samuel 12 is the twelfth chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains Samuel's address to the people of Israel after Saul's coronation. This is within a section comprising 1 Samuel 7–15 which records the rise of the monarchy in Israel and the account of the first years of King Saul.
Text.
This chapter was originally written in the Hebrew language. It is divided into 25 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 7–8, 10–19 and 4Q52 (4QSamb; 250 BCE) with extant verses 3, 5–6.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century).
Analysis.
This chapter closes the period of the Judges in Israel, concluding the cycle of alternating pro- and antimonarchical strands. It begins with an antimonarchical stance of Samuel, which repeats statements in 8:1–22, but with a new element: a contrast between the old prophetic regime and the new royal one. Although the request for a king was regarded as a wicked act (verse 17), there is a way for people and king to be good before YHWH, that is, by showing faithfulness. Covenantal language and a historical summary were common in covenant ceremonies, as also notable in Joshua 24, consisting of 'introduction, antecedent history, transition to the present, requirements, blessings, and curses'. Samuel was confirmed to be true to the prophetic office and to have acted according to God's will, so he would continue to serve the people as intercessor and instructor (verse 23), exhorting them to obey God so they would not perish for their sins.
Samuel's clean record of service (12:1–5).
After stating that the kingship was a 'concession in response to popular demand' (verse 1), Samuel admitted that this was a departure from the kind of leadership exercised by himself, and posed a number of questions with the aim of justifying his rule thus far. The verb 'take' became a key to comparing his just leadership, as the prophet had 'taken' nothing from the people, with the future 'ways of the king' (cf. 1 Samuel 8:11–18), where a number of things will be 'taken' from the people by the king; the people had therefore taken a step backwards in requesting a king.
"Now Samuel said to all Israel: "Indeed I have heeded your voice in all that you said to me, and have made a king over you.""
"And now, behold, the king walks before you, and I am old and gray; and behold, my sons are with you. I have walked before you from my youth until this day."
Recitation of salvation history (12:6–15).
After confirming his spotless record of service with the people, Samuel recited how YHWH had saved Israel in the past, again to show that asking for a king was an unnecessary step, because God 'in all his saving deeds' had always provided saviors or judges who successfully delivered the people from their enemies, from the time of Moses and Aaron, who liberated the people out of Egypt (verses 6, 8), to the period of the judges, with the examples of the victories over three different oppressors: Sisera (Judges 4–5), the Philistines (Judges 13–16), and the Moabites (Judges 3), within a skeletal pattern of 'apostasy-oppression-repentance-deliverance', naming several saviors: Jerubaal (Gideon), Barak, Jephthah, and Samson. Verses 14–15 state the blessing and curse of the covenant: all will be well if the people remain faithful, but if not, they will be wiped away (cf. verse 25).
" And now behold the king whom you have chosen, for whom you have asked; behold, the Lord has set a king over you."
Sign of thunderstorm and closing words (12:16–25).
Even in his old age, Samuel still possessed supernatural powers, such that he could call upon God to bring thunder and rain that day (verses 17–18), a rare occurrence during the wheat harvest which, if severe, would destroy the ripe crops. This evoked awe and repentance from the people, setting up the closing words from Samuel that he would continue to pray for the people and instruct them in "the way that is good and right", definitely not a sign of retirement.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
Commentaries on Samuel.
<templatestyles src="Refbegin/styles.css" />
General.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=69522235 |
695230 | Resistance distance | Graph metric of electrical resistance between nodes
In graph theory, the resistance distance between two vertices of a simple, connected graph, G, is equal to the resistance between two equivalent points on an electrical network, constructed so as to correspond to G, with each edge being replaced by a resistance of one ohm. It is a metric on graphs.
Definition.
On a graph G, the resistance distance Ω"i","j" between two vertices vi and vj is
formula_0
where formula_1
with + denoting the Moore–Penrose inverse, L the Laplacian matrix of G, |V| the number of vertices in G, and Φ the |V| × |V| matrix containing all 1s.
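The definition translates directly into a few lines of linear algebra. The following is a minimal numerical sketch (the 4-cycle example graph and the NumPy usage are illustrative assumptions, not taken from the sources): it forms formula_1 with a pseudo-inverse and reads off resistance distances, which can be hand-checked by series–parallel reduction of the corresponding resistor network.

```python
# Minimal sketch: resistance distance from the Moore-Penrose inverse.
# The example graph (a 4-cycle) is an assumption chosen for easy hand-checking.
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)       # adjacency matrix of C4
L = np.diag(A.sum(axis=1)) - A                  # graph Laplacian
n = L.shape[0]
Gamma = np.linalg.pinv(L + np.ones((n, n)) / n)  # (L + Phi/|V|)^+

def resistance(i, j):
    return Gamma[i, i] + Gamma[j, j] - Gamma[i, j] - Gamma[j, i]

print(resistance(0, 1))  # adjacent vertices: 1 ohm parallel to 3 ohms = 0.75
print(resistance(0, 2))  # opposite vertices: 2 ohms parallel to 2 ohms = 1.0
```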
Properties of resistance distance.
If "i" = "j" then Ω"i","j" = 0. For an undirected graph
formula_2
General sum rule.
For any N-vertex simple connected graph "G" = ("V", "E") and arbitrary "N"×"N" matrix M:
formula_3
From this generalized sum rule a number of relationships can be derived depending on the choice of M. Two of note are:
formula_4
where the λk are the non-zero eigenvalues of the Laplacian matrix. This unordered sum
formula_5
is called the Kirchhoff index of the graph.
Relationship to the number of spanning trees of a graph.
For a simple connected graph "G" = ("V", "E"), the resistance distance between two vertices may be expressed as a function of the set of spanning trees, T, of G as follows:
formula_6
where T' is the set of spanning trees for the graph "G' "= ("V", "E" + "e""i","j"). In other words, for an edge formula_7, the resistance distance between a pair of nodes formula_8 and formula_9 is the probability that the edge formula_10 is in a random spanning tree of formula_11.
Relationship to random walks.
The resistance distance between vertices formula_12 and formula_14 is proportional to the commute time formula_13 of a random walk between formula_12 and formula_14. The commute time is the expected number of steps in a random walk that starts at formula_12, visits formula_14, and returns to formula_12. For a graph with formula_15 edges, the resistance distance and commute time are related as formula_16.
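This relation can be checked by direct simulation. Below is a hedged Monte-Carlo sketch on the same 4-cycle used above (an assumed example): with formula_15 = 4 edges and Ω0,1 = 0.75, the predicted commute time is 2 · 4 · 0.75 = 6 steps.

```python
# Monte-Carlo estimate of the commute time C_{u,v} on the 4-cycle.
import random

adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}  # cycle graph C4
rng = random.Random(0)

def commute_time(u, v, trials=20000):
    total = 0
    for _ in range(trials):
        node, steps, seen_v = u, 0, False
        while True:
            node = rng.choice(adj[node])   # one step of the random walk
            steps += 1
            if node == v:
                seen_v = True
            if seen_v and node == u:       # back at u after visiting v
                break
        total += steps
    return total / trials

print(commute_time(0, 1))  # ~6.0, matching 2 m * Omega_{0,1} = 2 * 4 * 0.75
```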
As a squared Euclidean distance.
Since the Laplacian L is symmetric and positive semi-definite, so is
formula_17
thus its pseudo-inverse Γ is also symmetric and positive semi-definite. Thus, there is a K such that formula_18 and we can write:
formula_19
showing that the square root of the resistance distance corresponds to the Euclidean distance in the space spanned by K.
Connection with Fibonacci numbers.
A fan graph is a graph on "n" + 1 vertices where there is an edge between vertex i and "n" + 1 for all "i" = 1, 2, 3, …, "n", and there is an edge between vertex i and "i" + 1 for all i = 1, 2, 3, …, "n" – 1.
The resistance distance between vertex "n" + 1 and vertex "i" ∈ {1, 2, 3, …, "n"} is
formula_20
where Fj is the j-th Fibonacci number, for "j" ≥ 0.
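The closed form can be verified numerically by building the fan graph and computing resistance distances via the pseudo-inverse, as in the sketch above. The construction below is an illustrative assumption (vertex "n" + 1 is stored at index "n"):

```python
# Check the Fibonacci formula on the fan graph with n = 6.
import numpy as np

def fib(j):                          # F_0 = 0, F_1 = 1, F_2 = 1, ...
    a, b = 0, 1
    for _ in range(j):
        a, b = b, a + b
    return a

n = 6
A = np.zeros((n + 1, n + 1))
for i in range(n):
    A[i, n] = A[n, i] = 1            # spokes: vertex i -- hub (vertex n+1)
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1    # rim: vertex i -- vertex i+1
L = np.diag(A.sum(axis=1)) - A
Gamma = np.linalg.pinv(L + np.ones_like(L) / (n + 1))

for i in range(1, n + 1):
    omega = Gamma[i - 1, i - 1] + Gamma[n, n] - 2 * Gamma[i - 1, n]
    predicted = fib(2 * (n - i) + 1) * fib(2 * i - 1) / fib(2 * n)
    print(i, round(omega, 9), round(predicted, 9))  # the two columns agree
```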
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\Omega_{i,j}:=\\Gamma_{i,i}+\\Gamma_{j,j}-\\Gamma_{i,j}-\\Gamma_{j,i},\n"
},
{
"math_id": 1,
"text": "\\Gamma = \\left(L + \\frac{1}{|V|}\\Phi\\right)^+,"
},
{
"math_id": 2,
"text": "\\Omega_{i,j}=\\Omega_{j,i}=\\Gamma_{i,i}+\\Gamma_{j,j}-2\\Gamma_{i,j}"
},
{
"math_id": 3,
"text": "\\sum_{i,j \\in V}(LML)_{i,j}\\Omega_{i,j} = -2\\operatorname{tr}(ML)"
},
{
"math_id": 4,
"text": "\\begin{align}\n \\sum_{(i,j) \\in E}\\Omega_{i,j} &= N - 1 \\\\\n \\sum_{i<j \\in V}\\Omega_{i,j} &= N\\sum_{k=1}^{N-1} \\lambda_k^{-1}\n\\end{align}"
},
{
"math_id": 5,
"text": "\\sum_{i<j} \\Omega_{i,j}"
},
{
"math_id": 6,
"text": "\n\\Omega_{i,j}=\\begin{cases}\n\\frac{\\left | \\{t:t \\in T,\\, e_{i,j} \\in t\\} \\right \\vert}{\\left | T \\right \\vert}, & (i,j) \\in E\\\\ \\frac{\\left | T'-T \\right \\vert}{\\left | T \\right \\vert}, &(i,j) \\not \\in E \n\\end{cases}\n"
},
{
"math_id": 7,
"text": "(i,j)\\in E"
},
{
"math_id": 8,
"text": "i"
},
{
"math_id": 9,
"text": "j"
},
{
"math_id": 10,
"text": "(i,j)"
},
{
"math_id": 11,
"text": "G"
},
{
"math_id": 12,
"text": "u"
},
{
"math_id": 13,
"text": "C_{u,v}"
},
{
"math_id": 14,
"text": "v"
},
{
"math_id": 15,
"text": "m"
},
{
"math_id": 16,
"text": "C_{u,v}=2m\\Omega_{u,v}"
},
{
"math_id": 17,
"text": "\\left(L+\\frac{1}{|V|}\\Phi\\right),"
},
{
"math_id": 18,
"text": "\\Gamma = KK^\\textsf{T}"
},
{
"math_id": 19,
"text": "\\Omega_{i,j} = \\Gamma_{i,i} + \\Gamma_{j,j} - \\Gamma_{i,j} - \\Gamma_{j,i} = K_iK_i^\\textsf{T} + K_j K_j^\\textsf{T} - K_i K_j^\\textsf{T} - K_j K_i^\\textsf{T} = \\left(K_i - K_j\\right)^2"
},
{
"math_id": 20,
"text": "\\frac{ F_{2(n-i)+1} F_{2i-1} }{ F_{2n} }"
}
]
| https://en.wikipedia.org/wiki?curid=695230 |
695241 | Scale invariance | Features that do not change if length or energy scales are multiplied by a common factor
In physics, mathematics and statistics, scale invariance is a feature of objects or laws that do not change if scales of length, energy, or other variables, are multiplied by a common factor, and thus represent a universality.
The technical term for this transformation is a dilatation (also known as dilation). Dilatations can form part of a larger conformal symmetry.
Scale-invariant curves and self-similarity.
In mathematics, one can consider the scaling properties of a function or curve "f" ("x") under rescalings of the variable x. That is, one is interested in the shape of "f" ("λx") for some scale factor λ, which can be taken to be a length or size rescaling. The requirement for "f" ("x") to be invariant under all rescalings is usually taken to be
formula_0
for some choice of exponent Δ, and for all dilations λ. This is equivalent to f being a homogeneous function of degree Δ.
Examples of scale-invariant functions are the monomials formula_1, for which Δ = "n", in that clearly
formula_2
An example of a scale-invariant curve is the logarithmic spiral, a kind of curve that often appears in nature. In polar coordinates ("r", "θ"), the spiral can be written as
formula_3
Allowing for rotations of the curve, it is invariant under all rescalings λ; that is, "θ"("λr") is identical to a rotated version of "θ"("r").
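This rotational form of scale invariance is easy to check numerically. The sketch below (with assumed parameter values "a" = 1, "b" = 0.2) shows that rescaling "r" by λ shifts "θ" by the constant ln(λ)/"b", i.e. a pure rotation:

```python
import numpy as np

a, b = 1.0, 0.2
theta = lambda r: np.log(r / a) / b     # logarithmic spiral in polar form

r = np.array([1.0, 2.0, 5.0, 10.0])
lam = 3.0
print(theta(lam * r) - theta(r))        # the same shift ln(3)/0.2 for every r
```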
Projective geometry.
The idea of scale invariance of a monomial generalizes in higher dimensions to the idea of a homogeneous polynomial, and more generally to a homogeneous function. Homogeneous functions are the natural denizens of projective space, and homogeneous polynomials are studied as projective varieties in projective geometry. Projective geometry is a particularly rich field of mathematics; in its most abstract forms, the geometry of schemes, it has connections to various topics in string theory.
Fractals.
It is sometimes said that fractals are scale-invariant, although more precisely, one should say that they are self-similar. A fractal is equal to itself typically for only a discrete set of values λ, and even then a translation and rotation may have to be applied to match the fractal up to itself.
Thus, for example, the Koch curve scales with Δ = 1, but the scaling holds only for values of "λ" = (1/3)"n" for integer n. In addition, the Koch curve scales not only at the origin, but, in a certain sense, "everywhere": miniature copies of itself can be found all along the curve.
Some fractals may have multiple scaling factors at play at once; such scaling is studied with multi-fractal analysis.
Periodic external and internal rays are invariant curves.
Scale invariance in stochastic processes.
If "P"("f" ) is the average, expected power at frequency f, then noise scales as
formula_4
with Δ = 0 for white noise, Δ = −1 for pink noise, and Δ = −2 for Brownian noise (and more generally, Brownian motion).
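As a hedged numerical illustration, pink noise can be synthesized by shaping the spectrum of white noise, after which the fitted log–log slope of the power spectrum recovers Δ ≈ −1. The synthesis route and parameters below are assumptions chosen for demonstration, not a unique construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2**16
white = rng.standard_normal(n)

f = np.fft.rfftfreq(n, d=1.0)
spec = np.fft.rfft(white)
spec[1:] /= np.sqrt(f[1:])              # amplitude ~ f^(-1/2) => power ~ 1/f
pink = np.fft.irfft(spec, n)

power = np.abs(np.fft.rfft(pink))**2
mask = (f > 1e-3) & (f < 1e-1)
slope = np.polyfit(np.log(f[mask]), np.log(power[mask]), 1)[0]
print(slope)                            # close to -1, i.e. Delta = -1
```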
More precisely, scaling in stochastic systems concerns itself with the likelihood of choosing a particular configuration out of the set of all possible random configurations. This likelihood is given by the probability distribution.
Examples of scale-invariant distributions are the Pareto distribution and the Zipfian distribution.
Scale-invariant Tweedie distributions.
Tweedie distributions are a special case of exponential dispersion models, a class of statistical models used to describe error distributions for the generalized linear model and characterized by closure under additive and reproductive convolution as well as under scale transformation. These include a number of common distributions: the normal distribution, Poisson distribution and gamma distribution, as well as more unusual distributions like the compound Poisson-gamma distribution, positive stable distributions, and extreme stable distributions.
Consequent to their inherent scale invariance Tweedie random variables "Y" demonstrate a variance var("Y") to mean E("Y") power law:
formula_5,
where "a" and "p" are positive constants. This variance to mean power law is known in the physics literature as fluctuation scaling, and in the ecology literature as Taylor's law.
Random sequences, governed by the Tweedie distributions and evaluated by the method of expanding bins, exhibit a biconditional relationship between the variance to mean power law and power law autocorrelations. The Wiener–Khinchin theorem further implies that any sequence exhibiting a variance to mean power law under these conditions will also manifest "1/f" noise.
The Tweedie convergence theorem provides a hypothetical explanation for the wide manifestation of fluctuation scaling and "1/f" noise. It requires, in essence, that any exponential dispersion model that asymptotically manifests a variance to mean power law must express a variance function that comes within the domain of attraction of a Tweedie model. Almost all distribution functions with finite cumulant generating functions qualify as exponential dispersion models and most exponential dispersion models manifest variance functions of this form. Hence many probability distributions have variance functions that express this asymptotic behavior, and the Tweedie distributions become foci of convergence for a wide range of data types.
Much as the central limit theorem requires certain kinds of random variables to have as a focus of convergence the Gaussian distribution and express white noise, the Tweedie convergence theorem requires certain non-Gaussian random variables to express "1/f" noise and fluctuation scaling.
Cosmology.
In physical cosmology, the power spectrum of the spatial distribution of the cosmic microwave background is near to being a scale-invariant function. Although in mathematics this means that the spectrum is a power-law, in cosmology the term "scale-invariant" indicates that the amplitude, "P"("k"), of primordial fluctuations as a function of wave number, k, is approximately constant, i.e. a flat spectrum. This pattern is consistent with the proposal of cosmic inflation.
Scale invariance in classical field theory.
Classical field theory is generically described by a field, or set of fields, "φ", that depend on coordinates, "x". Valid field configurations are then determined by solving differential equations for "φ", and these equations are known as field equations.
For a theory to be scale-invariant, its field equations should be invariant under a rescaling of the coordinates, combined with some specified rescaling of the fields,
formula_6
formula_7
The parameter Δ is known as the scaling dimension of the field, and its value depends on the theory under consideration. Scale invariance will typically hold provided that no fixed length scale appears in the theory. Conversely, the presence of a fixed length scale indicates that a theory is not scale-invariant.
A consequence of scale invariance is that given a solution of a scale-invariant field equation, we can automatically find other solutions by rescaling both the coordinates and the fields appropriately. In technical terms, given a solution, "φ"("x"), one always has other solutions of the form
formula_8
Scale invariance of field configurations.
For a particular field configuration, "φ"("x"), to be scale-invariant, we require that
formula_9
where Δ is, again, the scaling dimension of the field.
We note that this condition is rather restrictive. In general, solutions even of scale-invariant field equations will not be scale-invariant, and in such cases the symmetry is said to be spontaneously broken.
Classical electromagnetism.
An example of a scale-invariant classical field theory is electromagnetism with no charges or currents. The fields are the electric and magnetic fields, E(x,"t") and B(x,"t"), while their field equations are Maxwell's equations.
With no charges or currents, these field equations take the form of wave equations
formula_10
where "c" is the speed of light.
These field equations are invariant under the transformation
formula_11
Moreover, given solutions of Maxwell's equations, E(x, "t") and B(x, "t"), it holds that
E("λx, "λt") and B("λx, "λt") are also solutions.
Massless scalar field theory.
Another example of a scale-invariant classical field theory is the massless scalar field (note that the name scalar is unrelated to scale invariance). The scalar field, "φ"(x, "t") is a function of a set of spatial variables, x, and a time variable, t.
Consider first the linear theory. Like the electromagnetic field equations above, the equation of motion for this theory is also a wave equation,
formula_12
and is invariant under the transformation
formula_13
formula_14
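This invariance can be confirmed symbolically: a travelling-wave solution "f"("x" − "ct") remains a solution after the rescaling. The following SymPy check is a minimal sketch (the restriction to one spatial dimension is an assumption made for brevity):

```python
import sympy as sp

x, t, lam, c = sp.symbols('x t lambda c', positive=True)
f = sp.Function('f')

u = f(lam * x - c * lam * t)                 # rescaled travelling wave
residual = sp.diff(u, t, 2) / c**2 - sp.diff(u, x, 2)
print(sp.simplify(residual))                 # 0: still solves the wave equation
```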
The name massless refers to the absence of a term formula_15 in the field equation. Such a term is often referred to as a 'mass' term, and would break the invariance under the above transformation. In relativistic field theories, a mass scale m is physically equivalent to a fixed length scale through
formula_16
and so it should not be surprising that massive scalar field theory is "not" scale-invariant.
"φ"4 theory.
The field equations in the examples above are all linear in the fields, which has meant that the scaling dimension, Δ, has not been so important. However, one usually requires that the scalar field action is dimensionless, and this fixes the scaling dimension of φ. In particular,
formula_17
where D is the combined number of spatial and time dimensions.
Given this scaling dimension for φ, there are certain nonlinear modifications of massless scalar field theory which are also scale-invariant. One example is massless φ4 theory for D = 4. The field equation is
formula_18
When D = 4 (e.g. three spatial dimensions and one time dimension), the scalar field scaling dimension is Δ = 1. The field equation is then invariant under the transformation
formula_13
formula_19
formula_20
The key point is that the parameter g must be dimensionless, otherwise one introduces a fixed length scale into the theory: For φ4 theory, this is only the case in D = 4.
Note that under these transformations the argument of the function φ is unchanged.
Scale invariance in quantum field theory.
The scale-dependence of a quantum field theory (QFT) is characterised by the way its coupling parameters depend on the energy-scale of a given physical process. This energy dependence is described by the renormalization group, and is encoded in the beta-functions of the theory.
For a QFT to be scale-invariant, its coupling parameters must be independent of the energy-scale, and this is indicated by the vanishing of the beta-functions of the theory. Such theories are also known as fixed points of the corresponding renormalization group flow.
Quantum electrodynamics.
A simple example of a scale-invariant QFT is the quantized electromagnetic field without charged particles. This theory actually has no coupling parameters (since photons are massless and non-interacting) and is therefore scale-invariant, much like the classical theory.
However, in nature the electromagnetic field is coupled to charged particles, such as electrons. The QFT describing the interactions of photons and charged particles is quantum electrodynamics (QED), and this theory is not scale-invariant. We can see this from the QED beta-function. This tells us that the electric charge (which is the coupling parameter in the theory) increases with increasing energy. Therefore, while the quantized electromagnetic field without charged particles is scale-invariant, QED is not scale-invariant.
Massless scalar field theory.
Free, massless quantized scalar field theory has no coupling parameters. Therefore, like the classical version, it is scale-invariant. In the language of the renormalization group, this theory is known as the Gaussian fixed point.
However, even though the classical massless "φ"4 theory is scale-invariant in "D" = 4, the quantized version is not scale-invariant. We can see this from the beta-function for the coupling parameter, "g".
Even though the quantized massless "φ"4 is not scale-invariant, there do exist scale-invariant quantized scalar field theories other than the Gaussian fixed point. One example is the Wilson–Fisher fixed point, below.
Conformal field theory.
Scale-invariant QFTs are almost always invariant under the full conformal symmetry, and the study of such QFTs is conformal field theory (CFT). Operators in a CFT have a well-defined scaling dimension, analogous to the scaling dimension, "∆", of a classical field discussed above. However, the scaling dimensions of operators in a CFT typically differ from those of the fields in the corresponding classical theory. The additional contributions appearing in the CFT are known as anomalous scaling dimensions.
Scale and conformal anomalies.
The φ4 theory example above demonstrates that the coupling parameters of a quantum field theory can be scale-dependent even if the corresponding classical field theory is scale-invariant (or conformally invariant). If this is the case, the classical scale (or conformal) invariance is said to be anomalous. A classically scale-invariant field theory, where scale invariance is broken by quantum effects, provides an explanation of the nearly exponential expansion of the early universe called cosmic inflation, as long as the theory can be studied through perturbation theory.
Phase transitions.
In statistical mechanics, as a system undergoes a phase transition, its fluctuations are described by a scale-invariant statistical field theory. For a system in equilibrium (i.e. time-independent) in D spatial dimensions, the corresponding statistical field theory is formally similar to a D-dimensional CFT. The scaling dimensions in such problems are usually referred to as critical exponents, and one can in principle compute these exponents in the appropriate CFT.
The Ising model.
An example that links together many of the ideas in this article is the phase transition of the Ising model, a simple model of ferromagnetic substances. This is a statistical mechanics model, which also has a description in terms of conformal field theory. The system consists of an array of lattice sites, which form a D-dimensional periodic lattice. Associated with each lattice site is a magnetic moment, or spin, and this spin can take either the value +1 or −1. (These states are also called up and down, respectively.)
The key point is that the Ising model has a spin-spin interaction, making it energetically favourable for two adjacent spins to be aligned. On the other hand, thermal fluctuations typically introduce a randomness into the alignment of spins. At some critical temperature, "Tc", spontaneous magnetization is said to occur. This means that below "Tc" the spin-spin interaction will begin to dominate, and there is some net alignment of spins in one of the two directions.
An example of the kind of physical quantities one would like to calculate at this critical temperature is the correlation between spins separated by a distance r. This has the generic behaviour:
formula_21
for some particular value of formula_22, which is an example of a critical exponent.
CFT description.
The fluctuations at temperature "Tc" are scale-invariant, and so the Ising model at this phase transition is expected to be described by a scale-invariant statistical field theory. In fact, this theory is the Wilson–Fisher fixed point, a particular scale-invariant scalar field theory.
In this context, "G"("r") is understood as a correlation function of scalar fields,
formula_23
Now we can fit together a number of the ideas seen already.
From the above, one sees that the critical exponent, η, for this phase transition, is also an anomalous dimension. This is because the classical dimension of the scalar field,
formula_24
is modified to become
formula_25
where D is the number of dimensions of the Ising model lattice.
So this anomalous dimension in the conformal field theory is the "same" as a particular critical exponent of the Ising model phase transition.
Note that for dimension "D" ≡ 4−"ε", η can be calculated approximately, using the epsilon expansion, and one finds that
formula_26.
In the physically interesting case of three spatial dimensions, we have ε=1, and so this expansion is not strictly reliable. However, a semi-quantitative prediction is that η is numerically small in three dimensions.
On the other hand, in the two-dimensional case the Ising model is exactly soluble. In particular, it is equivalent to one of the minimal models, a family of well-understood CFTs, and it is possible to compute η (and the other critical exponents) exactly,
formula_27.
Schramm–Loewner evolution.
The anomalous dimensions in certain two-dimensional CFTs can be related to the typical fractal dimensions of random walks, where the random walks are defined via Schramm–Loewner evolution (SLE). As we have seen above, CFTs describe the physics of phase transitions, and so one can relate the critical exponents of certain phase transitions to these fractal dimensions. Examples include the 2"d" critical Ising model and the more general 2"d" critical Potts model. Relating other 2"d" CFTs to SLE is an active area of research.
Universality.
A phenomenon known as universality is seen in a large variety of physical systems. It expresses the idea that different microscopic physics can give rise to the same scaling behaviour at a phase transition. A canonical example of universality involves the following two systems: a liquid–gas phase transition near its critical point, and a ferromagnet near its Curie transition.
Even though the microscopic physics of these two systems is completely different, their critical exponents turn out to be the same. Moreover, one can calculate these exponents using the same statistical field theory. The key observation is that at a phase transition or critical point, fluctuations occur at all length scales, and thus one should look for a scale-invariant statistical field theory to describe the phenomena. In a sense, universality is the observation that there are relatively few such scale-invariant theories.
The set of different microscopic theories described by the same scale-invariant theory is known as a universality class. Other examples of systems which belong to a universality class are:
The key observation is that, for all of these different systems, the behaviour resembles a phase transition, and that the language of statistical mechanics and scale-invariant statistical field theory may be applied to describe them.
Other examples of scale invariance.
Newtonian fluid mechanics with no applied forces.
Under certain circumstances, fluid mechanics is a scale-invariant classical field theory. The fields are the velocity of the fluid flow, formula_28, the fluid density, formula_29, and the fluid pressure, formula_30. These fields must satisfy both the Navier–Stokes equation and the continuity equation. For a Newtonian fluid these take the respective forms
formula_31
formula_32
where formula_33 is the shear viscosity of the fluid.
In order to deduce the scale invariance of these equations we specify an equation of state, relating the fluid pressure to the fluid density. The equation of state depends on the type of fluid and the conditions to which it is subjected. For example, we consider the isothermal ideal gas, which satisfies
formula_34
where formula_35 is the speed of sound in the fluid. Given this equation of state, Navier–Stokes and the continuity equation are invariant under the transformations
formula_13
formula_36
formula_37
formula_38
Given the solutions formula_28 and formula_29, we automatically have that
formula_39 and formula_40 are also solutions.
Computer vision.
In computer vision and biological vision, scaling transformations arise because of the perspective image mapping and because of objects having different physical size in the world. In these areas, scale invariance refers to local image descriptors or visual representations of the image data that remain invariant when the local scale in the image domain is changed.
Detecting local maxima over scales of normalized derivative responses provides a general framework for obtaining scale invariance from image data.
Examples of applications include blob detection, corner detection, ridge detection, and object recognition via the scale-invariant feature transform.
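A minimal sketch of scale selection (the synthetic image and parameter choices are assumptions): the scale-normalized Laplacian-of-Gaussian response σ²|∇²"G"∗"I"| of a Gaussian blob of width "t"₀ peaks at σ = "t"₀, so detecting the maximum over scales recovers the blob size independently of image zoom.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Synthetic image: one bright Gaussian blob of width t0 = 8 pixels.
yy, xx = np.mgrid[0:128, 0:128].astype(float)
img = np.exp(-((xx - 64)**2 + (yy - 64)**2) / (2 * 8.0**2))

sigmas = np.linspace(2.0, 16.0, 29)
center_response = [s**2 * abs(gaussian_laplace(img, s)[64, 64]) for s in sigmas]
print(sigmas[int(np.argmax(center_response))])   # ~8.0: the selected scale
```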
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(\\lambda x)=\\lambda^\\Delta f(x)"
},
{
"math_id": 1,
"text": "f(x)=x^n"
},
{
"math_id": 2,
"text": "f(\\lambda x) = (\\lambda x)^n = \\lambda^n f(x)~."
},
{
"math_id": 3,
"text": "\\theta = \\frac{1}{b} \\ln(r/a)~."
},
{
"math_id": 4,
"text": "P(f) = \\lambda^{-\\Delta} P(\\lambda f)"
},
{
"math_id": 5,
"text": "\\text{var}\\,(Y) = a[\\text{E}\\,(Y)]^p"
},
{
"math_id": 6,
"text": "x\\rightarrow\\lambda x~,"
},
{
"math_id": 7,
"text": "\\varphi\\rightarrow\\lambda^{-\\Delta}\\varphi~."
},
{
"math_id": 8,
"text": "\\lambda^\\Delta \\varphi(\\lambda x)."
},
{
"math_id": 9,
"text": "\\varphi(x)=\\lambda^{-\\Delta}\\varphi(\\lambda x)"
},
{
"math_id": 10,
"text": "\n\\begin{align}\n\\nabla^2 \\mathbf{E} = \\frac{1}{c^2} \\frac{\\partial^2 \\mathbf{E}}{\\partial t^2} \\\\[6pt]\n\\nabla^2\\mathbf{B} = \\frac{1}{c^2} \\frac{\\partial^2 \\mathbf{B}}{\\partial t^2}\n\\end{align}\n"
},
{
"math_id": 11,
"text": "\n\\begin{align}\nx\\rightarrow\\lambda x, \\\\[6pt]\nt\\rightarrow\\lambda t.\n\\end{align}\n"
},
{
"math_id": 12,
"text": "\\frac{1}{c^2} \\frac{\\partial^2 \\varphi}{\\partial t^2}-\\nabla^2 \\varphi = 0,"
},
{
"math_id": 13,
"text": "x\\rightarrow\\lambda x,"
},
{
"math_id": 14,
"text": "t\\rightarrow\\lambda t."
},
{
"math_id": 15,
"text": "\\propto m^2\\varphi"
},
{
"math_id": 16,
"text": "L=\\frac{\\hbar}{mc},"
},
{
"math_id": 17,
"text": "\\Delta=\\frac{D-2}{2},"
},
{
"math_id": 18,
"text": "\\frac{1}{c^2} \\frac{\\partial^2 \\varphi}{\\partial t^2}-\\nabla^2 \\varphi+g\\varphi^3=0."
},
{
"math_id": 19,
"text": "t\\rightarrow\\lambda t,"
},
{
"math_id": 20,
"text": "\\varphi (x)\\rightarrow\\lambda^{-1}\\varphi(x)."
},
{
"math_id": 21,
"text": "G(r)\\propto\\frac{1}{r^{D-2+\\eta}},"
},
{
"math_id": 22,
"text": "\\eta"
},
{
"math_id": 23,
"text": "\\langle\\phi(0)\\phi(r)\\rangle\\propto\\frac{1}{r^{D-2+\\eta}}."
},
{
"math_id": 24,
"text": "\\Delta=\\frac{D-2}{2}"
},
{
"math_id": 25,
"text": "\\Delta=\\frac{D-2+\\eta}{2},"
},
{
"math_id": 26,
"text": "\\eta=\\frac{\\epsilon^2}{54}+O(\\epsilon^3)"
},
{
"math_id": 27,
"text": "\\eta_{_{D=2}}=\\frac{1}{4}"
},
{
"math_id": 28,
"text": "\\mathbf{u}(\\mathbf{x},t)"
},
{
"math_id": 29,
"text": "\\rho(\\mathbf{x},t)"
},
{
"math_id": 30,
"text": "P(\\mathbf{x},t)"
},
{
"math_id": 31,
"text": "\\rho\\frac{\\partial \\mathbf{u}}{\\partial t}+\\rho\\mathbf{u}\\cdot\\nabla \\mathbf{u} = -\\nabla P+\\mu \\left(\\nabla^2 \\mathbf{u}+\\frac{1}{3}\\nabla\\left(\\nabla\\cdot\\mathbf{u}\\right)\\right)"
},
{
"math_id": 32,
"text": "\\frac{\\partial \\rho}{\\partial t}+\\nabla\\cdot \\left(\\rho\\mathbf{u}\\right)=0"
},
{
"math_id": 33,
"text": "\\mu"
},
{
"math_id": 34,
"text": "P=c_s^2\\rho,"
},
{
"math_id": 35,
"text": "c_s"
},
{
"math_id": 36,
"text": "t\\rightarrow\\lambda^2 t,"
},
{
"math_id": 37,
"text": "\\rho\\rightarrow\\lambda^{-1} \\rho,"
},
{
"math_id": 38,
"text": "\\mathbf{u}\\rightarrow\\lambda^{-1}\\mathbf{u}."
},
{
"math_id": 39,
"text": "\\lambda\\mathbf{u}(\\lambda\\mathbf{x},\\lambda^2 t)"
},
{
"math_id": 40,
"text": "\\lambda\\rho(\\lambda\\mathbf{x},\\lambda^2 t)"
}
]
| https://en.wikipedia.org/wiki?curid=695241 |
69524998 | Spectral submanifold | In dynamical systems, a spectral submanifold (SSM) is the unique smoothest invariant manifold serving as the nonlinear extension of a spectral subspace of a linear dynamical system under the addition of nonlinearities. SSM theory provides conditions for when invariant properties of eigenspaces of a linear dynamical system can be extended to a nonlinear system, and therefore motivates the use of SSMs in nonlinear dimensionality reduction.
Definition.
Consider a nonlinear ordinary differential equation of the form
formula_1
with constant matrix formula_2 and the nonlinearities contained in the smooth function formula_3.
Assume that formula_4 for all eigenvalues formula_5 of formula_6, that is, the origin is an asymptotically stable fixed point. Now select a span formula_7 of formula_8 eigenvectors formula_9 of formula_6. Then, the eigenspace formula_0 is an invariant subspace of the linearized system
formula_10
Under addition of the nonlinearity formula_11 to the linear system, formula_0 generally perturbs into infinitely many invariant manifolds. Among these invariant manifolds, the unique smoothest one is referred to as the spectral submanifold.
An equivalent result for unstable SSMs holds for formula_12.
Existence.
The spectral submanifold tangent to formula_0 at the origin is guaranteed to exist provided that certain non-resonance conditions are satisfied by the eigenvalues formula_13 in the spectrum of formula_0. In particular, there can be no linear combination of formula_13 equal to one of the eigenvalues of formula_6 outside of the spectral subspace. If there is such an outer resonance, one can include the resonant mode into formula_0 and extend the analysis to a higher-dimensional SSM pertaining to the extended spectral subspace.
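These conditions are straightforward to check for a given spectrum. The helper below is a hedged sketch (its name and interface are assumptions): it searches for integer combinations of the eigenvalues of formula_0, up to a chosen order, that collide with an eigenvalue outside the spectral subspace.

```python
import itertools

def outer_resonances(inner, outer, order=3, tol=1e-8):
    """Return integer combinations of 'inner' eigenvalues (total order
    2..order) that coincide with an 'outer' eigenvalue."""
    hits = []
    for k in range(2, order + 1):
        for combo in itertools.combinations_with_replacement(inner, k):
            s = sum(combo)
            for lam in outer:
                if abs(s - lam) < tol:
                    hits.append((combo, lam))
    return hits

inner = [-0.1 + 1.0j, -0.1 - 1.0j]     # spectrum of the chosen subspace E
outer = [-1.0 + 4.0j, -1.0 - 4.0j]     # remaining eigenvalues of A
print(outer_resonances(inner, outer))  # []: no outer resonance, the SSM exists
```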
Non-autonomous extension.
The theory on spectral submanifolds extends to nonlinear non-autonomous systems of the form
formula_14
with formula_15 a quasiperiodic forcing term.
Significance.
Spectral submanifolds are useful for rigorous nonlinear dimensionality reduction in dynamical systems. The reduction of a high-dimensional phase space to a lower-dimensional manifold can lead to major simplifications by allowing for an accurate description of the system's main asymptotic behaviour. For a known dynamical system, SSMs can be computed analytically by solving the invariance equations, and reduced models on SSMs may be employed for prediction of the response to forcing.
Furthermore, these manifolds may also be extracted directly from trajectory data of a dynamical system with the use of machine learning algorithms.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E"
},
{
"math_id": 1,
"text": "\\frac{dx}{dt} = Ax + f_0(x),\\quad x\\in \\R^n,"
},
{
"math_id": 2,
"text": "\\ A\\in \\R^{n\\times n}"
},
{
"math_id": 3,
"text": "f_0 = \\mathcal{O}(|x|^2)"
},
{
"math_id": 4,
"text": "\\text{Re} \\lambda_j < 0"
},
{
"math_id": 5,
"text": "\\lambda_j,\\ j = 1,\\ldots, n"
},
{
"math_id": 6,
"text": "A"
},
{
"math_id": 7,
"text": "E = \\text{span}\\, \\{v^E_{1},\\ldots v^E_{m}\\}"
},
{
"math_id": 8,
"text": "m"
},
{
"math_id": 9,
"text": "v^E_{i}"
},
{
"math_id": 10,
"text": "\\frac{dx}{dt} = Ax,\\quad x\\in \\R^n."
},
{
"math_id": 11,
"text": "f_0"
},
{
"math_id": 12,
"text": "\\text{Re} \\lambda_j > 0"
},
{
"math_id": 13,
"text": "\\lambda^E_i"
},
{
"math_id": 14,
"text": "\\frac{dx}{dt} = Ax + f_0(x) + \\epsilon f_1(x, \\Omega t),\\quad \\Omega\\in \\mathbb{T}^k,\\ 0\\le \\epsilon \\ll 1,"
},
{
"math_id": 15,
"text": "f_1 : \\R^n \\times \\mathbb{T}^k \\to \\R^n"
}
]
| https://en.wikipedia.org/wiki?curid=69524998 |
69529762 | 1 Samuel 13 | First Book of Samuel chapter
1 Samuel 13 is the thirteenth chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains Saul's act of disobedience after his coronation. This is within a section comprising 1 Samuel 7–15 which records the rise of the monarchy in Israel and the account of the first years of King Saul.
Text.
This chapter was originally written in the Hebrew language. It is divided into 23 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008).
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century).
Analysis.
Saul was appointed as a king to save his people 'from the hand of their enemies' (10:1), specifically the Philistines (9:16), who had a strong presence in the central hill country of Israel, were able to send bands of raiders into the different territories of Israel, and controlled the manufacture of metal equipment for agriculture and for weapons. He had to establish a standing army (verse 2), not just a militia, and achieved an initial success under the leadership of Jonathan, his son, against Philistine garrisons, despite having a smaller force and inferior weapons. The Philistines mustered their large and powerful army, causing the Israelites to flee eastwards to hide in the hills, with some even going into the Transjordan area (verses 6–7) and some in the Israelite army deserting (verse 8). This became the setting for the battle of Michmash, where a Philistine garrison had been placed to guard a pass in the area (verse 23).
One major emphasis in this chapter is on the disobedience of Saul, which affects the future of his kingship (verse 13, cf. 12:14). Saul's failure to follow God's instruction through Samuel had doomed his dynasty and God chose another king who would obey. Thus, this chapter contains the first prediction of David to be the king of Israel.
War with the Philistines (13:1–7).
Even though the Philistines had been decisively defeated by the Israelites led by Samuel (1 Samuel 7), they still posed a threat to Israel (cf. 1 Samuel 9:16), and would be a problem for Saul throughout his reign. Jonathan's successful attack on the Philistine outpost in Geba incited a bigger conflict. Saul assembled an army, but the Philistines now had a large military advantage, which caused many Israelites to flee and hide (verse 6) or to leave the army (verse 7), leaving Saul in Gilgal with a dim prospect for the next battle.
"Saul reigned one year; and when he had reigned two years over Israel,"
Verse 1.
This verse is absent in the Greek Septuagint version.
Some Bible versions assume that some words are corrupted, so the numbers giving Saul's age when he began to reign and the length of his reign are missing. In the Hexapla version, Origen inserted the word "thirty" for Saul's age (now used in NIV, NLT, CSB, etc.). However, it may not be correct because, at that time, Jonathan was old enough to command an army (verse 2) and capable of performing heroic acts (1 Samuel 14:14). Josephus states that Saul reigned eighteen years in the lifetime of Samuel and twenty-two years after his death, for a total of forty years, which agrees with Acts 13:21. Saul's grandson, Mephibosheth, was five years old at Saul's death (2 Samuel 4:4).
"Saul chose him three thousand men of Israel; whereof two thousand were with Saul in Michmash and in mount Bethel, and a thousand were with Jonathan in Gibeah of Benjamin: and the rest of the people he sent every man to his tent."
A late prophet and a premature sacrifice (13:8–14).
Saul waited seven days in Gilgal for Samuel to come and perform the offerings before God (verse 13:8), in reference to the specific instruction in 1 Samuel 10:8, but when his army began to scatter, he decided to act on Samuel's advice in 1 Samuel 10:7 ("do whatever your hands find to do for God is with you") by offering the sacrifice without waiting for Samuel. Ironically, Samuel showed up just as Saul finished the burnt offering, before he had offered the fellowship offerings. Saul's defense of his actions reveals his superstitious character: his motivation for the offerings was to seek 'the Lord's favor' for the battle as a kind of "good luck" charm, an initial move towards superstition and witchcraft, as noticed by the early church father John Chrysostom. Saul's major sin is perhaps his attempt to 'usurp Samuel's role of religious leadership'. Because of this act, Samuel told Saul that his kingdom would not endure (verse 14), although Saul was still king and apparently God would still give him one more chance to show his obedience in the case of fighting the Amalekites, in which Saul again failed, so it was declared publicly that God had rejected Saul as king of Israel (15:26) and given his kingdom to 'one of his neighbors' (15:28).
"But now your kingdom shall not continue. The LORD has sought for Himself a man after His own heart, and the LORD has commanded him to be commander over His people, because you have not kept what the LORD commanded you.""
Troop movement and Philistine's metal monopoly (13:15–23).
After giving his rebuke, Samuel left for Gibeah; then Saul, after counting his remaining army, decided to go to the same place. Saul's forces had decreased greatly, from the 300,000 men mustered against the Ammonites, to 3,000 men, and now down to 600 soldiers. Apparently his act of burning the offerings did not help to encourage more soldiers, and furthermore, the full-scale battle did not start immediately. Verses 19–22 explain how the Philistines monopolized metalworking in the area, so that only Saul and Jonathan had swords among the Israelites (13:22). That the Philistines had much better weapons caused the people of Israel to fear them even more.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
Commentaries on Samuel.
<templatestyles src="Refbegin/styles.css" />
General.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=69529762 |
695319 | Supermultiplet | A representation of the supersymmetry algebra
In theoretical physics, a supermultiplet is a representation of a supersymmetry algebra, possibly with extended supersymmetry.
Then a superfield is a field on superspace which is valued in such a representation. Naïvely, or when considering flat superspace, a superfield can simply be viewed as a function on superspace. Formally, it is a section of an associated supermultiplet bundle.
Phenomenologically, superfields are used to describe particles. It is a feature of supersymmetric field theories that particles form pairs, called superpartners where bosons are paired with fermions.
These supersymmetric fields are used to build supersymmetric quantum field theories, where the fields are promoted to operators.
History.
Superfields were introduced by Abdus Salam and J. A. Strathdee in a 1974 article. Operations on superfields and a partial classification were presented a few months later by Sergio Ferrara, Julius Wess and Bruno Zumino.
Naming and classification.
The most commonly used supermultiplets are vector multiplets, chiral multiplets (in formula_0 supersymmetry for example), hypermultiplets (in formula_1 supersymmetry for example), tensor multiplets and gravity multiplets. The highest component of a vector multiplet is a gauge boson, the highest component of a chiral or hypermultiplet is a spinor, the highest component of a gravity multiplet is a graviton. The names are defined so as to be invariant under dimensional reduction, although the organization of the fields as representations of the Lorentz group changes.
The use of these names for the different multiplets can vary in literature. A chiral multiplet (whose highest component is a spinor) may sometimes be referred to as a "scalar multiplet", and in formula_1 SUSY, a vector multiplet (whose highest component is a vector) can sometimes be referred to as a chiral multiplet.
Superfields in d = 4, N = 1 supersymmetry.
Conventions in this section follow the notes by Figueroa-O'Farrill (2001).
A general complex superfield formula_2 in formula_3 supersymmetry can be expanded as
formula_4,
where formula_5 are different complex fields. This is not an irreducible supermultiplet, and so different constraints are needed to isolate irreducible representations.
Chiral superfield.
An (anti-)chiral superfield is a supermultiplet of formula_6 supersymmetry.
In four dimensions, the minimal formula_7 supersymmetry may be written using the notion of superspace. Superspace contains the usual space-time coordinates formula_8, formula_9, and four extra fermionic coordinates formula_10 with formula_11, transforming as a two-component (Weyl) spinor and its conjugate.
In formula_0 supersymmetry, a chiral superfield is a function over chiral superspace. There exists a projection from the (full) superspace to chiral superspace. So, a function over chiral
superspace can be pulled back to the full superspace. Such a function formula_12 satisfies the covariant constraint formula_13, where formula_14 is the covariant derivative, given in index notation as
formula_15
A chiral superfield formula_12 can then be expanded as
formula_16
where formula_17. The superfield is independent of the 'conjugate spin coordinates' formula_18 in the sense that it depends on formula_18 only through formula_19. It can be checked that formula_20
The expansion has the interpretation that formula_21 is a complex scalar field and formula_22 is a Weyl spinor. There is also an auxiliary complex scalar field, conventionally named formula_23: this is the F-term, which plays an important role in some theories.
The field can then be expressed in terms of the original coordinates formula_24 by substituting the expression for formula_25:
formula_26
Antichiral superfields.
Similarly, there is also antichiral superspace, which is the complex conjugate of chiral superspace, and antichiral superfields.
An antichiral superfield formula_27 satisfies formula_28 where
formula_29
An antichiral superfield can be constructed as the complex conjugate of a chiral superfield.
Actions from chiral superfields.
For an action which can be defined from a single chiral superfield, see Wess–Zumino model.
Vector superfield.
The vector superfield is a supermultiplet of formula_30 supersymmetry.
A vector superfield (also known as a real superfield) is a function formula_31 which satisfies the reality condition formula_32. Such a field admits the expansion
formula_33
The constituent fields are a real scalar formula_34, Weyl spinors formula_37 and formula_38, a complex scalar formula_36, the vector gauge field formula_39, and a real auxiliary scalar formula_35.
Their transformation properties and uses are further discussed in supersymmetric gauge theory.
Using gauge transformations, the fields formula_40 and formula_36 can be set to zero. This is known as Wess–Zumino gauge. In this gauge, the expansion takes on the much simpler form
formula_41
Then formula_42 is the superpartner of formula_39, while formula_35 is an auxiliary scalar field. It is conventionally called formula_35, and is known as the D-term.
Scalars.
A scalar is never the highest component of a superfield; whether it appears in a superfield at all depends on the dimension of the spacetime. For example, in a 10-dimensional N=1 theory the vector multiplet contains only a vector and a Majorana–Weyl spinor, while its dimensional reduction on a d-dimensional torus is a vector multiplet containing d real scalars. Similarly, in an 11-dimensional theory there is only one supermultiplet with a finite number of fields, the gravity multiplet, and it contains no scalars. However again its dimensional reduction on a d-torus to a maximal gravity multiplet does contain scalars.
Hypermultiplet.
A hypermultiplet is a type of representation of an extended supersymmetry algebra, in particular the matter multiplet of formula_43 supersymmetry in 4 dimensions, containing two complex scalars "A""i", a Dirac spinor ψ, and two further auxiliary complex scalars "F""i".
The name "hypermultiplet" comes from old term "hypersymmetry" for "N"=2 supersymmetry used by ; this term has been abandoned, but the name "hypermultiplet" for some of its representations is still used.
Extended supersymmetry (N > 1).
This section records some commonly used irreducible supermultiplets in extended supersymmetry in the formula_44 case. These are constructed by a highest-weight representation construction in the sense that there is a vacuum vector annihilated by the supercharges formula_45. The irreps have dimension formula_46. For supermultiplets representing massless particles, on physical grounds the maximum allowed formula_47 is formula_48, while for renormalizability, the maximum allowed formula_47 is formula_49.
N = 2.
The formula_43 vector or chiral multiplet formula_50 contains a gauge field formula_39, two Weyl fermions formula_51, and a scalar formula_21 (which also transform in the adjoint representation of a gauge group). These can also be organised into a pair of formula_30 multiplets, an formula_30 vector multiplet formula_52 and chiral multiplet formula_53. Such a multiplet can be used to define Seiberg–Witten theory concisely.
The formula_43 hypermultiplet or scalar multiplet consists of two Weyl fermions and two complex scalars, or two formula_30 chiral multiplets.
N = 4.
The formula_49 vector multiplet contains one gauge field, four Weyl fermions, six scalars, and CPT conjugates. This appears in N = 4 supersymmetric Yang–Mills theory.
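The massless-multiplet counting behind these statements can be made explicit. Starting from a highest-helicity state, the "N" supercharges generate states of helicity λ − "k"/2 with multiplicity C("N", "k"), for 2^"N" states in total. The sketch below is a counting exercise, not a field-theoretic computation; it reproduces the formula_49 vector multiplet content:

```python
from math import comb

def massless_multiplet(top_helicity, N):
    """Helicity -> multiplicity for states built by acting with N supercharges."""
    return {top_helicity - k / 2: comb(N, k) for k in range(N + 1)}

states = massless_multiplet(1, 4)   # the N = 4 vector multiplet
print(states)                       # {1.0: 1, 0.5: 4, 0.0: 6, -0.5: 4, -1.0: 1}
print(sum(states.values()))         # 16 = 2**4
# Reading off: one gauge field (helicities +/-1), four Weyl fermions
# (helicities +/-1/2), and six real scalars; the multiplet is CPT self-conjugate.
```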
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "d = 4,\\mathcal{N} = 1"
},
{
"math_id": 1,
"text": "d = 4,\\mathcal{N} = 2"
},
{
"math_id": 2,
"text": "\\Phi(x, \\theta, \\bar \\theta)"
},
{
"math_id": 3,
"text": "d = 4, \\mathcal{N} = 1"
},
{
"math_id": 4,
"text": "\\Phi(x, \\theta, \\bar\\theta) = \\phi(x) + \\theta\\chi(x) + \\bar\\theta \\bar\\chi'(x) + \\bar \\theta \\sigma^\\mu \\theta V_\\mu(x) + \\theta^2 F(x) + \\bar \\theta^2 \\bar F'(x) + \\bar\\theta^2 \\theta\\xi(x) + \\theta^2 \\bar\\theta \\bar \\xi' (x) + \\theta^2 \\bar\\theta^2 D(x)"
},
{
"math_id": 5,
"text": "\\phi, \\chi, \\bar \\chi' , V_\\mu, F, \\bar F', \\xi, \\bar \\xi', D"
},
{
"math_id": 6,
"text": "d=4, \\mathcal{N} = 1"
},
{
"math_id": 7,
"text": "\\mathcal{N}=1"
},
{
"math_id": 8,
"text": "x^{\\mu}"
},
{
"math_id": 9,
"text": "\\mu=0,\\ldots,3"
},
{
"math_id": 10,
"text": "\\theta_\\alpha,\\bar\\theta^\\dot\\alpha"
},
{
"math_id": 11,
"text": "\\alpha, \\dot\\alpha = 1,2"
},
{
"math_id": 12,
"text": "\\Phi(x, \\theta, \\bar\\theta)"
},
{
"math_id": 13,
"text": "\\overline{D}\\Phi=0"
},
{
"math_id": 14,
"text": "\\bar D"
},
{
"math_id": 15,
"text": "\\bar D_\\dot\\alpha = -\\bar\\partial_\\dot\\alpha - i\\theta^\\alpha \\sigma^\\mu_{\\alpha\\dot\\alpha}\\partial_\\mu."
},
{
"math_id": 16,
"text": " \\Phi (y , \\theta ) = \\phi(y) + \\sqrt{2} \\theta \\psi (y) + \\theta^2 F(y),"
},
{
"math_id": 17,
"text": " y^\\mu = x^\\mu + i \\theta \\sigma^\\mu \\bar{\\theta} "
},
{
"math_id": 18,
"text": "\\bar\\theta"
},
{
"math_id": 19,
"text": "y^\\mu"
},
{
"math_id": 20,
"text": "\\bar D_\\dot\\alpha y^\\mu = 0."
},
{
"math_id": 21,
"text": "\\phi"
},
{
"math_id": 22,
"text": "\\psi"
},
{
"math_id": 23,
"text": "F"
},
{
"math_id": 24,
"text": "(x,\\theta, \\bar \\theta)"
},
{
"math_id": 25,
"text": "y"
},
{
"math_id": 26,
"text": "\\Phi(x, \\theta, \\bar\\theta) = \\phi(x) + \\sqrt{2} \\theta \\psi (x) + \\theta^2 F(x) + i\\theta\\sigma^\\mu\\bar\\theta\\partial_\\mu\\phi(x) - \\frac{i}{\\sqrt{2}}\\theta^2\\partial_\\mu\\psi(x)\\sigma^\\mu\\bar\\theta - \\frac{1}{4}\\theta^2\\bar\\theta^2\\square\\phi(x)."
},
{
"math_id": 27,
"text": "\\Phi^\\dagger"
},
{
"math_id": 28,
"text": "D \\Phi^\\dagger = 0,"
},
{
"math_id": 29,
"text": "D_\\alpha = \\partial_\\alpha + i\\sigma^\\mu_{\\alpha\\dot\\alpha}\\bar\\theta^\\dot\\alpha\\partial_\\mu."
},
{
"math_id": 30,
"text": "\\mathcal{N} = 1"
},
{
"math_id": 31,
"text": "V(x,\\theta,\\bar\\theta)"
},
{
"math_id": 32,
"text": "V = V^\\dagger"
},
{
"math_id": 33,
"text": "V = C + i\\theta\\chi - i \\overline{\\theta}\\overline{\\chi} + \\tfrac{i}{2}\\theta^2(M+iN)-\\tfrac{i}{2}\\overline{\\theta^2}(M-iN) - \\theta \\sigma^\\mu \\overline{\\theta} A_\\mu +i\\theta^2 \\overline{\\theta} \\left( \\overline{\\lambda} + \\tfrac{i}{2}\\overline{\\sigma}^\\mu \\partial_\\mu \\chi \\right) -i\\overline{\\theta}^2 \\theta \\left(\\lambda + \\tfrac{i}{2}\\sigma^\\mu \\partial_\\mu \\overline{\\chi} \\right) + \\tfrac{1}{2}\\theta^2 \\overline{\\theta}^2 \\left(D + \\tfrac{1}{2}\\Box C\\right)."
},
{
"math_id": 34,
"text": "C"
},
{
"math_id": 35,
"text": "D"
},
{
"math_id": 36,
"text": "M + iN"
},
{
"math_id": 37,
"text": "\\chi_\\alpha"
},
{
"math_id": 38,
"text": "\\lambda^\\alpha"
},
{
"math_id": 39,
"text": "A_\\mu"
},
{
"math_id": 40,
"text": "C, \\chi"
},
{
"math_id": 41,
"text": " V_{\\text{WZ}} = \\theta\\sigma^\\mu\\bar\\theta A_\\mu + \\theta^2 \\bar\\theta \\bar\\lambda + \\bar\\theta^2 \\theta \\lambda + \\frac{1}{2}\\theta^2\\bar\\theta^2 D. "
},
{
"math_id": 42,
"text": "\\lambda"
},
{
"math_id": 43,
"text": "\\mathcal{N} = 2"
},
{
"math_id": 44,
"text": "d = 4"
},
{
"math_id": 45,
"text": "Q^A, A = 1, \\cdots, \\mathcal{N}"
},
{
"math_id": 46,
"text": "2^\\mathcal{N}"
},
{
"math_id": 47,
"text": "\\mathcal{N}"
},
{
"math_id": 48,
"text": "\\mathcal{N} = 8"
},
{
"math_id": 49,
"text": "\\mathcal{N} = 4"
},
{
"math_id": 50,
"text": "\\Psi"
},
{
"math_id": 51,
"text": "\\lambda, \\psi"
},
{
"math_id": 52,
"text": "W = (A_\\mu, \\lambda)"
},
{
"math_id": 53,
"text": "\\Phi = (\\phi, \\psi)"
}
]
| https://en.wikipedia.org/wiki?curid=695319 |
69531997 | Magnetic skyrmionium | In magnetic systems, excitations can be found that are characterized by the orientation of the local magnetic moments of atomic cores. A magnetic skyrmionium is a ring-shaped topological spin texture and is closely related to the magnetic skyrmion.
Topological charge.
The topological charge can be defined as follows.
formula_0
With this definition, the topological charge of a skyrmion can be calculated to be ±1. A magnetic skyrmionium is a topological quasiparticle that is composed of a superposition of two magnetic skyrmions of opposite topological charge, adding up to zero total topological charge. On this basis, the core of a skyrmionium can be viewed as a skyrmion of opposite charge nested inside a larger skyrmion.
Different to magnetic skyrmions, that experience a transverse deflection under current driven motion known as the skyrmion Hall effect (similar to the Hall effect), magnetic skyrmioniums are expected to move parallel to electrical-drive currents. The current-driven motion of magnetic excitations is one example of the direct link between topological charge and a physical observable.
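The charge integral above can be evaluated numerically on a grid. The following is a hedged sketch (the Néel-type profile, grid size, and radius are illustrative assumptions): discretizing "Q" for a single skyrmion yields approximately ±1, while superposing an inner skyrmion of opposite charge, as in a skyrmionium, would drive the total towards zero.

```python
import numpy as np

# Synthetic Neel-type skyrmion: polar angle pi at the core, 0 far away.
n, R = 256, 20.0
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2].astype(float)
r, phi = np.hypot(x, y), np.arctan2(y, x)
theta = 2 * np.arctan2(R, r)
m = np.stack([np.sin(theta) * np.cos(phi),
              np.sin(theta) * np.sin(phi),
              np.cos(theta)])                    # unit magnetization field

mx = np.gradient(m, axis=2)                      # d m / d x
my = np.gradient(m, axis=1)                      # d m / d y
q_density = np.einsum('iyx,iyx->yx', m, np.cross(mx, my, axis=0))
print(q_density.sum() / (4 * np.pi))  # ~ +/-1 up to finite-size/discretization
```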
Theoretical predictions.
Skyrmioniums have been the subject of numerous theoretical investigations. Besides theoretical predictions concerning the existence of skyrmioniums, such as in the 2D Janus monolayer CrGe(Se,Te)3, much research has concentrated on their manipulation by electrical currents, spin currents or spin waves. So far, there is only limited experimental evidence for the existence of magnetic skyrmioniums. One example is the observation of a skyrmionium in a NiFe-CrSb2Te3 heterostructure.
Potential applications.
Magnetic excitations such as skyrmions or skyrmioniums are potential building blocks of next-generation spintronic devices, which could enable, for instance, neuromorphic computing.
{
"math_id": 0,
"text": "Q=\\int \\vec{m}(\\vec{r})\\cdot (\\partial_x \\vec{m}(\\vec{r}) \\times \\partial_y \\vec{m}(\\vec{r})) dr^2/4\\pi"
}
]
| https://en.wikipedia.org/wiki?curid=69531997 |
695329 | Braid statistics | Possible statistical behavior of particles in quantum statistical mechanics
In mathematics and theoretical physics, braid statistics is a generalization of the spin statistics of bosons and fermions based on the concept of the braid group. While for fermions (bosons) the corresponding statistics is associated with a phase gain of formula_0 (formula_1) under the exchange of identical particles, a particle with braid statistics acquires a rational fraction of formula_0 under such an exchange, or even undergoes a non-trivial unitary transformation in the Hilbert space (see non-Abelian anyons). A similar notion exists using a loop braid group.
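As a minimal toy illustration (an assumption for exposition, not a full braid-group representation), the Abelian case simply multiplies the two-particle state by a fixed phase per exchange:

```python
import cmath

def exchange_phase(theta, n=1):
    """Phase acquired after n successive exchanges with exchange angle theta."""
    return cmath.exp(1j * n * theta)

print(exchange_phase(cmath.pi))        # fermions: -1
print(exchange_phase(2 * cmath.pi))    # bosons: +1
print(exchange_phase(cmath.pi / 3))    # an Abelian anyon: a fractional phase
```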
Plektons.
Braid statistics are applicable to theoretical particles such as the two-dimensional anyons and plektons.
A plekton is a hypothetical type of particle that obeys a different style of statistics with respect to the interchange of identical particles. It obeys the causality rules of algebraic quantum field theory, where only observable quantities need to commute at spacelike separation, whereas anyons follow the stronger rules of traditional quantum field theory; this leads, for example, to (2+1)D anyons being massless.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\pi"
},
{
"math_id": 1,
"text": "2 \\pi"
}
]
| https://en.wikipedia.org/wiki?curid=695329 |
6953458 | Hadley cell | Tropical atmospheric circulation feature
The Hadley cell, also known as the Hadley circulation, is a global-scale tropical atmospheric circulation that features air rising near the equator, flowing poleward near the tropopause at a height of roughly 12–15 km above the Earth's surface, cooling and descending in the subtropics at around 25 degrees latitude, and then returning equatorward near the surface. It is a thermally direct circulation within the troposphere that emerges due to differences in insolation and heating between the tropics and the subtropics. On a yearly average, the circulation is characterized by a circulation cell on each side of the equator. The Southern Hemisphere Hadley cell is slightly stronger on average than its northern counterpart, extending slightly beyond the equator into the Northern Hemisphere. During the summer and winter months, the Hadley circulation is dominated by a single, cross-equatorial cell with air rising in the summer hemisphere and sinking in the winter hemisphere. Analogous circulations may occur in extraterrestrial atmospheres, such as on Venus and Mars.
Global climate is greatly influenced by the structure and behavior of the Hadley circulation. The prevailing trade winds are a manifestation of the lower branches of the Hadley circulation, converging air and moisture in the tropics to form the Intertropical Convergence Zone (ITCZ) where the Earth's heaviest rains are located. Shifts in the ITCZ associated with the seasonal variability of the Hadley circulation cause monsoons. The sinking branches of the Hadley cells give rise to the oceanic subtropical ridges and suppress rainfall; many of the Earth's deserts and arid regions are located in the subtropics coincident with the position of the sinking branches. The Hadley circulation is also a key mechanism for the meridional transport of heat, angular momentum, and moisture, contributing to the subtropical jet stream, the moist tropics, and maintaining a global thermal equilibrium.
The Hadley circulation is named after George Hadley, who in 1735 postulated the existence of hemisphere-spanning circulation cells driven by differences in heating to explain the trade winds. Other scientists later developed similar arguments or critiqued Hadley's qualitative theory, providing more rigorous explanations and formalism. The existence of a broad meridional circulation of the type suggested by Hadley was confirmed in the mid-20th century once routine observations of the upper troposphere became available via radiosondes. Observations and climate modelling indicate that the Hadley circulation has expanded poleward since at least the 1980s as a result of climate change, with an accompanying but less certain intensification of the circulation; these changes have been associated with trends in regional weather patterns. Model projections suggest that the circulation will widen and weaken throughout the 21st century due to climate change.
Mechanism and characteristics.
The Hadley circulation describes the broad, thermally direct, and meridional overturning of air within the troposphere over the low latitudes. Within the global atmospheric circulation, the meridional flow of air averaged along lines of latitude is organized into circulations of rising and sinking motions coupled with the equatorward or poleward movement of air called meridional cells. These include the prominent "Hadley cells" centered over the tropics and the weaker "Ferrel cells" centered over the mid-latitudes. The Hadley cells result from the contrast of insolation between the warm equatorial regions and the cooler subtropical regions. The uneven heating of Earth's surface results in regions of rising and descending air. Over the course of a year, the equatorial regions absorb more radiation from the Sun than they radiate away. At higher latitudes, the Earth emits more radiation than it receives from the Sun. Without a mechanism to exchange heat meridionally, the equatorial regions would warm and the higher latitudes would cool progressively in disequilibrium. The broad ascent and descent of air results in a pressure gradient force that drives the Hadley circulation and other large-scale flows in both the atmosphere and the ocean, distributing heat and maintaining a global long-term and subseasonal thermal equilibrium.
The Hadley circulation covers almost half of the Earth's surface area, spanning from roughly the Tropic of Cancer to the Tropic of Capricorn. Vertically, the circulation occupies the entire depth of the troposphere. The Hadley cells comprising the circulation consist of air carried equatorward by the trade winds in the lower troposphere that ascends when heated near the equator, along with air moving poleward in the upper troposphere. Air that is moved into the subtropics cools and then sinks before returning equatorward to the tropics; the position of the sinking air associated with the Hadley cell is often used as a measure of the meridional width of the global tropics. The equatorward return of air and the strong influence of heating make the Hadley cell a thermally-driven and enclosed circulation. Due to the buoyant rise of air near the equator and the sinking of air at higher latitudes, a pressure gradient develops near the surface with lower pressures near the equator and higher pressures in the subtropics; this provides the motive force for the equatorward flow in the lower troposphere. However, the release of latent heat associated with condensation in the tropics also relaxes the decrease in pressure with height, resulting in higher pressures aloft in the tropics compared to the subtropics for a given height in the upper troposphere; this pressure gradient is stronger than its near-surface counterpart and provides the motive force for the poleward flow in the upper troposphere. Hadley cells are most commonly identified using the mass-weighted, zonally-averaged stream function of meridional winds, but they can also be identified by other measurable or derivable physical parameters such as velocity potential or the vertical component of wind at a particular pressure level.
Given the latitude formula_0 and the pressure level formula_1, the Stokes stream function characterizing the Hadley circulation is given by
formula_2
where formula_3 is the radius of Earth, formula_4 is the acceleration due to the gravity of Earth, and formula_5 is the zonally averaged meridional wind at the prescribed latitude and pressure level. The value of formula_6 gives the integrated meridional mass flux between the specified pressure level and the top of the Earth's atmosphere, with positive values indicating northward mass transport. The strength of the Hadley cells can be quantified based on formula_6 including the maximum and minimum values or averages of the stream function both overall and at various pressure levels. Hadley cell intensity can also be assessed using other physical quantities such as the velocity potential, vertical component of wind, transport of water vapor, or total energy of the circulation.
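In practice formula_6 is evaluated by integrating gridded zonal-mean winds over pressure. The following minimal sketch (Python with NumPy) does this for an idealized, invented wind profile; the profile and grid are illustrative assumptions, not observational data:
```python
import numpy as np

a_earth = 6.371e6                      # Earth's radius (m)
g = 9.81                               # gravitational acceleration (m s^-2)

lats = np.deg2rad(np.linspace(-90.0, 90.0, 73))
plevs = np.linspace(0.0, 1.0e5, 51)    # pressure, top of atmosphere to surface (Pa)

# Toy zonal-mean meridional wind [v]: poleward aloft, equatorward near the surface.
v = np.sin(2.0 * lats)[:, None] * np.cos(np.pi * plevs / plevs[-1])[None, :]

# psi(phi, p) = (2*pi*a*cos(phi)/g) * integral_0^p [v] dp'   (trapezoidal rule)
layers = 0.5 * (v[:, 1:] + v[:, :-1]) * np.diff(plevs)
integral = np.hstack([np.zeros((lats.size, 1)), np.cumsum(layers, axis=1)])
psi = (2.0 * np.pi * a_earth * np.cos(lats)[:, None] / g) * integral

# Positive psi indicates northward mass transport; the toy maximum is of the
# same order of magnitude (~1e10-1e11 kg/s) as Earth's observed Hadley cells.
print(f"max |psi| = {np.abs(psi).max():.2e} kg/s")
```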
Structure and components.
The structure of the Hadley circulation and its components can be inferred by graphing zonal and temporal averages of global winds throughout the troposphere. At shorter timescales, individual weather systems perturb wind flow. Although the structure of the Hadley circulation varies seasonally, when winds are averaged annually (from an Eulerian perspective) the Hadley circulation is roughly symmetric and composed of two similar Hadley cells with one in each of the northern and southern hemispheres, sharing a common region of ascending air near the equator; however, the Southern Hemisphere Hadley cell is stronger. The winds associated with the annually-averaged Hadley circulation are on the order of a few metres per second. However, when averaging the motions of air parcels as opposed to the winds at fixed locations (a Lagrangian perspective), the Hadley circulation manifests as a broader circulation that extends farther poleward. Each Hadley cell can be described by four primary branches of airflow within the tropics: an equatorward lower branch, an ascending branch near the equator, a poleward upper branch, and a descending branch in the subtropics, each described below.
The trade winds in the low-latitudes of both Earth's northern and southern hemispheres converge air towards the equator, producing a belt of low atmospheric pressure exhibiting abundant storms and heavy rainfall known as the Intertropical Convergence Zone (ITCZ). This equatorward movement of air near the Earth's surface constitutes the lower branch of the Hadley cell. The position of the ITCZ is influenced by the warmth of sea surface temperatures (SST) near the equator and the strength of cross-equatorial pressure gradients. In general, the ITCZ is located near the equator or is offset towards the summer hemisphere where the warmest SSTs are located. On an annual average, the rising branch of the Hadley circulation is slightly offset towards the Northern Hemisphere, away from the equator. Due to the Coriolis force, the trade winds deflect opposite the direction of Earth's rotation, blowing partially westward rather than directly equatorward in both hemispheres. The lower branch accrues moisture resulting from evaporation across Earth's tropical oceans. A warmer environment and converging winds force the moistened air to ascend near the equator, resulting in the rising branch of the Hadley cell. The upward motion is further enhanced by the release of latent heat as the uplift of moist air results in an equatorial band of condensation and precipitation. The Hadley circulation's upward branch largely occurs in thunderstorms occupying only around one percent of the surface area of the tropics. The transport of heat in the Hadley circulation's ascending branch is accomplished most efficiently by hot towers – cumulonimbus clouds bearing strong updrafts that do not mix in drier air commonly found in the middle troposphere and thus allow the movement of air from the highly moist tropical lower troposphere into the upper troposphere. Approximately 1,500–5,000 hot towers daily near the ITCZ region are required to sustain the vertical heat transport exhibited by the Hadley circulation.
The ascending air rises into the upper troposphere to a height of roughly 12–15 km, after which air diverges outward from the ITCZ and towards the poles. The top of the Hadley cell is set by the height of the tropopause, as the stable stratosphere above prevents the continued ascent of air. Air arising from the low latitudes has higher absolute angular momentum about Earth's axis of rotation. The distance between a parcel of air and Earth's rotation axis decreases poleward; to conserve angular momentum, poleward-moving air parcels must accelerate eastward. The Coriolis effect limits the poleward extent of the Hadley circulation, accelerating air in the direction of the Earth's rotation and forming a jet stream directed zonally rather than continuing the poleward flow of air at each Hadley cell's poleward boundary. Considering only the conservation of angular momentum, a parcel of air at rest along the equator would accelerate to a zonal speed of about 134 m/s by the time it reached 30° latitude. However, small-scale turbulence along the parcel's poleward trek and large-scale eddies in the mid-latitudes dissipate angular momentum. The jet associated with the Southern Hemisphere Hadley cell is stronger than its northern counterpart due to the stronger intensity of the Southern Hemisphere cell. The cooler environment at higher latitudes chills the air parcels, causing the poleward-moving air to eventually descend. When the movement of air is averaged annually, the descending branch of the Hadley cell is located roughly over the 25th parallel north and the 25th parallel south. The moisture in the subtropics is then partly advected poleward by eddies and partly advected equatorward by the lower branch of the Hadley cell, where it is later brought towards the ITCZ. Although the zonally-averaged Hadley cell is organized into four main branches, these branches are aggregations of more concentrated air flows and regions of mass transport.
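The 134 m/s figure follows directly from conservation of absolute angular momentum. A minimal sketch of the arithmetic in Python, assuming standard values for Earth's rotation rate and radius:
```python
import numpy as np

# Zonal wind acquired by a parcel starting at rest on the equator that
# conserves absolute angular momentum M = Omega*a^2*cos^2(phi) + u*a*cos(phi)
# while drifting poleward: u(phi) = Omega * a * sin^2(phi) / cos(phi).

Omega = 7.292e-5   # Earth's rotation rate (rad s^-1)
a = 6.371e6        # Earth's mean radius (m)

def u_momentum_conserving(lat_deg):
    phi = np.deg2rad(lat_deg)
    return Omega * a * np.sin(phi) ** 2 / np.cos(phi)

print(f"{u_momentum_conserving(30.0):.0f} m/s")  # ~134 m/s at 30 deg latitude
```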
Several theories and physical models have attempted to explain the latitudinal width of the Hadley cell. The Held–Hou model provides one theoretical constraint on the meridional extent of the Hadley cells. By assuming a simplified atmosphere composed of a lower layer subject to friction from the Earth's surface and an upper layer free from friction, the model predicts that the Hadley circulation would be restricted to within a few tens of degrees of latitude of the equator if parcels do not have any net heating within the circulation. According to the Held–Hou model, the latitude of the Hadley cell's poleward edge formula_0 scales according to
formula_7
where formula_8 is the difference in potential temperature between the equator and the pole in radiative equilibrium, formula_9 is the height of the tropopause, formula_10 is the Earth's rotation rate, and formula_11 is a reference potential temperature. Other compatible models posit that the width of the Hadley cell may scale with other physical parameters, such as the vertically-averaged Brunt–Väisälä frequency in the troposphere or the growth rate of baroclinic waves shed by the cell.
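As a rough numerical illustration, the scaling formula_7 can be evaluated with representative terrestrial values. The parameter choices in the sketch below are illustrative assumptions, and since the proportionality constant is omitted, the result is only an order-of-magnitude scale rather than a prediction:
```python
import numpy as np

g = 9.81           # gravitational acceleration (m s^-2)
dtheta = 40.0      # equator-pole potential temperature contrast (K), assumed
H_t = 15e3         # tropopause height (m), assumed
Omega = 7.292e-5   # Earth's rotation rate (rad s^-1)
a = 6.371e6        # Earth's radius (m)
theta0 = 300.0     # reference potential temperature (K), assumed

phi_scale = np.sqrt(g * dtheta * H_t / (Omega**2 * a**2 * theta0))
# ~17 degrees with these choices: the same order as the observed cell edge.
print(f"{np.rad2deg(phi_scale):.0f} degrees")
```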
Seasonality and variability.
The Hadley circulation varies considerably with seasonal changes. Around the equinoxes in spring and autumn, the Hadley circulation takes the form of two relatively weak Hadley cells, one in each hemisphere, sharing a common region of ascent over the ITCZ and moving air aloft towards each cell's respective hemisphere. However, closer to the solstices, the Hadley circulation transitions into a more singular and stronger cross-equatorial Hadley cell with air rising in the summer hemisphere and broadly descending in the winter hemisphere. The transition between the two-cell and single-cell configuration is abrupt, and during most of the year the Hadley circulation is characterized by a single dominant Hadley cell that transports air across the equator. In this configuration, the ascending branch is located in the tropical latitudes of the warmer summer hemisphere and the descending branch is positioned in the subtropics of the cooler winter hemisphere. Two cells are still present, though the winter hemisphere's cell becomes much more prominent while the summer hemisphere's cell is displaced poleward. The intensification of the winter hemisphere's cell is associated with a steepening of gradients in geopotential height, leading to an acceleration of trade winds and stronger meridional flows. The presence of continents relaxes temperature gradients in the summer hemisphere, accentuating the contrast between the hemispheric Hadley cells. Reanalysis data from 1979–2001 indicated that the dominant Hadley cell in boreal summer extended from 13°S to 31°N on average. In both boreal and austral winters, the Indian Ocean and the western Pacific Ocean contribute most to the rising and sinking motions in the zonally-averaged Hadley circulation. However, vertical flows over Africa and the Americas are more marked in boreal winter.
At longer interannual timescales, variations in the Hadley circulation are associated with variations in the El Niño–Southern Oscillation (ENSO), which impacts the positioning of the ascending branch; the response of the circulation to ENSO is non-linear, with a more marked response to El Niño events than to La Niña events. During El Niño, the Hadley circulation strengthens due to the increased warmth of the upper troposphere over the tropical Pacific and the resultant intensification of poleward flow. However, these changes are not zonally uniform: during the same events, the Hadley cells over the western Pacific and the Atlantic are weakened. During the Atlantic Niño, the circulation over the Atlantic is intensified. The Atlantic circulation is also enhanced during periods when the North Atlantic oscillation is strongly positive. The variation in the seasonally-averaged and annually-averaged Hadley circulation from year to year is largely accounted for by two juxtaposed modes of oscillation: an equatorially asymmetric mode characterized by a single cell straddling the equator and an equatorially symmetric mode characterized by two cells on either side of the equator.
Energetics and transport.
The Hadley cell is an important mechanism by which moisture and energy are transported both between the tropics and subtropics and between the northern and southern hemispheres. However, it is not an efficient transporter of energy due to the opposing flows of the lower and upper branch, with the lower branch transporting sensible and latent heat equatorward and the upper branch transporting potential energy poleward. The resulting net energy transport poleward represents around 10 percent of the overall energy transport involved in the Hadley cell. The descending branch of the Hadley cell generates clear skies and a surplus of evaporation relative to precipitation in the subtropics. The lower branch of the Hadley circulation accomplishes most of the transport of the excess water vapor accumulated in the subtropical atmosphere towards the equatorial region. The strong Southern Hemisphere Hadley cell relative to its northern counterpart leads to a small net energy transport from the northern to the southern hemisphere; as a result, the transport of energy at the equator is directed southward on average, with an annual net transport of around 0.1 PW. In contrast to the higher latitudes where eddies are the dominant mechanism for transporting energy poleward, the meridional flows imposed by the Hadley circulation are the primary mechanism for poleward energy transport in the tropics. As a thermally direct circulation, the Hadley circulation converts available potential energy to the kinetic energy of horizontal winds. Based on data from January 1979 to December 2010, the Hadley circulation has an average power output of 198 TW, with maxima in January and August and minima in May and October. Although the stability of the tropopause largely limits the movement of air from the troposphere to the stratosphere, some tropospheric air penetrates into the stratosphere via the Hadley cells.
The Hadley circulation may be idealized as a heat engine converting heat energy into mechanical energy. As air moves towards the equator near the Earth's surface, it accumulates entropy from the surface either by direct heating or the flux of sensible or latent heat. In the ascending branch of a Hadley cell, the ascent of air is approximately an adiabatic process with respect to the surrounding environment. However, as parcels of air move poleward in the cell's upper branch, they lose entropy by radiating heat to space at infrared wavelengths and descend in response. This radiative cooling occurs at a rate of at least 60 W m⁻² and may exceed 100 W m⁻² in winter. The heat accumulated during the equatorward branch of the circulation is greater than the heat lost in the upper poleward branch; the excess heat is converted into the mechanical energy that drives the movement of air. This difference in heating also results in the Hadley circulation transporting heat poleward, as the air supplying the Hadley cell's upper branch has greater moist static energy than the air supplying the cell's lower branch. Within the Earth's atmosphere, the timescale at which air parcels lose heat due to radiative cooling and the timescale at which air moves along the Hadley circulation are at similar orders of magnitude, allowing the Hadley circulation to transport heat despite cooling in the circulation's upper branch. Air with high potential temperature is ultimately moved poleward in the upper troposphere while air with lower potential temperature is brought equatorward near the surface. As a result, the Hadley circulation is one mechanism by which the disequilibrium produced by uneven heating of the Earth is brought towards equilibrium. When considered as a heat engine, the thermodynamic efficiency of the Hadley circulation averaged around 2.6 percent between 1979 and 2010, with small seasonal variability.
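The two figures quoted above can be combined in a back-of-the-envelope way: a mechanical power output of roughly 198 TW at an efficiency of about 2.6 percent implies a heat throughput on the order of petawatts. A one-line sketch of this arithmetic (the implied throughput is an inference made here, not a figure from the sources):
```python
# Combine the ~198 TW mechanical output with the ~2.6 percent efficiency
# quoted above to estimate the heat processed by the circulation.
power_out = 198e12            # mechanical power output (W)
efficiency = 0.026            # thermodynamic efficiency
heat_in = power_out / efficiency
print(f"implied heat throughput: {heat_in / 1e15:.1f} PW")  # ~7.6 PW
```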
The Hadley circulation also transports planetary angular momentum poleward due to Earth's rotation. Because the trade winds are directed opposite the Earth's rotation, eastward angular momentum is transferred to the atmosphere via frictional interaction between the winds and topography. The Hadley cell then transfers this angular momentum through its upward and poleward branches. The poleward branch accelerates and is deflected east in both the northern and southern hemispheres due to the Coriolis force and the conservation of angular momentum, resulting in a zonal jet stream above the descending branch of the Hadley cell. The formation of such a jet implies the existence of a thermal wind balance supported by the amplification of temperature gradients in the jet's vicinity resulting from the Hadley circulation's poleward heat advection. The subtropical jet in the upper troposphere coincides with where the Hadley cell meets the Ferrel cell. The strong wind shear accompanying the jet presents a significant source of baroclinic instability from which waves grow; the growth of these waves transfers heat and momentum polewards. Atmospheric eddies extract westerly angular momentum from the Hadley cell and transport it downward, resulting in the mid-latitude westerly winds.
Formulation and discovery.
The broad structure and mechanism of the Hadley circulation – comprising convective cells moving air due to temperature differences in a manner influenced by the Earth's rotation – was first proposed by Edmund Halley in 1685 and George Hadley in 1735. Hadley had sought to explain the physical mechanism for the trade winds and the westerlies; the Hadley circulation and the Hadley cells are named in honor of his pioneering work. Although Hadley's ideas invoked physical concepts that would not be formalized until well after his death, his model was largely qualitative and without mathematical rigor. Hadley's formulation was later recognized by most meteorologists by the 1920s to be a simplification of more complicated atmospheric processes. The Hadley circulation may have been the first attempt to explain the global distribution of winds in Earth's atmosphere using physical processes. However, Hadley's hypothesis could not be verified without observations of winds in the upper-atmosphere. Data collected by routine radiosondes beginning in the mid-20th century confirmed the existence of the Hadley circulation.
Early explanations of the trade winds.
In the 15th and 16th centuries, observations of maritime weather conditions were of considerable importance to maritime transport. Compilations of these observations showed consistent weather conditions from year to year and significant seasonal variability. The prevalence of dry conditions and weak winds at around 30° latitude and the equatorward trade winds closer to the equator, mirrored in the northern and southern hemispheres, was apparent by 1600. Early efforts by scientists to explain aspects of global wind patterns often focused on the trade winds, as the steadiness of the winds was assumed to portend a simple physical mechanism. Galileo Galilei proposed that the trade winds resulted from the atmosphere lagging behind the Earth's faster tangential rotation speed in the low latitudes, producing westward trades directed opposite Earth's rotation.
In 1685, English polymath Edmund Halley proposed at a debate organized by the Royal Society that the trade winds resulted from east to west temperature differences produced over the course of a day within the tropics. In Halley's model, as the Earth rotated, the location of maximum heating from the Sun moved west across the Earth's surface. This would cause air to rise, and by conservation of mass, Halley argued that air would be moved to the region of evacuated air, generating the trade winds. Halley's hypothesis was criticized by his friends, who noted that his model would lead to changing wind directions throughout the course of a day rather than the steady trade winds. Halley conceded in personal correspondence with John Wallis that "Your questioning my hypothesis for solving the Trade Winds makes me less confident of the truth thereof". Nonetheless, Halley's formulation was incorporated into "Chambers's Encyclopaedia" and "La Grande Encyclopédie", becoming the most widely-known explanation for the trade winds until the early 19th century. Though his explanation of the trade winds was incorrect, Halley correctly predicted that the surface trade winds should be accompanied by an opposing flow aloft following mass conservation.
George Hadley's explanation.
Unsatisfied with preceding explanations for the trade winds, George Hadley proposed an alternate mechanism in 1735. Hadley's hypothesis was published in the paper "On the Cause of the General Trade Winds" in "Philosophical Transactions of the Royal Society". Like Halley's, Hadley's explanation viewed the trade winds as a manifestation of air moving to take the place of rising warm air. However, the region of rising air prompting this flow lay along the lower latitudes. Understanding that the tangential rotation speed of the Earth was fastest at the equator and slowed farther poleward, Hadley conjectured that as air with lower momentum from higher latitudes moved equatorward to replace the rising air, it would conserve its momentum and thus curve west. By the same token, the rising air with higher momentum would spread poleward, curving east and then sinking as it cooled to produce westerlies in the mid-latitudes. Hadley's explanation implied the existence of hemisphere-spanning circulation cells in the northern and southern hemispheres extending from the equator to the poles, though he relied on an idealization of Earth's atmosphere that lacked seasonality or the asymmetries of the oceans and continents. His model also predicted rapid easterly trade winds far stronger than those observed, though he argued that the action of surface friction over the course of a few days slowed the air to the observed wind speeds. Colin Maclaurin extended Hadley's model to the ocean in 1740, asserting that meridional ocean currents were subject to similar westward or eastward deflections.
Hadley was not widely associated with his theory due to conflation with his older brother, John Hadley, and Halley; his theory failed to gain much traction in the scientific community for over a century due to its unintuitive explanation and the lack of validating observations. Several other natural philosophers independently put forward explanations for the global distribution of winds soon after Hadley's 1735 proposal. In 1746, Jean le Rond d'Alembert provided a mathematical formulation for global winds, but disregarded solar heating and attributed the winds to the gravitational effects of the Sun and Moon. Immanuel Kant, also unsatisfied with Halley's explanation for the trade winds, published an explanation for the trade winds and westerlies in 1756 with similar reasoning as Hadley. In the latter part of the 18th century, Pierre-Simon Laplace developed a set of equations establishing a direct influence of Earth's rotation on wind direction. Swiss scientist Jean-André Deluc published an explanation of the trade winds in 1787 similar to Hadley's hypothesis, connecting differential heating and the Earth's rotation with the direction of the winds.
English chemist John Dalton was the first to clearly credit Hadley's explanation of the trade winds to George Hadley, mentioning Hadley's work in his 1793 book "Meteorological Observations and Essays". In 1837, "Philosophical Magazine" published a new theory of wind currents developed by Heinrich Wilhelm Dove without reference to Hadley but similarly explaining the direction of the trade winds as being influenced by the Earth's rotation. In response, Dalton later wrote a letter to the editor to the journal promoting Hadley's work. Dove subsequently credited Hadley so frequently that the overarching theory became known as the "Hadley–Dove principle", popularizing Hadley's explanation for the trade winds in Germany and Great Britain.
Critique of Hadley's explanation.
The work of Gustave Coriolis, William Ferrel, Jean Bernard Foucault, and Henrik Mohn in the 19th century helped establish the Coriolis force as the mechanism for the deflection of winds due to Earth's rotation, emphasizing the conservation of angular momentum in directing flows rather than the conservation of linear momentum as Hadley suggested; Hadley's assumption led to an underestimation of the deflection by a factor of two. The acceptance of the Coriolis force in shaping global winds led to debate among German atmospheric scientists beginning in the 1870s over the completeness and validity of Hadley's explanation, which narrowly explained the behavior of initially meridional motions. Hadley's use of surface friction to explain why the trade winds were much slower than his theory would predict was seen as a key weakness in his ideas. The southwesterly motions observed in cirrus clouds at around 30°N further discounted Hadley's theory as their movement was far slower than the theory would predict when accounting for the conservation of angular momentum. In 1899, William Morris Davis, a professor of physical geography at Harvard University, gave a speech at the Royal Meteorological Society criticizing Hadley's theory for its failure to account for the transition of an initially unbalanced flow to geostrophic balance. Davis and other meteorologists in the 20th century recognized that the movement of air parcels along Hadley's envisaged circulation was sustained by a constant interplay between the pressure gradient and Coriolis forces rather than the conservation of angular momentum alone. Ultimately, while the atmospheric science community considered the general ideas of Hadley's principle valid, his explanation was viewed as a simplification of more complex physical processes.
Hadley's model of the global atmospheric circulation being characterized by hemisphere-wide circulation cells was also challenged by weather observations showing a zone of high pressure in the subtropics and a belt of low pressure at around 60° latitude. This pressure distribution would imply a poleward flow near the surface in the mid-latitudes rather than an equatorward flow implied by Hadley's envisioned cells. Ferrel and James Thomson later reconciled the pressure pattern with Hadley's model by proposing a circulation cell limited to lower altitudes in the mid-latitudes and nestled within the broader, hemisphere-wide Hadley cells. Carl-Gustaf Rossby proposed in 1947 that the Hadley circulation was limited to the tropics, forming one part of a dynamically-driven and multi-celled meridional flow. Rossby's model resembled the similar three-celled model developed by Ferrel in 1860.
Direct observation.
The three-celled model of the global atmospheric circulation – with Hadley's conceived circulation forming its tropical component – had been widely accepted by the meteorological community by the early 20th century. However, the Hadley cell's existence was only validated by weather observations near the surface, and its predictions of winds in the upper troposphere remained untested. The routine sampling of the upper troposphere by radiosondes that emerged in the mid-20th century confirmed the existence of meridional overturning cells in the atmosphere.
Influence on climate.
The Hadley circulation is one of the most important influences on global climate and planetary habitability, as well as an important transporter of angular momentum, heat, and water vapor. Hadley cells flatten the temperature gradient between the equator and the poles, making the extratropics milder. The global precipitation pattern of high precipitation in the tropics and low precipitation in the subtropics is a consequence of the positioning of the rising and sinking branches of Hadley cells, respectively. Near the equator, the ascent of humid air results in the heaviest precipitation on Earth. The periodic movement of the ITCZ, and thus the seasonal variation of the Hadley circulation's rising branches, produces the world's monsoons. The descending motion of air associated with the sinking branch produces surface divergence consistent with the prominence of subtropical high-pressure areas. These semipermanent regions of high pressure lie primarily over the ocean between 20° and 40° latitude. Arid conditions are associated with the descending branches of the Hadley circulation, with many of the Earth's deserts and semiarid or arid regions underlying the sinking branches of the Hadley circulation.
The cloudy marine boundary layer common in the subtropics may be seeded by cloud condensation nuclei exported out of the tropics by the Hadley circulation.
Effects of climate change.
Natural variability.
Paleoclimate reconstructions of trade winds and rainfall patterns suggest that the Hadley circulation changed in response to natural climate variability. During Heinrich events within the last 100,000 years, the Northern Hemisphere Hadley cell strengthened while the Southern Hemisphere Hadley cell weakened. Variation in insolation during the mid- to late-Holocene resulted in a southward migration of the Northern Hemisphere Hadley cell's ascending and descending branches closer to their present-day positions. Tree rings from the mid-latitudes of the Northern Hemisphere suggest that the historical positions of the Hadley cell branches have also shifted in response to shorter oscillations, with the Northern Hemisphere descending branch moving southward during positive phases of the El Niño–Southern Oscillation and Pacific decadal oscillation and northward during the corresponding negative phases. The Hadley cells were displaced southward between 1400 and 1850, concurrent with drought in parts of the Northern Hemisphere.
Hadley cell expansion and intensity changes.
Observed trends.
According to the IPCC Sixth Assessment Report (AR6), the Hadley circulation has likely expanded since at least the 1980s in response to climate change, with medium confidence in an accompanying intensification of the circulation. An expansion of the overall circulation poleward by about 0.1°–0.5° latitude per decade since the 1980s is largely accounted for by the poleward shift of the Northern Hemisphere Hadley cell, which in atmospheric reanalysis has shown a more marked expansion since 1992. However, the AR6 also reported medium confidence in the expansion of the Northern Hemisphere Hadley cell being within the range of internal variability. In contrast, the AR6 assessed that it was likely that the Southern Hemisphere Hadley cell's poleward expansion was due to anthropogenic influence; this finding was based on CMIP5 and CMIP6 climate models.
Studies have produced a large range of estimates for the rate of widening of the tropics due to the use of different metrics; estimates based on upper-tropospheric properties tend to yield a wider range of values. The degree to which the circulation has expanded varies by season, with trends in summer and autumn being larger and statistically significant in both hemispheres. The widening of the Hadley circulation has also resulted in a likely widening of the ITCZ since the 1970s. Reanalyses also suggest that the summer and autumn Hadley cells in both hemispheres have widened and that the global Hadley circulation has intensified since 1979, with a more pronounced intensification in the Northern Hemisphere. Between 1979 and 2010, the power generated by the global Hadley circulation increased by an average of 0.54 TW per year, consistent with an increased input of energy into the circulation by warming SSTs over the tropical oceans. (For comparison, the Hadley circulation's overall power ranges from 0.5 TW to 218 TW throughout the year in the Northern Hemisphere and from 32 TW to 204 TW in the Southern Hemisphere.) In contrast to reanalyses, CMIP5 climate models depict a weakening of the Hadley circulation since 1979. The magnitude of long-term changes in the circulation strength is thus uncertain due to the influence of large interannual variability and the poor representation of the distribution of latent heat release in reanalyses.
The expansion of the Hadley circulation due to climate change is consistent with the Held–Hou model, which predicts that the latitudinal extent of the circulation is proportional to the square root of the height of the tropopause. Warming of the troposphere raises the tropopause height, enabling the upper poleward branch of the Hadley cells to extend farther and leading to an expansion of the cells. Results from climate models suggest that the impact of internal variability (such as from the Pacific decadal oscillation) and the anthropogenic influence on the expansion of the Hadley circulation since the 1980s have been comparable. Human influence is most evident in the expansion of the Southern Hemisphere Hadley cell; the AR6 assessed medium confidence in associating the expansion of the Hadley circulation in both hemispheres with the added radiative forcing of greenhouse gases.
Physical mechanisms and projected changes.
The physical processes by which the Hadley circulation expands under human influence are unclear but may be linked to the increased warming of the subtropics relative to other latitudes in both the Northern and Southern hemispheres. The enhanced subtropical warmth could enable expansion of the circulation poleward by displacing the subtropical jet and baroclinic eddies poleward. Poleward expansion of the Southern Hemisphere Hadley cell in the austral summer was attributed by the IPCC Fifth Assessment Report (AR5) to stratospheric ozone depletion based on CMIP5 model simulations, while CMIP6 simulations have not shown as clear a signal. Ozone depletion could plausibly affect the Hadley circulation through the increase of radiative cooling in the lower stratosphere; this would increase the phase speed of baroclinic eddies and displace them poleward, leading to expansion of Hadley cells. Other eddy-driven mechanisms for expanding Hadley cells have been proposed, involving changes in baroclinicity, wave breaking, and other releases of instability. In the extratropics of the Northern Hemisphere, increasing concentrations of black carbon and tropospheric ozone may be a major forcing on that hemisphere's Hadley cell expansion in boreal summer.
Projections from climate models indicate that a continued increase in the concentration of greenhouse gases would result in continued widening of the Hadley circulation. However, simulations using historical data suggest that forcing from greenhouse gases may account for about 0.1° per decade of expansion of the tropics. Although the widening of the Hadley cells due to climate change has occurred concurrent with an increase in their intensity based on atmospheric reanalyses, climate model projections generally depict a weakening circulation in tandem with a widening circulation by the end of the 21st century. A longer term increase in the concentration of carbon dioxide may lead to a weakening of the Hadley circulation as a result of the reduction of radiative cooling in the troposphere near the circulation's sinking branches. However, changes in the oceanic circulation within the tropics may attenuate changes in the intensity and width of the Hadley cells by reducing thermal contrasts.
Changes to weather patterns.
The expansion of the Hadley circulation due to climate change is connected to changes in regional and global weather patterns. A widening of the tropics could displace the tropical rain belt, expand subtropical deserts, and exacerbate wildfires and drought. The documented shift and expansion of subtropical ridges are associated with changes in the Hadley circulation, including a westward extension of the subtropical high over the northwestern Pacific, changes in the intensity and position of the Azores High, and the poleward displacement and intensification of the subtropical high pressure belt in the Southern Hemisphere. These changes have influenced regional precipitation amounts and variability, including drying trends over southern Australia, northeastern China, and northern South Asia. The AR6 assessed limited evidence that the expansion of the Northern Hemisphere Hadley cell may have led in part to drier conditions in the subtropics and a poleward expansion of aridity during boreal summer. Precipitation changes induced by Hadley circulation changes may lead to changes in regional soil moisture, with modelling showing the most significant declines in the Mediterranean region, South Africa, and the Southwestern United States. However, the concurrent effects of changing surface temperature patterns over land lead to uncertainties over the influence of Hadley cell broadening on drying over subtropical land areas.
Climate modelling suggests that the shift in the position of the subtropical highs induced by Hadley cell broadening may reduce oceanic upwelling at low latitudes and enhance oceanic upwelling at high latitudes. The expansion of subtropical highs in tandem with the circulation's expansion may also entail a widening of oceanic regions of high salinity and low marine primary production. A decline in extratropical cyclones in the storm track regions in model projections is partly influenced by Hadley cell expansion. Poleward shifts in the Hadley circulation are associated with shifts in the paths of tropical cyclones in the Northern and Southern hemispheres, including a poleward trend in the locations where storms attained their peak intensity.
Extraterrestrial Hadley circulations.
Outside of Earth, any thermally direct circulation that circulates air meridionally across planetary-scale gradients of insolation may be described as a Hadley circulation. A terrestrial atmosphere subject to excess equatorial heating tends to maintain an axisymmetric Hadley circulation with rising motions near the equator and sinking at higher latitudes. Differential heating is hypothesized to result in Hadley circulations analogous to Earth's in other atmospheres of the Solar System, such as on Venus, Mars, and Titan. As with Earth's atmosphere, the Hadley circulation would be the dominant meridional circulation for these extraterrestrial atmospheres. Though less understood, Hadley circulations may also be present on the gas giants of the Solar System and should in principle materialize in exoplanetary atmospheres. The spatial extent of a Hadley cell on any atmosphere may depend on the rotation rate of the planet or moon, with a faster rotation rate leading to more contracted Hadley cells (with a more restrictive poleward extent) and a more cellular global meridional circulation. A slower rotation rate reduces the Coriolis effect, thus reducing the meridional temperature gradient needed to sustain a jet at the Hadley cell's poleward boundary and allowing the Hadley cell to extend farther poleward.
Venus, which rotates slowly, may have Hadley cells that extend farther poleward than Earth's, spanning from the equator to high latitudes in each of the northern and southern hemispheres. Its broad Hadley circulation would efficiently maintain the nearly isothermal temperature distribution between the planet's pole and equator, with only slow vertical velocities. Observations of chemical tracers such as carbon monoxide provide indirect evidence for the existence of the Venusian Hadley circulation. The presence of poleward winds near the level of the upper cloud deck is typically understood to be associated with the upper branch of a Hadley cell located several tens of kilometres above the Venusian surface. The slow vertical velocities associated with the Hadley circulation have not been measured, though they may have contributed to the vertical velocities measured by the Vega and Venera missions. The Hadley cells may extend to around 60° latitude, equatorward of a mid-latitude jet stream demarcating the boundary between the hypothesized Hadley cell and the polar vortex. The planet's atmosphere may exhibit two Hadley circulations, with one near the surface and the other at the level of the upper cloud deck. The Venusian Hadley circulation may contribute to the superrotation of the planet's atmosphere.
Simulations of the Martian atmosphere suggest that a Hadley circulation is also present in Mars' atmosphere, exhibiting a stronger seasonality compared to Earth's Hadley circulation. This greater seasonality results from diminished thermal inertia resulting from the lack of an ocean and the planet's thinner atmosphere. Additionally, Mars' orbital eccentricity leads to a stronger and wider Hadley cell during its northern winter compared to its southern winter. During most of the Martian year, when a single Hadley cell prevails, its rising and sinking branches are located at 30° and 60° latitude, respectively, in global climate modelling. The tops of the Hadley cells on Mars may reach higher altitudes and be less well-defined compared to those on Earth due to the lack of a strong tropopause on Mars. While latent heating from phase changes associated with water drives much of the ascending motion in Earth's Hadley circulation, ascent in Mars' Hadley circulation may be driven by radiative heating of lofted dust and intensified by the condensation of carbon dioxide near the polar ice cap of Mars' wintertime hemisphere, steepening pressure gradients. Over the course of the Martian year, the mass flux of the Hadley circulation ranges between about 10⁹ kg s⁻¹ during the equinoxes and 10¹⁰ kg s⁻¹ at the solstices.
A Hadley circulation may also be present in the atmosphere of Saturn's moon Titan. As on Venus, Titan's slow rotation rate may support a spatially broad Hadley circulation. General circulation modeling of Titan's atmosphere suggests the presence of a cross-equatorial Hadley cell. This configuration is consistent with the meridional winds observed by the Huygens spacecraft when it landed near Titan's equator. During Titan's solstices, its Hadley circulation may take the form of a single Hadley cell that extends from pole to pole, with warm gas rising in the summer hemisphere and sinking in the winter hemisphere. A two-celled configuration with ascent near the equator is present in modelling during a limited transitional period near the equinoxes. The distribution of convective methane clouds on Titan and observations from the Huygens spacecraft suggest that the rising branch of its Hadley circulation occurs in the mid-latitudes of its summer hemisphere. Frequent cloud formation occurs at 40° latitude in Titan's summer hemisphere from ascent analogous to Earth's ITCZ.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\phi"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "\\psi(\\phi, p) = \\frac{2 \\pi a \\cos \\phi}{g}\\int_0^p[v(\\phi,p)] \\, dp"
},
{
"math_id": 3,
"text": "a"
},
{
"math_id": 4,
"text": "g"
},
{
"math_id": 5,
"text": "[v(\\phi, p)]"
},
{
"math_id": 6,
"text": "\\psi"
},
{
"math_id": 7,
"text": "\\phi \\propto \\sqrt{\\frac{g \\Delta \\theta H_t}{\\Omega^2 a^2 \\theta_0}}"
},
{
"math_id": 8,
"text": "\\Delta\\theta"
},
{
"math_id": 9,
"text": "H_t"
},
{
"math_id": 10,
"text": "\\Omega"
},
{
"math_id": 11,
"text": "\\theta_0"
}
]
| https://en.wikipedia.org/wiki?curid=6953458 |
69536750 | Central triangle | Triangle related to a given triangle by two functions
In geometry, a central triangle is a triangle in the plane of the reference triangle. The trilinear coordinates of its vertices relative to the reference triangle are expressible in a certain cyclical way in terms of two functions having the same degree of homogeneity. At least one of the two functions must be a triangle center function. The excentral triangle is an example of a central triangle. The central triangles have been classified into three types based on the properties of the two functions.
Definition.
Triangle center function.
A triangle center function is a real-valued function "f" of three real variables u, v, w having the following properties:
*Homogeneity property: formula_0 for some constant n and for all "t" > 0. The constant n is the degree of homogeneity of the function "f".
*Bisymmetry property: formula_1
Central triangles of Type 1.
Let "f" and "g" be two triangle center functions, not both identically zero, having the same degree of homogeneity. Let a, b, c be the side lengths of the reference triangle △"ABC". An ("f", "g")-central triangle of Type 1 is a triangle △"A'B'C' " the trilinear coordinates of whose vertices have the following form:
formula_2
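For a concrete numerical sketch, take f = −1 and g = 1 (both triangle center functions of degree 0); the resulting Type 1 central triangle is the excentral triangle mentioned in the introduction. The Python code below converts the vertices' trilinear coordinates to Cartesian coordinates; the side lengths and the conversion helper are illustrative assumptions, not part of the article:
```python
import numpy as np

def trilinear_to_cartesian(x, y, z, A, B, C, a, b, c):
    """Map trilinear coordinates x : y : z to a Cartesian point."""
    w = a * x + b * y + c * z                  # normalizing factor
    return (a * x * A + b * y * B + c * z * C) / w

# Reference triangle ABC with assumed side lengths a = BC, b = CA, c = AB.
a, b, c = 5.0, 6.0, 7.0
A = np.array([0.0, 0.0])
B = np.array([c, 0.0])
Cx = (b**2 + c**2 - a**2) / (2 * c)
C = np.array([Cx, np.sqrt(b**2 - Cx**2)])

f = lambda u, v, w: -1.0                       # bisymmetric, degree 0
g = lambda u, v, w: 1.0

# Vertices of the (f, g)-central triangle of Type 1 (here the excenters).
A1 = trilinear_to_cartesian(f(a, b, c), g(b, c, a), g(c, a, b), A, B, C, a, b, c)
B1 = trilinear_to_cartesian(g(a, b, c), f(b, c, a), g(c, a, b), A, B, C, a, b, c)
C1 = trilinear_to_cartesian(g(a, b, c), g(b, c, a), f(c, a, b), A, B, C, a, b, c)
print(A1, B1, C1)
```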
Central triangles of Type 2.
Let "f" be a triangle center function and "g" be a function satisfying the homogeneity property and having the same degree of homogeneity as "f", but not satisfying the bisymmetry property. An ("f", "g")-central triangle of Type 2 is a triangle △"A'B'C' " the trilinear coordinates of whose vertices have the following form:
formula_3
Central triangles of Type 3.
Let "g" be a triangle center function. A "g"-central triangle of Type 3 is a triangle △"A'B'C' " the trilinear coordinates of whose vertices have the following form:
formula_4
This is a degenerate triangle in the sense that the points A', B', C' are collinear.
Special cases.
If "f" = "g", the ("f", "g")-central triangle of Type 1 degenerates to the triangle center A'. All central triangles of both Type 1 and Type 2 relative to an equilateral triangle degenerate to a point.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F(tu,tv,tw) = t^n F(u,v,w)"
},
{
"math_id": 1,
"text": "F(u,v,w) = F(u,w,v)."
},
{
"math_id": 2,
"text": "\\begin{array}{rcccccc}\n A' =& f(a,b,c) &:& g(b,c,a) &:& g(c,a,b) \\\\\n B' =& g(a,b,c) &:& f(b,c,a) &:& g(c,a,b) \\\\\n C' =& g(a,b,c) &:& g(b,c,a) &:& f(c,a,b)\n\\end{array}"
},
{
"math_id": 3,
"text": "\\begin{array}{rcccccc}\n A' =& f(a,b,c) &:& g(b,c,a) &:& g(c,b,a) \\\\\n B' =& g(a,c,b) &:& f(b,c,a) &:& g(c,a,b) \\\\\n C' =& g(a,b,c) &:& g(b,a,c) &:& f(c,a,b)\n\\end{array}"
},
{
"math_id": 4,
"text": "\\begin{array}{rrcrcr}\n A' =& 0 \\quad\\ \\ &:& g(b,c,a) &:& - g(c,b,a) \\\\\n B' =& - g(a,c,b) &:& 0 \\quad\\ \\ &:& g(c,a,b) \\\\\n C' =& g(a,b,c) &:& - g(b,a,c) &:& 0 \\quad\\ \\ \n\\end{array}"
},
{
"math_id": 5,
"text": "f(u,v,w) = -1,\\ g(u,v,w) = 1."
},
{
"math_id": 6,
"text": "\nf(a,b,c) = a(2S+S_2), \\quad g(a,b,c) = aS_A, \n"
},
{
"math_id": 7,
"text": "S_A = \\tfrac{1}{2}(b^2 + c^2 - a^2)."
}
]
| https://en.wikipedia.org/wiki?curid=69536750 |
6954092 | Homogeneous differential equation | Type of ordinary differential equation
A differential equation can be homogeneous in either of two respects.
A first order differential equation is said to be homogeneous if it may be written
formula_0
where f and g are homogeneous functions of the same degree of x and y. In this case, the change of variable "y" = "ux" leads to an equation of the form
formula_1
which is easy to solve by integrating both sides.
Otherwise, a differential equation is homogeneous if it is a homogeneous function of the unknown function and its derivatives. In the case of linear differential equations, this means that there are no constant terms. The solutions of any linear ordinary differential equation of any order may be deduced by integration from the solution of the homogeneous equation obtained by removing the constant term.
History.
The term "homogeneous" was first applied to differential equations by Johann Bernoulli in section 9 of his 1726 article "De integraionibus aequationum differentialium" (On the integration of differential equations).
Homogeneous first-order differential equations.
A first-order ordinary differential equation in the form:
formula_2
is a homogeneous type if both functions "M"("x", "y") and "N"("x", "y") are homogeneous functions of the same degree n. That is, multiplying each variable by a parameter λ, we find
formula_3
Thus,
formula_4
Solution method.
In the quotient formula_5, we can let "t" = 1/"x" to simplify this quotient to a function f of the single variable "y"/"x":
formula_6
That is
formula_7
Introduce the change of variables "y" = "ux"; differentiate using the product rule:
formula_8
This transforms the original differential equation into the separable form
formula_9
or
formula_10
which can now be integrated directly: ln "x" equals the antiderivative of the right-hand side (see ordinary differential equation).
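As a worked sketch (the example equation is chosen here for illustration and is not from the article), consider "xy" d"y" = ("x"² + "y"²) d"x", in which M and N are homogeneous of degree 2. The substitution "y" = "ux" gives "u" d"u" = d"x"/"x", and integrating yields "y"² = "x"²(2 ln "x" + "C"). The same result can be checked with SymPy, which implements homogeneous-coefficient substitutions:
```python
import sympy as sp

# Verify the worked example above with SymPy's homogeneous-coefficients solver.
x = sp.symbols('x', positive=True)
y = sp.Function('y')

ode = sp.Eq(x * y(x) * y(x).diff(x), x**2 + y(x)**2)
sol = sp.dsolve(ode, y(x), hint='1st_homogeneous_coeff_best')
print(sol)  # equivalent to y(x)**2 = x**2*(2*log(x) + C1)
```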
Special case.
A first order differential equation of the form (a, b, c, e, f, g are all constants)
formula_11
where "af" ≠ "be"
can be transformed into a homogeneous type by a linear transformation of both variables (α and β are constants):
formula_12
Here α and β are chosen so that the constant terms vanish: they satisfy "aα" + "bβ" = "c" and "eα" + "fβ" = "g", a linear system that has a unique solution precisely because "af" ≠ "be".
Homogeneous linear differential equations.
A linear differential equation is homogeneous if it is a homogeneous linear equation in the unknown function and its derivatives. It follows that, if "φ"("x") is a solution, so is "cφ"("x"), for any (non-zero) constant c. In order for this condition to hold, each nonzero term of the linear differential equation must depend on the unknown function or any derivative of it. A linear differential equation that fails this condition is called inhomogeneous.
A linear differential equation can be represented as a linear operator acting on "y"("x") where x is usually the independent variable and y is the dependent variable. Therefore, the general form of a linear homogeneous differential equation is
formula_13
where L is a differential operator, a sum of derivatives (defining the "0th derivative" as the original, non-differentiated function), each multiplied by a function "f""i" of x:
formula_14
where "f""i" may be constants, but not all "f""i" may be zero.
For example, the following linear differential equation is homogeneous:
formula_15
whereas the following two are inhomogeneous:
formula_16
formula_17
The existence of a constant term is a sufficient condition for an equation to be inhomogeneous, as in the above example.
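The defining property above, that any constant multiple of a solution is again a solution, can be checked symbolically for the first (homogeneous) example. A minimal SymPy sketch, where the operator name L is an assumption made here for illustration:
```python
import sympy as sp

# If L is linear and L(phi) = 0, then L(c*phi) = c*L(phi) = 0 as well.
x, c = sp.symbols('x c')
phi = sp.Function('phi')

L = lambda f: sp.sin(x) * sp.diff(f, x, 2) + 4 * sp.diff(f, x) + f
assert sp.expand(L(c * phi(x)) - c * L(phi(x))) == 0
print("L(c*phi) == c*L(phi): c*phi solves L(y) = 0 whenever phi does")
```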
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(x,y) \\, dy = g(x,y) \\, dx,"
},
{
"math_id": 1,
"text": "\\frac{dx}{x} = h(u) \\, du,"
},
{
"math_id": 2,
"text": "M(x,y)\\,dx + N(x,y)\\,dy = 0 "
},
{
"math_id": 3,
"text": "M(\\lambda x, \\lambda y) = \\lambda^n M(x,y) \\quad \\text{and} \\quad N(\\lambda x, \\lambda y) = \\lambda^n N(x,y)\\,. "
},
{
"math_id": 4,
"text": "\\frac{M(\\lambda x, \\lambda y)}{N(\\lambda x, \\lambda y)} = \\frac{M(x,y)}{N(x,y)}\\,. "
},
{
"math_id": 5,
"text": "\\frac{M(tx,ty)}{N(tx,ty)} = \\frac{M(x,y)}{N(x,y)}"
},
{
"math_id": 6,
"text": "\\frac{M(x,y)}{N(x,y)} = \\frac{M(tx,ty)}{N(tx,ty)} = \\frac{M(1,y/x)}{N(1,y/x)}=f(y/x)\\,. "
},
{
"math_id": 7,
"text": "\\frac{dy}{dx} = -f(y/x)."
},
{
"math_id": 8,
"text": "\\frac{dy}{dx}=\\frac{d(ux)}{dx} = x\\frac{du}{dx} + u\\frac{dx}{dx} = x\\frac{du}{dx} + u."
},
{
"math_id": 9,
"text": "x\\frac{du}{dx} = -f(u) - u, "
},
{
"math_id": 10,
"text": "\\frac 1x\\frac{dx}{du} = \\frac {-1}{f(u) + u}, "
},
{
"math_id": 11,
"text": " \\left(ax + by + c\\right) dx + \\left(ex + fy + g\\right) dy = 0"
},
{
"math_id": 12,
"text": "t = x + \\alpha; \\;\\; z = y + \\beta \\,. "
},
{
"math_id": 13,
"text": " L(y) = 0"
},
{
"math_id": 14,
"text": " L = \\sum_{i=0}^n f_i(x)\\frac{d^i}{dx^i} \\, ,"
},
{
"math_id": 15,
"text": " \\sin(x) \\frac{d^2y}{dx^2} + 4 \\frac{dy}{dx} + y = 0 \\,, "
},
{
"math_id": 16,
"text": " 2 x^2 \\frac{d^2y}{dx^2} + 4 x \\frac{dy}{dx} + y = \\cos(x) \\,; "
},
{
"math_id": 17,
"text": " 2 x^2 \\frac{d^2y}{dx^2} - 3 x \\frac{dy}{dx} + y = 2 \\,. "
}
]
| https://en.wikipedia.org/wiki?curid=6954092 |
69543104 | 1 Samuel 14 | First Book of Samuel chapter
1 Samuel 14 is the fourteenth chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains Saul's actions against the Philistines. This is within a section comprising 1 Samuel 7–15 which records the rise of the monarchy in Israel and the account of the first years of King Saul.
Text.
This chapter was originally written in the Hebrew language. It is divided into 52 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 24–25, 28–34, 47–51 and 4Q52 (4QSamb; 250 BCE) with extant verses 41–42.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century; only extant verses 10–52).
Analysis.
This chapter gives some detailed narratives on the actions of Saul, 'oscillating between a favorable view and a negative, unfavorable verdict', which in the end 'reinforce the conviction that Saul was not a man after God's heart'. There is a contrast between Saul and his first son, Jonathan, where Saul is depicted as reckless, acting foolishly on one occasion (13:13), interrupted a consultation to rush to battle on another (14:19), and finally endangered the life of his son (14:44), whereas Jonathan is described as 'possessing the characteristics of a charismatic leader, stood in the tradition of those who waged God's battles' and became God's instrument: he held the assumption that 'the LORD will act for us' (verse 6), depended on God's approval of his action (verses 8–12), and attributed the victory to God (verse 23, cf. verse 45).
The Battle of Michmash (14:1–15).
The Philistines camped at Michmash (1 Samuel 13:23) on the north side of the deep ravine, "Wadi es-Suwenit", whereas the Israelites camped in Geba to the south of the ravine. Jonathan and his armour-bearer bravely clambered up from the ravine through hard-to-climb rock formations, as indicated by their names, Bozez ('slippery one') and Seneh ('thorny one'), and succeeded in defeating a group of Philistine soldiers (verses 1–15).
"And Saul was sitting in the outskirts of Gibeah under a pomegranate tree which is in Migron. The people who were with him were about six hundred men."
"And Ahijah, the son of Ahitub, Ichabod’s brother, the son of Phinehas, the son of Eli, the priest of the Lord in Shiloh, was wearing the ephod. But the people did not know that Jonathan had gone."
Saul's actions (14:16–52).
After Jonathan had caused panic in the Philistine garrison (verse 15), Saul finally brought his troops to engage in battle (verse 20). Believing that it would ensure success, Saul placed an oath on his troops to refrain from eating until evening, a rash act (as noted in verse 24 of the Greek Septuagint version, although not found in the Hebrew Masoretic Text), which left the troops too famished to achieve a complete victory and even became a threat to Jonathan's life (verses 24–26). Jonathan was unaware of the oath, so he ate some of the plentiful honey available and was refreshed ('his eyes brightened'), but as a consequence of the oath he faced the death penalty. This led Jonathan to refer to Saul as one who 'has troubled the land' and who had prevented a total victory (verse 30). Having gone hungry all day out of respect for the oath, the Israelite troops seized animals from the spoil and ate them without properly draining the blood from the meat, because they slaughtered the animals on the ground rather than on a stone from which the blood could flow away (verses 33–34). 'Eating with blood' (as in NRSV) was forbidden by the Torah (Deuteronomy 12:23–27; Leviticus 19:26). Nonetheless, Saul believed the failure to wipe out the Philistines was due to a lack of divine support, so an investigation was made by means of a sacred lot to find whose fault it was. The lot fell to the king's family and specifically to Jonathan. Although Jonathan and Saul were willing to accept the verdict, the Israelite soldiers insisted on sparing Jonathan's life (verse 44). The account closes on a more positive note, portraying Saul as a successful leader (verses 47–48) and the head of a household (verses 49–51).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
Commentaries on Samuel.
<templatestyles src="Refbegin/styles.css" />
General.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=69543104 |
6954595 | Shear band | A shear band (or, more generally, a 'strain localization') is a narrow zone of intense shearing strain, usually of plastic nature, developing during severe deformation of ductile materials.
As an example, a soil (overconsolidated silty clay) specimen is shown in Fig. 1 after an axisymmetric compression test. Initially the sample was cylindrical in shape and, since an attempt was made to preserve symmetry during the test, the cylindrical shape was maintained for a while and the deformation was homogeneous; at extreme loading, however, two X-shaped shear bands formed and the subsequent deformation was strongly localized (see also the sketch on the right of Fig. 1).
Materials in which shear bands are observed.
Although not observable in brittle materials (for instance glass at room temperature), shear bands or, more generally, ‘localized deformations’ usually develop within a broad range of ductile materials (alloys, metals, granular materials, plastics, polymers, and soils) and even in quasi-brittle materials (concrete, ice, rock, and some ceramics).
The relevance of the shear banding phenomenon is that it precedes failure, since extreme deformations occurring within shear bands lead to intense damage and fracture. Therefore, the formation of shear bands is the key to understanding failure in ductile materials, a research topic of great importance for the design of new materials and for exploiting existing materials in extreme conditions. As a consequence, localization of deformation has been the focus of intense research activity since the middle of the 20th century.
Mathematical modeling.
Shear band formation is an example of a material instability, corresponding to an abrupt loss of homogeneity of deformation occurring in a solid sample subject to a loading path compatible with continued uniform deformation. In this sense, it may be interpreted as a deformation mechanism ‘alternative’ to a trivial one and therefore a bifurcation or loss of uniqueness of a ‘perfect’ equilibrium path. The distinctive character of this bifurcation is that it may occur even in an infinite body (or under the extreme constraint of smooth contact with a rigid constraint).
Consider an infinite body made up of a nonlinear material, quasi-statically deformed in a way that stress and strain may remain homogeneous. The incremental response of this nonlinear material is assumed for simplicity linear, so that it can be expressed as a relation between a stress increment formula_0 and a strain increment formula_1, through a fourth-order constitutive tensor formula_2 as
formula_10 (1)
where the fourth-order constitutive tensor formula_3 depends on the current state, i.e. the current stress, the current strain and, possibly, other constitutive parameters (for instance, hardening variables for metals, or density for granular materials).
Conditions are sought for the emergence of a surface of discontinuity (of unit normal vector formula_4) in the incremental stress and strain. These conditions are identified with the conditions for the occurrence of localization of deformation. In particular, incremental equilibrium requires that the incremental tractions (not the stresses!) remain continuous
formula_11 (2)
(where + and - denote the two sides of the surface) and geometrical compatibility imposes a strain compatibility restriction on the form of incremental strain:
formula_12 (3)
where the symbol formula_5 denotes tensor product and formula_6 is a vector defining the deformation discontinuity mode (orthogonal to formula_4 for incompressible materials). A substitution of the incremental constitutive law (1) and of the strain compatibility (3) into the continuity of incremental tractions (2) yields the necessary condition for strain localization:
formula_13 (4)
Since the second-order tensor formula_7 defined for every vector formula_8 as
formula_9
is the so-called 'acoustic tensor', defining the condition of propagation of acceleration waves, we can conclude that the condition for strain localization coincides with the condition of singularity (propagation at null speed) of an acceleration wave. This condition represents the so-called 'loss of ellipticity' of the differential equations governing the rate equilibrium.
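As an illustrative numerical check of the localization condition just discussed — a sketch under the assumption of isotropic linear elasticity, with illustrative Lamé constants that are not taken from the text — one can assemble the acoustic tensor and verify that it stays non-singular in every direction:

```python
# Hedged sketch: for isotropic linear elasticity,
# C_ijkl = lam*d_ij*d_kl + mu*(d_ik*d_jl + d_il*d_jk),
# build the acoustic tensor A(n)_jk = C_ijkl n_i n_l and check that
# det A(n) > 0 for all sampled directions n, i.e. that the localization
# condition has no nontrivial solution g.  lam and mu are illustrative.
import numpy as np

def isotropic_stiffness(lam, mu):
    d = np.eye(3)
    return (lam * np.einsum('ij,kl->ijkl', d, d)
            + mu * (np.einsum('ik,jl->ijkl', d, d)
                    + np.einsum('il,jk->ijkl', d, d)))

def acoustic_tensor(C, n):
    return np.einsum('ijkl,i,l->jk', C, n, n)

C = isotropic_stiffness(lam=1.0, mu=0.5)
rng = np.random.default_rng(0)
n = rng.normal(size=(1000, 3))
n /= np.linalg.norm(n, axis=1, keepdims=True)
dets = [np.linalg.det(acoustic_tensor(C, ni)) for ni in n]
print(min(dets))  # positive: the incremental problem stays elliptic
```

For such a well-behaved elastic solid the determinant stays strictly positive in every direction; shear bands can only emerge when the constitutive operator degrades (for instance, through plastic softening) to the point where the determinant of the acoustic tensor vanishes for some direction.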
State-of-the-art.
The state-of-the-art of the research on shear bands is that the phenomenon is well understood from the theoretical and experimental point of view, and available constitutive models give good qualitative predictions, although quantitative predictions are often poor. Moreover, great progress has been made on numerical simulations, so that shear band nucleation and propagation in relatively complex situations can be traced numerically with finite element models, although still at the cost of a great computational effort. Of further interest are simulations that reveal the crystallographic orientation dependence of shear banding in single crystals and polycrystals. These simulations show that certain orientations are much more prone to undergo shear localization than others.
Shear banding and crystallographic texture.
Most polycrystalline metals and alloys usually deform via shear caused by dislocations, twins, and/or shear bands. This leads to pronounced plastic anisotropy at the grain scale and to preferred grain orientation distributions, i.e. crystallographic textures. Cold rolling textures of most face-centered cubic metals and alloys, for instance, range between two types, i.e. the brass-type texture and the copper-type texture. The stacking fault energy plays an important role for the prevailing mechanisms of plastic deformation and the resultant textures. For aluminum and other fcc materials with high SFE, dislocation glide is the main mechanism during cold rolling, and the {112}<111> (copper) and {123}<634> (S) texture components (copper-type textures) are developed. In contrast, in Cu–30 wt.% Zn (alpha-brass) and related metals and alloys with low SFE, mechanical twinning and shear banding occur together with dislocation glide as main deformation carriers, particularly at large plastic deformations. The resulting rolling textures are characterized by the {011}<211> (brass) and {011}<100> (Goss) texture components (brass-type texture). In either case non-crystallographic shear banding plays an essential role for the specific type of deformation texture evolved.
A perturbative approach to analyze shear band emergence.
Closed-form solutions disclosing the shear band emergence can be obtained through the perturbative approach, consisting in the superimposition of a perturbation field upon an unperturbed deformed state.
In particular, an infinite, incompressible, nonlinear elastic material, homogeneously deformed under the plane strain condition can be perturbed through superposition of concentrated forces or by the presence of cracks or rigid line inclusions.
It has been shown that, when the unperturbed state is taken close to the localization condition (4), the perturbed fields self-arrange in the form of localized fields, taking extreme values in the neighbourhood of the introduced perturbation and focussed along the shear bands directions. In particular, in the case of cracks and rigid line inclusions such shear bands emerge from the linear inclusion tips.
Within the perturbative approach, an incremental model for a shear band of finite length has been introduced prescribing the following conditions along its surface:
Employing this model, the following main features of shear banding have been demonstrated: | [
{
"math_id": 0,
"text": "\\dot\\sigma"
},
{
"math_id": 1,
"text": "\\dot\\varepsilon "
},
{
"math_id": 2,
"text": "\\Complex "
},
{
"math_id": 3,
"text": "\\Complex"
},
{
"math_id": 4,
"text": "\\mathbf{n}"
},
{
"math_id": 5,
"text": "\\otimes "
},
{
"math_id": 6,
"text": "\\mathbf{g}"
},
{
"math_id": 7,
"text": "\\mathbb{A} (\\mathbf{n})"
},
{
"math_id": 8,
"text": "\\textbf{g}"
},
{
"math_id": 9,
"text": "\\mathbb{A} (\\textbf{n}) \\textbf{g}=\\Complex\\left(\\textbf{g}\\otimes\\textbf{n}\\right) \\textbf{n}"
},
{
"math_id": 10,
"text": "\\dot\\sigma = \\Complex \\, \\dot\\varepsilon"
},
{
"math_id": 11,
"text": "\\left( \\dot\\sigma^{+} - \\dot\\sigma^{-} \\right) \\mathbf{n} = \\mathbf{0}"
},
{
"math_id": 12,
"text": "\\dot\\varepsilon^{+} - \\dot\\varepsilon^{-} = \\frac{1}{2} \\left( \\mathbf{g} \\otimes \\mathbf{n} + \\mathbf{n} \\otimes \\mathbf{g} \\right)"
},
{
"math_id": 13,
"text": "\\mathbb{A}(\\mathbf{n}) \\, \\mathbf{g} = \\mathbf{0}"
}
]
| https://en.wikipedia.org/wiki?curid=6954595 |
69550585 | Run-and-tumble motion | Type of bacterial motion
Run-and-tumble motion is a movement pattern exhibited by certain bacteria and other microscopic agents. It consists of an alternating sequence of "runs" and "tumbles": during a run, the agent propels itself in a fixed (or slowly varying) direction, and during a tumble, it remains stationary while it reorients itself in preparation for the next run.
The tumbling is erratic or "random" in the sense of a stochastic process—that is, the new direction is sampled from a probability density function, which may depend on the organism's local environment (e.g., chemical gradients). The duration of a run is usually random in the same sense. An example is wild-type "E. coli" in a dilute aqueous medium, for which the run duration is exponentially distributed with a mean of about 1 second.
Run-and-tumble motion forms the basis of certain mathematical models of self-propelled particles, in which case the particles themselves may be called run-and-tumble particles.
Description.
Many bacteria swim, propelled by rotation of the flagella outside the cell body. In contrast to protist flagella, bacterial flagella are rotors and—irrespective of species and type of flagellation—they have only two modes of operation: clockwise or counterclockwise rotation. Bacterial swimming is used in taxis (mediated by specific receptors and signal transduction pathways) for the bacterium to move in a directed manner along gradients and reach more favorable conditions for life. The direction of flagellar rotation is controlled by the type of molecules detected by the receptors on the surface of the cell: in the presence of an attractant gradient, the rate of smooth swimming increases, while the presence of a repellent gradient increases the rate of tumbling.
Biological examples.
Run-and-tumble motion is found in many peritrichous bacteria, including "E. coli", "Salmonella typhimurium", and "Bacillus subtilis". It has also been observed in the alga "Chlamydomonas reinhardtii" and the cyanobacterium "Synechocystis".
Directed motility (taxis).
Genetically diverse groups of microorganisms rely upon directed motility (taxis), such as chemotaxis or phototaxis, to optimally navigate through complex environments or colonise host tissues. In the model organisms "Escherichia coli" and "Salmonella", bacteria swim in a random pattern produced by alternating counterclockwise (CCW) and clockwise (CW) flagellar rotation. Chemoreceptors detect attractants or repellents and stimulate responses through a signalling cascade that controls the direction of the flagellar motor. This can result in chemotaxis, where attractant gradients extend the length of time flagellar motors rotate CCW, resulting in more smooth swimming in a favourable direction, while repellents cause an increase of CW rotations, resulting in more tumbling and changes in direction. The cyanobacterium "Synechocystis" uses run-and-tumbling in a manner which can result in phototaxis.
"Escherichia coli".
An archetype of bacterial swimming is represented by the well-studied model organism "Escherichia coli". With its peritrichous flagellation, "E. coli" performs a run-and-tumble swimming pattern, as shown in the diagrams below. Counterclockwise rotation of the flagellar motors leads to flagellar bundle formation that pushes the cell in a forward run, parallel to the long axis of the cell. Clockwise rotation disassembles the bundle and the cell rotates randomly (tumbling). After the tumbling event, straight swimming is recovered in a new direction. That is, counterclockwise rotation results in steady motion and clockwise rotation in tumbling; counterclockwise rotation in a given direction is maintained longer in the presence of molecules of interest (like sugars or amino acids).
In a uniform medium, run-and-tumble trajectories appear as a sequence of nearly straight segments interspersed by erratic reorientation events, during which the bacterium remains stationary. The straight segments correspond to the runs, and the reorientation events correspond to the tumbles. Because they exist at low Reynolds number, bacteria starting at rest quickly reach a fixed terminal velocity, so the runs can be approximated as constant velocity motion. The deviation of real-world runs from straight lines is usually attributed to rotational diffusion, which causes small fluctuations in the orientation over the course of a run.
In contrast with the more gradual effect of rotational diffusion, the change in orientation (turn angle) during a tumble is large; for an isolated "E. coli" in a uniform aqueous medium, the mean turn angle is about 70 degrees, with a relatively broad distribution. In more complex environments, the tumbling distribution and run duration may depend on the agent's local environment, which allows for goal-oriented navigation (taxis). For example, a tumbling distribution that depends on a chemical gradient can guide bacteria toward a food source or away from a repellent, a behavior referred to as chemotaxis. Tumbles are typically faster than runs: tumbling events of "E. coli" last about 0.1 seconds, compared to ~1 second for a run.
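A minimal Monte Carlo sketch of the statistics just described can make the picture concrete. The ~1 s exponential mean run time and ~70° mean turn angle follow the text; the swim speed (20 μm/s) and the 40° spread of the turn-angle distribution are illustrative assumptions, not values from this article:

```python
# Hedged sketch of 2-D run-and-tumble motion.  Mean run time (1 s) and mean
# turn angle (~70 deg) follow the text; the swim speed (20 um/s) and the
# 40-deg spread of the turn-angle distribution are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
speed, mean_run = 20.0, 1.0          # um/s, s
x = np.zeros(2)                      # position, um
theta = 0.0                          # orientation, rad
trajectory = [x.copy()]
for _ in range(200):                 # 200 runs
    run = rng.exponential(mean_run)  # exponentially distributed run duration
    x = x + run * speed * np.array([np.cos(theta), np.sin(theta)])
    trajectory.append(x.copy())
    turn = np.deg2rad(rng.normal(70.0, 40.0)) * rng.choice([-1, 1])
    theta += turn                    # tumble: reorient for the next run
print(np.linalg.norm(trajectory[-1]))  # net displacement after 200 runs
```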
"Synechocystis".
Another example is "Synechocystis", a genus of cyanobacterium. Cyanobacteria do not have flagella. Nonetheless, "Synechocystis" species can move in cell suspensions and on moist surfaces by using retractile type IV pili, displaying an intermittent two-phase motion: a high-motility run and a low-motility tumble "(see diagram)". The two phases can be modified under various external stressors. Increasing the light intensity, uniformly over the space, increases the probability of "Synechocystis" being in the run state randomly in all directions. This feature, however, vanishes after a typical characteristic time of about one hour, when the initial probability is recovered. These results were well described by a mathematical model based on a linear response theory proposed by Vourc’h et al.
"Synechocystis" cells can also undergo biased motility under directional illumination. Under directional light flux, "Synehcocystis" cells perform phototactic motility and head toward the light source (in positive phototaxis). Vourc’h et al. (2020) showed that this biased motility stems from the averaged displacements during run periods, which is no longer random (as it was in the uniform illumination). They showed the bias is the result of the number of runs, which is greater toward the light source, and not of longer runs in this direction. Brought together, these results suggest distinct pathways for the recognition of light intensity and light direction in this prokaryotic microorganism. This effect can be used in the active control of bacterial flows.
It has also been observed that very strong local illumination inactivates the motility apparatus. Increasing the light intensity to more than ~475 μmol m−2 s−1 reverses the direction of "Synechocystis" cells, which then move away from the high-radiation source. Moreover, "Synechocystis" cells show negative phototaxis under ultraviolet radiation, an effective escape mechanism that avoids damage to the DNA and other cellular components of "Synechocystis". In contrast to the run phase, which can extend from a fraction of a second to several minutes, a tumble lasts only a fraction of a second. The tumbling phase is a clockwise rotation that allows the cell to change the motility direction of the next run.
Chemotaxis is another scheme that allows an organism to move toward or away from gradients of nutrients or other chemical stimuli. Detecting stimuli via transmembrane chemoreceptors, the microorganism performs a three-dimensional random walk in a homogeneous environment, with the direction of each run chosen anew after each tumble.
Mathematical modeling.
Theoretically and computationally, run-and-tumble motion can be modeled as a stochastic process. One of the simplest models is based on the following assumptions: runs are straight and performed at constant speed; tumbles are instantaneous; tumbling events follow a Poisson process, so that run durations are exponentially distributed; and the new orientation after a tumble is drawn from a distribution that depends only on the change in orientation.
With a few other simplifying assumptions, an integro-differential equation can be derived for the probability density function "f" (r, ŝ, "t"), where r is the particle position and ŝ is the unit vector in the direction of its orientation. In d-dimensions, this equation is
formula_0
where Ω"d"
2π"d"/2/Γ("d"/2) is the "d"-dimensional solid angle, "V"(r) is an external potential, "ξ" is the friction, and the function "g" (ŝ - ŝ') is a scattering cross section describing transitions from orientation ŝ' to ŝ. For complete reorientation, "g"
1. The integral is taken over all possible unit vectors, i.e., the d-dimensional unit sphere.
In free space (far from boundaries), the mean squared displacement ⟨r("t")2⟩ generically scales as "t"2 for small "t" and as "t" for large "t". In two dimensions, the mean squared displacement corresponding to the initial condition "f" (r, ŝ, 0) = "δ"("r")/(2π) is
formula_1
where
formula_2
with ŝ parametrized as ŝ = (cos "θ", sin "θ").
In real-world systems, more complex models may be required. In such cases, specialized analysis methods have been developed to infer model parameters from experimental trajectory data.
The mathematical abstraction of run-and-tumble motion also appears outside of biology—for example, in idealized models of radiative transfer and neutron transport.
Notes.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\frac{\\partial f(\\mathbf{r},\\hat{\\mathbf{s}}, t)}{\\partial t} + v_0 \\, \\hat{\\mathbf{s}} \\cdot \\nabla f(\\mathbf{r},\\hat{\\mathbf{s}}, t) = \\xi^{-1} \\nabla \\cdot (\\nabla V(\\mathbf{r}) f(\\mathbf{r},\\hat{\\mathbf{s}}, t)) -\\alpha f(\\mathbf{r},\\hat{\\mathbf{s}}, t) + \\frac{\\alpha}{\\Omega_d} \\int g(\\hat{\\mathbf{s}} - \\hat{\\mathbf{s}}') f(\\mathbf{r},\\hat{\\mathbf{s}}', t) d\\hat{\\mathbf{s}}'\n"
},
{
"math_id": 1,
"text": "\n\\langle \\mathbf{r}^2 \\rangle = \\frac{2 v_0^2}{\\alpha^2(1 - \\sigma_1)} \\left[\\alpha \\, t + \\frac{e^{-\\alpha(1-\\sigma_1)t} - 1}{1 - \\sigma_1} \\right]\n"
},
{
"math_id": 2,
"text": "\\sigma_1 = \\int_{0}^{2\\pi} g(\\hat{\\mathbf{s}}) (\\cos \\theta) d \\theta"
}
]
| https://en.wikipedia.org/wiki?curid=69550585 |
695523 | Kelvin–Helmholtz mechanism | Process of energy release of a contracting star or planet
The Kelvin–Helmholtz mechanism is an astronomical process that occurs when the surface of a star or a planet cools. The cooling causes the internal pressure to drop, and the star or planet shrinks as a result. This compression, in turn, heats the core of the star/planet. This mechanism is evident on Jupiter and Saturn and on brown dwarfs whose central temperatures are not high enough to undergo hydrogen fusion. It is estimated that Jupiter radiates more energy through this mechanism than it receives from the Sun, but Saturn might not. Jupiter has been estimated to shrink at a rate of approximately 1 mm/year by this process, corresponding to an internal flux of 7.485 W/m2.
The mechanism was originally proposed by Kelvin and Helmholtz in the late nineteenth century to explain the source of energy of the Sun. By the mid-nineteenth century, conservation of energy had been accepted, and one consequence of this law of physics is that the Sun must have some energy source to continue to shine. Because nuclear reactions were unknown, the main candidate for the source of solar energy was gravitational contraction.
However, it soon was recognized by Sir Arthur Eddington and others that the total amount of energy available through this mechanism only allowed the Sun to shine for millions of years rather than the billions of years that the geological and biological evidence suggested for the age of the Earth. (Kelvin himself had argued that the Earth was millions, not billions, of years old.) The true source of the Sun's energy remained uncertain until the 1930s, when it was shown by Hans Bethe to be nuclear fusion.
Power generated by a Kelvin–Helmholtz contraction.
It was theorised that the gravitational potential energy from the contraction of the Sun could be its source of power. To calculate the total amount of energy that would be released by the Sun in such a mechanism (assuming uniform density), it was approximated as a perfect sphere made up of concentric shells. The gravitational potential energy could then be found as the integral over all the shells from the centre to the outer radius.
Gravitational potential energy from Newtonian mechanics is defined as:
formula_0
where "G" is the gravitational constant, and the two masses in this case are that of the thin shells of width "dr", and the contained mass within radius "r" as one integrates between zero and the radius of the total sphere. This gives:
formula_1
where "R" is the outer radius of the sphere, and "m"("r") is the mass contained within the radius "r". Changing "m"("r") into a product of volume and density to satisfy the integral,
formula_2
Recasting in terms of the mass of the sphere gives the total gravitational potential energy as
formula_3
According to the Virial Theorem, the total energy for gravitationally bound systems in equilibrium is one half of the time-averaged potential energy,
formula_4
While uniform density is not correct, one can get a rough order of magnitude estimate of the expected age of our star by inserting known values for the mass and radius of the Sun, and then dividing by the known luminosity of the Sun (note that this will involve another approximation, as the power output of the Sun has not always been constant):
formula_5
where formula_6 is the luminosity of the Sun. While this mechanism could power the Sun for considerably longer than many other physical mechanisms, such as chemical energy, the value obtained was clearly still not long enough, due to geological and biological evidence that the Earth was billions of years old. It was eventually discovered that thermonuclear energy was responsible for the power output and long lifetimes of stars.
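The estimate above can be reproduced with a few lines of arithmetic; this is a sketch using standard values of the physical constants, not part of the original derivation:

```python
# Sketch reproducing the Kelvin–Helmholtz estimate above: the virial energy
# U_r = 3GM^2/(10R) of a uniform-density Sun divided by the solar luminosity.
G = 6.674e-11        # m^3 kg^-1 s^-2, gravitational constant
M = 1.989e30         # kg, solar mass
R = 6.957e8          # m, solar radius
L = 3.828e26         # W, solar luminosity

U_r = 3 * G * M**2 / (10 * R)
t = U_r / L
print(U_r)                     # ~1.1e41 J
print(t / 3.156e7, "years")    # ~9 million years, the same order as above
```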
The flux of internal heat for Jupiter is given by the time derivative of the total energy
formula_7
With a shrinking rate of formula_8, one gets
formula_9
dividing by the whole area of Jupiter, i.e. formula_10, one gets
formula_11
Of course, one usually calculates this equation in the other direction: the experimental figure for the specific flux of internal heat, 7.485 W/m2, was obtained from the direct measurements made by the Cassini probe during its flyby on 30 December 2000, and from it one gets the amount of shrinking, ~1 mm/year, a figure too small to be measured in practice.
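The reverse calculation described in this paragraph can also be sketched numerically, reusing the coefficient 1.46 × 1028 J/m and the surface area quoted above (a sketch, not from the original text):

```python
# Sketch of the reverse calculation: from the measured internal heat flux
# of Jupiter, recover the magnitude of the contraction rate dR/dt.
flux = 7.485          # W/m^2, measured internal heat flux
S = 6.14e16           # m^2, surface area of Jupiter (from the text)
coeff = 1.46e28       # J/m, 3GM^2/(10R^2) (from the text)

dUdt = flux * S                   # total internal power, ~4.6e17 W
dRdt = dUdt / coeff               # m/s (magnitude of the shrinking rate)
print(dRdt * 3.156e7 * 1e3)       # ~1 mm per year
```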
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "U = -\\frac{Gm_1m_2}{r},"
},
{
"math_id": 1,
"text": "U = -G\\int_0^R \\frac{m(r) 4 \\pi r^2 \\rho}{r}\\, dr,"
},
{
"math_id": 2,
"text": "U = -G\\int_0^R \\frac{4 \\pi r^3 \\rho 4 \\pi r^2 \\rho}{3r}\\, dr = -\\frac{16}{15}G \\pi^2 \\rho^2 R^5."
},
{
"math_id": 3,
"text": "U = -\\frac{3GM^2}{5R}."
},
{
"math_id": 4,
"text": "U_r = \\frac{|\\langle U \\rangle|}{2} = \\frac{3GM^2}{10R}."
},
{
"math_id": 5,
"text": "\\frac{U_\\text{r}}{L_\\odot} \\approx \\frac{1.1 \\times 10^{41}~\\text{J}}{3.828 \\times 10^{26}~\\text{W}} = 2.874\\times10^{14}~\\mathrm{s} \\, \\approx 8\\,900\\,000~\\text{years},"
},
{
"math_id": 6,
"text": "L_\\odot"
},
{
"math_id": 7,
"text": "\\frac{dU_r}{dt} = \\frac{-3GM^2}{10R^2} \\frac{dR}{dt} = -1.46 \\times 10^{28}~\\text{[J/m]}~\\times\\frac{dR}{dt}~\\text{[m/s]}."
},
{
"math_id": 8,
"text": "-1\\mathrm\\frac{~mm}{yr} = -0.001\\mathrm\\frac{~m}{yr} = -3.17\\times 10^{-11}~\\mathrm\\frac{m}{s}"
},
{
"math_id": 9,
"text": "\\frac{dU_r}{dt} = 4.63\\times 10^{17}~\\text{W},"
},
{
"math_id": 10,
"text": "S = 6.14\\times 10^{16}~\\mathrm{m^2}"
},
{
"math_id": 11,
"text": "\\frac{1}{S}\\frac{dU_r}{dt} = 7.5~\\mathrm\\frac{W}{m^2}."
}
]
| https://en.wikipedia.org/wiki?curid=695523 |
69557760 | Flight-time equivalent dose | Dose measurement of radiation
Flight-time equivalent dose (FED) is an informal unit of measurement of ionizing radiation exposure. Expressed in units of flight-time (i.e., flight-seconds, flight-minutes, flight-hours), one unit of flight-time is approximately equivalent to the radiological dose received during the same unit of time spent in an airliner at cruising altitude. FED is intended as a general educational unit to enable a better understanding of radiological dose by converting dose typically presented in sieverts into units of time. FED is only meant as an educational exercise and is not a formally adopted dose measurement.
History.
The flight-time equivalent dose concept is the creation of Ulf Stahmer, a Canadian professional engineer working in the field of radioactive materials transport. It was first presented in the poster session at the 18th International Symposium of the Packaging and Transport of Radioactive Materials (PATRAM) held in Kobe, Hyogo, Japan, where the poster received an Aoki Award for distinguished poster presentation. In 2018, an article on FED appeared in the peer-reviewed journal The Physics Teacher.
Usage.
Flight-time equivalent dose is an informal measurement, so any equivalences are necessarily approximate. It has been found useful for providing context between radiological doses received from various everyday activities and medical procedures.
Dose calculation.
FED corresponds to the time that must be spent in an airliner flying at cruising altitude to receive a given radiological dose. FED is calculated by taking a known dose (typically in millisieverts) and dividing it by the average dose rate (typically in millisieverts per hour) at an altitude of 10,000 m, a typical cruising altitude for a commercial airliner.
formula_0
While radiological dose at cruising altitudes varies with latitude, for FED calculations, the radiological dose rate at an altitude of 10,000 m has been standardized to be 0.004 mSv/h, about 15 times greater than the average dose rate at the Earth's surface. Using this technique, the FED received from a 0.01 mSv panoramic dental x-ray is approximately equivalent to 2.5 flight-hours; the FED received from eating one banana is approximately equal to 1.5 flight-minutes; and the FED received each year from naturally occurring background radiation (2.4 mSv/year) is approximately equivalent to 600 flight-hours.
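The conversion is simple enough to express as a one-line helper; the sketch below reproduces the three examples just given (the banana dose of 0.1 μSv is the value implied by the 1.5 flight-minute figure, not stated explicitly here):

```python
# Sketch of the FED conversion defined above: dose [mSv] divided by the
# standardized cruising-altitude dose rate of 0.004 mSv/h.
RATE = 0.004  # mSv per flight-hour at 10,000 m

def fed_hours(dose_msv):
    return dose_msv / RATE

print(fed_hours(0.01))          # dental x-ray: 2.5 flight-hours
print(fed_hours(0.0001) * 60)   # one banana (~0.1 uSv): 1.5 flight-minutes
print(fed_hours(2.4))           # annual background: 600 flight-hours
```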
Radiological exposures and limits.
For comparison, a list of activities (including common medical procedures) and their estimated radiological exposures are tabulated below. Regulatory occupational dose limits for the public and radiation workers are also included. Items on this list are represented pictorially in the accompanying illustrations.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "FED= \\frac{{mSv}_{dose}}{{0.004 \\frac{mSv}{h}}_{cruising altitude}}"
}
]
| https://en.wikipedia.org/wiki?curid=69557760 |
69557765 | Complex hyperbolic space | In mathematics, hyperbolic complex space is a Hermitian manifold which is the equivalent of the real hyperbolic space in the context of complex manifolds. The complex hyperbolic space is a Kähler manifold, and it is characterised by being the only simply connected Kähler manifold whose holomorphic sectional curvature is constant equal to -1. Its underlying Riemannian manifold has non-constant negative curvature, pinched between -1 and -1/4 (or -4 and -1, according to the choice of a normalization of the metric): in particular, it is a CAT(-1/4) space.
Complex hyperbolic spaces are also the symmetric spaces associated with the Lie groups formula_0. They constitute one of the three families of rank one symmetric spaces of noncompact type, together with real and quaternionic hyperbolic spaces; to this classification must be added one exceptional space, the Cayley plane.
Construction of the complex hyperbolic space.
Projective model.
Let formula_1 be a pseudo-Hermitian form of signature formula_2 in the complex vector space formula_3. The projective model of the complex hyperbolic space is the projectivized space of all negative vectors for this form: formula_4
As an open set of the complex projective space, this space is endowed with the structure of a complex manifold. It is biholomorphic to the unit ball of formula_5, as one can see by noting that a negative vector must have non zero first coordinate, and therefore has a unique representative with first coordinate equal to 1 in the projective space. The condition formula_6 when formula_7 is equivalent to formula_8. The map sending the point formula_9 of the unit ball of formula_5 to the point formula_10 of the projective space thus defines the required biholomorphism.
This model is the equivalent of the Poincaré disk model. Unlike the real hyperbolic space, the complex hyperbolic space cannot be defined as a sheet of the hyperboloid formula_11, because the projection of this hyperboloid onto the projective model has connected fiber formula_12 (the fiber being formula_13 in the real case).
A Hermitian metric is defined on formula_14 in the following way: if formula_15 belongs to the cone formula_16, then the restriction of formula_17 to the orthogonal space formula_18 defines a positive definite Hermitian product on this space, and because the tangent space of formula_14 at the point formula_19 can be naturally identified with formula_20, this defines a Hermitian inner product on formula_21. As can be seen by computation, this inner product does not depend on the choice of the representative formula_22. In order to have holomorphic sectional curvature equal to -1 and not -4, one needs to renormalize this metric by a factor of formula_23. This metric is a Kähler metric.
Siegel model.
The Siegel model of complex hyperbolic space is the subset of formula_24 such that
formula_25
It is biholomorphic to the unit ball in formula_26 via the Cayley transform
formula_27
Boundary at infinity.
In the projective model, the complex hyperbolic space identifies with the complex unit ball of dimension formula_28, and its boundary can be defined as the boundary of the ball, which is diffeomorphic to the sphere of real dimension formula_29. This is equivalent to defining:
formula_30
As a CAT(0) space, the complex hyperbolic space also has a boundary at infinity formula_31. This boundary coincides with the boundary formula_32 just defined.
The boundary of the complex hyperbolic space naturally carries a CR structure. This structure is also the standard contact structure on the (odd dimensional) sphere.
Group of holomorphic isometries and symmetric space.
The group of holomorphic isometries of the complex hyperbolic space is the Lie group formula_0. This group acts transitively on the complex hyperbolic space, and the stabilizer of a point is isomorphic to the unitary group formula_33. The complex hyperbolic space is thus homeomorphic to the homogeneous space formula_34. The stabilizer formula_33 is the maximal compact subgroup of formula_0.
As a consequence, the complex hyperbolic space is the Riemannian symmetric space formula_35, where formula_36 is the pseudo-unitary group.
The group of holomorphic isometries of the complex hyperbolic space also acts on the boundary of this space, and acts thus by homeomorphisms on the closed disk formula_37. By Brouwer's fixed point theorem, any holomorphic isometry of the complex hyperbolic space must fix at least one point in formula_38. There is a classification of isometries into three types: elliptic isometries, which fix at least one point inside the space; parabolic isometries, which fix no interior point and exactly one point of the boundary; and hyperbolic (or loxodromic) isometries, which fix no interior point and exactly two points of the boundary.
The Iwasawa decomposition of formula_39 is the decomposition formula_40, where formula_41 is the unitary group, formula_42 is the additive group of real numbers and formula_43 is the Heisenberg group of real dimension formula_29. Such a decomposition depends on the choice of: a point formula_44 on the boundary at infinity, which the subgroup formula_45 fixes; a geodesic formula_46 ending at formula_47, along which the subgroup formula_48 acts by translation; and a parametrization formula_49 of this geodesic, the subgroup formula_50 being the stabilizer of the point formula_51.
For any such decomposition of formula_39, the action of the subgroup formula_52 is free and transitive, hence induces a diffeomorphism formula_53. This diffeomorphism can be seen as a generalization of the Siegel model.
Curvature.
The group of holomorphic isometries formula_0 acts transitively on the tangent complex lines of the hyperbolic complex space. This is why this space has constant holomorphic sectional curvature, which can be computed to be equal to -4 (with the above normalization of the metric). This property characterizes the hyperbolic complex space: up to isometric biholomorphism, there is only one simply connected complete Kähler manifold of given constant holomorphic sectional curvature.
Furthermore, when a Hermitian manifold has constant holomorphic sectional curvature equal to formula_54, the sectional curvature of every real tangent plane formula_55 is completely determined by the formula:
formula_56
where formula_57 is the angle between formula_55 and formula_58, i.e. the infimum of the angles between a vector in formula_55 and a vector in formula_58. This angle equals 0 if and only if formula_55 is a complex line, and equals formula_59 if and only if formula_55 is totally real. Thus the sectional curvature of the complex hyperbolic space varies from -4 (for complex lines) to -1 (for totally real planes).
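Evaluating the formula at the two extreme angles (a trivial numerical sketch, with the normalization "k" = -4) recovers the pinching bounds just stated:

```python
# Sketch evaluating the pinching formula above with k = -4: the sectional
# curvature ranges from -4 (complex lines, alpha = 0) to -1 (totally real
# planes, alpha = pi/2).
import numpy as np

k = -4.0
K = lambda alpha: k / 4 * (1 + 3 * np.cos(alpha)**2)
print(K(0.0), K(np.pi / 2))   # -4.0, -1.0 (up to floating-point rounding)
```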
In complex dimension 1, every real plane in the tangent space is a complex line: thus the hyperbolic complex space of dimension 1 has constant curvature equal to -1, and by the uniformization theorem, it is isometric to the real hyperbolic plane. Hyperbolic complex spaces can thus be seen as another high-dimensional generalization of the hyperbolic plane, less standard than the real hyperbolic spaces. A third possible generalization is the homogeneous space formula_60, which for formula_61 again coincides with the hyperbolic plane, but becomes a symmetric space of rank greater than 1 when formula_62.
Totally geodesic subspaces.
Every totally geodesic submanifold of the complex hyperbolic space of dimension n is one of the following: a copy of a complex hyperbolic space of smaller dimension, holomorphically embedded; or a copy of a real hyperbolic space of real dimension at most n, embedded as a totally real submanifold.
In particular, there is no codimension 1 totally geodesic subspace of the complex hyperbolic space.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "PU(n,1)"
},
{
"math_id": 1,
"text": "\\langle u,v\\rangle := -u_1\\overline{v_1} + u_2\\overline{v_2} + \\dots + u_{n+1}\\overline{v_{n+1}}"
},
{
"math_id": 2,
"text": "(n,1)"
},
{
"math_id": 3,
"text": "\\mathbb{C}^{n+1}"
},
{
"math_id": 4,
"text": "\\mathbb{H}^n_\\mathbb{C} = \\{[\\xi] \\in \\mathbb{CP}^n | \\langle \\xi,\\xi\\rangle <0\\}."
},
{
"math_id": 5,
"text": "\\mathbb{C}^n"
},
{
"math_id": 6,
"text": "\\langle \\xi,\\xi\\rangle<0 "
},
{
"math_id": 7,
"text": "\\xi=(1,x_1,\\dots,x_{n+1}) \\in \\mathbb{C}^{n+1}"
},
{
"math_id": 8,
"text": "\\sum_{i=1}^{n} |x_i|^2 < 1"
},
{
"math_id": 9,
"text": "(x_1,\\dots,x_n)"
},
{
"math_id": 10,
"text": "[1:x_1:\\dots:x_n]"
},
{
"math_id": 11,
"text": "\\langle x,x\\rangle = -1"
},
{
"math_id": 12,
"text": "\\mathbb{S}^1"
},
{
"math_id": 13,
"text": "\\mathbb{Z}/2\\mathbb{Z}"
},
{
"math_id": 14,
"text": "\\mathbb{H}^n_\\mathbb{C}"
},
{
"math_id": 15,
"text": "p\\in \\C^{n+1}"
},
{
"math_id": 16,
"text": "\\langle p,p\\rangle=-1"
},
{
"math_id": 17,
"text": "\\langle\\cdot,\\cdot\\rangle "
},
{
"math_id": 18,
"text": "(\\C p)^{\\perp} \\subset \\C^{n+1}"
},
{
"math_id": 19,
"text": "[p]"
},
{
"math_id": 20,
"text": "(\\C p)^{\\perp}"
},
{
"math_id": 21,
"text": "T_{[p]}\\mathbb{H}^n_\\mathbb{C}"
},
{
"math_id": 22,
"text": "p"
},
{
"math_id": 23,
"text": "1/2"
},
{
"math_id": 24,
"text": "(w,z)\\in\\mathbb C\\times\\mathbb C^{n-1}"
},
{
"math_id": 25,
"text": "i(\\bar w-w) > 2z\\bar z."
},
{
"math_id": 26,
"text": "\\mathbb C^n"
},
{
"math_id": 27,
"text": "(w,z)\\mapsto \\left(\\frac{w-i}{w+i},\\frac{2z}{w+i}\\right)."
},
{
"math_id": 28,
"text": "n"
},
{
"math_id": 29,
"text": "2n-1"
},
{
"math_id": 30,
"text": "\\partial\\mathbb{H}^n_\\mathbb{C} = \\{[\\xi] \\in \\mathbb{CP}^n | \\langle \\xi,\\xi\\rangle =0\\}."
},
{
"math_id": 31,
"text": "\\partial_{\\infty}\\mathbb{H}^n_\\mathbb{C}"
},
{
"math_id": 32,
"text": "\\partial\\mathbb{H}^n_\\mathbb{C}"
},
{
"math_id": 33,
"text": "U(n)"
},
{
"math_id": 34,
"text": "PU(n,1)/U(n)"
},
{
"math_id": 35,
"text": "SU(n,1)/S(U(n)\\times U(1))"
},
{
"math_id": 36,
"text": "SU(n,1)"
},
{
"math_id": 37,
"text": "\\bar{\\mathbb{D}} = \\mathbb{H}^n_{\\mathbb{C}} \\cup \\partial\\mathbb{H}^n_{\\mathbb{C}}"
},
{
"math_id": 38,
"text": "\\bar{\\mathbb{D}}"
},
{
"math_id": 39,
"text": "\\mathrm{PU}(n,1)"
},
{
"math_id": 40,
"text": "\\mathrm{PU}(n,1)=K\\times A\\times N"
},
{
"math_id": 41,
"text": "K=U(n)"
},
{
"math_id": 42,
"text": "A=\\mathbb{R}"
},
{
"math_id": 43,
"text": "N=\\mathcal{H_n}"
},
{
"math_id": 44,
"text": "\\xi"
},
{
"math_id": 45,
"text": "N"
},
{
"math_id": 46,
"text": "\\ell"
},
{
"math_id": 47,
"text": "\\xi "
},
{
"math_id": 48,
"text": "A"
},
{
"math_id": 49,
"text": "\\gamma:\\R\\to \\mathbb{H}^n_{\\mathbb{C}} "
},
{
"math_id": 50,
"text": "K"
},
{
"math_id": 51,
"text": "\\gamma(0)"
},
{
"math_id": 52,
"text": "A\\times N"
},
{
"math_id": 53,
"text": "\\mathrm A\\times N \\to \\mathbb{H}^n_{\\mathbb{C}}"
},
{
"math_id": 54,
"text": "k"
},
{
"math_id": 55,
"text": "\\Pi"
},
{
"math_id": 56,
"text": "K(\\Pi) = \\frac{k}{4}\\left(1+3\\cos^2(\\alpha(\\Pi)\\right)"
},
{
"math_id": 57,
"text": "\\alpha(\\Pi)"
},
{
"math_id": 58,
"text": "J\\Pi"
},
{
"math_id": 59,
"text": "\\pi/2"
},
{
"math_id": 60,
"text": "SL_n(\\mathbb{R})/SO_n(\\mathbb{\\R})"
},
{
"math_id": 61,
"text": "n=2"
},
{
"math_id": 62,
"text": "n\\ge 3"
}
]
| https://en.wikipedia.org/wiki?curid=69557765 |
69558806 | Klein–Kramers equation | In physics and mathematics, the Klein–Kramers equation or sometimes referred as Kramers–Chandrasekhar equation is a partial differential equation that describes the probability density function "f" (r, p, "t") of a Brownian particle in phase space (r, p). It is a special case of the Fokker–Planck equation.
In one spatial dimension, f is a function of three independent variables: the scalars x, p, and t. In this case, the Klein–Kramers equation is
formula_0
where "V"("x") is the external potential, m is the particle mass, ξ is the friction (drag) coefficient, T is the temperature, and "k"B is the Boltzmann constant. In d spatial dimensions, the equation is
formula_1
Here formula_2 and formula_3 are the gradient operator with respect to r and p, and formula_4 is the Laplacian with respect to p.
The fractional Klein-Kramers equation is a generalization that incorporates anomalous diffusion by way of fractional calculus.
Physical basis.
The physical model underlying the Klein–Kramers equation is that of an underdamped Brownian particle. Unlike standard Brownian motion, which is overdamped, underdamped Brownian motion takes the friction to be finite, in which case the momentum remains an independent degree of freedom.
Mathematically, a particle's state is described by its position r and momentum p, which evolve in time according to the Langevin equations
formula_5
Here formula_6 is d-dimensional Gaussian white noise, which models the thermal fluctuations of p in a background medium of temperature T. These equations are analogous to Newton's second law of motion, but due to the noise term formula_6 are stochastic ("random") rather than deterministic.
The dynamics can also be described in terms of a probability density function "f" (r, p, "t"), which gives the probability, at time t, of finding a particle at position r and with momentum p. By averaging over the stochastic trajectories from the Langevin equations, "f" (r, p, "t") can be shown to obey the Klein–Kramers equation.
Solution in free space.
The d-dimensional free-space problem sets the force equal to zero, and considers solutions on formula_7 that decay to 0 at infinity, i.e., "f" (r, p, "t") → 0 as |r|, |p| → ∞.
For the 1D free-space problem with point-source initial condition, "f" ("x", "p", 0) = "δ"("x" - "x"')"δ"("p" - "p"'), the solution, which is a bivariate Gaussian in x and p, was found by Subrahmanyan Chandrasekhar (who also devised a general methodology to solve problems in the presence of a potential) in 1943:
formula_8
where
formula_9
This special solution is also known as the Green's function "G"("x", "x"', "p", "p"', t), and can be used to construct the general solution, i.e., the solution for generic initial conditions "f" ("x", "p", 0):
formula_10
Similarly, the 3D free-space problem with point-source initial condition "f" (r, p, 0) = "δ"(r - r') "δ"(p - p') has solution
formula_11
with formula_12, formula_13, and formula_14 and formula_15 defined as in the 1D solution.
Asymptotic behavior.
Under certain conditions, the solution of the free-space Klein–Kramers equation behaves asymptotically like a diffusion process. For example, if
formula_16
then the density formula_17 satisfies
formula_18
where formula_19 is the free-space Green's function for the diffusion equation.
Solution near boundaries.
The 1D, time-independent, force-free ("F" = 0) version of the Klein–Kramers equation can be solved on a semi-infinite or bounded domain by separation of variables. The solution typically develops a boundary layer that varies rapidly in space and is non-analytic at the boundary itself.
A well-posed problem prescribes boundary data on only half of the p domain: the positive half ("p" > 0) at the left boundary and the negative half ("p" < 0) at the right. For a semi-infinite problem defined on 0 < "x" < ∞, boundary conditions may be given as:
formula_20
for some function "g"("p").
For a point-source boundary condition, the solution has an exact expression in terms of infinite sums and products. Here, the result is stated for the non-dimensional version of the Klein–Kramers equation:
formula_21
In this representation, length and time are measured in units of formula_22 and formula_23, such that formula_24 and formula_25 are both dimensionless. If the boundary condition at "z" = 0 is "g"("w") = "δ"("w" - "w"0), where "w"0 > 0, then the solution is
formula_26
where
formula_27
This result can be obtained by the Wiener–Hopf method. However, practical use of the expression is limited by slow convergence of the series, particularly for values of w close to 0.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\frac{\\partial f}{\\partial t} + \\frac{p}{m} \\frac{\\partial f}{\\partial x} = \\xi \\frac{\\partial}{\\partial p} \\left( p \\, f \\right) + \\frac{\\partial}{\\partial p} \\left( \\frac{dV}{dx} \\, f \\right) + m\\xi k_{\\mathrm{B}} T \\, \\frac{\\partial^2 f}{\\partial p^2}\n"
},
{
"math_id": 1,
"text": "\n\\frac{\\partial f}{\\partial t} + \\frac{1}{m} \\mathbf{p} \\cdot \\nabla_{\\mathbf{r}} f = \\xi \\nabla_{\\mathbf{p}} \\cdot \\left( \\mathbf{p} \\, f \\right) + \\nabla_{\\mathbf{p}} \\cdot \\left( \\nabla V(\\mathbf{r}) \\, f \\right) + m \\xi k_{\\mathrm{B}} T \\, \\nabla_{\\mathbf{p}}^2 f\n"
},
{
"math_id": 2,
"text": "\\nabla_{\\mathbf{r}}"
},
{
"math_id": 3,
"text": "\\nabla_{\\mathbf{p}}"
},
{
"math_id": 4,
"text": "\\nabla_{\\mathbf{p}}^2"
},
{
"math_id": 5,
"text": "\n\\begin{align}\n\\dot{\\mathbf{r}} &= \\frac{\\mathbf{p}}{m} \\\\\n\\dot{\\mathbf{p}} &= -\\xi \\, \\mathbf{p} - \\nabla V(\\mathbf{r}) + \\sqrt{2 m \\xi k_{\\mathrm{B}} T} \\boldsymbol{\\eta}(t), \\qquad \\langle \\boldsymbol{\\eta}^{\\mathrm{T}}(t) \\boldsymbol{\\eta}(t') \\rangle = \\mathbf{I} \\delta(t-t') \n\\end{align}\n"
},
{
"math_id": 6,
"text": "\\boldsymbol{\\eta}(t)"
},
{
"math_id": 7,
"text": "\\mathbb{R}^{\\mathrm{d}}"
},
{
"math_id": 8,
"text": "\n\\begin{align}\nf(x,p,t) =\n \\frac{1}{2 \\pi \\sigma_X \\sigma_P \\sqrt{1-\\beta^2}}\n \\exp\\left(\n -\\frac{1}{2(1-\\beta^2)}\\left[\n \\frac{(x-\\mu_X)^2}{\\sigma_X^2} +\n \\frac{(p-\\mu_P)^2}{\\sigma_P^2} -\n \\frac{2\\beta(x-\\mu_X)(p-\\mu_P)}{\\sigma_X \\sigma_P}\n \\right]\n \\right),\n\\end{align}\n"
},
{
"math_id": 9,
"text": "\n\\begin{align}\n&\\sigma^2_X = \\frac{k_{\\mathrm{B}} T}{m \\xi^2} \\left[1 + 2 \\xi t - \\left(2 - e^{-\\xi t}\\right)^2 \\right]; \\qquad \\sigma^2_P = m k_{\\mathrm{B}} T \\left(1 - e^{-2 \\xi t} \\right) \\\\[1ex]\n&\\beta = \\frac{k_\\text{B} T}{\\xi \\sigma_X \\sigma_P} \\left(1 - e^{-\\xi t}\\right)^2 \\\\[1ex]\n&\\mu_X = x' + (m \\xi)^{-1} \\left(1 - e^{-\\xi t} \\right) p' ; \\qquad \\mu_P = p' e^{-\\xi t}.\n\\end{align}\n"
},
{
"math_id": 10,
"text": "\nf(x, p, t) = \\iint G(x, x', p, p', t) f(x',p',0) \\, dx' dp'\n"
},
{
"math_id": 11,
"text": "\n\\begin{align}\nf(\\mathbf{r}, \\mathbf{p}, t) = \\frac{1}{\\left(2 \\pi \\sigma_X \\sigma_P \\sqrt{1 - \\beta^2}\\right)^3} \\exp\\left[-\\frac{1}{2(1-\\beta^2)} \\left( \\frac{|\\mathbf{r} - \\boldsymbol{\\mu}_X|^2}{\\sigma_X^2} + \\frac{|\\mathbf{p} - \\boldsymbol{\\mu}_P|^2}{\\sigma_P^2} - \\frac{2 \\beta (\\mathbf{r} - \\boldsymbol{\\mu}_X) \\cdot (\\mathbf{p} - \\boldsymbol{\\mu}_P)}{\\sigma_X \\sigma_P} \\right) \\right]\n\\end{align}\n"
},
{
"math_id": 12,
"text": "\\boldsymbol{\\mu}_X = \\mathbf{r'} + (m \\xi)^{-1}(1-e^{-\\xi t}) \\mathbf{p'}"
},
{
"math_id": 13,
"text": "\\boldsymbol{\\mu}_P = \\mathbf{p'}e^{-\\xi t}"
},
{
"math_id": 14,
"text": "\\sigma_X"
},
{
"math_id": 15,
"text": "\\sigma_P"
},
{
"math_id": 16,
"text": "\n\\int_{-\\infty}^{\\infty} \\int_{-\\infty}^{\\infty} f(x,p,0) \\, dp \\, dx < \\infty\n"
},
{
"math_id": 17,
"text": "\\Phi(x,t) \\equiv \\int_{-\\infty}^{\\infty} f(x,p,t) \\, dp"
},
{
"math_id": 18,
"text": "\n\\frac{\\Phi(x,t) - \\Phi_D(x,t)}{\\Phi_D(x,t)} = \\mathcal{O}\\left(\\frac{1}{t} \\right) \\quad \\text{as } t \\rightarrow \\infty\n"
},
{
"math_id": 19,
"text": "\\Phi_D(x,t) = (\\sqrt{2 \\pi t} \\sigma_X^2)^{-1/2} \\exp \\left[-x^2/(2 \\sigma_X^2 t) \\right]"
},
{
"math_id": 20,
"text": "\n\\begin{align}\n&f(0, p) =\\left\\{\n\\begin{array}{cc}\ng(p) & p > 0 \\\\\n\\text{unspecified} & p < 0\n\\end{array} \\right. \\\\\n&f(x,p) \\rightarrow 0 \\text{ as } x \\rightarrow \\infty\n\\end{align}\n"
},
{
"math_id": 21,
"text": "\nw \\frac{\\partial f(z,w)}{\\partial z} = \\frac{\\partial}{\\partial w}\\left[ w f(z,w) \\right] + \\frac{\\partial^2 f(z,w)}{\\partial w^2}\n"
},
{
"math_id": 22,
"text": "\\ell = \\sqrt{k_B T/(m \\xi^2)}"
},
{
"math_id": 23,
"text": "\\tau = \\xi^{-1}"
},
{
"math_id": 24,
"text": "z \\equiv x/\\ell"
},
{
"math_id": 25,
"text": "w \\equiv p/(m \\ell \\xi)"
},
{
"math_id": 26,
"text": "\nf(x, w) = \\frac{w_0 e^{-w^2/2}}{\\sqrt{2 \\pi}} \\left[w_0 - \\zeta\\left(\\frac{1}{2}\\right) - \\sum_{n=1}^{\\infty} \\frac{G_{-n}(w_0)}{2nQ_n} + \\sum_{n=1}^{\\infty} S_n(w_0) G_n(w) e^{-\\sqrt{n} z} \\right]\n"
},
{
"math_id": 27,
"text": "\n\\begin{align}\nG_{\\pm n}(w) &= (-1)^{n} 2^{-n/2} e^{-n} (n!)^{-1/2} e^{\\pm \\sqrt{n} w} H_n\\left(\\frac{w}{\\sqrt{2}} \\mp \\sqrt{2 n} \\right), \\qquad n = 1, 2, 3, \\ldots \\\\[1ex]\nS_n(w_0) &= \\frac{G_n(w_0)}{2 \\sqrt{2}} - \\frac{1}{2n Q_n} - \\sum_{m=1}^{\\infty} \\frac{G_{-m}(w_0)}{4 \\left(m \\sqrt{n} + \\sqrt{m} n \\right) Q_m Q_n} \\\\[2ex]\nQ_n &= \\lim_{N \\to \\infty} \\sqrt{n!(N-1)!} \\; e^{2\\sqrt{N n}} \\left[\\prod_{r=0}^{N+n-1} \\left(\\sqrt{r} + \\sqrt{n} \\right) \\right]^{-1}\n\\end{align}\n"
}
]
| https://en.wikipedia.org/wiki?curid=69558806 |
6956 | Conservation law | Scientific law regarding conservation of a physical property
In physics, a conservation law states that a particular measurable property of an isolated physical system does not change as the system evolves over time. Exact conservation laws include conservation of mass-energy, conservation of linear momentum, conservation of angular momentum, and conservation of electric charge. There are also many approximate conservation laws, which apply to such quantities as mass, parity, lepton number, baryon number, strangeness, hypercharge, etc. These quantities are conserved in certain classes of physics processes, but not in all.
A local conservation law is usually expressed mathematically as a continuity equation, a partial differential equation which gives a relation between the amount of the quantity and the "transport" of that quantity. It states that the amount of the conserved quantity at a point or within a volume can only change by the amount of the quantity which flows in or out of the volume.
From Noether's theorem, every differentiable symmetry leads to a conservation law. Other conserved quantities can exist as well.
Conservation laws as fundamental laws of nature.
Conservation laws are fundamental to our understanding of the physical world, in that they describe which processes can or cannot occur in nature. For example, the conservation law of energy states that the total quantity of energy in an isolated system does not change, though it may change form. In general, the total quantity of the property governed by that law remains unchanged during physical processes. With respect to classical physics, conservation laws include conservation of energy, mass (or matter), linear momentum, angular momentum, and electric charge. With respect to particle physics, particles cannot be created or destroyed except in pairs, where one is ordinary and the other is an antiparticle. With respect to symmetries and invariance principles, three special conservation laws have been described, associated with inversion or reversal of space, time, and charge.
Conservation laws are considered to be fundamental laws of nature, with broad application in physics, as well as in other fields such as chemistry, biology, geology, and engineering.
Most conservation laws are exact, or absolute, in the sense that they apply to all possible processes. Some conservation laws are partial, in that they hold for some processes but not for others.
One particularly important result concerning conservation laws is Noether's theorem, which states that there is a one-to-one correspondence between each one of them and a differentiable symmetry of nature. For example, the conservation of energy follows from the time-invariance of physical systems, and the conservation of angular momentum arises from the fact that physical systems behave the same regardless of how they are oriented in space.
Exact laws.
A partial listing of physical conservation equations due to symmetry that are said to be exact laws, or more precisely "have never been proven to be violated:"
Another exact symmetry is CPT symmetry, the simultaneous inversion of space and time coordinates, together with swapping all particles with their antiparticles; however, since it is a discrete symmetry, Noether's theorem does not apply to it. Accordingly, the conserved quantity, CPT parity, usually cannot be meaningfully calculated or determined.
Approximate laws.
There are also approximate conservation laws. These are approximately true in particular situations, such as low speeds, short time scales, or certain interactions.
Global and local conservation laws.
The total amount of some conserved quantity in the universe could remain unchanged if an equal amount were to appear at one point "A" and simultaneously disappear from another separate point "B". For example, an amount of energy could appear on Earth without changing the total amount in the Universe if the same amount of energy were to disappear from some other region of the Universe. This weak form of "global" conservation is really not a conservation law because it is not Lorentz invariant, so phenomena like the above do not occur in nature. Due to special relativity, if the appearance of the energy at "A" and disappearance of the energy at "B" are simultaneous in one inertial reference frame, they will not be simultaneous in other inertial reference frames moving with respect to the first. In a moving frame one will occur before the other; either the energy at "A" will appear "before" or "after" the energy at "B" disappears. In both cases, during the interval energy will not be conserved.
A stronger form of conservation law requires that, for the amount of a conserved quantity at a point to change, there must be a flow, or "flux" of the quantity into or out of the point. For example, the amount of electric charge at a point is never found to change without an electric current into or out of the point that carries the difference in charge. Since it only involves continuous "local" changes, this stronger type of conservation law is Lorentz invariant; a quantity conserved in one reference frame is conserved in all moving reference frames. This is called a "local conservation" law. Local conservation also implies global conservation; that the total amount of the conserved quantity in the Universe remains constant. All of the conservation laws listed above are local conservation laws. A local conservation law is expressed mathematically by a "continuity equation", which states that the change in the quantity in a volume is equal to the total net "flux" of the quantity through the surface of the volume. The following sections discuss continuity equations in general.
Differential forms.
In continuum mechanics, the most general form of an exact conservation law is given by a continuity equation. For example, conservation of electric charge "q" is
formula_0
where ∇⋅ is the divergence operator, "ρ" is the density of "q" (amount per unit volume), j is the flux of "q" (amount crossing a unit area in unit time), and t is time.
If we assume that the motion u of the charge is a continuous function of position and time, then
formula_1
In one space dimension this can be put into the form of a homogeneous first-order quasilinear hyperbolic equation:43
formula_2
where the dependent variable "y" is called the "density" of a "conserved quantity", and "A"("y") is called the "current Jacobian", and the subscript notation for partial derivatives has been employed. The more general inhomogeneous case:
formula_3
is not a conservation equation but the general kind of balance equation describing a dissipative system. The dependent variable "y" is called a "nonconserved quantity", and the inhomogeneous term "s"("y","x","t") is the "source", or dissipation. For example, balance equations of this kind are the momentum and energy Navier-Stokes equations, or the entropy balance for a general isolated system.
In the one-dimensional space a conservation equation is a first-order quasilinear hyperbolic equation that can be put into the "advection" form:
formula_4
where the dependent variable "y"("x","t") is called the density of the "conserved" (scalar) quantity, and "a"("y") is called the current coefficient, usually corresponding to the partial derivative in the conserved quantity of a current density of the conserved quantity "j"("y"):43
formula_5
In this case since the chain rule applies:
formula_6
the conservation equation can be put into the current density form:
formula_7
In a space with more than one dimension the former definition can be extended to an equation that can be put into the form:
formula_8
where the "conserved quantity" is "y"(r,"t"), ⋅ denotes the scalar product, ∇ is the nabla operator, here indicating a gradient, and "a"("y") is a vector of current coefficients, analogously corresponding to the divergence of a vector current density associated to the conserved quantity j("y"):
formula_9
This is the case for the continuity equation:
formula_10
Here the conserved quantity is the mass, with density "ρ"(r,"t") and current density "ρ"u, identical to the momentum density, while u(r, "t") is the flow velocity.
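The discrete counterpart of this statement is what makes finite-volume schemes attractive: cell-boundary fluxes cancel telescopically, so the scheme conserves the total quantity exactly. A minimal sketch (not from this article) for the 1-D continuity equation with constant advection velocity and periodic boundaries:

```python
# Sketch: first-order upwind finite-volume scheme for the 1-D continuity
# equation rho_t + (rho*u)_x = 0 with constant u > 0 and periodic
# boundaries.  The discrete total sum(rho)*dx changes only through boundary
# fluxes, which cancel here, so it is conserved to round-off error.
import numpy as np

nx, u, dt = 200, 1.0, 0.002
x = np.linspace(0, 1, nx, endpoint=False)
dx = x[1] - x[0]                        # CFL number u*dt/dx = 0.4 < 1
rho = np.exp(-200 * (x - 0.5)**2)       # initial density bump
total0 = rho.sum() * dx
for _ in range(500):
    flux = u * rho                      # upwind flux (valid for u > 0)
    rho -= dt / dx * (flux - np.roll(flux, 1))
print(abs(rho.sum() * dx - total0))     # ~1e-16: conserved exactly
```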
In the general case a conservation equation can also be a system of this kind of equations (a vector equation) in the form:43
formula_11
where y is called the "conserved" (vector) quantity, ∇"y" is its gradient, 0 is the zero vector, and A(y) is called the Jacobian of the current density. In fact, as in the former scalar case, also in the vector case A(y) usually corresponds to the Jacobian of a current density matrix J(y):
formula_12
and the conservation equation can be put into the form:
formula_13
For example, this is the case for the Euler equations (fluid dynamics). In the simple incompressible case they are:
formula_14
where u is the flow velocity vector and "s" is the specific pressure, i.e. the pressure divided by the (constant and uniform) density.
It can be shown that the conserved (vector) quantity and the current density matrix for these equations are respectively:
formula_15
where formula_16 denotes the outer product.
Integral and weak forms.
Conservation equations can usually also be expressed in integral form: the advantage of the latter is essentially that it requires less smoothness of the solution, which paves the way to the weak form, extending the class of admissible solutions to include discontinuous solutions.62–63 By integrating the current density form in 1-D space over any space-time domain:
formula_17
and by using Green's theorem, the integral form is:
formula_18
In a similar fashion, for the scalar multidimensional space, the integral form is:
formula_19
where the line integration is performed along the boundary of the domain, in an anticlockwise manner.
Moreover, by defining a test function "φ"(r,"t"), continuously differentiable both in time and space and with compact support, the weak form can be obtained by pivoting on the initial condition. In 1-D space it is:
formula_20
In the weak form all the partial derivatives of the density and current density have been passed on to the test function, which with the former hypothesis is sufficiently smooth to admit these derivatives.
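For a smooth solution, the weak form follows from the current density form by a single integration by parts in each variable; sketched here for the 1-D case (the boundary terms at infinity vanish because "φ" has compact support):

```latex
\begin{aligned}
0 &= \int_0^\infty \!\!\int_{-\infty}^{\infty}
      \bigl( y_t + j(y)_x \bigr)\,\phi \,dx\,dt \\
  &= -\int_0^\infty \!\!\int_{-\infty}^{\infty}
      \bigl( y\,\phi_t + j(y)\,\phi_x \bigr)\,dx\,dt
     \;-\; \int_{-\infty}^{\infty} y(x,0)\,\phi(x,0)\,dx .
\end{aligned}
```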
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{\\partial \\rho}{\\partial t} = - \\nabla \\cdot \\mathbf{j} \\,"
},
{
"math_id": 1,
"text": "\\begin{align}\n \\mathbf{j} &= \\rho \\mathbf{u} \\\\\n \\frac{\\partial \\rho}{\\partial t} &= - \\nabla \\cdot (\\rho \\mathbf{u}) \\,.\n\\end{align}"
},
{
"math_id": 2,
"text": " y_t + A(y) y_x = 0 "
},
{
"math_id": 3,
"text": " y_t + A(y) y_x = s "
},
{
"math_id": 4,
"text": " y_t + a(y) y_x = 0 "
},
{
"math_id": 5,
"text": " a(y) = j_y (y)"
},
{
"math_id": 6,
"text": " j_x = j_y (y) y_x = a(y) y_x "
},
{
"math_id": 7,
"text": " y_t + j_x (y) = 0 "
},
{
"math_id": 8,
"text": " y_t + \\mathbf a(y) \\cdot \\nabla y = 0 "
},
{
"math_id": 9,
"text": " y_t + \\nabla \\cdot \\mathbf j(y) = 0 "
},
{
"math_id": 10,
"text": " \\rho_t + \\nabla \\cdot (\\rho \\mathbf u) = 0 "
},
{
"math_id": 11,
"text": " \\mathbf y_t + \\mathbf A(\\mathbf y) \\cdot \\nabla \\mathbf y = \\mathbf 0 "
},
{
"math_id": 12,
"text": " \\mathbf A( \\mathbf y) = \\mathbf J_{\\mathbf y} (\\mathbf y)"
},
{
"math_id": 13,
"text": " \\mathbf y_t + \\nabla \\cdot \\mathbf J (\\mathbf y)= \\mathbf 0 "
},
{
"math_id": 14,
"text": "\n\\nabla\\cdot \\mathbf u = 0 \\, , \\qquad\n\\frac{\\partial \\mathbf u}{\\partial t} + \\mathbf u \\cdot \\nabla \\mathbf u + \\nabla s = \\mathbf{0},\n"
},
{
"math_id": 15,
"text": "\n{\\mathbf y} = \\begin{pmatrix} 1 \\\\ \\mathbf u \\end{pmatrix}; \\qquad\n{\\mathbf J} = \\begin{pmatrix}\\mathbf u\\\\ \\mathbf u \\otimes \\mathbf u + s \\mathbf I\\end{pmatrix};\\qquad\n"
},
{
"math_id": 16,
"text": "\\otimes"
},
{
"math_id": 17,
"text": " y_t + j_x (y)= 0 "
},
{
"math_id": 18,
"text": " \\int_{- \\infty}^\\infty y \\, dx + \\int_0^\\infty j (y) \\, dt = 0 "
},
{
"math_id": 19,
"text": " \\oint \\left[y \\, d^N r + j (y) \\, dt\\right] = 0 "
},
{
"math_id": 20,
"text": " \\int_0^\\infty \\int_{-\\infty}^\\infty \\phi_t y + \\phi_x j(y) \\,dx \\,dt = - \\int_{-\\infty}^\\infty \\phi(x,0) y(x,0) \\, dx "
}
]
| https://en.wikipedia.org/wiki?curid=6956 |
69562926 | Sobolev mapping | In mathematics, a Sobolev mapping is a mapping between manifolds which has smoothness in some sense.
Sobolev mappings appear naturally in manifold-constrained problems in the calculus of variations and partial differential equations, including the theory of harmonic maps.
Definition.
Given Riemannian manifolds formula_0 and formula_1, the latter of which may be assumed, by Nash's smooth embedding theorem and without loss of generality, to be isometrically embedded into formula_2, the space of Sobolev mappings between them is defined as
formula_3
First-order (formula_4) Sobolev mappings can also be defined in the context of metric spaces.
Approximation.
The strong approximation problem consists in determining whether smooth mappings from formula_0 to formula_1 are dense in formula_5 with respect to the norm topology.
When formula_6, Morrey's inequality implies that Sobolev mappings are continuous and can thus be strongly approximated by smooth maps.
When formula_7, Sobolev mappings have vanishing mean oscillation and can thus be approximated by smooth maps.
When formula_8, the question of density is related to obstruction theory:
formula_9 is dense in formula_10 if and only if every continuous mapping from a formula_11–dimensional triangulation of formula_0 into formula_1 is the restriction of a continuous map from formula_0 to formula_1.
The problem of finding a weakly approximating sequence for maps in formula_10 is equivalent to the strong approximation problem when formula_12 is not an integer.
When formula_12 is an integer, a necessary condition is that the restriction to a formula_13-dimensional triangulation of every continuous mapping from a formula_11–dimensional triangulation of formula_0 into formula_1 coincides with the restriction of a continuous map from formula_0 to formula_1.
When formula_14, this condition is sufficient.
For formula_15 with formula_16, this condition is not sufficient.
Homotopy.
The homotopy problem consists in describing and classifying the path-connected components of the space formula_17 endowed with the norm topology.
When formula_18 and formula_19, then the path-connected components of formula_5 are essentially the same as the path-connected components of formula_20: two maps in formula_21 are connected by a path in formula_5 if and only if they are connected by a path in formula_20, and any path-connected component of formula_5 and any path-connected component of formula_22 intersects formula_21 non-trivially.
When formula_23, two maps in formula_10 are connected by a continuous path in formula_10 if and only if their restrictions to a generic formula_13-dimensional triangulation are homotopic.
Extension of traces.
The classical trace theory states that any Sobolev map formula_24 has a trace formula_25 and that when formula_26, the trace operator is onto. Since the proof of surjectivity is based on an averaging argument, the result does not readily extend to Sobolev mappings.
The trace operator is known to be onto when formula_27 or when formula_28, formula_29 is finite and formula_30. The surjectivity of the trace operator fails if formula_31 or if formula_32 is infinite for some formula_33.
Lifting.
Given a covering map formula_34, the lifting problem asks whether any map formula_35 can be written as formula_36 for some formula_37, as is the case for continuous or smooth formula_38 and formula_39 when formula_0 is simply connected in the classical lifting theory.
If the domain formula_0 is simply connected, any map formula_35 can be written as formula_36 for some formula_40
when formula_41, when formula_42 and formula_43
and when formula_1 is compact, formula_44 and formula_43.
There is a topological obstruction to the lifting when formula_45 and an analytical obstruction when formula_46.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "\\mathbb{R}^\\nu"
},
{
"math_id": 3,
"text": "\n W^{s, p} (M, N)\n :=\n\\{u \\in W^{s, p} (M, \\mathbb{R}^\\nu) \\, \\vert \\, u (x) \\in N \\text{ for almost every } x \\in M\\}.\n"
},
{
"math_id": 4,
"text": "s=1"
},
{
"math_id": 5,
"text": "W^{s, p} (M, N)"
},
{
"math_id": 6,
"text": "sp > \\dim M"
},
{
"math_id": 7,
"text": "sp = \\dim M"
},
{
"math_id": 8,
"text": "sp <\\dim M"
},
{
"math_id": 9,
"text": "C^\\infty (M, N)"
},
{
"math_id": 10,
"text": "W^{1, p} (M, N)"
},
{
"math_id": 11,
"text": "\\lfloor p\\rfloor"
},
{
"math_id": 12,
"text": "p"
},
{
"math_id": 13,
"text": "\\lfloor p - 1\\rfloor"
},
{
"math_id": 14,
"text": "p = 2"
},
{
"math_id": 15,
"text": "W^{1, 3} (M, \\mathbb{S}^2)"
},
{
"math_id": 16,
"text": "\\dim M \\ge 4"
},
{
"math_id": 17,
"text": "W^{s, p}(M, N)"
},
{
"math_id": 18,
"text": "0 < s \\le 1"
},
{
"math_id": 19,
"text": "\\dim M \\le sp"
},
{
"math_id": 20,
"text": "C(M, N)"
},
{
"math_id": 21,
"text": "W^{s, p} (M, N) \\cap C (M, N)"
},
{
"math_id": 22,
"text": "C (M, N)"
},
{
"math_id": 23,
"text": "\\dim M > p"
},
{
"math_id": 24,
"text": "u \\in W^{1, p} (M, N)"
},
{
"math_id": 25,
"text": "Tu \\in W^{1 - 1/p, p} (\\partial M, N)"
},
{
"math_id": 26,
"text": "N = \\mathbb{R}"
},
{
"math_id": 27,
"text": "\\pi_{1} (N) \\simeq \\dotsb \\pi_{\\lfloor p - 1\\rfloor}(N) \\simeq \\{0\\}"
},
{
"math_id": 28,
"text": "p\\ge 3"
},
{
"math_id": 29,
"text": "\\pi_{1} (N)"
},
{
"math_id": 30,
"text": "\\pi_{2} (N) \\simeq \\dotsb \\pi_{\\lfloor p - 1\\rfloor}(N) \\simeq \\{0\\}"
},
{
"math_id": 31,
"text": "\\pi_{\\lfloor p - 1\\rfloor} (N)\\not \\simeq \\{0\\}"
},
{
"math_id": 32,
"text": "\\pi_{\\ell} (N)"
},
{
"math_id": 33,
"text": "\\ell \\in \\{1, \\dotsc, \\lfloor p - 1\\rfloor\\}"
},
{
"math_id": 34,
"text": "\\pi : \\tilde{N} \\to N"
},
{
"math_id": 35,
"text": "u \\in W^{s, p} (M, N)"
},
{
"math_id": 36,
"text": "u = \\pi \\circ \\tilde{u}"
},
{
"math_id": 37,
"text": "\\tilde{u} \\in W^{s, p} (M, \\tilde{N})"
},
{
"math_id": 38,
"text": "u"
},
{
"math_id": 39,
"text": "\\tilde{u}"
},
{
"math_id": 40,
"text": "\\tilde{u} \\in W^{s, p} (M, N)"
},
{
"math_id": 41,
"text": "sp \\ge \\dim M"
},
{
"math_id": 42,
"text": "s\\ge 1"
},
{
"math_id": 43,
"text": "2 \\le sp <\\dim M"
},
{
"math_id": 44,
"text": "0 < s <1"
},
{
"math_id": 45,
"text": "sp < 2 "
},
{
"math_id": 46,
"text": "1 \\le sp < \\dim M"
}
]
| https://en.wikipedia.org/wiki?curid=69562926 |
69562949 | 1 Samuel 15 | First Book of Samuel chapter
1 Samuel 15 is the fifteenth chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains Saul's disobedience in dealing with the Amalekites. It is within a section comprising 1 Samuel 7–15 which records the rise of the monarchy in Israel and the account of the first years of King Saul.
Text.
This chapter was originally written in the Hebrew language. It is divided into 35 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 20–21, 24–32 and 4Q52 (4QSamb; 250 BCE) with extant verses 16–18.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
God through Samuel commanded Saul to lead a genocidal war against the Amalekites, as punishment for their attacks on the Israelites on their way from Egypt (verses 1–3, cf. Exodus 17:8–16; Deuteronomy 25:17–19). When Saul did not completely fulfill the order, Samuel spoke an oracle of judgement to Saul (verses 17–31), a prophetic attitude similar to that in 8:1–22 and 13:8–15, but here the rejection of Saul is final and absolute (verses 28–29) and 'parabolically confirmed by the accidental tearing of Samuel's robe when Saul made his last desperate supplication' (verse 27). The rejection is spoken in rhythmic form in verses 21–23, contrasting Saul's sacrifice and obedience (cf. Isaiah 1:11–15; Hosea 6:6; Amos 5:21–24; Micah 6:6–8) and declaring that he who rejected God's word has been rejected, following a preliminary warning in 13:13 (cf. 12:14). Saul's guilt was underlined by the selection of words for his action: disobedience (verse 19), doing evil (verse 19), rebellion (verse 23), stubbornness (verse 23), rejection of God's word (verse 23), as Saul himself admitted that what he did was a sin and transgression (verse 24). Relations between Samuel and Saul were then broken off (verses 34–35), as the cycle of Samuel–Saul narratives is completed; the next section consists of a Saul–David cycle.
Saul's partial obedience in the mission against the Amalekites (15:1–9).
Saul as God's anointed has been given a clear mission from God, which Samuel spoke in detail, perhaps to avoid the 'miscommunications' of previous commandments (cf. 1 Samuel 10:8; 1 Samuel 13), and to ensure no misunderstanding in the execution. The mission is to "totally destroy" the Amalekites, a practice called "herem" in Hebrew or "the ban" in English, in which no prisoner should be taken and all spoil should be destroyed. This is divine punishment from God, as vengeance for the attacks by the Amalekites, descendants of Esau, on the Israelites during the wilderness wandering out of Egypt (Exodus 17:8–13) and after the Israelites were in Canaan (Numbers 14:43, 45; Judges 3:13; 6:3–5, 33; 7:10, 12), so that YHWH would "completely blot out the name of Amalek from under heaven" (Exodus 17:14; cf. Deuteronomy 25:17–19). As the things 'devoted to destruction' belong exclusively to YHWH, violation of the ban was handled seriously: those who kept something 'under the ban' would themselves be put 'under the ban', that is, destroyed (cf. Joshua 7:1, 24–26). Against this clear order of YHWH, Saul spared Agag, the king of the Amalekites, and the best of the animals (verse 9), partly as a 'trophy of war' fitting his plan for a 'monument in his own honor' in Carmel (verse 12).
"Then Saul said to the Kenites, "Go, depart; go down from among the Amalekites, lest I destroy you with them. For you showed kindness to all the people of Israel when they came up out of Egypt." So the Kenites departed from among the Amalekites."
God rejects Saul as king of Israel (15:10–35).
After Saul disobeyed God's command, God told Samuel of His regret at making Saul king. The Hebrew root word "nhm" for "regret" is used 4 times in this chapter (among English Bible translations, the ESV consistently renders it as "regret" whereas others use "change of mind" or "repent"). Samuel reacted with 'anger' at God for changing His mind about Saul and 'cried' out all night long. This has a parallel in the account of Jonah, who also wished that God would not change His mind on Nineveh: after Jonah 'preached against' Nineveh (Jonah 1:2), prophesying its destruction due to its wickedness, the people of the city repented, so God 'changed His mind' (Hebrew: "nhm") and did not bring the destruction He had threatened (Jonah 3:10). This made Jonah 'angry' at God for changing His mind (Jonah 4:2) about Nineveh.
Samuel confronted Saul, who had gone to Carmel to 'set up a monument in his own honor' (verse 12), no longer a humble king. Saul preemptively said that he had obeyed God's order before being asked (verse 13), but Samuel had already been told the truth by God and could hear the sound of cattle which had been spared from destruction. Saul tried to deflect the blame by first directing it subtly to his soldiers ('the soldiers brought them') and by saying that the animals would be slaughtered in a sacrifice for YHWH (verse 15). Samuel confronted all excuses by pointing out that 'to obey is better than sacrifice' and disobedience 'is like the sin of divination' and arrogance like 'the evil of idolatry' (verses 22–23), so since Saul rejected the word of God, God now rejected him as king (verse 23), not merely cancelling his future dynasty as previously stated. Saul desperately begged Samuel to 'repent' (Hebrew: "shub"; "come back"/"turn away", whether from God as in Joshua 23:12, Judges 2:17; 8:33, or from sin as in 1 Kings 8:48) with him (verse 25). At first Samuel refused (verse 26), but when Saul asked again to honor him 'before the elders of his people and before Israel' (verse 30), Samuel decided to 'repent' with Saul, so Saul worshipped the Lord before the people (verse 31), and Samuel righted Saul's wrongdoing by publicly killing Agag (verses 32–35). Following this public show, Samuel and Saul parted ways, never to meet again, although Samuel continued to mourn for Saul (verse 35). In the end, God gave mercy to Saul by not immediately removing him as king.
Uses.
Music.
"1 Samuel 15:23" is a song title in the album "The Life of the World to Come" inspired by this verse that was released by the American band The Mountain Goats in 2009.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
Commentaries on Samuel.
<templatestyles src="Refbegin/styles.css" />
General.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=69562949 |
69563183 | Area formula (geometric measure theory) | Area formula from geometric measure theory
In geometric measure theory the area formula relates the Hausdorff measure of the image of a Lipschitz map, while accounting for multiplicity, to the integral of the Jacobian of the map. It is one of the fundamental results of the field that has connections, for example, to rectifiability and Sard's theorem.
Definition: Given formula_0 and formula_1, the multiplicity function formula_2 is the (possibly infinite) number of points in the preimage formula_3. The multiplicity function is also called the Banach indicatrix. Note that formula_4. Here, formula_5 denotes the "n"-dimensional Hausdorff measure, and formula_6 will denote the "n"-dimensional Lebesgue measure.
Theorem: If formula_0 is Lipschitz and formula_7, then for any measurable formula_8,
formula_9
where
formula_10
is the Jacobian of formula_11.
The measurability of the multiplicity function is part of the claim. The Jacobian is defined almost everywhere by Rademacher's differentiability theorem.
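As a simple illustration, take "n" = 1, "m" = 2 and the Lipschitz map "f"("t") = (cos "t", sin "t") on "A" = [0, 4π): the Jacobian is |"f"′("t")| = 1, the image is the unit circle covered twice, so the multiplicity is 2 almost everywhere on the circle and both sides of the formula equal 4π. A minimal numerical sketch in Python (all names are illustrative):

```python
import numpy as np

# f(t) = (cos t, sin t) on A = [0, 4*pi): the image is the unit circle,
# each point covered with multiplicity N(f, A, y) = 2.

t = np.linspace(0.0, 4 * np.pi, 400_000, endpoint=False)
dt = t[1] - t[0]

# Left-hand side: integral over A of the Jacobian J(Df(t)) = |f'(t)| = 1.
df = np.stack([-np.sin(t), np.cos(t)])      # derivative of f
lhs = np.sum(np.linalg.norm(df, axis=0)) * dt

# Right-hand side: multiplicity 2 times H^1(unit circle) = 2 * 2*pi.
rhs = 2 * 2 * np.pi

print(lhs, rhs, np.isclose(lhs, rhs))
```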
The theorem was proved first by Herbert Federer.
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "f\\colon \\mathbb{R}^n \\to \\mathbb{R}^m"
},
{
"math_id": 1,
"text": "A\\subset \\mathbb{R}^n "
},
{
"math_id": 2,
"text": "N(f,A,y), \\, y\\in \\mathbb{R}^m "
},
{
"math_id": 3,
"text": "f^{-1}(y)\\cap A"
},
{
"math_id": 4,
"text": "N(f,A,y) = \\mathcal{H}^0(f^{-1}(y)\\cap A)"
},
{
"math_id": 5,
"text": "\\mathcal{H}^n"
},
{
"math_id": 6,
"text": "\\mathcal{L}^n"
},
{
"math_id": 7,
"text": "n\\leq m"
},
{
"math_id": 8,
"text": "A\\subset \\mathbb{R}^n"
},
{
"math_id": 9,
"text": "\\int_A {J}(Df(x))\\, d \\mathcal{L}^n(x) = \\int_{\\mathbb{R}^m} N(f,A,y) \\, d\\mathcal{H}^n(y) \\, ,"
},
{
"math_id": 10,
"text": "{J}(Df(x))=\\sqrt{\\det(Df(x)^tDf(x))}"
},
{
"math_id": 11,
"text": "Df(x)"
}
]
| https://en.wikipedia.org/wiki?curid=69563183 |
69566 | Euler's theorem | Theorem on modular exponentiation
In number theory, Euler's theorem (also known as the Fermat–Euler theorem or Euler's totient theorem) states that, if "n" and "a" are coprime positive integers, then formula_0 is congruent to formula_1 modulo "n", where formula_2 denotes Euler's totient function; that is
formula_3
In 1736, Leonhard Euler published a proof of Fermat's little theorem (stated by Fermat without proof), which is the restriction of Euler's theorem to the case where n is a prime number. Subsequently, Euler presented other proofs of the theorem, culminating with his paper of 1763, in which he proved a generalization to the case where n is not prime.
The converse of Euler's theorem is also true: if the above congruence is true, then formula_5 and formula_6 must be coprime.
The theorem is further generalized by some of Carmichael's theorems.
The theorem may be used to easily reduce large powers modulo formula_6. For example, consider finding the ones place decimal digit of formula_7, i.e. formula_8. The integers 7 and 10 are coprime, and formula_9. So Euler's theorem yields formula_10, and we get formula_11.
In general, when reducing a power of formula_5 modulo formula_6 (where formula_5 and formula_6 are coprime), one needs to work modulo formula_4 in the exponent of formula_5:
if formula_12, then formula_13.
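This reduction is easy to check in a few lines of Python, using the article's example of 7 and 10 (a minimal sketch; the totient is computed by direct count, which is fine for small "n"):

```python
from math import gcd

def phi(n):
    # Euler's totient function by direct count (fine for small n).
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

n, a = 10, 7
assert gcd(a, n) == 1
print(phi(n))                    # 4
print(pow(a, phi(n), n))         # 1, as Euler's theorem asserts
# Reducing the exponent modulo phi(n): 222 = 4*55 + 2
print(pow(a, 222, n), pow(a, 222 % phi(n), n))   # both 9
```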
Euler's theorem underlies the RSA cryptosystem, which is widely used in Internet communications. In this cryptosystem, Euler's theorem is used with n being a product of two large prime numbers, and the security of the system is based on the difficulty of factoring such an integer.
Proofs.
1. Euler's theorem can be proven using concepts from the theory of groups:
The residue classes modulo n that are coprime to n form a group under multiplication (see the article Multiplicative group of integers modulo "n" for details). The order of that group is "φ"("n"). Lagrange's theorem states that the order of any subgroup of a finite group divides the order of the entire group, in this case "φ"("n"). If a is any number coprime to n then a is in one of these residue classes, and its powers "a", "a"2, ... , "a""k" modulo n form a subgroup of the group of residue classes, with "a""k" ≡ 1 (mod "n"). Lagrange's theorem says k must divide "φ"("n"), i.e. there is an integer M such that "kM" = "φ"("n"). This then implies,
formula_14
2. There is also a direct proof: Let "R" = {"x"1, "x"2, ... , "x""φ"("n")} be a reduced residue system (mod "n") and let a be any integer coprime to n. The proof hinges on the fundamental fact that multiplication by a permutes the xi: in other words if "axj" ≡ "axk" (mod "n") then "j" = "k". (This law of cancellation is proved in the article Multiplicative group of integers modulo "n".) That is, the sets R and "aR" = {"ax"1, "ax"2, ... , "ax""φ"("n")}, considered as sets of congruence classes (mod "n"), are identical (as sets—they may be listed in different orders), so the product of all the numbers in R is congruent (mod "n") to the product of all the numbers in aR:
formula_15 and using the cancellation law to cancel each xi gives Euler's theorem:
formula_16
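The key step of this direct proof — that multiplication by a coprime a permutes the reduced residue system — can itself be checked numerically in a few lines (a sketch with illustrative numbers):

```python
from math import gcd

n, a = 12, 7                     # gcd(7, 12) = 1
R = [x for x in range(1, n) if gcd(x, n) == 1]   # reduced residue system
aR = sorted(a * x % n for x in R)

print(R, aR, R == aR)            # the two residue sets coincide

# Cancelling the common product gives Euler's theorem: a^phi(n) = 1 (mod n).
phi_n = len(R)
print(pow(a, phi_n, n))          # 1
```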
Notes.
<templatestyles src="Reflist/styles.css" />
References.
The "Disquisitiones Arithmeticae" has been translated from Gauss's Ciceronian Latin into English and German. The German edition includes all of his papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes. | [
{
"math_id": 0,
"text": "a^{\\varphi(n)}"
},
{
"math_id": 1,
"text": "1"
},
{
"math_id": 2,
"text": "\\varphi"
},
{
"math_id": 3,
"text": "a^{\\varphi (n)} \\equiv 1 \\pmod{n}."
},
{
"math_id": 4,
"text": "\\varphi(n)"
},
{
"math_id": 5,
"text": "a"
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": "7^{222}"
},
{
"math_id": 8,
"text": "7^{222} \\pmod{10}"
},
{
"math_id": 9,
"text": "\\varphi(10) = 4"
},
{
"math_id": 10,
"text": "7^4 \\equiv 1 \\pmod{10}"
},
{
"math_id": 11,
"text": "7^{222} \\equiv 7^{4 \\times 55 + 2} \\equiv (7^4)^{55} \\times 7^2 \\equiv 1^{55} \\times 7^2 \\equiv 49 \\equiv 9 \\pmod{10}"
},
{
"math_id": 12,
"text": "x \\equiv y \\pmod{\\varphi(n)}"
},
{
"math_id": 13,
"text": "a^x \\equiv a^y \\pmod{n}"
},
{
"math_id": 14,
"text": "a^{\\varphi(n)} = a^{kM} = (a^{k})^M \\equiv 1^M =1 \\pmod{n}."
},
{
"math_id": 15,
"text": "\n\\prod_{i=1}^{\\varphi(n)} x_i \\equiv \n\\prod_{i=1}^{\\varphi(n)} ax_i =\na^{\\varphi(n)}\\prod_{i=1}^{\\varphi(n)} x_i \\pmod{n},\n"
},
{
"math_id": 16,
"text": "\na^{\\varphi(n)}\\equiv 1 \\pmod{n}.\n"
}
]
| https://en.wikipedia.org/wiki?curid=69566 |
69573115 | Thyroid Feedback Quantile-based Index | The Thyroid Feedback Quantile-based Index (TFQI) is a calculated parameter for thyrotropic pituitary function. It was defined to be more robust to distorted data than established markers including Jostel's TSH index (JTI) and the thyrotroph thyroid hormone sensitivity index (TTSI).
How to determine the TFQI.
The TFQI can be calculated with
formula_0
from quantiles of FT4 and TSH concentrations (as determined from their cumulative distribution functions). By definition, the TFQI has a mean of 0 and a standard deviation of 0.37 in a reference population. This explains the reference range of –0.74 to +0.74.
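A minimal sketch of the computation in Python, assuming arrays of FT4 and TSH values from a reference population and using empirical (rank-based) cumulative distribution functions; all data, units and names below are illustrative, not from any published cohort:

```python
import numpy as np

def empirical_cdf(reference):
    # Returns F(x): the fraction of reference values <= x.
    ref = np.sort(np.asarray(reference, dtype=float))
    return lambda x: np.searchsorted(ref, x, side="right") / ref.size

# Hypothetical reference-population values (illustrative only).
rng = np.random.default_rng(0)
ft4_ref = rng.normal(15.0, 2.0, 10_000)     # pmol/L
tsh_ref = rng.lognormal(0.3, 0.5, 10_000)   # mIU/L

F_ft4, F_tsh = empirical_cdf(ft4_ref), empirical_cdf(tsh_ref)

def tfqi(ft4, tsh):
    # TFQI = F_FT4(FT4) - (1 - F_TSH(TSH))
    return F_ft4(ft4) - (1.0 - F_tsh(tsh))

print(tfqi(15.0, 1.35))   # near 0 for a typical subject of this population
```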
Clinical significance.
Higher values of TFQI are associated with obesity, metabolic syndrome, impaired renal function, diabetes, and diabetes-related mortality. In a large population of community-dwelling euthyroid subjects the thyroid feedback quantile-based index predicted all-cause mortality, even after adjustment for other established risk factors and comorbidities.
A cross-sectional study from Spain observed increased prevalence of type 2 diabetes, atrial fibrillation, ischemic heart disease and hypertension in persons with elevated PTFQI.
Serum concentrations of adipocyte fatty acid-binding protein (A-FABP) are significantly correlated with TFQI, suggesting some form of cross-talk between adipose tissue and the HPT axis.
TFQI results are also elevated in takotsubo syndrome, potentially reflecting type 2 allostatic load in the situation of psychosocial stress. Reductions have been observed in subjects with schizophrenia after initiation of therapy with oxcarbazepine and quetiapine, potentially reflecting declining allostatic load.
Despite its positive association with metabolic syndrome and type 2 allostatic load, a large population-based study failed to identify an association with the risks of dyslipidemia and non-alcoholic fatty liver disease (NAFLD).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "TFQI = F_{FT4}(FT4) - (1 - F_{TSH}(TSH))"
}
]
| https://en.wikipedia.org/wiki?curid=69573115 |
69574206 | 1 Samuel 16 | First Book of Samuel chapter
1 Samuel 16 is the sixteenth chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the anointing of David by Samuel and David's early service for Saul. This is within a section comprising 1 Samuel 16 to 2 Samuel 5 which records the rise of David as the king of Israel.
Text.
This chapter was originally written in the Hebrew language. It is divided into 23 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q52 (4QSamb; 250 BCE) with extant verses 1–11.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
The section comprising 1 Samuel 16 to 2 Samuel 5 is known as the "History of David's Rise",
with David as the central character, within which 1 Samuel 16:1 to 2 Samuel 1:27 form an independent unit with a central theme of "the decline of Saul and the rise of David". The part emphasizes that David is God's chosen king (1 Samuel 16:1–13; 'the LORD was with him', 18:14), but Saul was still king and David was careful not to take over the kingdom from God's anointed (1 Samuel 24:6; 26:9), even though it is shown throughout that David was under blessing, while Saul was under curse. The narrative stresses that David did not come to power by killing Saul's family, and that Saul and his son Jonathan knew that David was the chosen successor; Jonathan even assisted David by his own virtual abdication, while Saul tried to oppress David due to jealousy.
Samuel Anoints David as King of Israel (16:1–13).
The narrative of David's anointing bears some similarities to Saul's own election to the kingship:
Despite the similarities, the major difference introduced by this narrative is that Saul was rejected but David chosen, explicitly shown in verse 13 with the 'transfer of YHWH's spirit from Saul to David and the abandonment of Saul to a malevolent spirit'.
"Now the Lord said to Samuel, "How long will you mourn for Saul, seeing I have rejected him from reigning over Israel? Fill your horn with oil, and go; I am sending you to Jesse the Bethlehemite. For I have provided Myself a king among his sons.""
"And Samuel said, "How can I go? If Saul hears it, he will kill me.""
"And the Lord said, "Take a heifer with you and say, ‘I have come to sacrifice to the Lord.'""
Verse 2.
After Saul's rejection (verse 1), Samuel was afraid of Saul's reprisal, so he needed a pretext for going to Bethlehem to anoint Saul's replacement.
"But the Lord said to Samuel, “Do not look on his appearance or on the height of his stature, because I have rejected him. For the Lord sees not as man sees. For man looks on the outward appearance, but the Lord looks on the heart.”"
Verse 7.
Although David was handsome (verse 12), it is emphasized that God does not look on the 'outward appearance', as it was precisely for that reason that Eliab, who was as tall as Saul, was rejected.
David in Saul's service (16:14–23).
Not long after David was anointed and endowed with YHWH's spirit, Saul became unwell (verse 14), which turned out to be an opportunity for David to enter the court. David was brought in because of his skill in playing music (verse 18), but inside the court he received palace training that would be useful for his future. Apparently David's military prowess also attracted the attention of Saul, whose policy was to enlist all capable men in his fight against the Philistines (1 Samuel 14:52), so David was additionally appointed as Saul's armor-bearer. Furthermore, David was said to have good intellectual judgement and to be a man of presence (verse 18), and on top of all this, 'YHWH is with him'. Verse 21 even states that 'Saul loved him' ('Saul' is explicitly mentioned in the Greek Septuagint, instead of the ambiguous subject in the Masoretic Text), which later turned into a love-hate relationship between the two. An important statement appears in verse 23: Saul was entirely in David's hands, and David took that responsibility seriously.
"Now the Spirit of the Lord had turned away from Saul, and an evil spirit from the Lord tormented him."
"Then one of the servants answered and said,"
"Look, I have seen a son of Jesse the Bethlehemite, who is skillful in playing, a mighty man of valor, a man of war, prudent in speech, and a handsome person; and the Lord is with him."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
Commentaries on Samuel.
<templatestyles src="Refbegin/styles.css" />
General.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=69574206 |
695746 | Wittgenstein on Rules and Private Language | 1982 exegesis by Saul Kripke
Wittgenstein on Rules and Private Language is a 1982 book by philosopher of language Saul Kripke in which he contends that the central argument of Ludwig Wittgenstein's "Philosophical Investigations" centers on a skeptical rule-following paradox that undermines the possibility of our ever following rules in our use of language. Kripke writes that this paradox is "the most radical and original skeptical problem that philosophy has seen to date" (p. 60). He argues that Wittgenstein does not reject the argument that leads to the rule-following paradox, but accepts it and offers a "skeptical solution" to alleviate the paradox's destructive effects.
Kripkenstein: Kripke's skeptical Wittgenstein.
While most commentators accept that the "Philosophical Investigations" contains the rule-following paradox as Kripke presents it, few have concurred in attributing Kripke's skeptical solution to Wittgenstein. Kripke expresses doubts in "Wittgenstein on Rules and Private Language" as to whether Wittgenstein would endorse his interpretation of the "Philosophical Investigations". He says that his book should not be read as an attempt to give an accurate summary of Wittgenstein's views, but rather as an account of Wittgenstein's argument "as it struck Kripke, as it presented a problem for him" (p. 5). The portmanteau "Kripkenstein" has been coined as a term for a fictional person who holds the views expressed by Kripke's reading of the "Philosophical Investigations"; in this way, it is convenient to speak of Kripke's own views, Wittgenstein's views (as generally understood), and Kripkenstein's views. Wittgenstein scholar David G. Stern considers Kripke's book the most influential and widely discussed work on Wittgenstein since the 1980s.
The rule-following paradox.
In "Philosophical Investigations" §201a Wittgenstein states the rule-following paradox: "This was our paradox: no course of action could be determined by a rule, because any course of action can be made out to accord with the rule".
Kripke gives a mathematical example to illustrate the reasoning that leads to this conclusion. Suppose that you have never added numbers greater than or equal to 57 before. Further, suppose that you are asked to perform the computation 68 + 57. Our natural inclination is that you will apply the addition function as you have before, and calculate that the correct answer is 125. But now imagine that a bizarre skeptic comes along and argues:
After all, the skeptic reasons, by hypothesis you have never added numbers 57 or greater before. It is perfectly consistent with your previous use of "plus" that you actually meant "quus", defined as:
formula_0
Thus under the quus function, if either of the two numbers added is 57 or greater, the sum is 5. The skeptic argues that there is no fact that determines that you ought to answer 125 rather than 5, as all your prior addition is compatible with the quus function instead of the plus function, for you have never added a number greater than or equal to 57 before.
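Kripke's "quus" function is easy to render mechanically; a short Python sketch of the definition above (purely illustrative — the philosophical point is, of course, independent of any implementation):

```python
def plus(x, y):
    return x + y

def quus(x, y):
    # Kripke's "quus": agrees with addition below 57, returns 5 otherwise.
    return x + y if x < 57 and y < 57 else 5

# Every addition ever performed on numbers below 57 is consistent
# with both interpretations of "plus":
print(all(plus(x, y) == quus(x, y) for x in range(57) for y in range(57)))
print(plus(68, 57), quus(68, 57))   # 125 versus 5
```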
Further, your past usage of the addition function is susceptible to an infinite number of different quus-like interpretations. It appears that every new application of "plus", rather than being governed by a strict, unambiguous rule, is actually a leap in the dark.
Similar skeptical reasoning can be applied to the meaning of any word of any human language. The power of Kripke's example is that in mathematics the rules for the use of expressions appear to be defined clearly for an infinite number of cases. Kripke doesn't question the mathematical validity of the "+" function, but rather the meta-linguistic usage of "plus": what fact can we point to that shows that "plus" refers to the mathematical function "+"?
If we assume for the sake of argument that "plus" refers to the function "+", the skeptical problem simply resurfaces at a higher level. The addition algorithm itself will contain terms susceptible to different and incompatible interpretations. In short, rules for interpreting rules provide no help, because they themselves can be interpreted in different ways. Or, as Wittgenstein puts it, "any interpretation still hangs in the air along with what it interprets, and cannot give it any support. Interpretations by themselves do not determine meaning" ("Philosophical Investigations" §198a).
The skeptical solution.
Following David Hume, Kripke distinguishes between two types of solution to skeptical paradoxes. Straight solutions dissolve paradoxes by rejecting one (or more) of the premises that lead to them. Skeptical solutions accept the truth of the paradox, but argue that it does not undermine our ordinary beliefs and practices in the way it seems to. Because Kripke thinks that Wittgenstein endorses the skeptical paradox, he is committed to the view that Wittgenstein offers a skeptical, and not a straight, solution.
The rule-following paradox threatens our ordinary beliefs and practices concerning meaning because it implies that there is no such thing as meaning something by an expression or sentence. John McDowell explains this as follows. We are inclined to think of meaning in contractual terms: that is, that meanings commit or oblige us to use words in a certain way. When you grasp the meaning of the word "dog", for example, you know that you ought to use that word to refer to dogs, and not cats. But if there cannot be rules governing the uses of words, as the rule-following paradox apparently shows, this intuitive notion of meaning is utterly undermined.
Kripke holds that other commentators on "Philosophical Investigations" have believed that the private language argument is presented in sections occurring after §243. Kripke reacts against this view, noting that the conclusion to the argument is explicitly stated by §202, which reads “Hence it is not possible to obey a rule ‘privately’: otherwise thinking one was obeying a rule would be the same as obeying it.” Further, in this introductory section, Kripke identifies Wittgenstein's interests in the philosophy of mind as related to his interests in the foundations of mathematics, in that both subjects require considerations about rules and rule-following.
Kripke's skeptical solution is this: A language-user's following a rule correctly is not justified by any fact that obtains about the relationship between their candidate application of a rule in a particular case and the putative rule itself (as for Hume the causal link between two events "a" and "b" is not determined by any particular fact obtaining between them "taken in isolation"); rather, the assertion that the rule that is being followed is justified by the fact that the behaviors surrounding the candidate instance of rule-following (by the candidate rule-follower) meet other language users' expectations. That the solution is not based on a fact about "a particular instance" of putative rule-following—as it would be if it were based on some mental state of meaning, interpretation, or intention—shows that this solution is skeptical in the sense Kripke specifies.
The "straight" solution.
In contrast to the kind of solution offered by Kripke (above) and Crispin Wright (elsewhere), McDowell interprets Wittgenstein as correctly (by McDowell's lights) offering a "straight solution". McDowell argues that Wittgenstein does present the paradox (as Kripke argues), but he argues further that Wittgenstein rejects the paradox on the grounds that it assimilates understanding and interpretation. In order to understand something, we must have an interpretation. That is, to understand what is meant by "plus", we must first have an interpretation of what "plus" means. This leads one to either skepticism—how do you know your interpretation is the correct interpretation?—or relativity, whereby our understandings, and thus interpretations, are only so determined insofar as we have used them. On this latter view, endorsed by Wittgenstein in Wright's readings, there are no facts about numerical addition that we have so far not discovered, so when we come upon such situations, we can flesh out our interpretations further. According to McDowell, both of these alternatives are rather unsatisfying, the latter because we want to say that there are facts about numbers that have not yet been added.
McDowell further writes that to understand rule-following we should understand it as resulting from inculcation into a custom or practice. Thus, to understand addition is simply to have been inculcated into a practice of adding. This position is often called "anti-antirealism", meaning that he argues that the result of sceptical arguments, like that of the rule-following paradox, is to tempt philosophical theory into realism, thereby making bold metaphysical claims. Since McDowell offers a straight solution, making the rule-following paradox compatible with realism would be missing Wittgenstein's basic point that the meaning can often be said to be the use. This is in line with quietism, the view that philosophical theory results only in dichotomies and that the notion of a theory of meaning is pointless.
Semantic realism and Kripkenstein.
George M. Wilson argues that there is a way to lay out Kripkenstein as a philosophical position compatible with semantic realism: by differentiating between two sorts of conclusions resulting from the rule-following paradox, illustrated by a speaker S using a term T:
BSC (Basic Sceptical Conclusion): There are no facts about S that fix any set of properties as the standard of correctness for S's use of T.
RSC (Radical Sceptical Conclusion): No one ever means anything by any term.
Wilson argues that Kripke's sceptic is indeed committed to RSC, but that Kripke reads Wittgenstein as embracing BSC but refuting RSC. This, Wilson argues, is done with the concept of familiarity. When S uses T, its correctness is determined neither by a fact about S (hereby accepting the rule-following paradox) nor a correspondence between T and the object termed (hereby denying the idea of correspondence theory), but the irreducible fact that T is grounded in familiarity, being used to predicate other similar objects. This familiarity is independent of and, in some sense, external to S, making familiarity the grounding for semantic realism.
Still, Wilson's suggested realism is minimal, partly accepting McDowell's critique.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x\\text{ quus }y= \\begin{cases} x+y & \\text{for }x,y <57 \\\\[12pt] 5 & \\text{for } x\\ge 57 \\text{ or } y\\ge57 \\end{cases} "
}
]
| https://en.wikipedia.org/wiki?curid=695746 |
69576755 | Zone theorem | Theorem in computational and discrete geometry
In geometry, the zone theorem is a result that establishes the complexity of the zone of a line in an arrangement of lines.
Definition.
A "line arrangement", denoted as formula_0, is a subdivision of the plane, induced by a set of lines formula_1, into cells (formula_2-dimensional faces), edges (formula_3-dimensional faces) and vertices (formula_4-dimensional faces). Given a set of formula_5 lines formula_1, the line arrangement formula_0, and a line formula_6 (not belonging to formula_1), the "zone" of formula_6 is the set of faces intersected by formula_6. The "complexity" of a zone is the total number of edges in its boundary, expressed as a function of formula_5.
The zone theorem states that said complexity is formula_7.
History.
This result was published for the first time in 1985; Chazelle et al. gave the upper bound of formula_8 for the complexity of the zone of a line in an arrangement. In 1991, this bound was improved to formula_9, and it was also shown that this is the best possible upper bound up to a small additive factor. Then, in 2011, Rom Pinchasi proved that the complexity of the zone of a line in an arrangement is at most formula_10, and this is a tight bound.
Some paradigms used in the different proofs of the theorem are induction, the sweep technique, tree construction, and Davenport–Schinzel sequences.
Generalizations.
Although the most popular version concerns arrangements of lines in the plane, there exist some generalizations of the zone theorem. For instance, in dimension formula_11, considering arrangements of hyperplanes, the complexity of the zone of a hyperplane formula_12 is the number of facets (formula_13-dimensional faces) bounding the set of cells (formula_11-dimensional faces) intersected by formula_12. Analogously, the formula_11-dimensional zone theorem states that the complexity of the zone of a hyperplane is formula_14. There are considerably fewer proofs of the theorem for dimension formula_15. For the formula_16-dimensional case, there are proofs based on sweep techniques, and for higher dimensions Euler's relation is used:
formula_17
Another generalization considers arrangements of pseudolines (and pseudohyperplanes in dimension formula_11) instead of lines (and hyperplanes). Some proofs of the theorem work well in this case, since they do not substantially use the straightness of the lines in their arguments.
Motivation.
The primary motivation to study the zone complexity in arrangements arises from looking for efficient algorithms to construct arrangements. A classical algorithm is the incremental construction, which can be roughly described as adding the lines one after the other and storing all faces generated by each in an appropriate data structure (the usual structure for arrangements is the doubly connected edge list (DCEL)). Here, the consequence of the zone theorem is that the entire construction of any arrangement of formula_5 lines can be done in time formula_18, since the insertion of each line takes time formula_7.
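The linear growth is easy to observe for the simplest part of the zone, the number of cells crossed: in generic position a line enters a new cell exactly at each of its intersection points with the lines of formula_1, so it crosses formula_5 + 1 cells; the zone theorem is the stronger statement that the total number of edges bounding these cells is also formula_7. A small Python sketch of the cell count (the line representation and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def cells_crossed(slopes, intercepts, m, b):
    # Lines y = slopes[i]*x + intercepts[i]; query line l: y = m*x + b.
    # Each intersection of l with a distinct line opens a new cell of the
    # arrangement along l, so the zone contains crossings + 1 cells.
    xs = (intercepts - b) / (m - slopes)      # intersection abscissas on l
    return np.unique(np.round(xs, 9)).size + 1

for n in (10, 100, 1000):
    slopes = rng.normal(size=n)
    intercepts = rng.normal(size=n)
    print(n, cells_crossed(slopes, intercepts, m=2.0, b=0.0))  # n + 1
```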
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A(L)"
},
{
"math_id": 1,
"text": "L"
},
{
"math_id": 2,
"text": "2"
},
{
"math_id": 3,
"text": "1"
},
{
"math_id": 4,
"text": "0"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "l"
},
{
"math_id": 7,
"text": "O(n)"
},
{
"math_id": 8,
"text": "10n+2"
},
{
"math_id": 9,
"text": "\\lfloor9.5n\\rfloor -1"
},
{
"math_id": 10,
"text": "\\lfloor9.5n\\rfloor -3"
},
{
"math_id": 11,
"text": "d"
},
{
"math_id": 12,
"text": "h"
},
{
"math_id": 13,
"text": "d-1"
},
{
"math_id": 14,
"text": "O(n^{d-1})"
},
{
"math_id": 15,
"text": "d\\geq3"
},
{
"math_id": 16,
"text": "3"
},
{
"math_id": 17,
"text": "\\sum_{i=0}^{d} (-1)^i F_i \\geq 0. "
},
{
"math_id": 18,
"text": "O(n^2)"
}
]
| https://en.wikipedia.org/wiki?curid=69576755 |
69583443 | Circumcevian triangle | Triangle derived from a given triangle and a coplanar point
In Euclidean geometry, a circumcevian triangle is a special triangle associated with a reference triangle and a point in the plane of the triangle. It is also associated with the circumcircle of the reference triangle.
Definition.
Let P be a point in the plane of the reference triangle △"ABC". Let the lines AP, BP, CP intersect the circumcircle of △"ABC" again at A', B', C'. The triangle △"A'B'C'" is called the "circumcevian triangle" of P with reference to △"ABC".
Coordinates.
Let a, b, c be the side lengths of triangle △"ABC" and let the trilinear coordinates of P be "α" : "β" : "γ". Then the trilinear coordinates of the vertices of the circumcevian triangle of P are as follows:
formula_0
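These coordinates can be checked numerically. In the NumPy sketch below (all names are illustrative), A' is computed once as the second intersection of line AP with the circumcircle, and once from the trilinear formula, converting trilinears "t"1 : "t"2 : "t"3 to Cartesian coordinates through the barycentric weights ("at"1, "bt"2, "ct"3):

```python
import numpy as np

A, B, C = np.array([0.0, 0.0]), np.array([4.0, 0.0]), np.array([1.0, 3.0])
a, b, c = np.linalg.norm(B - C), np.linalg.norm(C - A), np.linalg.norm(A - B)
P = np.array([1.5, 1.0])

def from_trilinears(t1, t2, t3):
    # Cartesian point of trilinears t1 : t2 : t3 via barycentrics (a*t1, ...).
    w = np.array([a * t1, b * t2, c * t3])
    return (w[0] * A + w[1] * B + w[2] * C) / w.sum()

# Trilinears alpha : beta : gamma of P (barycentrics divided by a, b, c).
bary = np.linalg.solve(np.vstack([np.column_stack([A, B, C]), np.ones(3)]),
                       np.append(P, 1.0))
alpha, beta, gamma = bary / np.array([a, b, c])

# A' from the table above.
A1_formula = from_trilinears(-a * beta * gamma,
                             (b * gamma + c * beta) * beta,
                             (b * gamma + c * beta) * gamma)

# A' directly: second intersection of line AP with the circumcircle.
O = np.linalg.solve(2 * np.array([B - A, C - A]),
                    np.array([B @ B - A @ A, C @ C - A @ A]))  # circumcenter
d = P - A
t = -2 * d @ (A - O) / (d @ d)        # t = 0 recovers the vertex A itself
A1_geometric = A + t * d

print(np.allclose(A1_formula, A1_geometric))   # True
```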
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{array}{rccccc}\n A' =& -a\\beta\\gamma &:& (b\\gamma+c\\beta)\\beta &:& (b\\gamma+c\\beta)\\gamma \\\\\n B' =& (c\\alpha +a\\gamma)\\alpha &:& - b\\gamma\\alpha &:& (c\\alpha +a\\gamma) \\gamma \\\\\n C' =& (a\\beta +b\\alpha)\\alpha &:& (a\\beta +b\\alpha)\\beta &:& - c\\alpha\\beta\n\\end{array}"
}
]
| https://en.wikipedia.org/wiki?curid=69583443 |
695917 | Dirac adjoint | Dual to the Dirac spinor
In quantum field theory, the Dirac adjoint defines the dual operation of a Dirac spinor. The Dirac adjoint is motivated by the need to form well-behaved, measurable quantities out of Dirac spinors, replacing the usual role of the Hermitian adjoint.
Possibly to avoid confusion with the usual Hermitian adjoint, some textbooks do not provide a name for the Dirac adjoint but simply call it "ψ-bar".
Definition.
Let formula_0 be a Dirac spinor. Then its Dirac adjoint is defined as
formula_1
where formula_2 denotes the Hermitian adjoint of the spinor formula_0, and formula_3 is the time-like gamma matrix.
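In the Dirac representation formula_3 = diag(1, 1, −1, −1), and the definition can be checked numerically; a small NumPy sketch (the basis choice and all names are illustrative):

```python
import numpy as np

# gamma^0 in the Dirac representation: diag(1, 1, -1, -1).
gamma0 = np.diag([1.0, 1.0, -1.0, -1.0]).astype(complex)

rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)   # a generic Dirac spinor

psi_bar = psi.conj() @ gamma0        # Dirac adjoint: psi^dagger gamma^0

# psi_bar psi is real, and psi_bar gamma^0 psi equals psi^dagger psi,
# the probability density discussed below.
print(np.isclose((psi_bar @ psi).imag, 0.0))
print(np.isclose(psi_bar @ gamma0 @ psi, psi.conj() @ psi))
```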
Spinors under Lorentz transformations.
The Lorentz group of special relativity is not compact; therefore, spinor representations of Lorentz transformations are generally not unitary. That is, if formula_4 is a projective representation of some Lorentz transformation,
formula_5,
then, in general,
formula_6.
The Hermitian adjoint of a spinor transforms according to
formula_7.
Therefore, formula_8 is not a Lorentz scalar and formula_9 is not even Hermitian.
Dirac adjoints, in contrast, transform according to
formula_10.
Using the identity formula_11, the transformation reduces to
formula_12,
Thus, formula_13 transforms as a Lorentz scalar and formula_14 as a four-vector.
Usage.
Using the Dirac adjoint, the probability four-current J for a spin-1/2 particle field can be written as
formula_15
where c is the speed of light and the components of J represent the probability density ρ and the probability 3-current j:
formula_16.
Taking "μ" = 0 and using the relation for gamma matrices
formula_17,
the probability density becomes
formula_18. | [
{
"math_id": 0,
"text": "\\psi"
},
{
"math_id": 1,
"text": "\\bar\\psi \\equiv \\psi^\\dagger \\gamma^0"
},
{
"math_id": 2,
"text": "\\psi^\\dagger"
},
{
"math_id": 3,
"text": "\\gamma^0"
},
{
"math_id": 4,
"text": "\\lambda"
},
{
"math_id": 5,
"text": "\\psi \\mapsto \\lambda \\psi"
},
{
"math_id": 6,
"text": "\\lambda^\\dagger \\ne \\lambda^{-1}"
},
{
"math_id": 7,
"text": "\\psi^\\dagger \\mapsto \\psi^\\dagger \\lambda^\\dagger"
},
{
"math_id": 8,
"text": "\\psi^\\dagger\\psi"
},
{
"math_id": 9,
"text": "\\psi^\\dagger\\gamma^\\mu\\psi"
},
{
"math_id": 10,
"text": "\\bar\\psi \\mapsto \\left(\\lambda \\psi\\right)^\\dagger \\gamma^0"
},
{
"math_id": 11,
"text": "\\gamma^0 \\lambda^\\dagger \\gamma^0 = \\lambda^{-1}"
},
{
"math_id": 12,
"text": "\\bar\\psi \\mapsto \\bar\\psi \\lambda^{-1}"
},
{
"math_id": 13,
"text": "\\bar\\psi\\psi"
},
{
"math_id": 14,
"text": "\\bar\\psi\\gamma^\\mu\\psi"
},
{
"math_id": 15,
"text": "J^\\mu = c \\bar\\psi \\gamma^\\mu \\psi"
},
{
"math_id": 16,
"text": "\\boldsymbol J = (c \\rho, \\boldsymbol j)"
},
{
"math_id": 17,
"text": "\\left(\\gamma^0\\right)^2 = I"
},
{
"math_id": 18,
"text": "\\rho = \\psi^\\dagger \\psi"
}
]
| https://en.wikipedia.org/wiki?curid=695917 |
69592861 | McCay cubic | Plane curve unique to a given triangle
In Euclidean geometry, the McCay cubic (also called M'Cay cubic or Griffiths cubic) is a cubic plane curve in the plane of a reference triangle and associated with it. It is the third cubic curve in Bernard Gibert's Catalogue of Triangle Cubics, and it is assigned the identification number K003.
Definition.
The McCay cubic can be defined by locus properties in several ways. For example, the McCay cubic is the locus of a point P such that the pedal circle of P is tangent to the nine-point circle of the reference triangle △"ABC". The McCay cubic can also be defined as the locus of point P such that the circumcevian triangle of P and △"ABC" are orthologic.
Equation of the McCay cubic.
The equation of the McCay cubic in barycentric coordinates formula_0 is
formula_1
The equation in trilinear coordinates formula_2 is
formula_3
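As a quick numerical check of this equation, the incenter (trilinears 1 : 1 : 1), the circumcenter (cos "A" : cos "B" : cos "C") and the orthocenter (sec "A" : sec "B" : sec "C") — three centers known to lie on the cubic — all satisfy it for an arbitrary triangle; a Python sketch (all names are illustrative):

```python
import numpy as np

A, B = 0.9, 1.3                  # two angles of an arbitrary triangle
C = np.pi - A - B

def mccay(al, be, ga):
    # Left-hand side of the trilinear equation of the McCay cubic.
    return (al * (be**2 - ga**2) * np.cos(A)
            + be * (ga**2 - al**2) * np.cos(B)
            + ga * (al**2 - be**2) * np.cos(C))

for name, p in [("incenter",     (1.0, 1.0, 1.0)),
                ("circumcenter", (np.cos(A), np.cos(B), np.cos(C))),
                ("orthocenter",  (1/np.cos(A), 1/np.cos(B), 1/np.cos(C)))]:
    print(name, np.isclose(mccay(*p), 0.0))   # True for all three
```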
McCay cubic as a stelloid.
A stelloid is a cubic that has three real concurring asymptotes making 60° angles with one another. The McCay cubic is a stelloid in which the three asymptotes concur at the centroid of triangle ABC. A circum-stelloid having the same asymptotic directions as those of the McCay cubic and concurring at a certain (finite) point is called a McCay stelloid. The point where the asymptotes concur is called the "radial center" of the stelloid. Given a finite point X there is one and only one McCay stelloid with X as the radial center.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x:y:z"
},
{
"math_id": 1,
"text": "\\sum_{\\text{cyclic}}(a^2(b^2+c^2-a^2)x(c^2y^2-b^2z^2))=0."
},
{
"math_id": 2,
"text": "\\alpha : \\beta : \\gamma "
},
{
"math_id": 3,
"text": "\\alpha (\\beta^2 - \\gamma^2)\\cos A + \\beta (\\gamma^2 - \\alpha^2)\\cos B + \\gamma (\\alpha^2 - \\beta^2)\\cos C = 0"
}
]
| https://en.wikipedia.org/wiki?curid=69592861 |
695950 | Free cash flow | Financial accounting term
In financial accounting, free cash flow (FCF) or
free cash flow to firm (FCFF) is the amount by which a business's operating cash flow exceeds its working capital needs and expenditures on fixed assets (known as capital expenditures). It is that portion of cash flow that can be extracted from a company and distributed to creditors and securities holders without causing issues in its operations. As such, it is an indicator of a company's financial flexibility and is of interest to holders of the company's equity, debt, preferred stock and convertible securities, as well as potential lenders and investors.
Free cash flow can be calculated in various ways, depending on audience and available data. A common measure is to take the earnings before interest and taxes, add depreciation and amortization, and then subtract taxes, changes in working capital and capital expenditure. Depending on the audience, a number of refinements and adjustments may also be made to try to eliminate distortions.
Free cash flow may be different from net income, as free cash flow takes into account the purchase of capital goods and changes in working capital and excludes non-cash items.
Calculations.
Free cash flow is a non-GAAP measure of performance. As such, there are many ways to calculate free cash flow. One common method is: start from earnings before interest and taxes (EBIT), subtract taxes, add back depreciation and amortization, then subtract the change in working capital and capital expenditure.
Note that the first three of these items are calculated on the standard statement of cash flows.
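A minimal Python sketch of this common method (all figures and names are illustrative, not from any company's accounts):

```python
def free_cash_flow(ebit, tax_rate, depreciation_amortization,
                   change_in_working_capital, capex):
    # Common method: after-tax operating income plus non-cash charges,
    # less investment in working capital and fixed assets.
    return (ebit * (1 - tax_rate)
            + depreciation_amortization
            - change_in_working_capital
            - capex)

# Illustrative figures (in millions):
print(free_cash_flow(ebit=1000, tax_rate=0.25,
                     depreciation_amortization=200,
                     change_in_working_capital=50, capex=300))  # 600.0
```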
When net profit and tax rate applicable are given, you can also calculate it by taking:
where
When Profit After Tax and Debt/Equity ratio are available:
where d is the debt/equity ratio, e.g. for a 3:4 mix it will be 3/7.
Therefore,
Difference with net income.
There are two differences between net income and free cash flow. The first is the accounting for the purchase of capital goods. Net income deducts depreciation, while the free cash flow measure uses last period's net capital purchases.
The second difference is that the free cash flow measurement makes adjustments for changes in net working capital, where the net income approach does not. Typically, in a growing company with a 30-day collection period for receivables, a 30-day payment period for purchases, and a weekly payroll, the company will require more working capital to finance the labor and profit components embedded in the growing receivables balance.
When a company has negative sales growth, it is likely to lower its capital spending. Receivables, provided they are being collected in a timely manner, will also ratchet down. All this "deceleration" will show up as additions to free cash flow. However, over the long term, decelerating sales trends will eventually catch up.
The net free cash flow definition should also allow for cash available to pay off the company's short term debt. It should also take into account any dividends that the company means to pay.
Net free cash flow = Operating cash flow − Capital expenses to keep current level of operation − Dividends − Current portion of long term debt − Depreciation
Here, the capex definition should not include additional investment in new equipment. However, maintenance cost can be added.
Dividends will be the base dividend that the company intends to distribute to its shareholders.
Current portion of long term debt will be the minimum debt that the company needs to pay in order to not default.
Depreciation should be taken out since this will account for future investment for replacing the current property, plant and equipment (PPE).
If the net income category includes income from discontinued operations and extraordinary income, make sure it is not part of free cash flow.
The net of all the above gives the free cash available to be reinvested in operations without having to take on more debt.
Alternative formula.
FCF measures:
In symbols:
formula_0
where
Investment is simply the net increase (decrease) in the firm's capital, from the end of one period to the end of the next period:
formula_1
where "K""t" represents the firm's invested capital at the end of period "t". Increases in non-cash current assets may, or may not be deducted, depending on whether they are considered to be maintaining the status quo, or to be investments for growth.
Unlevered free cash flow (i.e., cash flows before interest payments) is defined as EBITDA − CAPEX − changes in net working capital − taxes. This is the generally accepted definition. If there are mandatory repayments of debt, then some analysts utilize levered free cash flow, which is the same formula above, but less interest and mandatory principal repayments. The unlevered cash flow (UFCF) is usually used as the industry norm, because it allows for easier comparison of different companies’ cash flows. It is also preferred over the levered cash flow when conducting analyses to test the impact of different capital structures on the company.
Investment bankers compute free cash flow using the following formulae:
FCFF = After tax operating income + Noncash charges (such as D&A) − CAPEX − Working capital expenditures = Free cash flow to firm (FCFF)
FCFE = Net income + Noncash charges (such as D&A) − CAPEX − Change in non-cash working capital + Net borrowing = Free cash flow to equity (FCFE)
Or simply:
FCFE = FCFF + Net borrowing − Interest*(1−t)
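A short Python sketch checking the consistency of these formulas on illustrative figures (assuming, for the comparison, the same change in non-cash working capital in both computations; all numbers are made up):

```python
def fcff(nopat, non_cash, capex, d_wc):
    # After-tax operating income + non-cash charges - CAPEX - WC expenditures.
    return nopat + non_cash - capex - d_wc

def fcfe(net_income, non_cash, capex, d_nc_wc, net_borrowing):
    # Net income + non-cash charges - CAPEX - change in non-cash WC + borrowing.
    return net_income + non_cash - capex - d_nc_wc + net_borrowing

interest, t = 80, 0.25
ebit = 1000
nopat = ebit * (1 - t)                       # after-tax operating income
net_income = (ebit - interest) * (1 - t)     # = nopat - interest*(1 - t)

f_firm = fcff(nopat, 200, 300, 50)
f_equity = fcfe(net_income, 200, 300, 50, net_borrowing=120)

# FCFE = FCFF + Net borrowing - Interest*(1 - t)
print(f_equity, f_firm + 120 - interest * (1 - t))   # both 660.0
```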
Free cash flow can be broken into its expected and unexpected components when evaluating firm performance. This is useful when valuing a firm because there are always unexpected developments in its performance. Being able to factor in unexpected cash flows provides a more complete financial model.
formula_2
Where: formula_3
Agency costs.
In a 1986 paper in the "American Economic Review", Michael Jensen noted that free cash flows allowed firms' managers to finance projects earning low returns which, therefore, might not be funded by the equity or bond markets. Examining the US oil industry, which had earned substantial free cash flows in the 1970s and the early 1980s, he wrote that:
[the] 1984 cash flows of the ten largest oil companies were $48.5 billion, 28 percent of the total cash flows of the top 200 firms in Dun's Business Month survey. Consistent with the agency costs of free cash flow, management did not pay out the excess resources to shareholders. Instead, the industry continued to spend heavily on [exploration and development] activity even though average returns were below the cost of capital.
Jensen also noted a negative correlation between exploration announcements and the market valuation of these firms—the opposite effect to research announcements in other industries.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "FCF_t = OCB_t - I_t \\,"
},
{
"math_id": 1,
"text": "I_t = K_t - K_{t-1} \\,"
},
{
"math_id": 2,
"text": "FCF_t = E(FCF_t) + U(FCF_t) \\,"
},
{
"math_id": 3,
"text": " E(FCF_t)= FCF_t-1 * (FCF_{t-1}/FCF_{t-3})^{1/2} \\,"
}
]
| https://en.wikipedia.org/wiki?curid=695950 |
6959754 | Locally connected space | Property of topological spaces
In topology and other branches of mathematics, a topological space "X" is
locally connected if every point admits a neighbourhood basis consisting of open connected sets.
As a stronger notion, the space "X" is locally path connected if every point admits a neighbourhood basis consisting of open path connected sets.
Background.
Throughout the history of topology, connectedness and compactness have been two of the most
widely studied topological properties. Indeed, the study of these properties even among subsets of Euclidean space, and the recognition of their independence from the particular form of the Euclidean metric, played a large role in clarifying the notion of a topological property and thus a topological space. However, whereas the structure of "compact" subsets of Euclidean space was understood quite early on via the Heine–Borel theorem, "connected" subsets of formula_0 (for "n" > 1) proved to be much more complicated. Indeed, while any compact Hausdorff space is locally compact, a connected space—and even a connected subset of the Euclidean plane—need not be locally connected (see below).
This led to a rich vein of research in the first half of the twentieth century, in which topologists studied the implications between increasingly subtle and complex variations on the notion of a locally connected space. As an example, the notion of local connectedness im kleinen at a point and its relation to local connectedness will be considered later on in the article.
In the latter part of the twentieth century, research trends shifted to more intense study of spaces like manifolds, which are locally well understood (being locally homeomorphic to Euclidean space) but have complicated global behavior. By this it is meant that although the basic point-set topology of manifolds is relatively simple (as manifolds are essentially metrizable according to most definitions of the concept), their algebraic topology is far more complex. From this modern perspective, the stronger property of local path connectedness turns out to be more important: for instance, in order for a space to admit a universal cover it must be connected and locally path connected.
A space is locally connected if and only if for every open set U, the connected components of U (in the subspace topology) are open. It follows, for instance, that a continuous function from a locally connected space to a totally disconnected space must be locally constant. In fact the openness of components is so natural that one must be sure to keep in mind that it is not true in general: for instance Cantor space is totally disconnected but not discrete.
Definitions.
Let formula_1 be a topological space, and let formula_2 be a point of formula_3
A space formula_1 is called locally connected at formula_2 if every neighborhood of formula_2 contains a connected "open" neighborhood of formula_2, that is, if the point formula_2 has a neighborhood base consisting of connected open sets. A locally connected space is a space that is locally connected at each of its points.
Local connectedness does not imply connectedness (consider two disjoint open intervals in formula_4 for example); and connectedness does not imply local connectedness (see the topologist's sine curve).
A space formula_1 is called locally path connected at formula_2 if every neighborhood of formula_2 contains a path connected "open" neighborhood of formula_2, that is, if the point formula_2 has a neighborhood base consisting of path connected open sets. A locally path connected space is a space that is locally path connected at each of its points.
Locally path connected spaces are locally connected. The converse does not hold (see the lexicographic order topology on the unit square).
Connectedness im kleinen.
A space formula_1 is called connected im kleinen at formula_2 or weakly locally connected at formula_2 if every neighborhood of formula_2 contains a connected neighborhood of formula_2, that is, if the point formula_2 has a neighborhood base consisting of connected sets. A space is called weakly locally connected if it is weakly locally connected at each of its points; as indicated below, this concept is in fact the same as being locally connected.
A space that is locally connected at formula_2 is connected im kleinen at formula_5 The converse does not hold, as shown for example by a certain infinite union of decreasing broom spaces that is connected im kleinen at a particular point but not locally connected at that point. However, if a space is connected im kleinen at each of its points, it is locally connected.
A space formula_1 is said to be path connected im kleinen at formula_2 if every neighborhood of formula_2 contains a path connected neighborhood of formula_2, that is, if the point formula_2 has a neighborhood base consisting of path connected sets.
A space that is locally path connected at formula_2 is path connected im kleinen at formula_5 The converse does not hold, as shown by the same infinite union of decreasing broom spaces as above. However, if a space is path connected im kleinen at each of its points, it is locally path connected.
First examples.
A first-countable Hausdorff space formula_9 is locally path-connected if and only if formula_10 is equal to the final topology on formula_1 induced by the set formula_11 of all continuous paths formula_12
Properties.
<templatestyles src="Math_theorem/styles.css" />
Components and path components.
The following result follows almost immediately from the definitions but will be quite useful:
Lemma: Let "X" be a space, and formula_17 a family of subsets of "X". Suppose that formula_18 is nonempty. Then, if each formula_19 is connected (respectively, path connected) then the union formula_20 is connected (respectively, path connected).
Now consider two relations on a topological space "X": for formula_21 write:
formula_22 if there is a connected subset of "X" containing both "x" and "y"; and
formula_23 if there is a path connected subset of "X" containing both "x" and "y".
Evidently both relations are reflexive and symmetric. Moreover, if "x" and "y" are contained in a connected (respectively, path connected) subset "A" and "y" and "z" are contained in a connected (respectively, path connected) subset "B", then the Lemma implies that formula_24 is a connected (respectively, path connected) subset containing "x", "y" and "z". Thus each relation is an equivalence relation, and defines a partition of "X" into equivalence classes. We consider these two partitions in turn.
For "x" in "X", the set formula_25 of all points "y" such that formula_26 is called the connected component of "x". The Lemma implies that formula_25 is the unique maximal connected subset of "X" containing "x". Since
the closure of formula_25 is also a connected subset containing "x", it follows that formula_25 is closed.
If "X" has only finitely many connected components, then each component is the complement of a finite union of closed sets and therefore open. In general, the connected components need not be open, since, e.g., there exist totally disconnected spaces (i.e., formula_27 for all points "x") that are not discrete, like Cantor space. However, the connected components of a locally connected space are also open, and thus are clopen sets. It follows that a locally connected space "X" is a topological disjoint union formula_28 of its distinct connected components. Conversely, if for every open subset "U" of "X", the connected components of "U" are open, then "X" admits a base of connected sets and is therefore locally connected.
Similarly "x" in "X", the set formula_29 of all points "y" such that formula_30 is called the "path component" of "x". As above, formula_29 is also the union of all path connected subsets of "X" that contain "x", so by the Lemma is itself path connected. Because path connected sets are connected, we have formula_31 for all formula_32
However the closure of a path connected set need not be path connected: for instance, the topologist's sine curve is the closure of the open subset "U" consisting of all points "(x, sin(1/x))" with "x > 0", and "U", being homeomorphic to an interval on the real line, is certainly path connected. Moreover, the path components of the topologist's sine curve "C" are "U", which is open but not closed, and formula_33 which is closed but not open.
A space is locally path connected if and only if for all open subsets "U", the path components of "U" are open. Therefore the path components of a locally path connected space give a partition of "X" into pairwise disjoint open sets. It follows that an open connected subspace of a locally path connected space is necessarily path connected. Moreover, if a space is locally path connected, then it is also locally connected, so for all formula_34 formula_25 is connected and open, hence path connected, that is, formula_35 That is, for a locally path connected space the components and path components coincide.
Quasicomponents.
Let "X" be a topological space. We define a third relation on "X": formula_44 if there is no separation of "X" into open sets "A" and "B" such that "x" is an element of "A" and "y" is an element of "B". This is an equivalence relation on "X" and the equivalence class formula_45 containing "x" is called the quasicomponent of "x".
formula_45 can also be characterized as the intersection of all clopen subsets of "X" that contain "x". Accordingly formula_45 is closed; in general it need not be open.
Evidently formula_46 for all formula_32 Overall we have the following containments among path components, components and quasicomponents at "x":
formula_47
If "X" is locally connected, then, as above, formula_25 is a clopen set containing "x", so formula_48 and thus formula_49 Since local path connectedness implies local connectedness, it follows that at all points "x" of a locally path connected space we have
formula_50
Another class of spaces for which the quasicomponents agree with the components is the class of compact Hausdorff spaces.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\R^n"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "X."
},
{
"math_id": 4,
"text": "\\R"
},
{
"math_id": 5,
"text": "x."
},
{
"math_id": 6,
"text": "S = [0,1] \\cup [2,3]"
},
{
"math_id": 7,
"text": "\\R^1"
},
{
"math_id": 8,
"text": "\\Q"
},
{
"math_id": 9,
"text": "(X, \\tau)"
},
{
"math_id": 10,
"text": "\\tau"
},
{
"math_id": 11,
"text": "C([0, 1]; X)"
},
{
"math_id": 12,
"text": "[0, 1] \\to (X, \\tau)."
},
{
"math_id": 13,
"text": "\\coprod_i X_i"
},
{
"math_id": 14,
"text": "\\{X_i\\}"
},
{
"math_id": 15,
"text": "X_i"
},
{
"math_id": 16,
"text": "\\prod_i X_i"
},
{
"math_id": 17,
"text": "\\{Y_i\\}"
},
{
"math_id": 18,
"text": " \\bigcap_i Y_i "
},
{
"math_id": 19,
"text": "Y_i"
},
{
"math_id": 20,
"text": "\\bigcup_i Y_i"
},
{
"math_id": 21,
"text": "x,y \\in X,"
},
{
"math_id": 22,
"text": "x \\equiv_c y"
},
{
"math_id": 23,
"text": " x \\equiv_{pc} y "
},
{
"math_id": 24,
"text": "A \\cup B"
},
{
"math_id": 25,
"text": "C_x"
},
{
"math_id": 26,
"text": "y \\equiv_c x"
},
{
"math_id": 27,
"text": "C_x = \\{x\\}"
},
{
"math_id": 28,
"text": "\\coprod C_x"
},
{
"math_id": 29,
"text": "PC_x"
},
{
"math_id": 30,
"text": "y \\equiv_{pc} x"
},
{
"math_id": 31,
"text": "PC_x \\subseteq C_x"
},
{
"math_id": 32,
"text": "x \\in X."
},
{
"math_id": 33,
"text": "C \\setminus U,"
},
{
"math_id": 34,
"text": "x \\in X,"
},
{
"math_id": 35,
"text": "C_x = PC_x."
},
{
"math_id": 36,
"text": "I \\times I"
},
{
"math_id": 37,
"text": "I = [0, 1]"
},
{
"math_id": 38,
"text": "\\{a\\} \\times I"
},
{
"math_id": 39,
"text": "f : \\R \\to \\R_{\\ell}"
},
{
"math_id": 40,
"text": "\\R_{\\ell}"
},
{
"math_id": 41,
"text": "f"
},
{
"math_id": 42,
"text": "\\R_{\\ell}/"
},
{
"math_id": 43,
"text": "\\R_{\\ell},"
},
{
"math_id": 44,
"text": "x \\equiv_{qc} y"
},
{
"math_id": 45,
"text": "QC_x"
},
{
"math_id": 46,
"text": "C_x \\subseteq QC_x"
},
{
"math_id": 47,
"text": "PC_x \\subseteq C_x \\subseteq QC_x."
},
{
"math_id": 48,
"text": "QC_x \\subseteq C_x"
},
{
"math_id": 49,
"text": "QC_x = C_x."
},
{
"math_id": 50,
"text": "PC_x = C_x = QC_x."
},
{
"math_id": 51,
"text": "(\\{0\\}\\cup\\{\\frac{1}{n} : n \\in \\Z^+\\}) \\times [-1,1] \\setminus \\{(0,0)\\}"
},
{
"math_id": 52,
"text": "\\{0\\} \\times [-1,0)"
},
{
"math_id": 53,
"text": "\\{0\\} \\times (0,1]"
},
{
"math_id": 54,
"text": "QC_x = C_x = \\{x\\}"
}
]
| https://en.wikipedia.org/wiki?curid=6959754 |
69604129 | 1 Samuel 18 | First Book of Samuel chapter
1 Samuel 18 is the eighteenth chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains David's interaction with Saul and his children, in particular Jonathan and Michal. This is within a section comprising 1 Samuel 16 to 2 Samuel 5 which records the rise of David as the king of Israel.
Text.
This chapter was originally written in the Hebrew language. It is divided into 30 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 1Q7 (1QSam; 50 BCE) with extant verses 17–18 and 4Q51 (4QSama; 100–50 BCE) with extant verses 4–5.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
Verses 1–5 of this chapter are a fitting conclusion to the account in the previous chapter 17, as David was retained in the court (verse 2), elevated for military actions (verse 5), and won general acclaim from the common people and the courtiers. On top of that, Jonathan, Saul's oldest son, was attached to David in covenantal friendship, confirmed by Jonathan's handing over his clothes and armor to David (verse 4), symbolically transferring to David the right of succession and making David heir-apparent. On the other hand, Saul became jealous of David, and their relationship developed into one of 'respect and hatred, recognition and desire to kill', a mixed attitude which was especially triggered when Saul heard the couplet (verse 7) giving the clear message that David would become king. Saul feared David (verses 12, 15, 29) because Saul acknowledged that God was with David, whereas God had abandoned Saul (verse 12). From then on, a prominent theme appears in which Saul was thwarted in all his plans to hurt David, while for David each attempt became an opportunity to further his triumph (verses 14, 30).
Saul fears David (18:1–16).
The last chapter ends with David talking to Saul and Abner, whereas in the beginning of this chapter it was clear that Jonathan, Saul's crown prince, was also present at the event and once he had a chance to talk to David, he immediately befriended David. Jonathan loved David (verse 1), similar to how Saul, his father, had loved David (1 Samuel 16:21), and the experiences of fighting the Philistines against great odds led to a revelation that Jonathan and David shared a kindred spirit.
With the victory against Goliath, David was now seen as a brave man whom Saul wanted to retain in his service (1 Samuel 14:52), and David proved himself so worthy in the subsequent battles that the women who sang to celebrate great victories (cf. Exodus 15:20; Judges 11:34) ascribed a higher number of kills to David than to Saul. Saul interpreted the supposedly 'non-partisan victory song' in the worst possible sense and became suspicious that David would take over his throne (verse 9). The next day, Saul was so tormented by the 'evil spirit of God' that he twice attempted to pin David to the wall with his spear, but David, who was playing music for Saul, managed to escape both times. Next, Saul moved David from his position as the king's musician to be a commander of a thousand men and ordered him to face the Philistines, hoping that David would be killed by the enemies. But this backfired when David achieved great successes in the battles and all Israel began to love him (18:16).
"Then Jonathan and David made a covenant, because he loved him as his own soul."
David marries Michal (18:17–30).
Saul's fear of David increased and affected his integrity as king: he took back his promise to give his first daughter, Merab, as David's wife, only to offer David his second daughter with additional conditions in order to get David killed by the Philistines (verse 25). David responded by saying that he was a 'poor man', likely an allusion to another broken promise of Saul that the killer of Goliath would get riches from the king (David confirmed the reward promise multiple times with different people; cf. 1 Samuel 17:25, 27, 30). Saul misinterpreted David's response as a concern about being unable to pay the dowry for the marriage, so he announced a bride-price of a hundred Philistine foreskins. David decided to accept the challenge, but, perhaps due to Saul's 'double-dealing ways', David presented double the number of foreskins and had the "full numbers" counted before Saul (verse 27), so Saul had to keep his word and give Michal, his daughter, to David as wife. Having David as his son-in-law made Saul fear David even more, whereas it tremendously increased David's fame (verses 29–30).
"And Michal Saul's daughter loved David: and they told Saul, and the thing pleased him."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
Commentaries on Samuel.
<templatestyles src="Refbegin/styles.css" />
General.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=69604129 |
69605663 | Zeldovich–Taylor flow | Fluid motion of gaseous detonation products
Zeldovich–Taylor flow (also known as Zeldovich–Taylor expansion wave) is the fluid motion of gaseous detonation products behind a Chapman–Jouguet detonation wave. The flow was described independently by Yakov Zeldovich in 1942 and G. I. Taylor in 1950, although G. I. Taylor carried out the work in 1941, which was circulated within the British Ministry of Home Security. Since naturally occurring detonation waves are in general Chapman–Jouguet detonation waves, the solution is very useful in describing real-life detonation waves.
Mathematical description.
Consider a spherically outgoing Chapman–Jouguet detonation wave propagating with a constant velocity formula_0. By definition, immediately behind the detonation wave, the gas velocity is equal to the local sound speed formula_1 with respect to the wave. Let formula_2 be the radial velocity of the gas behind the wave, in a fixed frame. The detonation is ignited at formula_3 at formula_4. For formula_5, the gas velocity must be zero at the center formula_4 and should take the value formula_6 at the detonation location formula_7. The fluid motion is governed by the inviscid Euler equations
formula_8
where formula_9 is the density, formula_10 is the pressure and formula_11 is the entropy. The last equation implies that the flow is isentropic and hence we can write formula_12.
Since there are no length or time scales involved in the problem, one may look for a self-similar solution of the form formula_13, where formula_14. The first two equations then become
formula_15
where prime denotes differentiation with respect to formula_16. We can eliminate formula_17 between the two equations to obtain an equation that contains only formula_18 and formula_1. Because of the isentropic condition, we can express formula_19, that is to say, we can replace formula_20 with formula_21. This leads to
formula_22
For polytropic gases with constant specific heats, we have formula_23. The above set of equations cannot be solved analytically, but has to be integrated numerically. The solution has to be found for the range formula_24 subject to the condition formula_25 at formula_26
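As a rough numerical sketch (not from the original papers), the system can be integrated with formula_18 as the independent variable, which avoids the singular slope at the front; the strong Chapman–Jouguet jump values v(D) = D/(γ+1) and c(D) = γD/(γ+1) are assumed here for the boundary, and the choice of γ and D is illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma = 1.4    # specific-heat ratio (assumed)
D = 1.0        # detonation speed; sets the velocity scale

# Strong Chapman-Jouguet values just behind the front (xi = D),
# chosen so that xi - v = c there.
v1 = D / (gamma + 1.0)
c1 = gamma * D / (gamma + 1.0)

def rhs(v, y):
    xi, c = y
    # invert [((xi - v)^2 / c^2) - 1] dv/dxi = 2 v / xi:
    dxi_dv = xi * ((xi - v) ** 2 / c ** 2 - 1.0) / (2.0 * v)
    # isentropic relation (xi - v) * 2 c' / ((gamma - 1) c) = v' + 2 v / xi, in v-form:
    dc_dv = (gamma - 1.0) * c / (2.0 * (xi - v)) * (1.0 + (2.0 * v / xi) * dxi_dv)
    return [dxi_dv, dc_dv]

# integrate from the front (v = v1, xi = D, c = c1) down towards v -> 0
sol = solve_ivp(rhs, [v1, 1e-4 * v1], [D, c1], rtol=1e-8, atol=1e-10)
print(f"weak discontinuity near xi = {sol.y[0, -1]:.4f}")
```

The final value of ξ approximates formula_38, the sound speed of the gas at rest and the location of the trailing weak discontinuity discussed below.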
The function formula_27 is found to decrease monotonically from its value formula_28 to zero at a finite value of formula_29, where a weak discontinuity (that is, the function is continuous, but its derivatives may not be) exists. The region between the detonation front and the trailing weak discontinuity is the rarefaction (or expansion) flow. Interior to the weak discontinuity, formula_30 everywhere.
Location of the weak discontinuity (Mach wave).
From the second equation described above, it follows that when formula_30, formula_31. More precisely, as formula_32, that equation can be approximated as
formula_33
As formula_32, formula_34, and formula_35 if formula_16 decreases as formula_32. The left-hand side of the above equation can become positive infinity only if formula_36. Thus, when formula_16 decreases to the value formula_37, the gas comes to rest (here formula_38 is the sound speed corresponding to formula_30). The rarefaction motion therefore occurs for formula_39, and there is no fluid motion for formula_40.
Behavior near the weak discontinuity.
Rewrite the second equation as
formula_41
In the neighborhood of the weak discontinuity, keeping quantities to first order (such as formula_42) reduces the above equation to
formula_43
At this point, it is worth mentioning that in general, disturbances in gases are propagated with respect to the gas at the local sound speed. In other words, in the fixed frame, the disturbances are propagated at the speed formula_44 (the other possibility is formula_45, although it is of no interest here). If the gas is at rest (formula_30), then the disturbance speed is formula_38. This is just normal sound-wave propagation. If, however, formula_18 is non-zero but small, then one finds the correction to the disturbance propagation speed as formula_46, obtained using a Taylor series expansion, where formula_47 is the Landau derivative (for an ideal gas, formula_48, where formula_49 is the specific heat ratio). This means that the above equation can be written as
formula_50
whose solution is
formula_51
where formula_52 is a constant. This determines formula_27 implicitly in the neighborhood of the weak discontinuity where formula_18 is small. This equation shows that at formula_37, formula_30, formula_53, but all higher-order derivatives are discontinuous. In the above equation, subtract formula_54 from the left-hand side and formula_55 from the right-hand side to obtain
formula_56
which implies that formula_57 if formula_18 is a small quantity. It can be shown that the relation formula_57 holds not only for small formula_18, but throughout the rarefaction wave.
Behavior near the detonation front.
First let us show that the relation formula_57 is valid not only near the weak discontinuity, but throughout the region. If this inequality were not maintained, there would have to be a point where formula_58 somewhere between the weak discontinuity and the detonation front. The second governing equation implies that at this point formula_59 must be infinite or, equivalently, formula_60. Let us obtain formula_61 by taking the second derivative of the governing equation. In the resulting equation, imposing the condition formula_62 gives formula_63. This implies that formula_64 reaches a maximum at this point, which in turn implies that formula_27 cannot exist for formula_16 greater than the maximum point considered, since otherwise formula_27 would be multi-valued. The maximum point can at most correspond to the outer boundary (the detonation front). This means that formula_65 can vanish only on the boundary; since it has already been shown that formula_65 is positive near the weak discontinuity, formula_65 is positive everywhere in the region except at the boundaries, where it can vanish.
Note that near the detonation front, we must satisfy the condition formula_58. The value of the function formula_67 evaluated at formula_66, i.e., formula_68, is nothing but the velocity of the detonation front with respect to the gas velocity behind it. For a detonation front, the condition formula_69 must always be met, with the equality sign representing Chapman–Jouguet detonations and the inequality representing over-driven detonations. Hence the point where formula_58 must correspond to the detonation front.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "D"
},
{
"math_id": 1,
"text": "c"
},
{
"math_id": 2,
"text": "v(r,t)"
},
{
"math_id": 3,
"text": "t=0"
},
{
"math_id": 4,
"text": "r=0"
},
{
"math_id": 5,
"text": "t>0"
},
{
"math_id": 6,
"text": "v=D-c"
},
{
"math_id": 7,
"text": "r=Dt"
},
{
"math_id": 8,
"text": "\n\\begin{align}\n\\frac{\\partial \\rho}{\\partial t} + v\\frac{\\partial \\rho}{\\partial r} &= - \\rho\\left(\\frac{\\partial v}{\\partial r} + \\frac{2v}{r}\\right),\\\\\n\\frac{\\partial v}{\\partial t} + v \\frac{\\partial v}{\\partial r} &= - \\frac{1}{\\rho}\\frac{\\partial p}{\\partial r},\\\\\n\\frac{\\partial s}{\\partial t} + v \\frac{\\partial s}{\\partial r} &= 0\n\\end{align}\n"
},
{
"math_id": 9,
"text": "\\rho"
},
{
"math_id": 10,
"text": "p"
},
{
"math_id": 11,
"text": "s"
},
{
"math_id": 12,
"text": "c^2=d p/d \\rho"
},
{
"math_id": 13,
"text": "v(r,t)=v(\\xi), p(r,t) = p(\\xi),\\, \\rho(r,t) = \\rho(\\xi),\\,c(r,t) = c(\\xi)"
},
{
"math_id": 14,
"text": "\\xi=r/t"
},
{
"math_id": 15,
"text": "\n\\begin{align}\n(\\xi-v)\\rho'/\\rho &= v' + 2v/\\xi,\\\\\n(\\xi-v) v' &= p'/\\rho = c^2 \\rho'/\\rho\n\\end{align}\n"
},
{
"math_id": 16,
"text": "\\xi"
},
{
"math_id": 17,
"text": "\\rho'/\\rho"
},
{
"math_id": 18,
"text": "v"
},
{
"math_id": 19,
"text": "\\rho = \\rho(c), \\, p=p(c)"
},
{
"math_id": 20,
"text": "\\rho^{-1}d\\rho/dx"
},
{
"math_id": 21,
"text": "\\rho^{-1}c'd\\rho/dc"
},
{
"math_id": 22,
"text": "\\begin{align}\n(\\xi-v)\\frac{1}{\\rho}\\frac{d\\rho}{dc} c' &= v' + 2v/\\xi,\\\\\n\\left[\\frac{(\\xi-v)^2}{c^2}-1\\right]v' &= \\frac{2v}{\\xi}.\n\\end{align}\n"
},
{
"math_id": 23,
"text": "\\rho^{-1}d\\rho/dc = 2/[(\\gamma-1)c]"
},
{
"math_id": 24,
"text": "0\\leq \\xi \\leq D"
},
{
"math_id": 25,
"text": "\\xi-v=c"
},
{
"math_id": 26,
"text": "\\xi=D."
},
{
"math_id": 27,
"text": "v(\\xi)"
},
{
"math_id": 28,
"text": "v(D) = c(D)-D"
},
{
"math_id": 29,
"text": "\\xi<D"
},
{
"math_id": 30,
"text": "v=0"
},
{
"math_id": 31,
"text": "\\xi=c"
},
{
"math_id": 32,
"text": "v\\rightarrow 0"
},
{
"math_id": 33,
"text": "(\\ln v)' = 2c^2/[\\xi(\\xi^2-c^2)]."
},
{
"math_id": 34,
"text": "\\ln v\\rightarrow -\\infty"
},
{
"math_id": 35,
"text": "(\\ln v)'\\rightarrow \\infty"
},
{
"math_id": 36,
"text": "\\xi\\rightarrow c"
},
{
"math_id": 37,
"text": "\\xi=c_0"
},
{
"math_id": 38,
"text": "c_0"
},
{
"math_id": 39,
"text": "c_0<\\xi\\leq D"
},
{
"math_id": 40,
"text": "0\\leq \\xi \\leq c_0"
},
{
"math_id": 41,
"text": "v\\frac{d\\xi}{dv} = \\frac{1}{2}\\xi\\left[\\frac{(\\xi-v)^2}{c^2}-1\\right]."
},
{
"math_id": 42,
"text": "v,\\,\\xi-c_0,\\,c-c_0"
},
{
"math_id": 43,
"text": "v\\frac{d}{dv}(\\xi-c_0) = (\\xi-c_0) - (v+c-c_0)."
},
{
"math_id": 44,
"text": "v+c"
},
{
"math_id": 45,
"text": "v-c"
},
{
"math_id": 46,
"text": "v+c=c_0 + \\alpha_0 v"
},
{
"math_id": 47,
"text": "\\alpha_0"
},
{
"math_id": 48,
"text": "\\alpha_0=(\\gamma+1)/2"
},
{
"math_id": 49,
"text": "\\gamma"
},
{
"math_id": 50,
"text": "v\\frac{d}{dv}(\\xi-c_0) - (\\xi-c_0) = \\alpha_0 v"
},
{
"math_id": 51,
"text": "\\xi-c_0 = \\alpha_0 v\\ln (A/v)"
},
{
"math_id": 52,
"text": "A"
},
{
"math_id": 53,
"text": "dv/d\\xi=0"
},
{
"math_id": 54,
"text": "v+c-c_0"
},
{
"math_id": 55,
"text": "\\alpha_0v"
},
{
"math_id": 56,
"text": "\\xi-v-c = (\\xi-c_0)-(v+c-c_0)=\\alpha_0 v[\\ln(A/v)-1]"
},
{
"math_id": 57,
"text": "\\xi-v>c"
},
{
"math_id": 58,
"text": "\\xi-v=c,\\, v\\neq 0"
},
{
"math_id": 59,
"text": "v'"
},
{
"math_id": 60,
"text": "d\\xi/dv=0"
},
{
"math_id": 61,
"text": "d^2\\xi/dv^2"
},
{
"math_id": 62,
"text": "\\xi-v=c,\\,v\\neq 0,\\, d\\xi/dv=0"
},
{
"math_id": 63,
"text": "d^2\\xi/dv^2 = -\\alpha_0 \\xi/c_0v\\neq 0"
},
{
"math_id": 64,
"text": "\\xi(v)"
},
{
"math_id": 65,
"text": "\\xi-v-c"
},
{
"math_id": 66,
"text": "\\xi=D"
},
{
"math_id": 67,
"text": "\\xi-v"
},
{
"math_id": 68,
"text": "D-v(D)"
},
{
"math_id": 69,
"text": "D-v(D)\\leq c(D)"
}
]
| https://en.wikipedia.org/wiki?curid=69605663 |
69607043 | Guderley–Landau–Stanyukovich problem | Guderley–Landau–Stanyukovich problem describes the time evolution of converging shock waves. The problem was discussed by G. Guderley in 1942 and independently by Lev Landau and K. P. Stanyukovich in 1944; the latter authors' analysis was published in 1955.
Mathematical description.
Consider a spherically converging shock wave that was initiated by some means at a radial location formula_0 and directed towards the center. As the shock wave travels towards the origin, its strength increases, since the shock wave compresses less and less mass as it propagates. The shock wave location formula_1 thus varies with time. The self-similar solution to be described corresponds to the region formula_2, that is to say, the region where the shock wave has travelled far enough to have forgotten the initial condition.
Since the shock wave in the self-similar region is strong, the pressure behind the wave formula_3 is very large in comparison with the pressure ahead of the wave formula_4. According to Rankine–Hugoniot conditions, for strong waves, although formula_5, formula_6, where formula_7 represents gas density; in other words, the density jump across the shock wave is finite. For the analysis, one can thus assume formula_8 and formula_9, which in turn removes the velocity scale by setting formula_10 since formula_11.
At this point, it is worth noting that the analogous problem, in which a strong shock wave propagates outwards, is known to be described by the Taylor–von Neumann–Sedov blast wave. The description of the Taylor–von Neumann–Sedov blast wave utilizes formula_12 and the total energy content of the flow to develop a self-similar solution. Unlike that problem, the imploding shock wave is not self-similar throughout the entire region (the flow field near formula_0 depends on the manner in which the shock wave is generated), and thus the Guderley–Landau–Stanyukovich problem attempts to describe, in a self-similar manner, the flow field only for formula_2; in this self-similar region, energy is not constant and, in fact, will be shown to decrease with time (the total energy of the entire region is still constant). Since the self-similar region is small in comparison with the initial size of the shock wave region, only a small fraction of the total energy is accumulated in the self-similar region. The problem thus contains no length scale from which to determine the self-similar description by dimensional arguments; i.e., the dependence of formula_13 on formula_14 cannot be determined by dimensional arguments alone. Problems of this kind are described by self-similar solutions of the second kind.
For convenience, measure the time formula_14 such that the converging shock wave reaches the origin at time formula_15. For formula_16, the converging shock approaches the origin and for formula_17, the reflected shock wave emerges from the origin. The location of shock wave formula_1 is assumed to be described by the function
formula_18
where formula_19 is the similarity index and formula_20 is a constant. The reflected shock emerges with the same similarity index. The value of formula_19 is determined from the condition that a self-similar solution exists, whereas the constant formula_20 cannot be determined from the self-similar analysis; the constant formula_21 contains information from the region formula_22 and therefore can be determined only when the entire flow region is solved. The dimension of formula_20 will be found only after solving for formula_19. For the Taylor–von Neumann–Sedov blast wave, dimensional arguments can be used to obtain formula_23
The shock-wave velocity is given by
formula_24
According to Rankine–Hugoniot conditions the gas velocity formula_25, pressure formula_3 and density formula_26 immediately behind the strong shock front, for an ideal gas are given by
formula_27
These will serve as the boundary conditions for the flow behind the shock front.
Self-similar solution.
The governing equations are
formula_28
where formula_7 is the density, formula_29 is the pressure, formula_30 is the entropy and formula_31 is the radial velocity. In place of the pressure formula_32, we can use the sound speed formula_33 using the relation formula_34.
To obtain the self-similar equations, we introduce
formula_35
Note that since both formula_14 and formula_31 are negative, formula_36. Formally the solution has to be found for the range formula_37. The boundary conditions at formula_38 are given by
formula_39
The boundary conditions at formula_40 can be derived from the observation that at the time of collapse formula_15, formula_41 becomes infinite. At the moment of collapse, the flow variables at any distance from the origin must be finite, that is to say, formula_31 and formula_42 must be finite for formula_43. This is possible only if
formula_44
Substituting the self-similar variables into the governing equations lead to
formula_45
From here, we can easily solve for formula_46 and formula_47 (or formula_48) to find two equations. As a third equation, we can combine two of the equations by eliminating the variable formula_41. The resultant equations are
formula_49
where formula_50 and formula_51. It can easily be seen that once the third equation is solved for formula_52, the first two equations can be integrated using simple quadratures.
The third equation is a first-order differential equation for the function formula_54 with the boundary condition formula_55 pertaining to the condition behind the shock front. But there is another boundary condition that needs to be satisfied, i.e., formula_56, pertaining to the condition found at formula_40. This additional condition cannot be satisfied for an arbitrary value of formula_19; there exists only one value of formula_19 for which it can be satisfied. Thus formula_19 is obtained as an eigenvalue. This eigenvalue can be obtained numerically.
The condition that determines formula_19 can be explained by plotting the integral curve formula_52, shown in the figure as a solid curve. The point formula_20 is the initial condition for the differential equation, i.e., formula_57. The integral curve must end at the point formula_58. In the same figure, the parabola formula_59, corresponding to the condition formula_60, is also plotted as a dotted curve. It can easily be shown that the point formula_20 always lies above this parabola. This means that the integral curve formula_52 must intersect the parabola to reach the point formula_61. In all three differential equations, the ratio formula_62 appears, implying that this ratio vanishes at the point formula_63 where the integral curve intersects the parabola. The physical requirement on the functions formula_64 and formula_65 is that they must be single-valued functions of formula_41 in order to obtain a unique solution. This means that the functions formula_66 and formula_67 cannot have extrema anywhere inside the domain. But at the point formula_63, formula_62 can vanish, indicating that the aforementioned functions have extrema there. The only way to avoid this situation is to make the ratio formula_62 finite at formula_63. That is to say, as formula_68 becomes zero, we require formula_69 also to become zero in such a manner that formula_70. At formula_63,
formula_71
Numerical integrations of the third equation provide formula_72 for formula_73 and formula_74 for formula_53. These values for formula_19 may be compared with an approximate formula formula_75, derived by Landau and Stanyukovich. It can be established that as formula_76, formula_77. In general, the similarity index formula_19 is an irrational number.
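As a quick numerical check (a sketch; the function name is illustrative), the approximate formula can be evaluated and compared against the numerically obtained eigenvalues quoted above:

```python
from math import sqrt

def alpha_approx(gamma):
    """Landau-Stanyukovich approximation:
    alpha = 1 / (1 + 2*gamma / (sqrt(gamma) + sqrt(2))**2)."""
    return 1.0 / (1.0 + 2.0 * gamma / (sqrt(gamma) + sqrt(2.0)) ** 2)

for gamma, exact in [(5.0 / 3.0, 0.6883740859), (7.0 / 5.0, 0.7171745015)]:
    print(f"gamma = {gamma:.4f}: approximate = {alpha_approx(gamma):.6f}, "
          f"numerical = {exact}")
```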
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "r=R_0"
},
{
"math_id": 1,
"text": "r=R(t)"
},
{
"math_id": 2,
"text": "r\\sim R\\ll R_0"
},
{
"math_id": 3,
"text": "p_1"
},
{
"math_id": 4,
"text": "p_0"
},
{
"math_id": 5,
"text": "p_1\\gg p_0"
},
{
"math_id": 6,
"text": "\\rho_1\\sim \\rho_0"
},
{
"math_id": 7,
"text": "\\rho"
},
{
"math_id": 8,
"text": "p_0=0"
},
{
"math_id": 9,
"text": "\\rho_0\\neq 0"
},
{
"math_id": 10,
"text": "c_0=0"
},
{
"math_id": 11,
"text": "c_0^2=\\gamma p_0/\\rho_0"
},
{
"math_id": 12,
"text": "\\rho_0"
},
{
"math_id": 13,
"text": "R(t)"
},
{
"math_id": 14,
"text": "t"
},
{
"math_id": 15,
"text": "t=0"
},
{
"math_id": 16,
"text": "t<0"
},
{
"math_id": 17,
"text": "t>0"
},
{
"math_id": 18,
"text": "R(t) = A (-t)^\\alpha"
},
{
"math_id": 19,
"text": "\\alpha"
},
{
"math_id": 20,
"text": "A"
},
{
"math_id": 21,
"text": " A"
},
{
"math_id": 22,
"text": "r\\sim R_0"
},
{
"math_id": 23,
"text": "\\alpha=2/5."
},
{
"math_id": 24,
"text": "D = \\frac{dR}{dt} = -\\alpha A (-t)^{\\alpha-1}= \\frac{\\alpha R}{t}."
},
{
"math_id": 25,
"text": "v_1"
},
{
"math_id": 26,
"text": "\\rho_1"
},
{
"math_id": 27,
"text": "v_1 = \\frac{2}{\\gamma+1}D, \\quad p_1 = \\frac{2}{\\gamma+1}\\rho_0 D^2, \\quad \\rho_1= \\rho_0 \\frac{\\gamma+1}{\\gamma-1}."
},
{
"math_id": 28,
"text": "\n\\begin{align}\n\\frac{\\partial \\rho}{\\partial t} + v \\frac{\\partial \\rho}{\\partial r} &= - \\rho\\left(\\frac{\\partial v}{\\partial r} + \\frac{2v}{r}\\right),\\\\\n\\frac{\\partial v}{\\partial t} + v \\frac{\\partial v}{\\partial r} &= - \\frac{1}{\\rho}\\frac{\\partial p}{\\partial r},\\\\\n\\frac{\\partial s}{\\partial t} + v \\frac{\\partial s}{\\partial r} &= 0\n\\end{align}\n"
},
{
"math_id": 29,
"text": "p"
},
{
"math_id": 30,
"text": "s"
},
{
"math_id": 31,
"text": "v"
},
{
"math_id": 32,
"text": "p(r,t)"
},
{
"math_id": 33,
"text": "c(r,t)"
},
{
"math_id": 34,
"text": "c^2 = \\gamma p/\\rho"
},
{
"math_id": 35,
"text": "\\xi = \\frac{r}{R(t)},\\quad V(\\xi) = \\frac{vt}{\\alpha r}, \\quad G(\\xi) = \\frac{\\rho}{\\rho_0}, \\quad Z(\\xi) = \\frac{c^2 t^2}{\\alpha^2 r^2}."
},
{
"math_id": 36,
"text": "V>0"
},
{
"math_id": 37,
"text": "1<\\xi<\\infty"
},
{
"math_id": 38,
"text": "\\xi=1"
},
{
"math_id": 39,
"text": "V(1) = \\frac{2}{\\gamma+1}, \\quad G(1)=\\frac{\\gamma+1}{\\gamma-1}, \\quad Z(1) = \\frac{2\\gamma(\\gamma-1)}{(\\gamma+1)^2}."
},
{
"math_id": 40,
"text": "\\xi=\\infty"
},
{
"math_id": 41,
"text": "\\xi"
},
{
"math_id": 42,
"text": "c^2"
},
{
"math_id": 43,
"text": "t=0,\\,r\\neq 0"
},
{
"math_id": 44,
"text": "V(\\infty)=0, \\quad Z(\\infty) = 0."
},
{
"math_id": 45,
"text": "\\begin{align}\n(1-V) \\frac{dV}{d\\ln\\xi} - \\frac{Z}{\\gamma}\\frac{d\\ln G}{d\\ln\\xi} - \\frac{1}{\\gamma}\\frac{dZ}{d\\ln\\xi} &= \\frac{2}{\\gamma}Z- V\\left(\\frac{1}{\\alpha}-V\\right),\\\\\n\\frac{dV}{d\\ln \\xi} - (1-V) \\frac{d\\ln G}{d\\ln\\xi} & = -3V,\\\\\n(\\gamma-1) Z \\frac{d\\ln G}{d\\ln \\xi} - \\frac{dZ}{d\\ln\\xi} &= \\frac{2Z}{1-V}\\left(\\frac{1}{\\alpha}-V\\right).\n\\end{align}\n"
},
{
"math_id": 46,
"text": "d\\ln G/d\\ln\\xi"
},
{
"math_id": 47,
"text": "d\\ln V/d\\ln\\xi"
},
{
"math_id": 48,
"text": "d\\ln Z/d\\ln\\xi"
},
{
"math_id": 49,
"text": "\\begin{align}\n\\frac{d\\ln\\xi}{dV} &= - \\frac{\\Delta}{\\Delta_1},\\\\\n(1-V)\\frac{d\\ln G}{d\\ln\\xi} &= 3V - \\frac{\\Delta_1}{\\Delta},\\\\\n\\frac{dZ}{dV} &= \\frac{Z}{1-V} \\left\\{\\frac{\\Delta[2/\\alpha-(3\\gamma-1) V]}{\\Delta_1} + \\gamma-1\\right\\}\n\\end{align}\n"
},
{
"math_id": 50,
"text": "\\Delta=Z-(1-V)^2"
},
{
"math_id": 51,
"text": "\\Delta_1=[3V-2(1-\\alpha)/\\alpha\\gamma]Z - V(1-V)(1/\\alpha-V)"
},
{
"math_id": 52,
"text": "Z=Z(V)"
},
{
"math_id": 53,
"text": "\\gamma=7/5"
},
{
"math_id": 54,
"text": "Z(V)"
},
{
"math_id": 55,
"text": "Z(2/(\\gamma+1))=2\\gamma(\\gamma-1)/(\\gamma+1)^2"
},
{
"math_id": 56,
"text": "Z(0)=0"
},
{
"math_id": 57,
"text": "A:(V,Z)=(2/(\\gamma+1),2\\gamma(\\gamma-1)/(\\gamma+1)^2)"
},
{
"math_id": 58,
"text": "O:(V,Z)=(0,0)"
},
{
"math_id": 59,
"text": "Z=(1-V)^2"
},
{
"math_id": 60,
"text": "\\Delta=0"
},
{
"math_id": 61,
"text": "O"
},
{
"math_id": 62,
"text": "\\Delta/\\Delta_1"
},
{
"math_id": 63,
"text": "B"
},
{
"math_id": 64,
"text": "V,\\,G"
},
{
"math_id": 65,
"text": "Z"
},
{
"math_id": 66,
"text": "\\xi(V),\\,\\xi(G)"
},
{
"math_id": 67,
"text": "\\xi(Z)"
},
{
"math_id": 68,
"text": "\\Delta"
},
{
"math_id": 69,
"text": "\\Delta_1"
},
{
"math_id": 70,
"text": "\\Delta/\\Delta_1=0/0=\\text{finite}"
},
{
"math_id": 71,
"text": "Z=(1-V)^2, \\quad (3V-2(1-\\alpha)/\\alpha\\gamma)Z = V(1-V)(1/\\alpha-V)."
},
{
"math_id": 72,
"text": "\\alpha=0.6883740859"
},
{
"math_id": 73,
"text": "\\gamma=5/3"
},
{
"math_id": 74,
"text": "\\alpha=0.7171745015"
},
{
"math_id": 75,
"text": "\\alpha = [1+2\\gamma/(\\sqrt{\\gamma}+\\sqrt{2})^2]^{-1}"
},
{
"math_id": 76,
"text": "\\gamma\\rightarrow 1"
},
{
"math_id": 77,
"text": "\\alpha\\rightarrow 1"
}
]
| https://en.wikipedia.org/wiki?curid=69607043 |
69609068 | Hermann Nicolai | German physicist
Hermann Nicolai (born 11 July 1952 in Friedberg) is a German theoretical physicist and director emeritus at the Max Planck Institute for Gravitational Physics in Potsdam-Golm.
Education and career.
At Karlsruhe Institute of Technology, Hermann Nicolai studied physics and mathematics beginning in 1971, receiving a "Diplom" in 1975 and a doctorate in 1978 under the supervision of Julius Wess. At Heidelberg University, Nicolai was from 1978 to 1979 an assistant in theoretical physics. From 1979 to 1986, he worked at CERN in Geneva as a staff member in the theory department. In 1983 he received his habilitation at Heidelberg University. He was a professor (with civil service grade C3) of theoretical physics at Karlsruhe Institute of Technology from 1986 to 1988 and from 1988 to 1997 a professor (with civil service grade C4) of theoretical physics at the University of Hamburg. At the Max Planck Institute for Gravitational Physics, Nicolai was head of the department "Quantum Gravity and Unified Field Theories" and a director from 1997 to 2020, when he retired as director emeritus.
He was a member of the editorial board of "Communications in Mathematical Physics" from 1993 to 1995. He was then, from 1998 to 2003, the editor-in-chief of the journal "Classical and Quantum Gravity", and from 2006 to 2011 the editor-in-chief of the journal "General Relativity and Gravitation".
In 1991, Nicolai received the Otto-Klung-Award (now called the Klung Wilhelmy Science Award), in 2010 the Albert Einstein Medal, and in 2013 the Gay-Lussac-Humboldt Prize. He was appointed an honorary professor at the Humboldt University of Berlin and in 2005 at the University of Hannover.
Research.
In the mid 1980s, Nicolai and Bernard de Wit developed the ""N" = 8 supergravity theory", which arises from the dimensional reduction of the maximally supersymmetric eleven-dimensional supergravity to four space-time dimensions ("d" = 4) and which, from many plausible viewpoints, is the maximally supersymmetric supergravity theory with a graviton and no particle with a spin greater than 2.
In the 2000s, Nicolai and colleagues investigated the behavior of the gravitational equations close to a gravitational singularity such as the Big Bang; these investigations led to models with chaotic dynamical billiards, in the case of classical general relativity in three dimensions. In the case of eleven-dimensional supergravity, these investigations lead to ten-dimensional "cosmological billiards", and the infinite-dimensional hyperbolic Kac–Moody algebra formula_0 appears as a symmetry. formula_0 contains the largest finite-dimensional exceptional semi-simple complex Lie algebra formula_1, which has been studied as a candidate for a grand unified theory (GUT). Nicolai proposed a purely algebraic description of the universe in cosmological space-time regions near the singularity (within the Planck time) using the formula_0-symmetry, whereby the space-time dimensions result as an emergent phenomenon.
Nicolai has also done research on a special role for formula_0 in M-Theory.
He and de Wit also constructed maximally gauged ("N" = 16) supergravity theories in three dimensions and their symmetries. Furthermore, Nicolai and colleagues examined generalizations of the variables of loop quantum gravity to supergravity / string theory.
Selected publications.
In addition to the publications cited in the footnotes: | [
{
"math_id": 0,
"text": "E_ {10}"
},
{
"math_id": 1,
"text": "E_8"
}
]
| https://en.wikipedia.org/wiki?curid=69609068 |
6961430 | Total correlation | In probability theory and in particular in information theory, total correlation (Watanabe 1960) is one of several generalizations of the mutual information. It is also known as the "multivariate constraint" (Garner 1962) or "multiinformation" (Studený & Vejnarová 1999). It quantifies the redundancy or dependency among a set of "n" random variables.
Definition.
For a given set of "n" random variables formula_0, the total correlation formula_1 is defined as the Kullback–Leibler divergence from the joint distribution formula_2 to the independent distribution of formula_3,
formula_4
This divergence reduces to the simpler difference of entropies,
formula_5
where formula_6 is the information entropy of variable formula_7, and formula_8 is the joint entropy of the variable set formula_0. In terms of the discrete probability distributions on variables formula_9, the total correlation is given by
formula_10
The total correlation is the amount of information "shared" among the variables in the set. The sum formula_11 represents the amount of information in bits (assuming base-2 logs) that the variables would possess if they were totally independent of one another (non-redundant), or, equivalently, the average code length to transmit the values of all variables if each variable was (optimally) coded independently. The term formula_12 is the "actual" amount of information that the variable set contains, or equivalently, the average code length to transmit the values of all variables if the set of variables was (optimally) coded together. The difference between these terms therefore represents the absolute redundancy (in bits) present in the given set of variables, and thus provides a general quantitative measure of the "structure" or "organization" embodied in the set of variables (Rothstein 1952). The total correlation is also the Kullback–Leibler divergence between the actual distribution formula_13 and its maximum entropy product approximation formula_3.
Total correlation quantifies the amount of dependence among a group of variables. A near-zero total correlation indicates that the variables in the group are essentially statistically independent; they are completely unrelated, in the sense that knowing the value of one variable does not provide any clue as to the values of the other variables. On the other hand, the maximum total correlation (for a fixed set of individual entropies formula_14) is given by
formula_15
and occurs when one of the variables determines "all" of the other variables. The variables are then maximally related in the sense that knowing the value of one variable provides complete information about the values of all the other variables, and the variables can be figuratively regarded as "cogs," in which the position of one cog determines the positions of all the others (Rothstein 1952).
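As a minimal sketch (the function name is illustrative), the discrete definition above can be evaluated directly from a joint probability array using NumPy:

```python
import numpy as np

def total_correlation(joint):
    """Total correlation C(X_1,...,X_n) in bits, for a discrete joint
    distribution given as an n-dimensional array of probabilities summing to 1."""
    joint = np.asarray(joint, dtype=float)
    n = joint.ndim
    # product of the marginals p(X_1)...p(X_n), broadcast to the joint's shape
    prod = np.ones_like(joint)
    for axis in range(n):
        other = tuple(a for a in range(n) if a != axis)
        marginal = joint.sum(axis=other)
        shape = [1] * n
        shape[axis] = joint.shape[axis]
        prod = prod * marginal.reshape(shape)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / prod[mask])))

# Two perfectly correlated bits: C = H(X1) + H(X2) - H(X1,X2) = 1 + 1 - 1 = 1 bit
xy = np.array([[0.5, 0.0],
               [0.0, 0.5]])
print(total_correlation(xy))  # -> 1.0
```

The two-bit example realizes the maximum total correlation for a pair of uniform binary variables, since one variable completely determines the other.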
It is important to note that the total correlation counts up "all" the redundancies among a set of variables, but that these redundancies may be distributed throughout the variable set in a variety of complicated ways (Garner 1962). For example, some variables in the set may be totally inter-redundant while others in the set are completely independent. Perhaps more significantly, redundancy may be carried in interactions of various degrees: A group of variables may not possess any pairwise redundancies, but may possess higher-order "interaction" redundancies of the kind exemplified by the parity function. The decomposition of total correlation into its constituent redundancies is explored in a number of sources (Mcgill 1954, Watanabe 1960, Garner 1962, Studeny & Vejnarova 1999, Jakulin & Bratko 2003a, Jakulin & Bratko 2003b, Nemenman 2004, Margolin et al. 2008, Han 1978, Han 1980).
Conditional total correlation.
Conditional total correlation is defined analogously to the total correlation, but with a condition added to each term: it is the Kullback–Leibler divergence between two conditional probability distributions,
formula_16
Analogous to the above, conditional total correlation reduces to a difference of conditional entropies,
formula_17
Uses of total correlation.
Clustering and feature selection algorithms based on total correlation have been explored by Watanabe. Alfonso et al. (2010) applied the concept of total correlation to the optimisation of water monitoring networks. | [
{
"math_id": 0,
"text": "\\{X_1,X_2,\\ldots,X_n\\}"
},
{
"math_id": 1,
"text": "C(X_1,X_2,\\ldots,X_n)"
},
{
"math_id": 2,
"text": "p(X_1, \\ldots, X_n)"
},
{
"math_id": 3,
"text": "p(X_1)p(X_2)\\cdots p(X_n)"
},
{
"math_id": 4,
"text": "C(X_1, X_2, \\ldots, X_n) \\equiv \\operatorname{D_{KL}}\\left[ p(X_1, \\ldots, X_n) \\| p(X_1)p(X_2)\\cdots p(X_n)\\right] \\; ."
},
{
"math_id": 5,
"text": "C(X_1,X_2,\\ldots,X_n) = \\left[\\sum_{i=1}^n H(X_i)\\right] - H(X_1, X_2, \\ldots, X_n)"
},
{
"math_id": 6,
"text": "H(X_{i})"
},
{
"math_id": 7,
"text": "X_i \\,"
},
{
"math_id": 8,
"text": "H(X_1,X_2,\\ldots,X_n)"
},
{
"math_id": 9,
"text": "\\{X_1, X_2, \\ldots, X_n\\} "
},
{
"math_id": 10,
"text": "C(X_1,X_2,\\ldots,X_n)= \\sum_{x_1\\in\\mathcal{X}_1} \\sum_{x_2\\in\\mathcal{X}_2} \\ldots \\sum_{x_n\\in\\mathcal{X}_n} p(x_1,x_2,\\ldots,x_n)\\log\\frac{p(x_1,x_2,\\ldots,x_n)} {p(x_1)p(x_2)\\cdots p(x_n)}.\n"
},
{
"math_id": 11,
"text": "\\begin{matrix}\\sum_{i=1}^n H(X_i)\\end{matrix}"
},
{
"math_id": 12,
"text": "H(X_{1},X_{2},\\ldots ,X_{n})"
},
{
"math_id": 13,
"text": "p(X_1,X_2,\\ldots,X_n)"
},
{
"math_id": 14,
"text": "H(X_1), ..., H(X_n)"
},
{
"math_id": 15,
"text": "C_\\max = \\sum_{i=1}^n H(X_i)-\\max\\limits_{X_i}H(X_i),"
},
{
"math_id": 16,
"text": "C(X_1, X_2, \\ldots, X_n|Y=y) \\equiv \\operatorname{D_{KL}}\\left[ p(X_1, \\ldots, X_n|Y=y) \\| p(X_1|Y=y)p(X_2|Y=y)\\cdots p(X_n|Y=y)\\right] \\; ."
},
{
"math_id": 17,
"text": "C(X_1,X_2,\\ldots,X_n|Y=y) = \\sum_{i=1}^n H(X_i|Y=y) - H(X_1, X_2, \\ldots, X_n|Y=y)"
}
]
| https://en.wikipedia.org/wiki?curid=6961430 |
69614416 | Geometric logic | In mathematical logic, geometric logic is an infinitary generalisation of coherent logic, a restriction of first-order logic due to Skolem that is proof-theoretically tractable. Geometric logic is capable of expressing many mathematical theories and has close connections to topos theory.
Definitions.
A theory of first-order logic is geometric if it can be axiomatised using only axioms of the form
formula_0
where I and J are disjoint collections of indices for the formulae, each of which may be infinite, and the formulae φ are either atoms or negations of atoms. If all the axioms are finite (i.e., for each axiom, both I and J are finite), the theory is coherent.
Theorem.
Every first-order theory has a coherent conservative extension.
Significance.
The literature lists eight consequences of the above theorem that explain its significance (omitting footnotes and most references):
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\bigwedge_{i \\in I} \\phi_{i,1} \\vee \\dots \\vee \\phi_{i,n_i} \\implies \\bigvee_{j \\in J} \\phi_{j,1} \\vee \\dots \\vee \\phi_{j,m_j}\n"
}
]
| https://en.wikipedia.org/wiki?curid=69614416 |
69615015 | USBM wettability index | The U.S. Bureau of Mines (USBM) wettability index, developed by Donaldson et al. in 1969, is a method to measure the wettability of petroleum reservoir rocks. In this method, the areas under the forced-displacement capillary pressure curves of the oil-drive and water-drive processes are denoted formula_0 and formula_1 and are used to calculate the USBM index.
formula_2
USBM index is positive for water-wet rocks, and negative for oil-wet systems.
Bounded USBM (or USBM*).
The USBM index is theoretically unbounded and can vary from negative infinity to positive infinity. Since other wettability indices, such as the Amott–Harvey, Lak, and modified Lak indices, are bounded in the range −1 to 1, Abouzar Mirzaei-Paiaman highlighted the bounded form of USBM (called USBM*) as a replacement for the traditional USBM, defined as
formula_3
USBM* varies from -1 to 1 for strongly oil-wet and strongly water-wet rocks, respectively.
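A minimal sketch of both indices, assuming the base-10 logarithm that is standard for the classical USBM index (the areas are hypothetical):

```python
import math

def usbm(a1, a2):
    """Classical USBM index: log10 of the ratio of the areas (unbounded)."""
    return math.log10(a1 / a2)

def usbm_star(a1, a2):
    """Bounded USBM* index, ranging from -1 (oil-wet) to 1 (water-wet)."""
    return (a1 - a2) / (a1 + a2)

a1, a2 = 1.8, 0.6   # hypothetical areas under the capillary pressure curves
print(usbm(a1, a2), usbm_star(a1, a2))  # ~0.477 and 0.5: a water-wet rock
```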
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A1"
},
{
"math_id": 1,
"text": "A2"
},
{
"math_id": 2,
"text": "USBM = log\\frac{A_{\\mathit{1}}} {\\ A_{\\mathit{2}}} "
},
{
"math_id": 3,
"text": "USBM* = \\frac{A_{\\mathit{1}}-A_{\\mathit{2}}} {\\ A_{\\mathit{1}}+A_{\\mathit{2}}} "
}
]
| https://en.wikipedia.org/wiki?curid=69615015 |
69615924 | Šindel sequence | In additive combinatorics, a Šindel sequence is a periodic sequence of integers with the property that its partial sums include all of the triangular numbers. For instance, the sequence that begins 1, 2, 3, 4, 3, 2 is a Šindel sequence, with the triangular partial sums
<templatestyles src="Block indent/styles.css"/> 1 = 1,
<templatestyles src="Block indent/styles.css"/> 3 = 1 + 2,
<templatestyles src="Block indent/styles.css"/> 6 = 1 + 2 + 3,
<templatestyles src="Block indent/styles.css"/>10 = 1 + 2 + 3 + 4,
<templatestyles src="Block indent/styles.css"/>15 = 1 + 2 + 3 + 4 + 3 + 2,
<templatestyles src="Block indent/styles.css"/>21 = 1 + 2 + 3 + 4 + 3 + 2 + 1 + 2 + 3,
<templatestyles src="Block indent/styles.css"/>28 = 1 + 2 + 3 + 4 + 3 + 2 + 1 + 2 + 3 + 4 + 3,
etc. Another way of describing such a sequence is that it can be partitioned into contiguous subsequences whose sums are the consecutive integers:
This particular example is used in the gearing of the Prague astronomical clock, as part of a mechanism for chiming the clock's bells the correct number of times at each hour. The Šindel sequences are named after Jan Šindel, a Czech scientist in the 14th and 15th centuries whose calculations were used in the design of the Prague clock. The definition and name of these sequences were given by Michal Křížek, Alena Šolcová, and Lawrence Somer, in their work analyzing the mathematics of the Prague clock.
If formula_0 denotes the sum of the numbers within a single period of a periodic sequence, and formula_0 is odd, then only the triangular numbers formula_1 up to formula_2 need to be checked, to determine whether it is a Šindel sequence. If all of these triangular numbers are partial sums of the sequence, then all larger triangular numbers will be as well. For even values of formula_0, a larger set of triangular numbers needs to be checked, up to formula_3.
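A brute-force check of this criterion can be written directly (a sketch; function names are illustrative). For simplicity it tests triangular numbers up to formula_3 for every formula_0, which is conservative but always valid:

```python
def partial_sums(seq, limit):
    """Partial sums of the periodic extension of one period `seq`, up to `limit`."""
    sums, total, i = set(), 0, 0
    while total < limit:
        total += seq[i % len(seq)]
        sums.add(total)
        i += 1
    return sums

def is_sindel(seq):
    """True if every triangular number is a partial sum of the periodic sequence."""
    s = sum(seq)
    limit = s * (s - 1) // 2                             # binom(s, 2)
    triangles = {k * (k + 1) // 2 for k in range(1, s)}  # triangular numbers <= limit
    return triangles <= partial_sums(seq, limit)

print(is_sindel([1, 2, 3, 4, 3, 2]))  # True: the Prague clock sequence
print(is_sindel([1, 3, 2]))           # False: 3 is never a partial sum
```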
In the Prague clock, an auxiliary gear with slots spaced at intervals of 1, 2, 3, 4, 3, and 2 units (repeating the example Šindel sequence in each of its rotations) is synchronized with and regulates the motion of another larger gear whose slots are spaced at intervals of 1, 2, 3, 4, 5, ..., 24 units, revolving once a day with its spacing controlling the number of chimes on each hour. In order to keep these two gears synchronized, it is important that, for every revolution of the large gear, the small gear also revolves an integer number of times. Mathematically, this means that the sum formula_4 of the period of the Šindel sequence must evenly divide formula_5, the sum of spacing intervals of the large gear. For this reason it is of interest to find Šindel sequences with a given period sum formula_0. In connection with this problem, a "primitive Šindel sequence" is a Šindel sequence no two of whose numbers can be replaced by their sum, forming a shorter Šindel sequence. For every formula_0 there exists a unique primitive Šindel sequence having period sum equal to formula_0. Note, however, that this sequence may be formed by repeating a shorter Šindel sequence more than once.
A sequence that just repeats the number 1, with any period, is a Šindel sequence, and is called the "trivial Šindel sequence". If formula_0 is a power of two, then the trivial Šindel sequence with period formula_0 is primitive, and is the unique primitive Šindel sequence with period sum formula_0. For any other choice of formula_0, the unique primitive Šindel sequence with period sum formula_0 is not trivial.
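The membership test described above is straightforward to program. The following is my own sketch (the helper name "is_sindel" is invented), which checks all triangular numbers up to the bound appropriate for the parity of the period sum:

```python
def is_sindel(period):
    """Test whether the periodic sequence with the given period is Sindel."""
    s = sum(period)
    half = (s + 1) // 2
    # Largest triangular number that must be verified (see the text above):
    # C((s+1)/2, 2) for odd s, and C(s, 2) for even s.
    limit = half * (half - 1) // 2 if s % 2 == 1 else s * (s - 1) // 2
    # Collect partial sums of the periodically repeated sequence.
    partial_sums, total, i = set(), 0, 0
    while total < limit:
        total += period[i % len(period)]
        partial_sums.add(total)
        i += 1
    # Check the triangular numbers 1, 3, 6, 10, ... up to the limit.
    k, t = 1, 1
    while t <= limit:
        if t not in partial_sums:
            return False
        k += 1
        t = k * (k + 1) // 2
    return True

print(is_sindel([1, 2, 3, 4, 3, 2]))  # True: the Prague clock sequence
print(is_sindel([1, 3, 2]))           # False: 3 is not a partial sum
```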
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "s"
},
{
"math_id": 1,
"text": "\\tbinom{i}{2}"
},
{
"math_id": 2,
"text": "\\tbinom{(s+1)/2}{2}"
},
{
"math_id": 3,
"text": "\\tbinom{s}{2}"
},
{
"math_id": 4,
"text": "s=15"
},
{
"math_id": 5,
"text": "\\tbinom{25}{2}"
}
]
| https://en.wikipedia.org/wiki?curid=69615924 |
69616376 | 1 Samuel 19 | First Book of Samuel chapter
1 Samuel 19 is the nineteenth chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's escape from Saul's repeated attempts to kill him. This is within a section comprising 1 Samuel 16 to 2 Samuel 5 which records the rise of David as the king of Israel.
Text.
This chapter was originally written in the Hebrew language. It is divided into 24 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q52 (4QSamb; 250 BCE) with extant verses 10–13, 15–17.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
David became a member of Saul's household with his marriage to Michal, but that did not stop Saul from trying to kill David, as Saul openly shared this plan with his trusted servants (verse 1). Ironically, the loyalty of Saul's own children, Jonathan and Michal, saved David from Saul's further attempts.
Saul tried to kill David (19:1–10).
Saul's renewed plans to kill David were now brought into the open (verse 1), but Jonathan became David's conciliator, reminding Saul that David was innocent and that his success was YHWH's victory, so Saul should not kill a person endowed with divine power like David. Saul listened and promised under divine oath not to kill David (verse 5), then accepted David again in his court. However, after David achieved another victory over the Philistines, Saul's anger was aroused again (verses 8–10), and he once more tried to pin David to the wall with his javelin, but David again managed to escape.
"And the evil spirit from the LORD was upon Saul, as he sat in his house with his javelin in his hand: and David played with his hand."
Michal saved David's life (19:11–24).
After an unsuccessful attempt to kill David with his spear, Saul set a guard around David's residence with the order to kill David the next morning (verse 11). David's wife, Michal, warned him of her father's evil plan (verse 11), helped him to escape (verse 12), and bought him time by using a makeshift mannequin, consisting of a "teraphim", a garment, and goats' hair (as a 'wig'), to confirm the impression that he was sick in bed (verses 13–17). A point is made that David was saving his own life (verse 11) and that Michal, so as not to displease her father, was not a participant in the escape but, in obedience to David, only assisted him in executing it (verse 17); she was thus loyal to both sides.
David went to meet Samuel in his home base (1 Samuel 7:17) and they journeyed together to Naioth in the Ramah area, which was a prophetic center, just as Nob was a priestly center. Saul sent three groups of messengers, but each was 'seized by prophetic frenzy', which also happened to Saul himself when he decided to go to Naioth in a deliberate act of defiance against YHWH, even though he had had the same experience before (1 Samuel 10:12; 11:6).
"And Michal took an image and laid it in the bed, put a cover of goats’ hair for his head, and covered it with clothes."
"And he stripped off his clothes also, and prophesied before Samuel in like manner, and lay down naked all that day and all that night. Wherefore they say, Is Saul also among the prophets?"
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
Commentaries on Samuel.
<templatestyles src="Refbegin/styles.css" />
General.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=69616376 |
6961771 | Principal ideal ring | Ring in which every ideal is principal
In mathematics, a principal right (left) ideal ring is a ring "R" in which every right (left) ideal is of the form "xR" ("Rx") for some element "x" of "R". (The right and left ideals of this form, generated by one element, are called principal ideals.) When this is satisfied for both left and right ideals, such as the case when "R" is a commutative ring, "R" can be called a principal ideal ring, or simply principal ring.
If only the finitely generated right ideals of "R" are principal, then "R" is called a right Bézout ring. Left Bézout rings are defined similarly. These conditions are studied in domains as Bézout domains.
A principal ideal ring which is also an integral domain is said to be a "principal ideal domain" (PID). In this article the focus is on the more general concept of a principal ideal ring which is not necessarily a domain.
General properties.
If "R" is a principal right ideal ring, then it is certainly a right Noetherian ring, since every right ideal is finitely generated. It is also a right Bézout ring since all finitely generated right ideals are principal. Indeed, it is clear that principal right ideal rings are exactly the rings which are both right Bézout and right Noetherian.
Principal right ideal rings are closed under finite direct products. If formula_0, then each right ideal of "R" is of the form formula_1, where each formula_2 is a right ideal of "R"i. If all the "R"i are principal right ideal rings, then "A"i="x"i"R"i, and then it can be seen that formula_3. Without much more effort, it can be shown that right Bézout rings are also closed under finite direct products.
Principal right ideal rings and right Bézout rings are also closed under quotients, that is, if "I" is a proper ideal of principal right ideal ring "R", then the quotient ring "R/I" is also principal right ideal ring. This follows readily from the isomorphism theorems for rings.
All properties above have left analogues as well.
Commutative examples.
1. The ring of integers: formula_4
2. The integers modulo "n": formula_5.
3. Let formula_6 be rings and formula_7. Then "R" is a principal ring if and only if "R""i" is a principal ring for all "i".
4. The localization of a principal ring at any multiplicative subset is again a principal ring. Similarly, any quotient of a principal ring is again a principal ring.
5. Let "R" be a Dedekind domain and "I" be a nonzero ideal of "R". Then the quotient "R"/"I" is a principal ring. Indeed, we may factor "I" as a product of prime
powers: formula_8, and by the Chinese Remainder Theorem
formula_9, so it suffices to see that each
formula_10 is a principal ring. But formula_10 is isomorphic to the quotient formula_11 of the discrete valuation ring
formula_12 and, being a quotient of a principal ring, is itself a principal ring.
6. Let "k" be a finite field and put formula_13, formula_14 and formula_15. Then R is a finite local ring which is "not" principal.
7. Let "X" be a finite set. Then formula_16 forms a commutative principal ideal ring with unity, where formula_17 represents set symmetric difference and formula_18 represents the powerset of "X". If "X" has at least two elements, then the ring also has zero divisors. If "I" is an ideal, then formula_19. If instead "X" is infinite, the ring is "not" principal: take the ideal generated by the finite subsets of "X", for example.
Structure theory for commutative PIR's.
The principal rings constructed in Example 5. above are always Artinian rings; in particular they are isomorphic to a finite direct product of principal Artinian local rings.
A local Artinian principal ring is called a special principal ring and has an extremely simple ideal structure: there are only finitely many ideals, each of which is a power of the maximal ideal. For this reason, special principal rings are examples of uniserial rings.
The following result gives a complete classification of principal rings in terms of special principal rings and principal ideal domains.
Zariski–Samuel theorem: Let "R" be a principal ring. Then "R" can be written as a direct product formula_20, where each "R"i is either a principal ideal domain or a special principal ring.
The proof applies the Chinese Remainder theorem to a minimal primary decomposition of the zero ideal.
There is also the following result, due to Hungerford:
Theorem (Hungerford): Let "R" be a principal ring. Then "R" can be written as a direct product formula_20, where each "R"i is a quotient of a principal ideal domain.
The proof of Hungerford's theorem employs Cohen's structure theorems for complete local rings.
Arguing as in Example 3. above and using the Zariski–Samuel theorem, it is easy to check that Hungerford's theorem is equivalent to the statement that any special principal ring is the quotient of a discrete valuation ring.
Noncommutative examples.
Every semisimple ring "R" which is not just a product of fields is a noncommutative right and left principal ideal ring (it cannot be a domain, since it has zero divisors). Every right and left ideal is a direct summand of "R", and so is of the form "eR" or "Re" where "e" is an idempotent of "R". Paralleling this example, von Neumann regular rings are seen to be both right and left Bézout rings.
If "D" is a division ring and formula_21 is a ring endomorphism which is not an automorphism, then the skew polynomial ring formula_22 is known to be a principal left ideal domain which is not right Noetherian, and hence it cannot be a principal right ideal ring. This shows that even for domains principal left and principal right ideal rings are different.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R=\\prod_{i=1}^nR_i"
},
{
"math_id": 1,
"text": "A=\\prod_{i=1}^nA_i"
},
{
"math_id": 2,
"text": "A_i"
},
{
"math_id": 3,
"text": "(x_1,\\ldots,x_n)R=A"
},
{
"math_id": 4,
"text": "\\mathbb{Z}"
},
{
"math_id": 5,
"text": "\\mathbb{Z}/n\\mathbb{Z}"
},
{
"math_id": 6,
"text": "R_1,\\ldots,R_n"
},
{
"math_id": 7,
"text": "R = \\prod_{i=1}^n R_i"
},
{
"math_id": 8,
"text": " I = \\prod_{i=1}^n P_i^{a_i}"
},
{
"math_id": 9,
"text": " R/I \\cong \\prod_{i=1}^n R/P_i^{a_i}"
},
{
"math_id": 10,
"text": "R/P_i^{a_i}"
},
{
"math_id": 11,
"text": "R_{P_i}/P_i^{a_i} R_{P_i}"
},
{
"math_id": 12,
"text": "R_{P_i}"
},
{
"math_id": 13,
"text": " A = k[x,y]"
},
{
"math_id": 14,
"text": "\\mathfrak{m} = \\langle x, y \\rangle "
},
{
"math_id": 15,
"text": " R = A/\\mathfrak{m}^2 "
},
{
"math_id": 16,
"text": " (\\mathcal{P}(X),\\Delta,\\cap) "
},
{
"math_id": 17,
"text": "\\Delta"
},
{
"math_id": 18,
"text": "\\mathcal{P}(X)"
},
{
"math_id": 19,
"text": " I=(\\bigcup I)"
},
{
"math_id": 20,
"text": "\\prod_{i=1}^n R_i"
},
{
"math_id": 21,
"text": "\\sigma"
},
{
"math_id": 22,
"text": "D[x,\\sigma]"
}
]
| https://en.wikipedia.org/wiki?curid=6961771 |
6962225 | Iota and Jot | Esoteric programming languages
In formal language theory and computer science, Iota and Jot (from Greek iota ι, Hebrew yodh י, the smallest letters in those two alphabets) are languages, extremely minimalist formal systems, designed to be even simpler than other more popular alternatives, such as lambda calculus and SKI combinator calculus. Thus, they can also be considered minimalist computer programming languages, or Turing tarpits, esoteric programming languages designed to be as small as possible but still Turing-complete. Both systems use only two symbols and involve only two operations. Both were created by professor of linguistics Chris Barker in 2001. Zot (2002) is a successor to Iota that supports input and output.
Note that this article uses Backus-Naur form to describe syntax.
Universal iota.
Chris Barker's universal iota combinator ι has the very simple structure λf.fSK, defined here using denotational semantics in terms of the lambda calculus.
From this, one can recover the usual SKI expressions, thus: I = ιι, K = ι(ι(ιι)), and S = ι(ι(ι(ιι))).
Because of its minimalism, it has influenced research concerning Chaitin's constant.
Iota.
Iota is the LL(1) language that prefix orders trees of the aforementioned universal iota ι combinator leaves, consed by function application ε,
iota = "1" | "0" iota iota
so that for example "0011011" denotes formula_0, whereas "0101011" denotes formula_1.
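The grammar can be executed directly. Below is my own minimal interpreter sketch (not part of the article), representing combinators as curried Python functions:

```python
# Minimal Iota interpreter: '1' is the iota leaf, '0' is prefix application.
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
iota = lambda f: f(S)(K)              # the universal combinator: λf.fSK

def parse(program):
    """Parse an Iota string; return (denoted value, remaining input)."""
    if program[0] == '1':
        return iota, program[1:]
    left, rest = parse(program[1:])   # '0': apply left subtree to right one
    right, rest = parse(rest)
    return left(right), rest

identity, _ = parse('011')            # ιι behaves as the identity I
print(identity(42))                   # prints 42
```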
Jot.
Jot is the regular language consisting of all sequences of 0 and 1,
jot = "" | jot "0" | jot "1"
The semantics is given by translation to SKI expressions.
The empty string denotes formula_2,
formula_3 denotes formula_4,
where formula_5 is the translation of formula_6,
and formula_7 denotes formula_8.
The point of the formula_7 case is that the translation satisfies formula_9 for arbitrary SKI terms formula_10 and formula_11.
For example,
formula_12
holds for arbitrary strings formula_6.
Similarly,
formula_13
holds as well.
These two examples are the base cases of the translation of arbitrary SKI terms to Jot given by Barker,
making Jot a natural Gödel numbering of all algorithms.
Jot is connected to Iota by the fact that formula_14 and by using the same identities on SKI terms for obtaining the basic combinators formula_15 and formula_16.
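The translation rules can likewise be run directly. In this sketch of mine (not from the article), formula_8 is represented by its eta-expanded form λa.λb.[w](ab):

```python
# Executable Jot translation, with curried Python functions as in the
# Iota sketch above.
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x

def jot(program):
    w = lambda x: x                   # the empty string denotes I
    for digit in program:
        if digit == '0':
            w = w(S)(K)               # [w0] = ([w]S)K
        else:                         # [w1] = S(K[w]) = λa.λb.[w](ab)
            w = (lambda u: lambda a: lambda b: u(a(b)))(w)
    return w

print(jot('11100')(1)(2))             # [11100] = K, so this prints 1
print(jot('11111000')(K)(K)(3))       # [11111000] = S, and S K K = I: prints 3
```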
Zot.
The Zot and Positive Zot languages command Iota computations, from inputs to outputs by continuation-passing style, in syntax resembling Jot,
zot = pot | ""
pot = iot | pot iot
iot = "0" | "1"
where "1" produces the continuation formula_17,
and "0" produces the continuation formula_18,
and wi consumes the final input digit i by continuing through the continuation w.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "((\\iota\\iota)(\\iota\\iota))"
},
{
"math_id": 1,
"text": "(\\iota(\\iota(\\iota\\iota)))"
},
{
"math_id": 2,
"text": "I"
},
{
"math_id": 3,
"text": "w0"
},
{
"math_id": 4,
"text": "(([w]S)K)"
},
{
"math_id": 5,
"text": "[w]"
},
{
"math_id": 6,
"text": "w"
},
{
"math_id": 7,
"text": "w1"
},
{
"math_id": 8,
"text": "(S(K[w]))"
},
{
"math_id": 9,
"text": "(([w1]A)B) = ([w](A B))"
},
{
"math_id": 10,
"text": "A"
},
{
"math_id": 11,
"text": "B"
},
{
"math_id": 12,
"text": "[w11100] = (([w1110]S)K) = (((([w111]S)K)S)K) = ((([w11](SK))S)K) = (([w1]((SK)S))K) = ([w](((SK)S)K)) = ([w]K)"
},
{
"math_id": 13,
"text": "[w11111000] = (((((([w11111]S)K)S)K)S)K) = ([w](((((SK)S)K)S)K)) = ([w]S)"
},
{
"math_id": 14,
"text": "[w0] = (\\iota[w])"
},
{
"math_id": 15,
"text": "K"
},
{
"math_id": 16,
"text": "S"
},
{
"math_id": 17,
"text": "\\lambda cL.L(\\lambda lR.R(\\lambda r.c(lr)))"
},
{
"math_id": 18,
"text": "\\lambda c.c\\iota"
}
]
| https://en.wikipedia.org/wiki?curid=6962225 |
69623562 | Stochastic logarithm | Term in stochastic calculus
In stochastic calculus, the stochastic logarithm of a semimartingale formula_0 such that formula_1 and formula_2 is the semimartingale formula_3 given by formula_4 In layperson's terms, the stochastic logarithm of formula_0 measures the cumulative percentage change in formula_0.
Notation and terminology.
The process formula_3 obtained above is commonly denoted formula_5. The terminology "stochastic logarithm" arises from the similarity of formula_5 to the natural logarithm formula_6: if formula_0 is absolutely continuous with respect to time and formula_7, then formula_3 solves, path by path, the differential equation formula_8 whose solution is formula_9.
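As a quick numerical illustration (my own sketch, not from the references), one can sample a positive continuous semimartingale, here a geometric Brownian motion, sum the increments dY/Y to approximate the stochastic logarithm, and compare with log(Y_t/Y_0) plus the quadratic-variation correction valid for continuous paths:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 200_000, 1.0
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=n)
# Geometric Brownian motion with Y_0 = 1, so that dY = Y dW.
Y = np.concatenate(([1.0], np.exp(np.cumsum(dW) - 0.5 * dt * np.arange(1, n + 1))))

dY = np.diff(Y)
stoch_log = np.sum(dY / Y[:-1])              # Riemann sum of dY_t / Y_{t-}
qv_corr = 0.5 * np.sum((dY / Y[:-1]) ** 2)   # (1/2) * integral of d[Y] / Y^2
print(stoch_log)                             # close to W_T for this model
print(np.log(Y[-1] / Y[0]) + qv_corr)        # agrees up to discretization error
```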
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Y"
},
{
"math_id": 1,
"text": "Y\\neq0"
},
{
"math_id": 2,
"text": "Y_-\\neq0"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "dX_t=\\frac{dY_t}{Y_{t-}},\\quad X_0=0."
},
{
"math_id": 5,
"text": "\\mathcal{L}(Y)"
},
{
"math_id": 6,
"text": "\\log(Y)"
},
{
"math_id": 7,
"text": "Y\\neq 0"
},
{
"math_id": 8,
"text": "\\frac{dX_t}{dt} = \\frac{\\frac{dY_t}{dt}}{Y_t},"
},
{
"math_id": 9,
"text": "X =\\log|Y|-\\log|Y_0|"
},
{
"math_id": 10,
"text": "Y\\neq 0, Y_-\\neq 0"
},
{
"math_id": 11,
"text": "\\mathcal{L}(Y)_t = \\log\\Biggl|\\frac{Y_t}{Y_0}\\Biggl|\n+\\frac12\\int_0^t\\frac{d[Y]^c_s}{Y_{s-}^2}\n+\\sum_{s\\le t}\\Biggl(\\log\\Biggl| 1 + \\frac{\\Delta Y_s}{Y_{s-}} \\Biggr|\n-\\frac{\\Delta Y_s}{Y_{s-}}\\Biggr),\\qquad t\\ge0,"
},
{
"math_id": 12,
"text": "[Y]^c"
},
{
"math_id": 13,
"text": "t"
},
{
"math_id": 14,
"text": "\\mathcal{L}(Y)_t = \\log\\Biggl|\\frac{Y_t}{Y_0}\\Biggl|\n+\\frac12\\int_0^t\\frac{d[Y]^c_s}{Y_{s-}^2},\\qquad t\\ge0."
},
{
"math_id": 15,
"text": "\\mathcal{L}(Y) = \\log\\Biggl|\\frac{Y}{Y_0}\\Biggl|."
},
{
"math_id": 16,
"text": "\\Delta X\\neq -1"
},
{
"math_id": 17,
"text": "\\mathcal{L}(\\mathcal{E}(X)) = X-X_0"
},
{
"math_id": 18,
"text": "Y_-\\neq 0"
},
{
"math_id": 19,
"text": "\\mathcal{E}(\\mathcal{L}(Y)) = Y/Y_0"
},
{
"math_id": 20,
"text": "\\log(Y_t)"
},
{
"math_id": 21,
"text": "\\mathcal{L}(Y)_t"
},
{
"math_id": 22,
"text": "Y_t"
},
{
"math_id": 23,
"text": "[0,t]"
},
{
"math_id": 24,
"text": "\\mathcal{L}(Y_t)"
},
{
"math_id": 25,
"text": "0"
},
{
"math_id": 26,
"text": "Y^{(1)},Y^{(2)}"
},
{
"math_id": 27,
"text": "\\mathcal{L}\\bigl(Y^{(1)}Y^{(2)}\\bigr) = \\mathcal{L}\\bigl(Y^{(1)}\\bigr) \n+ \\mathcal{L}\\bigl(Y^{(2)}\\bigr) \n+ \\bigl[\\mathcal{L}\\bigl(Y^{(1)}\\bigr),\\mathcal{L}\\bigl(Y^{(2)}\\bigr)\\bigr]."
},
{
"math_id": 28,
"text": "1/\\mathcal{E}(X)"
},
{
"math_id": 29,
"text": "\\mathcal{L}\\biggl(\\frac{1}{\\mathcal{E}(X)}\\biggr)_t = X_0-X_t-[X]^c_t\n+\\sum_{s\\leq t}\\frac{(\\Delta X_s)^2}{1+\\Delta X_s}."
},
{
"math_id": 30,
"text": "Q"
},
{
"math_id": 31,
"text": "P"
},
{
"math_id": 32,
"text": "Z"
},
{
"math_id": 33,
"text": "Z_\\infty = dQ/dP"
},
{
"math_id": 34,
"text": "U"
},
{
"math_id": 35,
"text": "U+[U,\\mathcal{L}(Z)]"
}
]
| https://en.wikipedia.org/wiki?curid=69623562 |
69624901 | Bicrossed product of Hopf algebra | Concept in Hopf algebra
In the theory of quantum groups and Hopf algebras, the bicrossed product is a process for creating new Hopf algebras from given ones. It is motivated by the Zappa–Szép product of groups. It was first discussed by M. Takeuchi in 1981, and is now a general tool for the construction of the Drinfeld quantum double.
Bicrossed product.
Consider two bialgebras formula_0 and formula_1 for which there exist linear maps formula_2 turning formula_1 into a module coalgebra over formula_0, and formula_3 turning formula_0 into a right module coalgebra over formula_1. We call them a matched pair of bialgebras if, setting formula_4 and formula_5, the following conditions are satisfied:
formula_6
formula_7
formula_8
formula_9
formula_10
for all formula_11 and formula_12. Here the Sweedler's notation of coproduct of Hopf algebra is used.
For a matched pair of Hopf algebras formula_0 and formula_1, there exists a unique Hopf algebra structure on formula_13; the resulting Hopf algebra is called the bicrossed product of formula_0 and formula_1 and is denoted by formula_14. Its unit is formula_15; its multiplication, counit, comultiplication, and antipode are given by formula_16, formula_17, formula_18, and formula_19, respectively.
Drinfeld quantum double.
For a given Hopf algebra formula_20, its dual space formula_21 has a canonical Hopf algebra structure, and formula_20 and formula_22 form a matched pair. In this case, their bicrossed product is called the Drinfeld quantum double formula_23.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "\\alpha:A\\otimes X \\to X"
},
{
"math_id": 3,
"text": "\\beta: A\\otimes X\\to A"
},
{
"math_id": 4,
"text": "\\alpha(a\\otimes x)=a\\cdot x"
},
{
"math_id": 5,
"text": "\\beta(a\\otimes x)=a^x"
},
{
"math_id": 6,
"text": "a\\cdot (xy)=\\sum_{(a),(x)}(a_{(1)} \\cdot x_{(1)}) (a_{(2)}^{x_{(2)}} \\cdot y)"
},
{
"math_id": 7,
"text": "a\\cdot 1_X=\\varepsilon_A(a)1_X"
},
{
"math_id": 8,
"text": "(ab)^x=\\sum_{(b),(x)}a^{b_{(1)} \\cdot x_{(1)}} b_{(2)}^{x_{(2)}}"
},
{
"math_id": 9,
"text": "1_A^x=\\varepsilon_X(x)1_A"
},
{
"math_id": 10,
"text": "\\sum_{(a),(x)}a_{(1)}^{x_{(1)}} \\otimes a_{(2)}\\cdot x_{(2)}=\\sum_{(a),(x)}a_{(2)}^{x_{(2)}}\\otimes a_{(1)}\\cdot x_{(1)}"
},
{
"math_id": 11,
"text": "a,b\\in A"
},
{
"math_id": 12,
"text": "x,y\\in X"
},
{
"math_id": 13,
"text": "X\\otimes A"
},
{
"math_id": 14,
"text": "X \\bowtie A"
},
{
"math_id": 15,
"text": "(1_X\\otimes 1_A)"
},
{
"math_id": 16,
"text": "(x\\otimes a)(y\\otimes b)=\\sum_{(a),(y)}x(a_{(1)}\\cdot y_{(1)}) \\otimes a_{(2)}^{y_{(2)}} b"
},
{
"math_id": 17,
"text": "\\varepsilon(x\\otimes a)=\\varepsilon_X(x)\\varepsilon_A(a)"
},
{
"math_id": 18,
"text": "\\Delta(x\\otimes a)=\\sum_{(x),(a)} (x_{(1)}\\otimes a_{(1)}) \\otimes (x_{(2)}\\otimes a_{(2)})"
},
{
"math_id": 19,
"text": "S(x\\otimes a)=\\sum_{(x),(a)}S(a_{(2)})\\cdot S(x_{(2)}) \\otimes S(a_{(1)})^{S(x_{(1)})}"
},
{
"math_id": 20,
"text": "H"
},
{
"math_id": 21,
"text": "H^*"
},
{
"math_id": 22,
"text": "H^{*cop}"
},
{
"math_id": 23,
"text": "D(H)=H^{*cop}\\bowtie H"
}
]
| https://en.wikipedia.org/wiki?curid=69624901 |
6962728 | Magnetic resonance elastography | Magnetic resonance elastography (MRE) is a form of elastography that specifically leverages MRI to quantify and subsequently map the mechanical properties (elasticity or stiffness) of soft tissue. First developed and described at Mayo Clinic by Muthupillai et al. in 1995, MRE has emerged as a powerful, non-invasive diagnostic tool, namely as an alternative to biopsy and serum tests for staging liver fibrosis.
Diseased tissue (e.g. a breast tumor) is often stiffer than the surrounding normal (fibroglandular) tissue, providing motivation to assess tissue stiffness. This principle of operation is the basis for the longstanding practice of palpation, which, however, is limited (except at surgery) to superficial organs and pathologies, and by its subjective, qualitative nature, depending on the skill and touch sensitivity of the practitioner. Conventional imaging techniques such as CT, MRI, US, and nuclear medicine are unable to offer any insight into the elastic modulus of soft tissue. MRE, as a quantitative method of assessing tissue stiffness, provides reliable means to visualize a variety of disease processes which affect tissue stiffness in the liver, brain, heart, pancreas, kidney, spleen, breast, uterus, prostate, and skeletal muscle.
MRE is conducted in three steps: first, a mechanical vibrator is used on the surface of the patient's body to generate shear waves that travel into the patient's deeper tissues; second, an MRI acquisition sequence measures the propagation and velocity of the waves; and finally this information is processed by an inversion algorithm to quantitatively infer and map tissue stiffness in 3-D. This stiffness map is called an elastogram, and is the final output of MRE, along with conventional 3-D MRI images as shown on the right.
Mechanics of soft tissue.
MRE quantitatively determines the stiffness of biological tissues by measuring their mechanical response to an external stress. Specifically, MRE calculates the shear modulus of a tissue from its shear-wave displacement measurements. The elastic modulus quantifies the stiffness of a material, or how well it resists elastic deformation as a force is applied. For elastic materials, strain is directly proportional to stress within an elastic region. The elastic modulus is seen as the proportionality constant between stress and strain within this region. Unlike purely elastic materials, biological tissues are viscoelastic, meaning that they have characteristics of both elastic solids and viscous liquids. Their mechanical responses depend on the magnitude of the applied stress as well as the strain rate. The stress-strain curve for a viscoelastic material exhibits hysteresis. The area of the hysteresis loop represents the amount of energy lost as heat when a viscoelastic material undergoes an applied stress and is distorted. For these materials, the elastic modulus is complex and can be separated into two components: a storage modulus and a loss modulus. The storage modulus expresses the contribution from elastic solid behavior while the loss modulus expresses the contribution from viscous liquid behavior. Conversely, elastic materials exhibit a pure solid response. When a force is applied, these materials elastically store and release energy, which does not result in energy loss in the form of heat.
Yet, for simplicity, MRE and other elastography imaging techniques typically use a mechanical parameter estimation that assumes biological tissues to be linearly elastic and isotropic. The effective shear modulus formula_0 can then be expressed with the following equation:
formula_1
where formula_2 is the elastic modulus of the material and formula_3 is the Poisson's ratio.
The Poisson's ratio for soft tissues is approximately 0.5, so the ratio between the elastic modulus and the shear modulus is about 3. This relationship can be used to estimate the stiffness of biological tissues from the shear modulus calculated from shear-wave propagation measurements. A driver system produces and transmits acoustic waves at a specific frequency (50–500 Hz) to the tissue sample. At these frequencies, the velocity of shear waves can be about 1–10 m/s. The effective shear modulus can be calculated from the shear wave velocity with the following:
formula_4
where formula_5 is the tissue density and formula_6 is the shear wave velocity.
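As a toy illustration of these two relations (all values below are assumed for the example, not taken from the article):

```python
# Back-of-the-envelope stiffness estimate from a shear wave measurement.
rho = 1000.0        # tissue density in kg/m^3 (roughly that of water)
v_s = 2.0           # assumed measured shear wave speed in m/s

mu = rho * v_s**2   # effective shear modulus: mu = rho * v_s^2
E = 3.0 * mu        # elastic modulus via E = 2*(1 + nu)*mu with nu = 0.5
print(f"mu = {mu/1e3:.1f} kPa, E = {E/1e3:.1f} kPa")  # mu = 4.0 kPa, E = 12.0 kPa
```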
Recent studies have been focused on incorporating mechanical parameter estimations into post-processing inverse algorithms that account for the complex viscoelastic behavior of soft tissues. Creating new parameters could potentially increase the specificity of MRE measurements and diagnostic testing.
Applications.
Liver.
Liver fibrosis is a common condition arising in many liver diseases. Progression of fibrosis can lead to cirrhosis and end-stage liver disease. MRE-based measurement of liver stiffness has emerged as the most accurate non-invasive technique for detecting and staging liver fibrosis. MRE provides quantitative maps of tissue stiffness over large regions of the liver. Abnormally increased liver stiffness is a direct consequence of liver fibrosis. The diagnostic performance of MRE in assessing liver fibrosis has been established in multiple studies.
Liver MRE examinations are performed in MRI systems that have been equipped for the technique. Patients should fast for 3 to 4 hours prior to their MRE exam to allow for the most accurate measurement of liver stiffness. Patients lie supine in the MRI scanner for the examination. A special device is placed on the right side of the chest wall over the liver to apply gentle vibration, which generates propagating shear waves in the liver. Imaging for MRE is very quick, with data acquired in a series of 1-4 periods of breath-holding, each lasting 15–20 seconds.
A standardized approach for performing and analyzing liver MRE exams has been documented by the RSNA Quantitative Imaging Biomarkers Alliance. The technical success rate of liver MRE is very high (95-100%).
Brain.
MRE of the brain was first presented in the early 2000s. Elastogram measures have been correlated with memory tasks, fitness measures, and progression of various neurodegenerative conditions. For example, regional and global decreases in brain viscoelasticity have been observed in Alzheimer's disease and multiple sclerosis. It has been found that as the brain ages, it loses its viscoelastic integrity due to degeneration of neurons and oligodendrocytes. A recent study looked into both the isotropic and anisotropic stiffness in the brain and found a correlation between the two and with age, particularly in gray matter.
MRE may also have applications for understanding the adolescent brain. Recently, it was found that adolescents have regional differences in brain viscoelasticity relative to adults.
MRE has also been applied to functional neuroimaging. Whereas functional magnetic resonance imaging (fMRI) infers brain activity by detecting relatively slow changes in blood flow, functional MRE is capable of detecting neuromechanical changes in the brain related to neuronal activity occurring on the 100-millisecond scale.
Kidney.
MRE has also been applied to investigate the biomechanical properties of the kidney. The feasibility of clinical renal MRE was first reported in 2011 for healthy volunteers and in 2012 for renal transplant patients. Renal MRE is more challenging than MRE of larger organs such as the brain or liver due to fine mechanical features in the renal cortex and medulla as well as the acoustically shielded position of the kidneys within the abdominal cavity. To overcome these challenges, researchers have been looking at different passive drivers and imaging techniques to best deliver shear waves to the kidneys. Studies investigating renal diseases such as renal allograft dysfunction, lupus nephritis, immunoglobulin A nephropathy (IgAN), diabetic nephrology, renal tumors and chronic kidney disease demonstrate that kidney stiffness is sensitive to kidney function and renal perfusion.
Prostate.
The prostate can also be examined by MRE, in particular for the detection and diagnosis of prostate cancer. To ensure good shear wave penetration in the prostate gland, different actuator systems were designed and evaluated. Preliminary results in patients with prostate cancer showed that changes in stiffness allowed differentiation of cancerous tissue from normal tissue. Magnetic Resonance Elastography has been successfully used in patients with prostate cancer showing high specificity and sensitivity in differentiating prostate cancer from benign prostatic diseases (see figure on right (b)). Even higher specificity of 95% for prostate cancer was achieved when Magnetic Resonance Elastography was combined with systematic image interpretation using PI-RADS (version 2.1).
Pancreas.
The pancreas is one of the softest tissues in the abdomen. Given that pancreatic diseases including pancreatitis and pancreatic cancer significantly increase stiffness, MRE is a promising tool for diagnosing benign and malignant conditions of the pancreas. Abnormally high pancreatic stiffness was detected by MRE in patients with both acute and chronic pancreatitis. Pancreatic stiffness was also used to distinguish pancreatic malignancy from benign masses and to predict the occurrence of pancreatic fistula after pancreaticoenteric anastomosis. Quantification of the volume of pancreatic tumors based on tomoelastographic measurement of stiffness was found to correlate excellently with tumor volumes estimated by contrast-enhanced computed tomography. In patients with pancreatic ductal adenocarcinoma, stiffness was found to be elevated in the tumor as well as in pancreatic parenchyma distal to the tumor, suggesting heterogeneous pancreatic involvement (figure on right (c)).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mu"
},
{
"math_id": 1,
"text": "\\mu=E/[2(1+\\nu)]"
},
{
"math_id": 2,
"text": "E"
},
{
"math_id": 3,
"text": "\\nu"
},
{
"math_id": 4,
"text": "\\mu=\\rho{v_s}^2"
},
{
"math_id": 5,
"text": "\\rho"
},
{
"math_id": 6,
"text": "v_s"
}
]
| https://en.wikipedia.org/wiki?curid=6962728 |
69628781 | Pairwise compatibility graph | A graph class
In graph theory, a graph formula_0 is a pairwise compatibility graph (PCG) if there exists a tree formula_1 and two non-negative real numbers formula_2 such that each node formula_3 of formula_0 is in one-to-one correspondence with a leaf node formula_4 of formula_1, and two nodes formula_3 and formula_5 are adjacent in formula_0 if and only if the distance between formula_4 and formula_6 in formula_1 is in the interval formula_7.
The subclasses of PCG include graphs of at most seven vertices, cycles, forests, complete graphs, interval graphs and ladder graphs. However, there is a graph with eight vertices that is known not to be a PCG.
Relationship to phylogenetics.
Pairwise compatibility graphs were first introduced by Paul Kearney, J. Ian Munro and Derek Phillips in the context of phylogeny reconstruction. When sampling from a phylogenetic tree, the task of finding nodes whose path distance lies between given lengths formula_2 is equivalent to finding a clique in the associated PCG.
Complexity.
The computational complexity of recognizing a graph as a PCG is unknown as of 2020. However, the related problem of finding, for a graph formula_0 and a selection formula_8 of non-edges, a PCG that contains formula_0 as a subgraph and none of the edges in formula_8 is known to be NP-hard.
The task of finding nodes in a tree whose path distance lies between formula_9 and formula_10 is known to be solvable in polynomial time. Therefore, if the tree could be recovered from a PCG in polynomial time, then the clique problem on PCGs would be polynomial too. As of 2020, neither of these complexities is known.
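Since paths in a tree are unique, the PCG of a given edge-weighted tree can be built with one breadth-first search per leaf. The sketch below is my own (the helper name "pcg" is invented):

```python
from collections import deque

def pcg(tree, leaves, d_min, d_max):
    """tree: adjacency dict {node: [(neighbor, edge_weight), ...]}."""
    def dists(src):                       # path distances from src by BFS
        d = {src: 0.0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v, w in tree[u]:
                if v not in d:
                    d[v] = d[u] + w
                    q.append(v)
        return d
    edges = set()
    for i, u in enumerate(leaves):
        du = dists(u)
        for v in leaves[i + 1:]:
            if d_min <= du[v] <= d_max:   # adjacent iff distance is in range
                edges.add((u, v))
    return edges

# Star with three unit-weight leaves: all leaf-to-leaf distances are 2,
# so with the interval [2, 2] the PCG is a triangle on the leaves.
T = {'c': [('a', 1), ('b', 1), ('x', 1)],
     'a': [('c', 1)], 'b': [('c', 1)], 'x': [('c', 1)]}
print(pcg(T, ['a', 'b', 'x'], 2, 2))
```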
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "T"
},
{
"math_id": 2,
"text": "d_{min} < d_{max}"
},
{
"math_id": 3,
"text": "u'"
},
{
"math_id": 4,
"text": "u"
},
{
"math_id": 5,
"text": "v'"
},
{
"math_id": 6,
"text": "v"
},
{
"math_id": 7,
"text": "[d_{min}, d_{max}]"
},
{
"math_id": 8,
"text": "S"
},
{
"math_id": 9,
"text": "d_{min}"
},
{
"math_id": 10,
"text": "d_{max}"
}
]
| https://en.wikipedia.org/wiki?curid=69628781 |
696297 | Nonlocal | Nonlocal may refer to:
See also.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the title Nonlocal. | [
{
"math_id": 0,
"text": "\\mathcal L\\phi(x)"
}
]
| https://en.wikipedia.org/wiki?curid=696297 |
696317 | Buchberger's algorithm | Algorithm for computing Gröbner bases
In the theory of multivariate polynomials, Buchberger's algorithm is a method for transforming a given set of polynomials into a Gröbner basis, which is another set of polynomials that have the same common zeros and are more convenient for extracting information on these common zeros. It was introduced by Bruno Buchberger simultaneously with the definition of Gröbner bases.
Euclidean algorithm for polynomial greatest common divisor computation and Gaussian elimination of linear systems are special cases of Buchberger's algorithm when the number of variables or the degrees of the polynomials are respectively equal to one.
For other Gröbner basis algorithms, see .
Algorithm.
A crude version of this algorithm to find a basis for an ideal I of a polynomial ring "R" proceeds as follows:
Input A set of polynomials "F" that generates I
Output A Gröbner basis "G" for I
# "G" := "F"
# For every "fi", "fj" in "G", denote by "gi" the leading term of "fi" with respect to the given monomial ordering, and by "aij" the least common multiple of "gi" and "gj".
# Choose two polynomials in "G" and let "S""ij" = ("a""ij"/"g""i") "f""i" − ("a""ij"/"g""j") "f""j" "(Note that the leading terms here will cancel by construction)".
# Reduce "S""ij", with the multivariate division algorithm relative to the set "G" until the result is not further reducible. If the result is non-zero, add it to "G".
# Repeat steps 2-4 until all possible pairs are considered, including those involving the new polynomials added in step 4.
# Output "G"
The polynomial "S""ij" is commonly referred to as the "S"-polynomial, where "S" refers to "subtraction" (Buchberger) or "syzygy" (others). The pair of polynomials with which it is associated is commonly referred to as critical pair.
There are numerous ways to improve this algorithm beyond what has been stated above. For example, one could reduce all the new elements of "F" relative to each other before adding them. If the leading terms of "fi" and "fj" share no variables in common, then "Sij" will "always" reduce to 0 (if we use only fi and fj for reduction), so we needn't calculate it at all.
The algorithm terminates because it is consistently increasing the size of the monomial ideal generated by the leading terms of our set "F", and Dickson's lemma (or the Hilbert basis theorem) guarantees that any such ascending chain must eventually become constant.
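The crude algorithm above translates almost line by line into code. The following is my own unoptimized sketch using SymPy's polynomial tools for leading terms and multivariate division; in practice one would call SymPy's built-in groebner function instead:

```python
from sympy import symbols, LT, lcm, expand, reduced

def s_polynomial(f, g, gens, order):
    lt_f, lt_g = LT(f, *gens, order=order), LT(g, *gens, order=order)
    a = lcm(lt_f, lt_g)                          # the least common multiple a_ij
    return expand(a / lt_f * f - a / lt_g * g)   # leading terms cancel

def buchberger(F, gens, order='grevlex'):
    G = list(F)
    pairs = [(i, j) for i in range(len(G)) for j in range(i + 1, len(G))]
    while pairs:
        i, j = pairs.pop()
        s = s_polynomial(G[i], G[j], gens, order)
        _, r = reduced(s, G, *gens, order=order)  # multivariate division by G
        if r != 0:                                # nonzero remainder: enlarge G
            pairs += [(k, len(G)) for k in range(len(G))]
            G.append(r)
    return G

x, y = symbols('x y')
print(buchberger([x**2 + y**2 - 1, x*y - 1], (x, y)))
```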
Complexity.
The computational complexity of Buchberger's algorithm is very difficult to estimate, because of the number of choices that may dramatically change the computation time. Nevertheless, T. W. Dubé has proved that the degrees of the elements of a reduced Gröbner basis are always bounded by
formula_0,
where "n" is the number of variables, and "d" the maximal total degree of the input polynomials. This allows, in theory, to use linear algebra over the vector space of the polynomials of degree bounded by this value, for getting an algorithm of complexity
formula_1.
On the other hand, there are examples where the Gröbner basis contains elements of degree
formula_2,
and the above upper bound of complexity is optimal. Nevertheless, such examples are extremely rare.
Since its discovery, many variants of Buchberger's algorithm have been introduced to improve its efficiency. Faugère's F4 and F5 algorithms are presently the most efficient algorithms for computing Gröbner bases, and allow one to routinely compute Gröbner bases consisting of several hundred polynomials, each having several hundred terms with coefficients of several hundred digits.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2\\left(\\frac{d^2}{2} +d\\right)^{2^{n-2}}"
},
{
"math_id": 1,
"text": "d^{2^{n+o(1)}}"
},
{
"math_id": 2,
"text": "d^{2^{\\Omega(n)}}"
}
]
| https://en.wikipedia.org/wiki?curid=696317 |
69634293 | Active Brownian particle | Model of self-propelled motion in a dissipative environment
An active Brownian particle (ABP) is a model of self-propelled motion in a dissipative environment. It is a nonequilibrium generalization of a Brownian particle.
The self-propulsion results from a force that acts on the particle's center of mass and points in the direction of an intrinsic body axis (the particle orientation). It is common to treat particles as spheres, though other shapes (such as rods) have also been studied. Both the center of mass and the direction of the propulsive force are subjected to white noise, which contributes a diffusive component to the overall dynamics. In its simplest version, the dynamics is overdamped and the propulsive force has constant magnitude, so that the magnitude of the velocity is likewise constant (speed-up to terminal velocity is instantaneous).
The term "active Brownian particle" usually refers to this simple model and its straightforward extensions, though some authors have used it for more general self-propelled particle models.
Equations of motion.
Mathematically, an active Brownian particle is described by its center of mass coordinates formula_0 and a unit vector formula_1 giving the orientation. In two dimensions, the orientation vector can be parameterized by the 2D polar angle formula_2, so that formula_3. The equations of motion in this case are the following stochastic differential equations:
formula_4
where
formula_5
with formula_6 the 2×2 identity matrix. The terms formula_7 and formula_8 are translational and rotational white noise, which is understood as a heuristic representation of the Wiener process. Finally, formula_9 is an external potential, formula_10 is the mass, formula_11 is the friction, formula_12 is the magnitude of the self-propulsion velocity, and formula_13 and formula_14 are the translational and rotational diffusion coefficients.
The dynamics can also be described in terms of a probability density function formula_15, which gives the probability, at time formula_16, of finding a particle at position formula_0 and with orientation formula_2. By averaging over the stochastic trajectories from the equations of motion, formula_15 can be shown to obey the following partial differential equation:
formula_17
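As an illustration, the overdamped equations of motion can be integrated with a simple Euler–Maruyama scheme. The sketch below is my own, with assumed parameter values and no external potential (formula_9 = 0):

```python
import numpy as np

rng = np.random.default_rng(1)
v0, D_t, D_r = 1.0, 0.1, 1.0       # assumed speed and diffusion coefficients
dt, n_steps = 1e-3, 50_000

r = np.zeros((n_steps + 1, 2))     # center-of-mass trajectory in 2D
theta = 0.0                        # orientation angle
for k in range(n_steps):
    n_hat = np.array([np.cos(theta), np.sin(theta)])
    r[k + 1] = r[k] + v0 * n_hat * dt + np.sqrt(2 * D_t * dt) * rng.normal(size=2)
    theta += np.sqrt(2 * D_r * dt) * rng.normal()

print("persistence length v0/D_r:", v0 / D_r)
print("net displacement:", np.linalg.norm(r[-1] - r[0]))
```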
Behavior.
For an isolated particle far from boundaries, the combination of diffusion and self-propulsion produces a stochastic (fluctuating) trajectory that appears ballistic over short length scales and diffusive over large length scales. The transition from ballistic to diffusive motion is defined by a characteristic length formula_18, called the persistence length.
In the presence of boundaries or other particles, more complex behavior is possible. Even in the absence of attractive forces, particles tend to accumulate at boundaries. Obstacles placed within a bath of active Brownian particles can induce long-range density variations and nonzero currents in steady state.
Sufficiently concentrated suspensions of active Brownian particles phase separate into dense and dilute regions. The particles' motility drives a positive feedback loop, in which particles collide and hinder each other's motion, leading to further collisions and particle accumulation. At a coarse-grained level, a particle's "effective" self-propulsion velocity decreases with increased density, which promotes clustering. In the more general context of self-propelled particle models, this behavior is known as "motility-induced phase separation". It is a type of athermal phase separation because it occurs even if the particles are spheres with hard-core (purely repulsive) interactions.
Variations.
A variant of active Brownian motion involves complete directional reversals in addition to rotational diffusion. This movement pattern is seen in bacteria like "Myxococcus xanthus", "Pseudomonas putida", "Pseudoalteromonas haloplanktis", "Shewanella putrefaciens", and "Pseudomonas citronellolis".
Notes.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{r}"
},
{
"math_id": 1,
"text": "\\hat{\\mathbf{n}}"
},
{
"math_id": 2,
"text": "\\theta"
},
{
"math_id": 3,
"text": "\\hat{\\mathbf{n}} = (\\cos \\theta, \\sin \\theta)"
},
{
"math_id": 4,
"text": "\n\\begin{align}\n \\dot{\\mathbf{r}} &= v_0 \\hat{\\mathbf{n}} - (m \\xi)^{-1} \\nabla V(\\mathbf{r}) + \\sqrt{2 D_t} \\, \\boldsymbol{\\eta}_{\\text{trans}}(t)\n \\\\\n \\dot{\\theta} &= \\sqrt{2 D_r} \\, \\eta_{\\text{rot}}(t).\n\\end{align}\n"
},
{
"math_id": 5,
"text": "\n\\begin{align}\n \\langle \\eta_{\\text{rot}}(t)\\rangle &= 0; \\qquad \\langle \\eta_{\\text{rot}}(t) \\eta_{\\text{rot}}(t')\\rangle = \\delta(t - t') \\\\\n \\langle \\boldsymbol{\\eta}_{\\text{trans}}(t)\\rangle &= \\boldsymbol{0}; \\qquad \\langle \\boldsymbol{\\eta}_{\\text{trans}}(t) \\boldsymbol{\\eta}^{\\intercal}_{\\text{trans}}(t') \\rangle = \\mathbf{I} \\delta(t-t')\n\\end{align}\n"
},
{
"math_id": 6,
"text": "\\mathbf{I}"
},
{
"math_id": 7,
"text": "\\boldsymbol{\\eta}_{\\text{trans}}(t)"
},
{
"math_id": 8,
"text": "\\eta_{\\text{rot}}(t)"
},
{
"math_id": 9,
"text": "V(\\mathbf{r})"
},
{
"math_id": 10,
"text": "m"
},
{
"math_id": 11,
"text": "\\xi"
},
{
"math_id": 12,
"text": "v_0"
},
{
"math_id": 13,
"text": "D_t"
},
{
"math_id": 14,
"text": "D_r"
},
{
"math_id": 15,
"text": "f(\\mathbf{r},\\theta, t)"
},
{
"math_id": 16,
"text": "t"
},
{
"math_id": 17,
"text": "\n\\frac{\\partial f}{\\partial t} + v_0 \\hat{n} \\cdot \\nabla f = (m \\xi)^{-1} \\nabla \\cdot (\\nabla V(\\mathbf{r}) \\, f) + D_r \\frac{\\partial^2 f}{\\partial \\theta^2} + D_t \\nabla^2 f\n"
},
{
"math_id": 18,
"text": "\\ell = v_0/D_r"
}
]
| https://en.wikipedia.org/wiki?curid=69634293 |
69638491 | 1 Samuel 20 | First Book of Samuel chapter
1 Samuel 20 is the twentieth chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's escape from Saul's repeated attempts to kill him. This is within a section comprising 1 Samuel 16 to 2 Samuel 5 which records the rise of David as the king of Israel.
Text.
This chapter was originally written in the Hebrew language. It is divided into 42 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 37–40 and 4Q52 (4QSamb; 250 BCE) with extant verses 26–42.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
Analysis.
A continuing major theme in this chapter is how Saul's family acted against Saul and sided with David. This especially concerns Jonathan, who had previously managed to reconcile the two but was now forced to take sides. Initially he stood by his father and his father's oath not to harm David (19:6), refusing to believe that David was close to death; nonetheless, he was willing to find out Saul's true intention during the Feast of the New Moon and to inform David of the outcome using their agreed coded message. At this time Saul explicitly told Jonathan that their dynasty of kingship could not be realized as long as David was alive. However, Saul's blind ambition had enlarged the extent of the rift between him and his family, to the point that his enmity towards David had 'isolated him from his own kin'.
Jonathan and David renew their covenant (20:1–29).
After escaping from Saul's pursuit in Naioth, David once again sought Jonathan to find out why Saul wanted to kill him. They agreed on a method whereby Jonathan, after establishing Saul's intention, would, unknown to anyone else, inform David.
"And David said to Jonathan, “Indeed tomorrow is the New Moon, and I should not fail to sit with the king to eat. But let me go, that I may hide in the field until the third day at evening."
"And when you have stayed three days, go down quickly and come to the place where you hid on the day of the deed; and remain by the stone Ezel."
Saul seeks to kill Jonathan (20:30–42).
Jonathan opened the conversation with Saul by providing an excuse for David's absence, then followed with a defense of David (verse 32) echoing David's own words in verse 1; with this, Jonathan moved from the position of conciliator between David and Saul to that of David's defender, under threat from his father (verses 30–33). Saul's fierce reply and attempt to kill Jonathan show that David had little choice but to leave Saul's court and run away, not out of disloyalty nor for his own ambition, but due to events beyond his control. David promised to extend his covenant with Jonathan to include Jonathan's 'house' (verse 15) and his 'descendants' (verse 42), anticipating David's kindness to Jonathan's son, Mephibosheth (2 Samuel 9), and the survival of the house of Saul.
"And as soon as the lad was gone, David arose out of a place toward the south, and fell on his face to the ground, and bowed himself three times: and they kissed one another, and wept one with another, until David exceeded."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
Commentaries on Samuel.
<templatestyles src="Refbegin/styles.css" />
General.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=69638491 |
696408 | Monomial order | Order for the terms of a polynomial
In mathematics, a monomial order (sometimes called a term order or an admissible order) is a total order on the set of all (monic) monomials in a given polynomial ring, satisfying the property of respecting multiplication, i.e., if formula_0, then formula_2 for every monomial formula_1.
Monomial orderings are most commonly used with Gröbner bases and multivariate division. In particular, the property of "being" a Gröbner basis is always relative to a specific monomial order.
Definition, details and variations.
Besides respecting multiplication, monomial orders are often required to be well-orders, since this ensures the multivariate division procedure will terminate. There are however practical applications also for multiplication-respecting order relations on the set of monomials that are not well-orders.
In the case of finitely many variables, well-ordering of a monomial order is equivalent to the conjunction of the following two conditions:
# The order is a total order.
# formula_3 for every monomial "u".
Since these conditions may be easier to verify for a monomial order defined through an explicit rule, than to directly prove it is a well-ordering, they are sometimes preferred in definitions of monomial order.
Leading monomials, terms, and coefficients.
The choice of a total order on the monomials allows sorting the terms of a polynomial. The leading term of a polynomial is thus the term of the largest monomial (for the chosen monomial ordering).
Concretely, let "R" be any ring of polynomials. Then the set "M" of the (monic) monomials in "R" is a basis of "R", considered as a vector space over the field of the coefficients. Thus, any nonzero polynomial "p" in "R" has a unique expression
formula_4
as a linear combination of monomials, where "S" is a finite subset of "M" and the "c""u" are all nonzero. When a monomial order has been chosen, the leading monomial is the largest "u" in "S", the leading coefficient is the corresponding "c""u", and the leading term is the corresponding "c""u""u". "Head" monomial/coefficient/term is sometimes used as a synonym of "leading". Some authors use "monomial" instead of "term" and "power product" instead of "monomial". In this article, a monomial is assumed to not include a coefficient.
The defining property of monomial orderings implies that the order of the terms is kept when multiplying a polynomial by a monomial. Also, the leading term of a product of polynomials is the product of the leading terms of the factors.
Examples.
On the set formula_5 of powers of any one variable "x", the only monomial orders are the natural ordering 1 < "x" < x2 < x3 < ... and its converse, the latter of which is not a well-ordering. Therefore, the notion of monomial order becomes interesting only in the case of multiple variables.
The monomial order implies an order on the individual indeterminates. One can simplify the classification of monomial orders by assuming that the indeterminates are named "x"1, "x"2, "x"3, ... in decreasing order for the monomial order considered, so that always "x"1 > "x"2 > "x"3 > ... (If there should be infinitely many indeterminates, this convention is incompatible with the condition of being a well ordering, and one would be forced to use the opposite ordering; however the case of polynomials in infinitely many variables is rarely considered.) In the example below we use "x", "y" and "z" instead of "x"1, "x"2 and "x"3. With this convention there are still many examples of different monomial orders.
Lexicographic order.
Lexicographic order (lex) first compares exponents of "x"1 in the monomials, and in case of equality compares exponents of "x"2, and so forth. The name is derived from the similarity with the usual alphabetical order used in lexicography for dictionaries, if monomials are represented by the sequence of the exponents of the indeterminates. If the number of indeterminates is fixed (as it is usually the case), the lexicographical order is a well-order, although this is not the case for the lexicographical order applied to sequences of various lengths (see ).
For monomials of degree at most two in two indeterminates formula_6, the lexicographic order (with formula_7) is
formula_8
For Gröbner basis computations, the lexicographic ordering tends to be the most costly; thus it should be avoided, as far as possible, except for very simple computations.
Graded lexicographic order.
Graded lexicographic order (grlex, or deglex for degree lexicographic order) first compares the total degree (sum of all exponents), and in case of a tie applies lexicographic order. This ordering is not only a well ordering, it also has the property that any monomial is preceded only by a finite number of other monomials; this is not the case for lexicographic order, where all (infinitely many) powers of "y" are less than "x" (that lexicographic order is nevertheless a well ordering is related to the impossibility of constructing an infinite decreasing chain of monomials).
For monomials of degree at most two in two indeterminates formula_6, the graded lexicographic order (with formula_7) is
formula_9
Although very natural, this ordering is rarely used: the Gröbner basis for the graded reverse lexicographic order, which follows, is easier to compute and provides the same information on the input set of polynomials.
Graded reverse lexicographic order.
Graded reverse lexicographic order (grevlex, or degrevlex for degree reverse lexicographic order) compares the total degree first, then uses a lexicographic order as tie-breaker, but it "reverses the outcome" of the lexicographic comparison so that lexicographically larger monomials of the same degree are considered to be degrevlex smaller. For the final order to exhibit the conventional ordering "x"1 > "x"2 > ... > "x"n of the indeterminates, it is furthermore necessary that the tie-breaker lexicographic order before reversal considers the "last" indeterminate "x"n to be the largest, which means it must start with that indeterminate. A concrete recipe for the graded reverse lexicographic order is thus to compare by the total degree first, then compare exponents of the "last" indeterminate "x""n" but "reversing the outcome" (so the monomial with smaller exponent is larger in the ordering), followed (as always only in case of a tie) by a similar comparison of "x""n"−1, and so forth ending with "x"1.
The differences between graded lexicographic and graded reverse lexicographic orders are subtle, since they in fact coincide for 1 and 2 indeterminates. The first difference comes for degree 2 monomials in 3 indeterminates, which are graded lexicographic ordered as formula_10 but graded reverse lexicographic ordered as formula_11. The general trend is that the reverse order exhibits all variables among the small monomials of any given degree, whereas with the non-reverse order the intervals of smallest monomials of any given degree will only be formed from the smallest variables.
Elimination order.
Block order or elimination order (lexdeg) may be defined for any number of blocks but, for sake of simplicity, we consider only the case of two blocks (however, if the number of blocks equals the number of variables, this order is simply the lexicographic order). For this ordering, the variables are divided in two blocks "x"1..., "x""h" and "y"1...,"y""k" and a monomial ordering is chosen for each block, usually the graded reverse lexicographical order. Two monomials are compared by comparing their "x" part, and in case of a tie, by comparing their "y" part. This ordering is important as it allows "elimination", an operation which corresponds to projection in algebraic geometry.
Weight order.
Weight order depends on a vector formula_12 called the weight vector. It first compares the dot product of the exponent sequences of the monomials with this weight vector, and in case of a tie uses some other fixed monomial order. For instance, the graded orders above are weight orders for the "total degree" weight vector (1,1...,1). If the "a""i" are rationally independent numbers (so in particular none of them are zero and all fractions formula_13 are irrational) then a tie can never occur, and the weight vector itself specifies a monomial ordering. In the contrary case, one could use another weight vector to break ties, and so on; after using "n" linearly independent weight vectors, there cannot be any remaining ties. One can in fact define "any" monomial ordering by a sequence of weight vectors (Cox et al. pp. 72–73), for instance (1,0,0...,0), (0,1,0...,0), ... (0,0...,1) for lex, or (1,1,1...,1), (1,1..., 1,0), ... (1,0...,0) for grevlex.
For example, consider the monomials formula_14, formula_15, formula_16, and formula_17; the monomial orders above would order these four monomials as follows:
Lex: formula_18 (the power of formula_19 dominates).
Grlex: formula_20 (the total degree dominates, and the tie between the two monomials of degree four is broken by the lexicographic comparison).
Grevlex: formula_21 (the total degree dominates, and the tie is broken by the smaller power of the last variable formula_22).
A weight order with, for instance, the weight vector (1, 2, 4): formula_23.
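These comparisons are easy to make concrete in code. The following Python sketch (the function names and the tuple representation are chosen for this illustration, not taken from any particular computer algebra system) represents a monomial by its exponent tuple over ("x", "y", "z") and sorts the four example monomials under lex, grlex, and grevlex:

from functools import cmp_to_key

# a monomial is represented by its exponent tuple over (x, y, z),
# e.g. x*y**2*z -> (1, 2, 1)
def lex_cmp(a, b):
    # tuple comparison in Python is exactly the lexicographic order
    return (a > b) - (a < b)

def grlex_cmp(a, b):
    # total degree first, lexicographic tie-break
    if sum(a) != sum(b):
        return (sum(a) > sum(b)) - (sum(a) < sum(b))
    return lex_cmp(a, b)

def grevlex_cmp(a, b):
    # total degree first; then lex starting from the last variable,
    # with the outcome of that comparison reversed
    if sum(a) != sum(b):
        return (sum(a) > sum(b)) - (sum(a) < sum(b))
    return -lex_cmp(a[::-1], b[::-1])

monomials = [(1, 2, 1), (0, 0, 2), (3, 0, 0), (2, 0, 2)]  # xy^2z, z^2, x^3, x^2z^2
for comparison in (lex_cmp, grlex_cmp, grevlex_cmp):
    print(sorted(monomials, key=cmp_to_key(comparison), reverse=True))

Printed in decreasing order, the three runs reproduce the lex, grlex, and grevlex chains given above.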
Related notions.
When using monomial orderings to compute Gröbner bases, different orders can lead to different results, and the difficulty of the computation may vary dramatically. For example, graded reverse lexicographic order has a reputation for producing, almost always, the Gröbner bases that are the easiest to compute (this is supported by the fact that, under rather common conditions on the ideal, the polynomials in the Gröbner basis have a degree that is at most exponential in the number of variables; no such complexity result exists for any other ordering). On the other hand, elimination orders are required for elimination and related problems. | [
{
"math_id": 0,
"text": "u \\leq v"
},
{
"math_id": 1,
"text": "w"
},
{
"math_id": 2,
"text": "uw \\leq vw"
},
{
"math_id": 3,
"text": "1 \\leq u"
},
{
"math_id": 4,
"text": " p = \\textstyle\\sum_{u \\in S} c_u u "
},
{
"math_id": 5,
"text": " \\left\\{ x^n \\mid n \\in \\mathbb{N} \\right\\} "
},
{
"math_id": 6,
"text": "x_1, x_2"
},
{
"math_id": 7,
"text": "x_1>x_2"
},
{
"math_id": 8,
"text": " \nx_1^2 > x_1x_2 > x_1 > x_2^2 > x_2 > 1.\n"
},
{
"math_id": 9,
"text": " \nx_1^2 > x_1x_2 > x_2^2 > x_1 > x_2 > 1.\n"
},
{
"math_id": 10,
"text": " x_1^2 > x_1 x_2 > x_1 x_3 > x_2^2 > x_2 x_3 > x_3^2 "
},
{
"math_id": 11,
"text": " x_1^2 > x_1 x_2 > x_2^2 > x_1 x_3 > x_2 x_3 > x_3^2 "
},
{
"math_id": 12,
"text": "(a_1,\\ldots,a_n)\\in\\R_{\\geq0}^n"
},
{
"math_id": 13,
"text": "\\tfrac{a_i}{a_j}"
},
{
"math_id": 14,
"text": "xy^2z"
},
{
"math_id": 15,
"text": "z^2"
},
{
"math_id": 16,
"text": "x^3"
},
{
"math_id": 17,
"text": "x^2z^2"
},
{
"math_id": 18,
"text": "x^3 > x^2z^2 > xy^2z > z^2"
},
{
"math_id": 19,
"text": "x"
},
{
"math_id": 20,
"text": "x^2z^2 > xy^2z > x^3 > z^2"
},
{
"math_id": 21,
"text": "xy^2z > x^2z^2 > x^3 > z^2"
},
{
"math_id": 22,
"text": "z"
},
{
"math_id": 23,
"text": "x^2z^2 > xy^2z > z^2 > x^3"
}
]
| https://en.wikipedia.org/wiki?curid=696408 |
69642430 | Ultrapolynomial | In mathematics, an ultrapolynomial is a power series in several variables whose coefficients are bounded in some specific sense.
Definition.
Let formula_0 and let formula_1 be a field (typically formula_2 or formula_3) equipped with a norm (typically the absolute value). Then a function formula_4 of the form formula_5 is called an ultrapolynomial of class formula_6 if the coefficients formula_7 satisfy formula_8 for all formula_9, for some formula_10 and formula_11 (resp. for every formula_10 and some formula_12).
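As a minimal one-dimensional illustration (the coefficient sequence and the constants here are chosen for the example, not taken from the references), the coefficients "c""n" = 2"n"/"n"! of exp(2"x") satisfy the bound of class {"p"!} with "C" = 1 and "L" = 2; a short Python check:

from math import factorial

# 1-D illustration: the coefficients c_n = 2**n / n! of exp(2x)
# satisfy |c_n| <= C * L**n / M_n for the class M_n = n!, with C = 1, L = 2
C, L = 1.0, 2.0
bound_holds = all(
    abs(2**n / factorial(n)) <= C * L**n / factorial(n)
    for n in range(60)
)
print(bound_holds)  # True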
| [
{
"math_id": 0,
"text": "d \\in \\mathbb{N}"
},
{
"math_id": 1,
"text": "K"
},
{
"math_id": 2,
"text": "\\mathbb{R}"
},
{
"math_id": 3,
"text": "\\mathbb{C}"
},
{
"math_id": 4,
"text": "P: K^d \\rightarrow K"
},
{
"math_id": 5,
"text": "P(x) = \\sum_{\\alpha \\in \\mathbb{N}^d} c_\\alpha x^\\alpha"
},
{
"math_id": 6,
"text": "\\left\\{ M_p \\right\\}"
},
{
"math_id": 7,
"text": "c_\\alpha"
},
{
"math_id": 8,
"text": "\\left| c_\\alpha \\right| \\leq C L^{\\left| \\alpha \\right|}/M_\\alpha"
},
{
"math_id": 9,
"text": "\\alpha \\in \\mathbb{N}^d"
},
{
"math_id": 10,
"text": "L>0"
},
{
"math_id": 11,
"text": "C>0"
},
{
"math_id": 12,
"text": "C(L)>0"
}
]
| https://en.wikipedia.org/wiki?curid=69642430 |
69642916 | 1 Samuel 21 | First Book of Samuel chapter
1 Samuel 21 is the twenty-first chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's escape from Saul's repeated attempts to kill him. This is within a section comprising 1 Samuel 16 to 2 Samuel 5 which records the rise of David as the king of Israel.
Text.
This chapter was originally written in the Hebrew language. It is divided into 15 verses in English Bibles, but 16 verses in the Hebrew Bible, with different verse numbering.
Verse numbering.
There are some differences in verse numbering of this chapter in English Bibles and Hebrew texts:
This article generally follows the common numbering in Christian English Bible versions, with notes to the numbering in Hebrew Bible versions.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q52 (4QSamb; 250 BCE) with extant verses 1–3, 5–10.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
David in Nob (21:1–9).
David's visit to Nob (north of Jerusalem) is the first scene in a narrative of the priests providing support for David—not willingly, as from Jonathan and Michal, but through deception—that continues with tragic consequences in 22:6–23. David's surprise visit aroused suspicion, which was quickly allayed by a concocted story of a secret mission. The priest at Nob, Ahimelech, the grandson of Eli, was persuaded to give provision to David and his young men from the 'holy bread' or 'bread of the Presence', which was reserved only for priests (Leviticus 24:9), based on David's assurances that the young men were 'ceremonially clean' through abstention from sex and that their 'vessels' (a euphemism for 'sexual organs') were clean.
David also obtained Goliath's sword, which was 'wrapped in cloth behind the ephod' (verse 9), a significant omen for future successes.
Verse 1.
"Now David came to Nob, to Ahimelech the priest. And Ahimelech was afraid when he met David, and said to him, “Why are you alone, and no one is with you?”"
Verse 7.
"Now a certain man of the servants of Saul was there that day, detained before the Lord. And his name was Doeg, an Edomite, the chief of the herdsmen who belonged to Saul."
The reference to Doeg the Edomite in this verse becomes meaningful in the next part of the plot (22:9–10, 18); his presence could also be related to the long-standing animosity between Israel and Edom (Genesis 25:25, 30; Numbers 20:1–21; Judges 3:7–11). His 'detention' in the sanctuary was probably connected with an act of penance, or with the possibility that he was 'cultically unclean'.
David in Gath (21:10–15).
David planned to take refuge in Gath, but was recognized by the courtiers of Gath, who recited the words specifically connected with his successes against the Philistines, perhaps prompted by the fact that he was carrying Goliath's sword. Being outside YHWH's territory and within reach of the Philistines (maybe because he had not consulted YHWH before fleeing to Gath), David acted quickly to feign madness. Achish, the king of Gath, was deceived and immediately let David go.
"And the servants of Achish said to him, “Is this not David the king of the land? Did they not sing of him to one another in dances, saying:"
"‘Saul has slain his thousands,"
"And David his ten thousands’?”"
| [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=69642916 |
69644269 | Cross-entropy benchmarking | Cross-entropy benchmarking (also referred to as XEB) is a quantum benchmarking protocol which can be used to demonstrate quantum supremacy. In XEB, a random quantum circuit is executed on a quantum computer multiple times in order to collect a set of formula_0 samples in the form of bitstrings formula_1. The bitstrings are then used to calculate the cross-entropy benchmark fidelity (formula_2) via a classical computer, given by
formula_3,
where formula_4 is the number of qubits in the circuit and formula_5 is the probability of a bitstring formula_6 for an ideal quantum circuit formula_7. If formula_8, the samples were collected from a noiseless quantum computer. If formula_9, then the samples could have been obtained via random guessing. This means that if a quantum computer did generate those samples, then the quantum computer is too noisy and thus has no chance of performing beyond-classical computations. Since it takes an exponential amount of resources to classically simulate a quantum circuit, there comes a point at which even the biggest supercomputer running the best classical algorithm for simulating quantum circuits cannot compute the XEB. Crossing this point is known as achieving quantum supremacy; after entering the quantum supremacy regime, the XEB can only be estimated.
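A minimal numerical sketch of the estimator follows (the sampler and the distribution here are synthetic stand-ins chosen for the illustration; in a real experiment the ideal probabilities formula_5 come from a classical simulation of the circuit formula_7):

import numpy as np

def xeb_fidelity(ideal_probs_of_samples, n_qubits):
    # F_XEB = 2^n * <P(x_i)>_k - 1, averaged over the k collected bitstrings
    return 2**n_qubits * np.mean(ideal_probs_of_samples) - 1

rng = np.random.default_rng(0)
n = 5
# toy "ideal" output distribution over the 2**n bitstrings
p = rng.exponential(size=2**n)
p /= p.sum()

samples = rng.choice(2**n, size=100_000, p=p)   # ideal (noiseless) sampler
print(xeb_fidelity(p[samples], n))              # expectation 2^n * sum(p^2) - 1, about 1 here

guesses = rng.integers(2**n, size=100_000)      # uniform random guessing
print(xeb_fidelity(p[guesses], n))              # about 0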
The Sycamore processor was the first to demonstrate quantum supremacy via XEB. Instances of random circuits with formula_10 and 20 cycles were run to obtain an XEB of formula_11. Generating the samples took 200 seconds on the quantum processor, whereas it would have taken 10,000 years on Summit at the time of the experiment. Improvements in classical algorithms have shortened the runtime to about a week on Sunway TaihuLight, thereby undermining Sycamore's claim to quantum supremacy. As of 2021, the latest demonstration of quantum supremacy is held by Zuchongzhi 2.1, with formula_12, 24 cycles, and an XEB of formula_13. It takes around 4 hours to generate samples on Zuchongzhi 2.1, a task which would take 10,000 years on Sunway.
| [
{
"math_id": 0,
"text": " k "
},
{
"math_id": 1,
"text": "\\{x_{1}, \\dots, x_{k}\\}"
},
{
"math_id": 2,
"text": "F_{\\rm XEB}"
},
{
"math_id": 3,
"text": "F_{\\rm XEB}= 2^{n} \\langle P(x_{i}) \\rangle_{k} - 1 = \\frac{2^{n}}{k} \\left(\\sum_{i=1}^{k}|\\langle 0^{n}|C|x_{i}\\rangle|^{2}\\right) - 1"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "P(x_{i}) "
},
{
"math_id": 6,
"text": "{x_{i}}"
},
{
"math_id": 7,
"text": "C"
},
{
"math_id": 8,
"text": "F_{XEB} = 1"
},
{
"math_id": 9,
"text": "F_{\\rm XEB} = 0"
},
{
"math_id": 10,
"text": "n = 53"
},
{
"math_id": 11,
"text": "0.0024"
},
{
"math_id": 12,
"text": "n = 60"
},
{
"math_id": 13,
"text": "0.000366"
}
]
| https://en.wikipedia.org/wiki?curid=69644269 |
696449 | Granular material | Conglomeration of discrete solid, macroscopic particles
A granular material is a conglomeration of discrete solid, macroscopic particles characterized by a loss of energy whenever the particles interact (the most common example would be friction when grains collide). The constituents that compose granular material are large enough such that they are not subject to thermal motion fluctuations. Thus, the lower size limit for grains in granular material is about 1 μm. On the upper size limit, the physics of granular materials may be applied to ice floes where the individual grains are icebergs and to asteroid belts of the Solar System with individual grains being asteroids.
Some examples of granular materials are snow, nuts, coal, sand, rice, coffee, corn flakes, salt, and bearing balls. Research into granular materials is thus directly applicable and goes back at least to Charles-Augustin de Coulomb, whose law of friction was originally stated for granular materials. Granular materials are commercially important in applications as diverse as pharmaceutical industry, agriculture, and energy production.
Powders are a special class of granular material due to their small particle size, which makes them more cohesive and more easily suspended in a gas.
The soldier/physicist Brigadier Ralph Alger Bagnold was an early pioneer of the physics of granular matter, whose book "The Physics of Blown Sand and Desert Dunes" remains an important reference to this day. According to material scientist Patrick Richard, "Granular materials are ubiquitous in nature and are the second-most manipulated material in industry (the first one is water)".
In some sense, granular materials do not constitute a single phase of matter but have characteristics reminiscent of solids, liquids, or gases depending on the average energy per grain. However, in each of these states, granular materials also exhibit properties that are unique.
Granular materials also exhibit a wide range of pattern-forming behaviors when excited (e.g. vibrated or allowed to flow). As such, granular materials under excitation can be thought of as an example of a complex system. They also display fluid-based instabilities and phenomena such as the Magnus effect.
Definitions.
Granular matter is a system composed of many macroscopic particles. Microscopic particles (atoms/molecules) are described (in classical mechanics) by all the degrees of freedom (DOF) of the system. Macroscopic particles, by contrast, are described only by the DOF of the motion of each particle as a rigid body, while each particle also contains many internal DOF. In an inelastic collision between two particles, energy of the rigid-body motion is transferred to the microscopic internal DOF, producing "dissipation": irreversible heat generation. The result is that, without external driving, all particles eventually stop moving. For macroscopic particles, thermal fluctuations are irrelevant.
When the matter is dilute and dynamic (driven), it is called a granular gas and the dissipation phenomenon dominates.
When the matter is dense and static, it is called a granular solid and the jamming phenomenon dominates.
When the density is intermediate, it is called a granular liquid.
Static behaviors.
Coulomb friction law.
Coulomb regarded internal forces between granular particles as a friction process, and proposed the friction law: the force of friction between solid particles is proportional to the normal pressure between them, and the static friction coefficient is greater than the kinetic friction coefficient. He studied the collapse of piles of sand and empirically found two critical angles: the maximal stable angle formula_0 and the minimum angle of repose formula_1. When the sandpile slope reaches the maximum stable angle, the sand particles on the surface of the pile begin to fall. The process stops when the surface inclination angle is equal to the angle of repose. The difference between these two angles, formula_2, is the Bagnold angle, which is a measure of the hysteresis of granular materials. This phenomenon is due to force chains: stress in a granular solid is not distributed uniformly but is conducted away along so-called force chains, which are networks of grains resting on one another. Between these chains are regions of low stress whose grains are shielded from the effects of the grains above by vaulting and arching. When the shear stress reaches a certain value, the force chains can break and the particles at the end of the chains on the surface begin to slide. Then, new force chains form until the shear stress is less than the critical value, and so the sandpile maintains a constant angle of repose.
Janssen Effect.
In 1895, H. A. Janssen discovered that in a vertical cylinder filled with particles, the pressure measured at the base of the cylinder does not depend on the height of the filling, unlike Newtonian fluids at rest which follow Stevin's law. Janssen suggested a simplified model with the following assumptions:
1) The vertical pressure, formula_3, is constant in the horizontal plane;
2) The horizontal pressure, formula_4, is proportional to the vertical pressure formula_3, where formula_5 is constant in space;
3) The wall friction static coefficient formula_6 sustains the vertical load at the contact with the wall;
4) The density of the material is constant over all depths.
The pressure in the granular material is then described by a different law, which accounts for saturation: formula_7 where formula_8 and formula_9 is the radius of the cylinder, and at the top of the silo formula_10.
The given pressure equation does not account for boundary conditions, such as the ratio of the particle size to the radius of the silo. Since the internal stress of the material cannot be measured, Janssen's speculations have not been verified by any direct experiment.
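A short numerical illustration of the saturation law (all parameter values below are arbitrary example choices; the saturation pressure used here is the steady state implied by the force balance of the model):

import numpy as np

# Janssen's law: p(z) = p_inf * (1 - exp(-z / lam)), with lam = R / (2 * mu * K)
R, mu, K = 0.5, 0.4, 0.6       # silo radius [m], wall friction, horizontal/vertical stress ratio
rho, g = 1500.0, 9.81          # bulk density [kg/m^3], gravitational acceleration [m/s^2]
lam = R / (2 * mu * K)
p_inf = rho * g * lam          # saturation pressure of the model

for z in (0.5, 1.0, 2.0, 5.0, 10.0):
    p = p_inf * (1 - np.exp(-z / lam))
    print(f"z = {z:5.1f} m: p = {p:8.0f} Pa   (hydrostatic would be {rho * g * z:8.0f} Pa)")

Unlike the hydrostatic profile, the granular pressure saturates once the depth exceeds a few multiples of the characteristic length lam.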
Rowe stress-dilatancy relation.
In the early 1960s, Rowe studied the effect of dilatancy on the shear strength measured in shear tests, and proposed a relation between them.
The mechanical properties of an assembly of mono-dispersed particles in 2D can be analyzed based on the representative elementary volume, with typical lengths formula_11 in the vertical and horizontal directions respectively. The geometric characteristics of the system are described by formula_12 and by the variable formula_13, which describes the angle at which the contact points begin the process of sliding. Denote by formula_14 the vertical direction, which is the direction of the major principal stress, and by formula_15 the horizontal direction, which is the direction of the minor principal stress.
Then stress on the boundary can be expressed as the concentrated force borne by individual particles. Under biaxial loading with uniform stress formula_16 and therefore formula_17.
At equilibrium state:
formula_18
where formula_19, the friction angle, is the angle between the contact force and the contact normal direction.
formula_20 describes the angle such that, if the tangential force falls within the friction cone, the particles remain steady. It is determined by the coefficient of friction formula_21, so formula_22. Once stress is applied to the system, formula_19 gradually increases while formula_23 remains unchanged. When formula_24, the particles begin sliding, resulting in a change of the structure of the system and the creation of new force chains. formula_25, the horizontal and vertical displacements respectively, satisfy:
formula_26
Granular gases.
If the granular material is driven harder such that contacts between the grains become highly infrequent, the material enters a gaseous state. Correspondingly, one can define a granular temperature equal to the root mean square of grain velocity fluctuations that is analogous to thermodynamic temperature.
Unlike conventional gases, granular materials will tend to cluster and clump due to the dissipative nature of the collisions between grains. This clustering has some interesting consequences. For example, if a partially partitioned box of granular materials is vigorously shaken, then grains will over time tend to collect in one of the partitions rather than spread evenly into both partitions as would happen in a conventional gas. This effect, known as the granular Maxwell's demon, does not violate any thermodynamic principles since energy is constantly being lost from the system in the process.
Ulam Model.
Consider formula_27 particles, particle formula_28 having energy formula_29. At some constant rate per unit time, randomly choose two particles formula_30 with energies formula_31 and compute the sum formula_32. Now, randomly distribute the total energy between the two particles: choose randomly formula_33 so that the first particle, after the collision, has energy formula_34, and the second formula_35.
The stochastic evolution equation: formula_36 where formula_37 is the collision rate, formula_38 is randomly picked from formula_39 (uniform distribution), and "j" is an index also randomly chosen from a uniform distribution. The average energy per particle: formula_40
The second moment:
formula_41
Now the time derivative of the second moment:
formula_42
In steady state:
formula_43
Solving the differential equation for the second moment:
formula_44
However, instead of characterizing the moments, we can analytically solve for the energy distribution using the moment generating function. Consider the Laplace transform: formula_45.
where formula_46 and formula_47
The "n"th derivative:
formula_48
now:
formula_49
formula_50
formula_51
Solving for formula_52 with the change of variables formula_53:
formula_54
We will show that formula_55 (the Boltzmann distribution) by taking its Laplace transform and calculating the generating function:
formula_56
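The conservation of the mean, the relaxation of the second moment toward its steady-state value, and the limiting exponential distribution are easy to check numerically. A direct Monte Carlo sketch of the pair-exchange rule follows (the population size, number of collisions, and seed are arbitrary choices for the illustration):

import numpy as np

rng = np.random.default_rng(1)
N, collisions = 10_000, 500_000
eps = np.ones(N)                        # initial condition: <eps> = 1, <eps^2> = 1

for _ in range(collisions):
    i, j = rng.choice(N, size=2, replace=False)   # pick two distinct particles
    z = rng.random()                              # uniform split of the pooled energy
    total = eps[i] + eps[j]
    eps[i], eps[j] = z * total, (1 - z) * total

print(eps.mean())       # conserved: stays at 1
print((eps**2).mean())  # relaxes toward 2 * <eps>^2 = 2
# a histogram of eps at this point is close to rho(e) = exp(-e/T)/T with T = <eps> = 1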
Jamming transition.
Granular systems are known to exhibit jamming and undergo a jamming transition which is thought of as a thermodynamic phase transition to a jammed state.
The transition is from a fluid-like phase to a solid-like phase, and it is controlled by the temperature formula_57, the volume fraction formula_58, and the shear stress formula_59. The normal phase diagram of the glass transition is in the formula_60 plane, divided into a jammed-state region and an unjammed liquid state by a transition line. The phase diagram for granular matter lies in the formula_61 plane, and the critical stress curve formula_62 divides it into jammed and unjammed regions, which correspond to granular solids and liquids respectively. For an isotropically jammed granular system, when formula_58 is reduced around a certain point formula_63, the bulk and shear moduli approach 0. The formula_63 point corresponds to the critical volume fraction formula_64. Define the distance to the critical volume fraction as formula_65. The behavior of granular systems near the formula_63 point was empirically found to resemble a second-order transition: the bulk modulus shows power-law scaling with formula_66 and there are some divergent characteristic lengths as formula_66 approaches zero. While formula_64 is constant for an infinite system, for a finite system boundary effects result in a distribution of formula_64 over some range.
The Lubachevsky-Stillinger algorithm of jamming allows one to produce simulated jammed granular configurations.
Pattern formation.
Excited granular matter is a rich pattern-forming system. Some of the pattern-forming behaviours seen in granular materials are:
It has been possible to reproduce some of these pattern-forming behaviours in computer simulations.
There are two main computational approaches to such simulations, time-stepped and event-driven, the former being the most efficient for a higher density of the material and the motions of a lower intensity, and the latter for a lower density of the material and the motions of a higher intensity.
Acoustic effects.
Some beach sands, such as those of the aptly named Squeaky Beach, exhibit squeaking when walked upon. Some desert dunes are known to exhibit booming during avalanching or when their surface is otherwise disturbed. Granular materials discharged from silos produce loud acoustic emissions in a process known as silo honking.
Granulation.
Granulation is the act or process in which primary powder particles are made to adhere to form larger, multiparticle entities called "granules."
Crystallization.
When water or other liquids are cooled sufficiently slowly, randomly positioned molecules rearrange and solid crystals emerge and grow. A similar crystallization process may occur in randomly packed granular materials. Unlike removing energy by cooling, crystallization in granular material is achieved by external driving. Ordering, or crystallization, of granular materials has been observed to occur in periodically sheared as well as vibrated granular matter. In contrast to molecular systems, the positions of the individual particles can be tracked in the experiment. Computer simulations for a system of spherical grains reveal that homogeneous crystallization emerges at a volume fraction formula_67. The computer simulations identify the minimal ingredients necessary for granular crystallization. In particular, gravity and friction are not necessary.
Computational modeling of granular materials.
Several methods are available for the modeling of granular materials. Most of these are statistical methods, by which various statistical properties, derived from either point data or an image, are extracted and used to generate stochastic models of the granular medium. A recent and comprehensive review of such methods is available in Tahmasebi and others (2017). Another alternative for building a pack of granular particles, which has recently been presented, is based on the level-set algorithm, by which the real shape of the particle can be captured and reproduced through the extracted statistics for particles' morphology.
| [
{
"math_id": 0,
"text": "\\theta_m"
},
{
"math_id": 1,
"text": "\\theta_r"
},
{
"math_id": 2,
"text": "\\Delta \\theta=\\theta_m - \\theta_r"
},
{
"math_id": 3,
"text": "\\sigma_{zz}"
},
{
"math_id": 4,
"text": "\\sigma_{rr}"
},
{
"math_id": 5,
"text": "K=\\frac{\\sigma_{rr}}{\\sigma_{zz}}"
},
{
"math_id": 6,
"text": "\\mu = \\frac{\\sigma_{rz}}{\\sigma_{rr}}"
},
{
"math_id": 7,
"text": "p(z)=p_\\infin [1-\\exp(-z/\\lambda)]"
},
{
"math_id": 8,
"text": "\\lambda = \\frac{R}{2\\mu K}"
},
{
"math_id": 9,
"text": "R"
},
{
"math_id": 10,
"text": "z=0"
},
{
"math_id": 11,
"text": "\\ell_1, \\ell_2"
},
{
"math_id": 12,
"text": "\\alpha=\\arctan(\\frac{\\ell_1}{\\ell_2})"
},
{
"math_id": 13,
"text": "\\beta"
},
{
"math_id": 14,
"text": "\\sigma_{11}"
},
{
"math_id": 15,
"text": "\\sigma_{22}"
},
{
"math_id": 16,
"text": "\\sigma_{12}=\\sigma_{21}=0"
},
{
"math_id": 17,
"text": "F_{12}=F_{21}=0"
},
{
"math_id": 18,
"text": "\\frac{F_{11}}{F_{22}}=\\frac{\\sigma_{11}\\ell_2}{\\sigma_{22}\\ell_1}=\\tan(\\theta +\\beta)"
},
{
"math_id": 19,
"text": "\\theta"
},
{
"math_id": 20,
"text": "\\theta_{\\mu}"
},
{
"math_id": 21,
"text": "\\mu=tg\\phi_u"
},
{
"math_id": 22,
"text": "\\theta \\leq \\theta_\\mu"
},
{
"math_id": 23,
"text": "\\alpha,\\beta"
},
{
"math_id": 24,
"text": "\\theta \\geq \\theta_{\\mu}"
},
{
"math_id": 25,
"text": "\\Delta_1,\\Delta_2"
},
{
"math_id": 26,
"text": "\\frac{\\dot{\\Delta_2}}{\\dot{\\Delta_1}}=\\frac{\\dot{\\varepsilon_{22}}\\ell_2}{\\dot{\\varepsilon_{11}}\\ell_1}=-\\tan\\beta"
},
{
"math_id": 27,
"text": "N"
},
{
"math_id": 28,
"text": "i"
},
{
"math_id": 29,
"text": "\\varepsilon_{i}"
},
{
"math_id": 30,
"text": "i, j"
},
{
"math_id": 31,
"text": "\\varepsilon_{i},\\varepsilon_{j}"
},
{
"math_id": 32,
"text": "\\varepsilon_{i}+\\varepsilon_{j}"
},
{
"math_id": 33,
"text": "z\\in\\left[0,1\\right]"
},
{
"math_id": 34,
"text": "z\\left(\\varepsilon_{i}+\\varepsilon_{j}\\right)"
},
{
"math_id": 35,
"text": "\\left(1-z\\right)\\left(\\varepsilon_{i}+\\varepsilon_{j}\\right)"
},
{
"math_id": 36,
"text": "\\varepsilon_{i}(t+dt)=\\begin{cases}\n\\varepsilon_{i}(t) & probability:\\,1-\\Gamma dt\\\\\nz\\left(\\varepsilon_{i}(t)+\\varepsilon_{j}(t)\\right) & probability:\\,\\Gamma dt\n\\end{cases}"
},
{
"math_id": 37,
"text": "\\Gamma"
},
{
"math_id": 38,
"text": "z"
},
{
"math_id": 39,
"text": "\\left[0,1\\right]"
},
{
"math_id": 40,
"text": "\n\\begin{align}\n\\left\\langle \\varepsilon(t+dt)\\right\\rangle \n& =\\left(1-\\Gamma dt\\right)\\left\\langle \\varepsilon(t)\\right\\rangle +\\Gamma dt\\cdot\\left\\langle z\\right\\rangle \\left(\\left\\langle \\varepsilon_{i}\\right\\rangle +\\left\\langle \\varepsilon_{j}\\right\\rangle \\right)\\\\\n& =\\left(1-\\Gamma dt\\right)\\left\\langle \\varepsilon(t)\\right\\rangle +\\Gamma dt\\cdot\\dfrac{1}{2}\\left(\\left\\langle \\varepsilon(t)\\right\\rangle +\\left\\langle \\varepsilon(t)\\right\\rangle \\right)\\\\\n& =\\left\\langle \\varepsilon(t)\\right\\rangle\n\\end{align} "
},
{
"math_id": 41,
"text": "\\begin{align}\n\\left\\langle \\varepsilon^{2}(t+dt)\\right\\rangle \t\n& =\\left(1-\\Gamma dt\\right)\\left\\langle \\varepsilon^{2}(t)\\right\\rangle +\\Gamma dt\\cdot\\left\\langle z^{2}\\right\\rangle \\left\\langle \\varepsilon_{i}^{2}+2\\varepsilon_{i}\\varepsilon_{j}+\\varepsilon_{j}^{2}\\right\\rangle \\\\\n\t& =\\left(1-\\Gamma dt\\right)\\left\\langle \\varepsilon^{2}(t)\\right\\rangle +\\Gamma dt\\cdot\\dfrac{1}{3}\\left(2\\left\\langle \\varepsilon^{2}(t)\\right\\rangle +2\\left\\langle \\varepsilon(t)\\right\\rangle ^{2}\\right)\n\\end{align} "
},
{
"math_id": 42,
"text": "\\dfrac{d\\left\\langle \\varepsilon^{2}\\right\\rangle }{dt}=lim_{dt\\rightarrow0}\\dfrac{\\left\\langle \\varepsilon^{2}(t+dt)\\right\\rangle -\\left\\langle \\varepsilon^{2}(t)\\right\\rangle }{dt}=-\\dfrac{\\Gamma}{3}\\left\\langle \\varepsilon^{2}\\right\\rangle +\\dfrac{2\\Gamma}{3}\\left\\langle \\varepsilon\\right\\rangle ^{2} "
},
{
"math_id": 43,
"text": "\\dfrac{d\\left\\langle \\varepsilon^{2}\\right\\rangle }{dt}=0\\Rightarrow\\left\\langle \\varepsilon^{2}\\right\\rangle =2\\left\\langle \\varepsilon\\right\\rangle ^{2} "
},
{
"math_id": 44,
"text": "\\left\\langle \\varepsilon^{2}\\right\\rangle -2\\left\\langle \\varepsilon\\right\\rangle ^{2}=\\left(\\left\\langle \\varepsilon^{2}(0)\\right\\rangle -2\\left\\langle \\varepsilon(0)\\right\\rangle ^{2}\\right)e^{-\\frac{\\Gamma}{3}t} "
},
{
"math_id": 45,
"text": "g(\\lambda)=\\left\\langle e^{-\\lambda\\varepsilon}\\right\\rangle =\\int_{0}^{\\infty}e^{-\\lambda\\varepsilon}\\rho(\\varepsilon)d\\varepsilon "
},
{
"math_id": 46,
"text": "g(0)=1 "
},
{
"math_id": 47,
"text": "\\dfrac{dg}{d\\lambda}=-\\int_{0}^{\\infty}\\varepsilon e^{-\\lambda\\varepsilon}\\rho(\\varepsilon)d\\varepsilon=-\\left\\langle \\varepsilon\\right\\rangle "
},
{
"math_id": 48,
"text": "\\dfrac{d^{n}g}{d\\lambda^{n}}=\\left(-1\\right)^{n}\\int_{0}^{\\infty}\\varepsilon^{n}e^{-\\lambda\\varepsilon}\\rho(\\varepsilon)d\\varepsilon=\\left\\langle \\varepsilon^{n}\\right\\rangle "
},
{
"math_id": 49,
"text": "e^{-\\lambda\\varepsilon_{i}(t+dt)}=\\begin{cases}\ne^{-\\lambda\\varepsilon_{i}(t)} & 1-\\Gamma t\\\\\ne^{-\\lambda z\\left(\\varepsilon_{i}(t)+\\varepsilon_{j}(t)\\right)} & \\Gamma t\n\\end{cases} "
},
{
"math_id": 50,
"text": "\\left\\langle e^{-\\lambda\\varepsilon\\left(t+dt\\right)}\\right\\rangle =\\left(1-\\Gamma dt\\right)\\left\\langle e^{-\\lambda\\varepsilon_{i}(t)}\\right\\rangle +\\Gamma dt\\left\\langle e^{-\\lambda z\\left(\\varepsilon_{i}(t)+\\varepsilon_{j}(t)\\right)}\\right\\rangle "
},
{
"math_id": 51,
"text": "g\\left(\\lambda,t+dt\\right)=\\left(1-\\Gamma dt\\right)g\\left(\\lambda,t\\right)+\\Gamma dt\\int_{0}^{1}\\underset{=g^{2}(\\lambda z,t)}{\\underbrace{\\left\\langle e^{-\\lambda z\\varepsilon_{i}(t)}\\right\\rangle \\left\\langle e^{-\\lambda z\\varepsilon_{j}(t)}\\right\\rangle }}dz "
},
{
"math_id": 52,
"text": "g(\\lambda) "
},
{
"math_id": 53,
"text": "\\delta=\\lambda z "
},
{
"math_id": 54,
"text": "\\lambda g(\\lambda)=\\int_{0}^{\\lambda}g^{2}(\\delta)d\\delta\\Rightarrow\\lambda g'(\\lambda)+g(\\lambda)=g^{2}(\\lambda)\\Rightarrow g(\\lambda)=\\dfrac{1}{\\lambda T+1} "
},
{
"math_id": 55,
"text": "\\rho(\\varepsilon)=\\dfrac{1}{T}e^{-\\frac{\\varepsilon}{T}}\n "
},
{
"math_id": 56,
"text": "\\int_{0}^{\\infty}\\dfrac{1}{T}e^{-\\frac{\\varepsilon}{T}}\\cdot e^{-\\lambda\\varepsilon}d\\varepsilon=\\dfrac{1}{T}\\int_{0}^{\\infty}e^{-\\left(\\lambda+\\frac{1}{T}\\right)\\varepsilon}d\\varepsilon=-\\dfrac{1}{T\\left(\\lambda+\\frac{1}{T}\\right)}e^{-\\left(\\lambda+\\frac{1}{T}\\right)\\varepsilon}|_{0}^{\\infty}=\\dfrac{1}{\\lambda T+1}=g(\\lambda) "
},
{
"math_id": 57,
"text": "T"
},
{
"math_id": 58,
"text": "\\phi"
},
{
"math_id": 59,
"text": "\\Sigma"
},
{
"math_id": 60,
"text": "\\phi ^{-1}-T"
},
{
"math_id": 61,
"text": "\\phi^{-1}-\\Sigma"
},
{
"math_id": 62,
"text": "\\Sigma(\\phi)"
},
{
"math_id": 63,
"text": "J"
},
{
"math_id": 64,
"text": "\\phi_c"
},
{
"math_id": 65,
"text": "\\Delta\\phi\\equiv\\phi-\\phi_c"
},
{
"math_id": 66,
"text": "\\Delta\\phi"
},
{
"math_id": 67,
"text": "\\phi = 0.646 \\pm 0.001"
}
]
| https://en.wikipedia.org/wiki?curid=696449 |
6964629 | Interaction information | The interaction information is a generalization of the mutual information for more than two variables.
There are many names for interaction information, including "amount of information", "information correlation", "co-information", and simply "mutual information". Interaction information expresses the amount of information (redundancy or synergy) bound up in a set of variables, "beyond" that which is present in any subset of those variables. Unlike the mutual information, the interaction information can be either positive or negative. These functions, their negativity and minima have a direct interpretation in algebraic topology.
Definition.
The conditional mutual information can be used to inductively define the interaction information for any finite number of variables as follows:
formula_0
where
formula_1
Some authors define the interaction information differently, by swapping the two terms being subtracted in the preceding equation. This has the effect of reversing the sign for an odd number of variables.
For three variables formula_2, the interaction information formula_3 is given by
formula_4
where formula_5 is the mutual information between variables formula_6 and formula_7, and formula_8 is the conditional mutual information between variables formula_6 and formula_7 given formula_9. The interaction information is symmetric, so it does not matter which variable is conditioned on. This is easy to see when the interaction information is written in terms of entropy and joint entropy, as follows:
formula_10
In general, for the set of variables formula_11, the interaction information can be written in the following form (compare with Kirkwood approximation):
formula_12
For three variables, the interaction information measures the influence of a variable formula_9 on the amount of information shared between formula_6 and formula_7. Because the term formula_8 can be larger than formula_5, the interaction information can be negative as well as positive. This will happen, for example, when formula_6 and formula_7 are independent but not conditionally independent given formula_9. Positive interaction information indicates that variable formula_9 inhibits (i.e., "accounts for" or "explains" some of) the correlation between formula_6 and formula_7, whereas negative interaction information indicates that variable formula_9 facilitates or enhances the correlation.
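For discrete variables with a known joint distribution, the entropy form above can be evaluated directly. The following Python sketch (the function names are illustrative) computes formula_3 for the standard synergistic example in which formula_6 and formula_7 are independent fair bits and formula_9 is their XOR, a case of negative interaction information since formula_9 enhances the dependence between formula_6 and formula_7:

from itertools import product
from collections import Counter
from math import log2

def entropy(dist):
    # entropy in bits of a distribution given as {outcome: probability}
    return -sum(p * log2(p) for p in dist.values() if p > 0)

def marginal(joint, idxs):
    m = Counter()
    for outcome, p in joint.items():
        m[tuple(outcome[i] for i in idxs)] += p
    return m

def interaction_information(joint):
    # I(X;Y;Z) = H(X)+H(Y)+H(Z) - H(X,Y) - H(X,Z) - H(Y,Z) + H(X,Y,Z)
    H = lambda idxs: entropy(marginal(joint, idxs))
    return (H((0,)) + H((1,)) + H((2,))
            - H((0, 1)) - H((0, 2)) - H((1, 2))
            + H((0, 1, 2)))

# X, Y independent fair bits, Z = X XOR Y
joint = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}
print(interaction_information(joint))   # -1.0 bit: Z enhances the X-Y correlation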
Properties.
Interaction information is bounded. In the three variable case, it is bounded by
formula_13
If three variables form a Markov chain formula_14, then formula_15, but formula_16. Therefore
formula_17
Examples.
Positive interaction information.
Positive interaction information seems much more natural than negative interaction information in the sense that such "explanatory" effects are typical of common-cause structures. For example, clouds cause rain and also block the sun; therefore, the correlation between rain and darkness is partly accounted for by the presence of clouds, formula_18. The result is positive interaction information formula_19.
Negative interaction information.
A car's engine can fail to start due to either a dead battery or a blocked fuel pump. Ordinarily, we assume that battery death and fuel pump blockage are independent events, formula_20. But knowing that the car fails to start, if an inspection shows the battery to be in good health, we can conclude that the fuel pump must be blocked. Therefore formula_21, and the result is negative interaction information.
Difficulty of interpretation.
The possible negativity of interaction information can be the source of some confusion. Many authors have taken zero interaction information as a sign that three or more random variables do not interact, but this interpretation is wrong.
To see how difficult interpretation can be, consider a set of eight independent binary variables formula_22. Agglomerate these variables as follows:
formula_23
Because the formula_24's overlap each other (are redundant) on the three binary variables formula_25, we would expect the interaction information formula_26 to equal formula_27 bits, which it does. However, consider now the agglomerated variables
formula_28
These are the same variables as before with the addition of formula_29. However, formula_30 in this case is actually equal to formula_31 bit, indicating less redundancy. This is correct in the sense that
formula_32
but it remains difficult to interpret.
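Both values can be verified numerically from the alternating-sum formula above by enumerating the 256 equally likely outcomes of the eight bits (a small illustrative script):

from itertools import product, combinations
from collections import Counter
from math import log2

def entropy(samples):
    # entropy in bits of the empirical distribution of `samples`
    n = len(samples)
    return -sum(c / n * log2(c / n) for c in Counter(samples).values())

def co_information(columns):
    # I(V) = sum over nonempty subsets T of (-1)**(|T| - 1) * H(T)
    total = 0.0
    for k in range(1, len(columns) + 1):
        for T in combinations(range(len(columns)), k):
            total += (-1) ** (k - 1) * entropy(list(zip(*(columns[i] for i in T))))
    return total

outcomes = list(product((0, 1), repeat=8))        # eight independent fair bits
agg = lambda idxs: [tuple(x[i - 1] for i in idxs) for x in outcomes]

Y1, Y2, Y3, Y4 = agg(range(1, 8)), agg(range(4, 8)), agg(range(5, 9)), agg((7, 8))
print(co_information([Y1, Y2, Y3]))       # 3.0 bits
print(co_information([Y1, Y2, Y3, Y4]))   # 1.0 bit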
| [
{
"math_id": 0,
"text": "I(X_1;\\ldots;X_{n+1}) = I(X_1;\\ldots;X_n) - I(X_1;\\ldots;X_n\\mid X_{n+1}),"
},
{
"math_id": 1,
"text": "I(X_1;\\ldots;X_n \\mid X_{n+1}) = \\mathbb E_{X_{n+1}}\\big(I(X_1;\\ldots;X_n) \\mid X_{n+1}\\big)."
},
{
"math_id": 2,
"text": "\\{X,Y,Z\\}"
},
{
"math_id": 3,
"text": "I(X;Y;Z)"
},
{
"math_id": 4,
"text": "\nI(X;Y;Z) = I(X;Y)-I(X;Y \\mid Z)\n"
},
{
"math_id": 5,
"text": "I(X;Y)"
},
{
"math_id": 6,
"text": "X"
},
{
"math_id": 7,
"text": "Y"
},
{
"math_id": 8,
"text": "I(X;Y \\mid Z)"
},
{
"math_id": 9,
"text": "Z"
},
{
"math_id": 10,
"text": "\n\\begin{alignat}{3}\nI(X;Y;Z) &= &&\\; \\bigl( H(X) + H(Y) + H(Z) \\bigr) \\\\\n & &&- \\bigl( H(X,Y) + H(X,Z) + H(Y,Z) \\bigr) \\\\\n & &&+ H(X,Y,Z)\n\\end{alignat}\n"
},
{
"math_id": 11,
"text": "\\mathcal{V}=\\{X_{1},X_{2},\\ldots ,X_{n}\\}"
},
{
"math_id": 12,
"text": "\nI(\\mathcal{V}) = \\sum_{\\mathcal{T}\\subseteq \\mathcal{V}}(-1)^{\\left\\vert\\mathcal{T}\\right\\vert-1}H(\\mathcal{T}) \n"
},
{
"math_id": 13,
"text": "-\\min \\{ I(X;Y \\mid Z), I(Y;Z \\mid X), I(X;Z \\mid Y) \\} \\leq I(X;Y;Z) \\leq \\min \\{ I(X;Y), I(Y;Z), I(X;Z) \\}"
},
{
"math_id": 14,
"text": "X\\to Y \\to Z"
},
{
"math_id": 15,
"text": "I(X;Z \\mid Y)=0"
},
{
"math_id": 16,
"text": "I(X;Z)\\ge 0"
},
{
"math_id": 17,
"text": "I(X;Y;Z) = I(X;Z) - I(X;Z \\mid Y) = I(X;Z) \\ge 0."
},
{
"math_id": 18,
"text": "I(\\text{rain};\\text{dark}\\mid\\text{cloud}) < I(\\text{rain};\\text{dark})"
},
{
"math_id": 19,
"text": "I(\\text{rain};\\text{dark};\\text{cloud})"
},
{
"math_id": 20,
"text": "I(\\text{blocked fuel}; \\text{dead battery}) = 0"
},
{
"math_id": 21,
"text": "I(\\text{blocked fuel}; \\text{dead battery} \\mid \\text{engine fails}) > 0"
},
{
"math_id": 22,
"text": "\\{X_{1},X_{2},X_{3},X_{4},X_{5},X_{6},X_{7},X_{8}\\}"
},
{
"math_id": 23,
"text": "\n\\begin{align}\nY_{1} &=\\{X_{1},X_{2},X_{3},X_{4},X_{5},X_{6},X_{7}\\} \\\\\nY_{2} &=\\{X_{4},X_{5},X_{6},X_{7}\\} \\\\\nY_{3} &=\\{X_{5},X_{6},X_{7},X_{8}\\} \n\\end{align}\n"
},
{
"math_id": 24,
"text": "Y_{i}"
},
{
"math_id": 25,
"text": "\\{X_{5},X_{6},X_{7}\\}"
},
{
"math_id": 26,
"text": "I(Y_{1};Y_{2};Y_{3})"
},
{
"math_id": 27,
"text": "3"
},
{
"math_id": 28,
"text": "\n\\begin{align}\nY_{1} &=\\{X_{1},X_{2},X_{3},X_{4},X_{5},X_{6},X_{7}\\} \\\\\nY_{2} &=\\{X_{4},X_{5},X_{6},X_{7}\\} \\\\\nY_{3} &=\\{X_{5},X_{6},X_{7},X_{8}\\} \\\\\nY_{4} &=\\{X_{7},X_{8}\\}\n\\end{align}\n"
},
{
"math_id": 29,
"text": "Y_{4}=\\{X_{7},X_{8}\\}"
},
{
"math_id": 30,
"text": "I(Y_{1};Y_{2};Y_{3};Y_{4})"
},
{
"math_id": 31,
"text": "+1"
},
{
"math_id": 32,
"text": "\n\\begin{align}\nI(Y_{1};Y_{2};Y_{3};Y_{4}) &= I(Y_{1};Y_{2};Y_{3})-I(Y_{1};Y_{2};Y_{3}|Y_{4}) \\\\\n&= 3-2 \\\\\n&= 1\n\\end{align}\n"
}
]
| https://en.wikipedia.org/wiki?curid=6964629 |
696472 | Axiom of constructibility | Possible axiom for set theory in mathematics
The axiom of constructibility is a possible axiom for set theory in mathematics that asserts that every set is constructible. The axiom is usually written as "V" = "L". The axiom, first investigated by Kurt Gödel, is inconsistent with the proposition that zero sharp exists and with stronger large cardinal axioms (see list of large cardinal properties). Generalizations of this axiom are explored in inner model theory.
Implications.
The axiom of constructibility implies the axiom of choice (AC), given Zermelo–Fraenkel set theory without the axiom of choice (ZF). It also settles many natural mathematical questions that are independent of Zermelo–Fraenkel set theory with the axiom of choice (ZFC); for example, the axiom of constructibility implies the generalized continuum hypothesis, the negation of Suslin's hypothesis, and the existence of an analytical (in fact, formula_0) non-measurable set of real numbers, all of which are independent of ZFC.
The axiom of constructibility implies the non-existence of those large cardinals with consistency strength greater or equal to 0#, which includes some "relatively small" large cardinals. For example, no cardinal can be ω1-Erdős in "L". While "L" does contain the initial ordinals of those large cardinals (when they exist in a supermodel of "L"), and they are still initial ordinals in "L", it excludes the auxiliary structures (e.g. measures) that endow those cardinals with their large cardinal properties.
Although the axiom of constructibility does resolve many set-theoretic questions, it is not typically accepted as an axiom for set theory in the same way as the ZFC axioms. Among set theorists of a realist bent, who believe that the axiom of constructibility is either true or false, most believe that it is false. This is in part because it seems unnecessarily "restrictive", as it allows only certain subsets of a given set (for example, formula_1 can't exist), with no clear reason to believe that these are all of them. In part it is because the axiom is contradicted by sufficiently strong large cardinal axioms. This point of view is especially associated with the Cabal, or the "California school" as Saharon Shelah would have it.
In arithmetic.
Especially from the 1950s to the 1970s, there have been some investigations into formulating an analogue of the axiom of constructibility for subsystems of second-order arithmetic. A few results stand out in the study of such analogues:
Significance.
The major significance of the axiom of constructibility is in Kurt Gödel's proof of the relative consistency of the axiom of choice and the generalized continuum hypothesis to Von Neumann–Bernays–Gödel set theory. (The proof carries over to Zermelo–Fraenkel set theory, which has become more prevalent in recent years.)
Namely, Gödel proved that formula_10 is relatively consistent (i.e. if formula_11 can prove a contradiction, then so can formula_12), and that in formula_12
formula_13
thereby establishing that AC and GCH are also relatively consistent.
Gödel's proof was complemented in later years by Paul Cohen's result that both AC and GCH are "independent", i.e. that the negations of these axioms (formula_14 and formula_15) are also relatively consistent with ZF set theory.
Statements true in "L".
Here is a list of propositions that hold in the constructible universe (denoted by "L"):
Accepting the axiom of constructibility (which asserts that every set is constructible), these propositions also hold in the von Neumann universe, resolving many propositions in set theory and some interesting questions in analysis.
| [
{
"math_id": 0,
"text": "\\Delta^1_2"
},
{
"math_id": 1,
"text": "0^\\sharp\\subseteq \\omega"
},
{
"math_id": 2,
"text": "\\Sigma_2^1"
},
{
"math_id": 3,
"text": "\\textrm{Constr}(X)"
},
{
"math_id": 4,
"text": "\\mathcal P(\\omega)\\vDash\\textrm{Constr}(X)"
},
{
"math_id": 5,
"text": "X\\in\\mathcal P(\\omega)\\cap L"
},
{
"math_id": 6,
"text": "X"
},
{
"math_id": 7,
"text": "\\Pi_3^1"
},
{
"math_id": 8,
"text": "M\\vDash\\textrm{V=L}"
},
{
"math_id": 9,
"text": "M\\cap\\mathcal P(\\omega)\\vDash\\textrm{Analytical}\\;\\textrm{form}\\;\\textrm{of}\\;\\textrm{V=L}"
},
{
"math_id": 10,
"text": "V=L"
},
{
"math_id": 11,
"text": "ZFC + (V=L)"
},
{
"math_id": 12,
"text": "ZF"
},
{
"math_id": 13,
"text": "V=L\\implies AC\\land GCH,"
},
{
"math_id": 14,
"text": "\\lnot AC"
},
{
"math_id": 15,
"text": "\\lnot GCH"
},
{
"math_id": 16,
"text": "F:\\textrm{Ord}\\to\\textrm{V}"
}
]
| https://en.wikipedia.org/wiki?curid=696472 |
6965259 | Extended finite-state machine | In a conventional finite state machine, the transition is associated with a set of input Boolean conditions and a set of output Boolean functions. In an extended finite state machine (EFSM) model, the transition can be expressed by an “if statement” consisting of a set of trigger conditions. If trigger conditions are all satisfied, the transition is fired, bringing the machine from the current state to the next state and performing the specified data operations.
Definition.
An EFSM is defined as a 7-tuple formula_0 where
"I" is a set of input symbols,
"O" is a set of output symbols,
"S" is a set of symbolic states,
"D" is an "n"-dimensional linear space formula_1,
"F" is a set of enabling functions formula_2,
"U" is a set of update functions formula_3,
"T" is a transition relation, formula_4.
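As an illustration of the definition (the machine, the names, and the encoding below are invented for this sketch, not taken from the literature), the following Python fragment realizes a small EFSM in which each transition carries an enabling function acting as its trigger condition and an update function performing its data operation:

# transitions: (state, input symbol) -> list of
#   (enabling function f: D -> {0,1}, update function u: D -> D, next state, output)
transitions = {
    ("idle", "start"): [
        (lambda d: True,        lambda d: {"x": 0},          "count", None),
    ],
    ("count", "tick"): [
        (lambda d: d["x"] < 3,  lambda d: {"x": d["x"] + 1}, "count", None),
        (lambda d: d["x"] >= 3, lambda d: d,                 "idle",  "done"),
    ],
}

def step(state, data, symbol):
    # fire the first transition whose trigger condition is satisfied
    for enable, update, next_state, output in transitions.get((state, symbol), []):
        if enable(data):
            return next_state, update(data), output
    return state, data, None   # no enabled transition: stay put

state, data = "idle", {}
for symbol in ["start", "tick", "tick", "tick", "tick"]:
    state, data, output = step(state, data, symbol)
    print(symbol, "->", state, data, output)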
Structure.
EFSM Architecture: An EFSM model consists of the following three major combinational blocks (and a few registers).
The cycle behavior of an EFSM can be divided into three steps: | [
{
"math_id": 0,
"text": "M=(I,O,S,D,F,U,T)"
},
{
"math_id": 1,
"text": "D_1 \\times \\ldots \\times D_n"
},
{
"math_id": 2,
"text": "f_i : D \\rightarrow \\{0,1\\}"
},
{
"math_id": 3,
"text": "u_i : D \\rightarrow D"
},
{
"math_id": 4,
"text": "T : S \\times F \\times I \\rightarrow S \\times U \\times O"
}
]
| https://en.wikipedia.org/wiki?curid=6965259 |
696568 | L-theory | In mathematics, algebraic "L"-theory is the "K"-theory of quadratic forms; the term was coined by C. T. C. Wall,
with "L" being used as the letter after "K". Algebraic "L"-theory, also known as "Hermitian "K"-theory",
is important in surgery theory.
Definition.
One can define "L"-groups for any ring with involution "R": the quadratic "L"-groups formula_0 (Wall) and the symmetric "L"-groups formula_1 (Mishchenko, Ranicki).
Even dimension.
The even-dimensional "L"-groups formula_2 are defined as the Witt groups of ε-quadratic forms over the ring "R" with formula_3. More precisely,
formula_2
is the abelian group of equivalence classes formula_4 of non-degenerate ε-quadratic forms formula_5 over R, where the underlying R-modules F are finitely generated free. The equivalence relation is given by stabilization with respect to hyperbolic ε-quadratic forms:
formula_6.
The addition in formula_2 is defined by
formula_7
The zero element is represented by formula_8 for any formula_9. The inverse of formula_4 is formula_10.
Odd dimension.
Defining odd-dimensional "L"-groups is more complicated; further details and the definition of the odd-dimensional "L"-groups can be found in the references mentioned below.
Examples and applications.
The "L"-groups of a group formula_11 are the "L"-groups formula_12 of the group ring formula_13. In the applications to topology formula_11 is the fundamental group formula_14 of a space formula_15. The quadratic "L"-groups formula_12 play a central role in the surgery classification of the homotopy types of formula_16-dimensional manifolds of dimension formula_17, and in the formulation of the Novikov conjecture.
The distinction between symmetric "L"-groups and quadratic "L"-groups, indicated by upper and lower indices, reflects the usage in group homology and cohomology. The group cohomology formula_18 of the cyclic group formula_19 deals with the fixed points of a formula_19-action, while the group homology formula_20 deals with the orbits of a formula_19-action; compare formula_21 (fixed points) and formula_22 (orbits, quotient) for upper/lower index notation.
The quadratic "L"-groups formula_23 and the symmetric "L"-groups formula_24 are related by a symmetrization map formula_25 which is an isomorphism modulo 2-torsion, and which corresponds to the polarization identities.
The quadratic and the symmetric "L"-groups are 4-fold periodic (the comment of Ranicki, page 12, on the non-periodicity of the symmetric "L"-groups refers to another type of "L"-groups, defined using "short complexes").
In view of the applications to the classification of manifolds there are extensive calculations of the quadratic formula_26-groups formula_12. For finite formula_11 algebraic methods are used, and mostly geometric methods (e.g. controlled topology) are used for infinite formula_11.
More generally, one can define "L"-groups for any additive category with a "chain duality", as in Ranicki (section 1).
Integers.
The simply connected "L"-groups are also the "L"-groups of the integers, as formula_27 for both formula_26 = formula_28 or formula_29 For quadratic "L"-groups, these are the surgery obstructions to simply connected surgery.
The quadratic "L"-groups of the integers are:
formula_30
In doubly even dimension (4"k"), the quadratic "L"-groups detect the signature; in singly even dimension (4"k"+2), the "L"-groups detect the Arf invariant (topologically the Kervaire invariant).
The symmetric "L"-groups of the integers are:
formula_31
In doubly even dimension (4"k"), the symmetric "L"-groups, as with the quadratic "L"-groups, detect the signature; in dimension (4"k"+1), the "L"-groups detect the de Rham invariant.
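Since both families are 4-fold periodic, the "L"-groups of the integers can be read off from "n" modulo 4; a small illustrative lookup:

def L_groups_of_Z(n):
    # quadratic and symmetric L-groups of the integers;
    # by 4-fold periodicity, only n mod 4 matters
    quadratic = {0: "Z (signature/8)", 1: "0", 2: "Z/2 (Arf invariant)", 3: "0"}
    symmetric = {0: "Z (signature)", 1: "Z/2 (de Rham invariant)", 2: "0", 3: "0"}
    return quadratic[n % 4], symmetric[n % 4]

for n in range(8):
    print(n, *L_groups_of_Z(n))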
| [
{
"math_id": 0,
"text": "L_*(R)"
},
{
"math_id": 1,
"text": "L^*(R)"
},
{
"math_id": 2,
"text": "L_{2k}(R)"
},
{
"math_id": 3,
"text": "\\epsilon = (-1)^k"
},
{
"math_id": 4,
"text": "[\\psi]"
},
{
"math_id": 5,
"text": "\\psi \\in Q_\\epsilon(F)"
},
{
"math_id": 6,
"text": "[\\psi] = [\\psi'] \\Longleftrightarrow n, n' \\in {\\mathbb N}_0: \\psi \\oplus H_{(-1)^k}(R)^n \\cong \\psi' \\oplus H_{(-1)^k}(R)^{n'}"
},
{
"math_id": 7,
"text": "[\\psi_1] + [\\psi_2] := [\\psi_1 \\oplus \\psi_2]."
},
{
"math_id": 8,
"text": "H_{(-1)^k}(R)^n"
},
{
"math_id": 9,
"text": "n \\in {\\mathbb N}_0"
},
{
"math_id": 10,
"text": "[-\\psi]"
},
{
"math_id": 11,
"text": "\\pi"
},
{
"math_id": 12,
"text": "L_*(\\mathbf{Z}[\\pi])"
},
{
"math_id": 13,
"text": "\\mathbf{Z}[\\pi]"
},
{
"math_id": 14,
"text": "\\pi_1 (X)"
},
{
"math_id": 15,
"text": "X"
},
{
"math_id": 16,
"text": "n"
},
{
"math_id": 17,
"text": "n > 4"
},
{
"math_id": 18,
"text": "H^*"
},
{
"math_id": 19,
"text": "\\mathbf{Z}_2"
},
{
"math_id": 20,
"text": "H_*"
},
{
"math_id": 21,
"text": "X^G"
},
{
"math_id": 22,
"text": "X_G = X/G"
},
{
"math_id": 23,
"text": "L_n(R)"
},
{
"math_id": 24,
"text": "L^n(R)"
},
{
"math_id": 25,
"text": "L_n(R) \\to L^n(R)"
},
{
"math_id": 26,
"text": "L"
},
{
"math_id": 27,
"text": "L(e) := L(\\mathbf{Z}[e]) = L(\\mathbf{Z})"
},
{
"math_id": 28,
"text": "L^*"
},
{
"math_id": 29,
"text": "L_*."
},
{
"math_id": 30,
"text": "\\begin{align}\nL_{4k}(\\mathbf{Z}) &= \\mathbf{Z} && \\text{signature}/8\\\\\nL_{4k+1}(\\mathbf{Z}) &= 0\\\\\nL_{4k+2}(\\mathbf{Z}) &= \\mathbf{Z}/2 && \\text{Arf invariant}\\\\\nL_{4k+3}(\\mathbf{Z}) &= 0.\n\\end{align}"
},
{
"math_id": 31,
"text": "\\begin{align}\nL^{4k}(\\mathbf{Z}) &= \\mathbf{Z} && \\text{signature}\\\\\nL^{4k+1}(\\mathbf{Z}) &= \\mathbf{Z}/2 && \\text{de Rham invariant}\\\\\nL^{4k+2}(\\mathbf{Z}) &= 0\\\\\nL^{4k+3}(\\mathbf{Z}) &= 0.\n\\end{align}"
}
]
| https://en.wikipedia.org/wiki?curid=696568 |
6966 | Chinese calendar | Lunisolar calendar from China
The traditional Chinese calendar is a lunisolar calendar, combining the solar, lunar, and other cycles for various social and agricultural purposes. More recently, in China and Chinese communities, the Gregorian calendar has been adopted and adapted in various ways, and is generally the basis for standard civic purposes, though traditional lunisolar holidays are still incorporated. There are many types and subtypes of the Chinese calendar, partly reflecting developments in astronomical observation and horology over more than a millennium of history. The major modern form is the Gregorian calendar-based official version of Mainland China, though diaspora versions are also notable in other regions of China and in Chinese-influenced cultures; however, aspects of the traditional lunisolar calendar remain popular, including the association of the twelve animals of the Chinese Zodiac with months and years.
The traditional calendar used the sexagenary cycle-based "ganzhi" system's mathematically repeating cycles of Heavenly Stems and Earthly Branches. Together with astronomical, horological, and phenological observations, definitions, measurements, and predictions of years, months, and days were refined to an accurate standard. Astronomical phenomena and calculations emphasized especially the efforts to mathematically correlate the solar and lunar cycles from the perspective of the earth, which, however, are known to require some degree of numeric approximation or compromise.
The logic of the various permutations of the Chinese calendar has been based on considerations such as technical form from mathematics and astronomy, philosophical considerations, and political considerations, and the resulting disparities between different calendars are notable. Various similar calendar systems are also known from various regions or ethnic groups of Central Asia, South Asia, and other ethnic regions. Indeed, the Chinese calendar has both influenced and been influenced by calendar systems in most parts of the world. One particularly popular feature is the Chinese zodiac. The Chinese calendar and horology include many multifaceted methods of computing years, eras, months, days, and hours (with modern horology even splitting the seconds into very tiny sub-units using atomic methods).
Epochs are one of the important features of calendar systems. An epoch is a particular point in time which a calendar system may use as its initial time reference, allowing for the consecutive numbering of years from a chosen starting year, date, or time. In the Chinese calendar system, examples include the inauguration of Huangdi or the birth of Confucius. Also, many dynasties had their own dating systems, which could include regnal eras based on the inauguration of a dynasty, the enthronement of a particular monarch, or eras arbitrarily designated due to political or other considerations, such as a desire for a change of luck. Era names are useful for determining dates on artifacts such as ceramics, which were often traditionally dated by an era name during the production process.
Historical variations of the lunisolar calendar are features of the Chinese calendar system, and the topic includes various traditional types of the Chinese calendar. As is generally the case with calendar systems, the Chinese calendars tend to focus on basic calendar functions, such as the identification of years, months, and days according to astronomical phenomena and calculations, with a special effort to correlate the solar and lunar cycles experienced on earth—an effort which is known to mathematically require some degree of approximation. One of the major features of some traditional calendar systems in China (and elsewhere) has been the idea of the sexagenary cycle. The Chinese lunisolar calendar has had several significant variations over the course of history, many of them associated with political changes such as dynastic succession. Solar and agricultural calendars have a long history in China. Purely lunar calendar systems were known in China; however, they tended to be of limited utility and were not widely accepted by farmers, who needed to focus on the predictability of seasons for planting and harvesting, and thus required a calendar useful for agriculture. For farming purposes and for keeping track of the seasons, Chinese solar or lunisolar calendars were particularly useful. Thus, over time, the publication of multipurpose and agricultural almanacs has become a longstanding tradition in China.
Various other astronomical phenomena have been incorporated into calendars besides the cycles of the sun and the moon, for example, the planets and the constellations (or mansions) of asterisms along the ecliptic.
Many Chinese holidays and observances, both in ancient and modern times, have been determined by the traditional lunisolar calendar or by considerations based upon it, and these are now generally combined with more modern calendar considerations. The traditions of the lunisolar calendar remain very popular, while the Gregorian calendar is used as the standard basis for civic calendars.
Etymology.
The Chinese name for the calendar () was represented by earlier character variants (歷, 厤), ultimately derived from an ancient form (秝). The ancient form of the character consists of two stalks of rice plant (禾), arranged in parallel. This character represented order in space as well as order in time. As its meaning became more complex, a modern dedicated character (曆) was created to represent the calendar sense.
Maintaining the correctness of calendars was an important task in maintaining the authority of rulers, being perceived as a measure of a ruler's ability. For example, a competent ruler was expected to foresee the coming of the seasons and prepare accordingly. This understanding was also relevant in predicting abnormalities of the Earth and celestial bodies, such as lunar and solar eclipses. The significant relationship between authority and timekeeping helps to explain why there are 102 calendars in Chinese history, each trying to predict the correct courses of the sun, moon and stars, and to mark auspicious and inauspicious times. Each calendar is named as XX曆 and recorded in a dedicated calendar section in the history books of different eras. The last one of the imperial era was 時憲曆. A ruler would issue an almanac before the commencement of each year. There were also private almanac issuers, usually illegal, when a ruler lost control of some territories.
Various modern Chinese calendar names resulted from the struggle between the government's introduction of the Gregorian calendar and the public's preservation of customs in the era of the Republic of China. The government wanted to abolish the Chinese calendar and force everyone to use the Gregorian calendar, and even abolished the Lunar New Year, but faced great opposition. The public needed the astronomical Chinese calendar to do things at the proper time, for example farming and fishing; a wide spectrum of festivals and customary observances were also based on the calendar. The government finally compromised and rebranded it as the agricultural calendar in 1947, relegating the calendar to merely agricultural use.
After the end of the imperial era, some almanacs continued to be based upon the algorithm of the last imperial calendar, computed for the longitude of Peking. Such almanacs were published under the name "universal book" (通書), or under the Cantonese name 通勝, transcribed as Tung Shing. These almanacs later moved to new calculations based on the location of the Purple Mountain Observatory, at longitude 120°E.
Epochs.
An epoch is a point in time chosen as the origin of a particular calendar era, thus serving as a reference point from which subsequent times or dates are measured. The use of epochs in the Chinese calendar system allows for a chronological starting point from which to begin continuously numbering subsequent dates. Various epochs have been used. Similarly, nomenclature similar to that of the Christian era has occasionally been used:
No reference date is universally accepted. The most popular is that of the Gregorian calendar ().
During the 17th century, the Jesuit missionaries tried to determine the epochal year of the Chinese calendar. In his "Sinicae historiae decas prima" (published in Munich in 1658), Martino Martini (1614–1661) dated the Yellow Emperor's ascension at 2697 BCE and began the Chinese calendar with the reign of Fuxi (which, according to Martini, began in 2952 BCE).
Philippe Couplet's 1686 "Chronological table of Chinese monarchs" ("Tabula chronologica monarchiae sinicae") gave the same date for the Yellow Emperor. The Jesuits' dates provoked interest in Europe, where they were used for comparison with Biblical chronology. Modern Chinese chronology has generally accepted Martini's dates, except that it usually places the reign of the Yellow Emperor at 2698 BCE and omits his predecessors Fuxi and Shennong as "too legendary to include".
Publications began using the estimated birth date of the Yellow Emperor as the first year of the Han calendar in 1903, with newspapers and magazines proposing different dates. Jiangsu province counted 1905 as the year 4396 (using a year 1 of 2491 BCE, and implying that 2024 CE is 4515), and the newspaper "Ming Pao" () reckoned 1905 as 4603 (using a year 1 of 2698 BCE, and implying that 2024 CE is 4722). Liu Shipei (, 1884–1919) created the Yellow Emperor Calendar (), with year 1 as the birth of the emperor (which he determined as 2711 BCE, implying that 2024 CE is 4735). There is no evidence that this calendar was used before the 20th century. Liu calculated that the 1900 international expedition sent by the Eight-Nation Alliance to suppress the Boxer Rebellion entered Beijing in the 4611th year of the Yellow Emperor.
Taoists later adopted the Yellow Emperor Calendar and named it the Tao Calendar ().
On 2 January 1912, Sun Yat-sen announced changes to the official calendar and era. 1 January was deemed day 14 of Shíyīyuè (the 11th month) of year 4609 of the Huángdì era, assuming a year 1 of 2698 BCE and making 2024 CE year 4722. Many overseas Chinese communities, like San Francisco's Chinatown, adopted the change.
The modern Chinese standard calendar uses the epoch of the Gregorian calendar, which is on 1 January of the year 1 CE.
Calendar types.
Lunisolar.
Lunisolar calendars involve correlations of the cycles of the sun (solar) and the moon (lunar).
Solar and agricultural.
A solar calendar (also called the Tung Shing, the "Yellow Calendar" or the "Imperial Calendar", the latter two alluding to the Yellow Emperor) keeps track of the seasons as the earth and the sun move relative to each other in the solar system. A purely solar calendar may be useful in planning times for agricultural activities such as planting and harvesting. Solar calendars tend to use astronomically observable points of reference such as equinoxes and solstices, events which may be approximately predicted using fundamental methods of observation and basic mathematical analysis.
Modern Chinese calendar and horology.
The topic of the Chinese calendar also includes variations of the modern Chinese calendar, influenced by the Gregorian calendar. Variations include methodologies of the People's Republic of China and Taiwan.
Modern calendars.
In China, the modern calendar is defined by the Chinese national standard GB/T 33661–2017, "Calculation and Promulgation of the Chinese Calendar", issued by the Standardization Administration of China on 12 May 2017.
Influence of Gregorian calendar.
Although modern-day China uses the Gregorian calendar, the traditional Chinese calendar governs holidays, such as the Chinese New Year and Lantern Festival, in both China and overseas Chinese communities. It also provides the traditional Chinese nomenclature of dates within a year which people use to select auspicious days for weddings, funerals, moving or starting a business. The evening state-run news program "Xinwen Lianbo" in the People's Republic of China continues to announce the months and dates in both the Gregorian and the traditional lunisolar calendar.
History.
The Chinese calendar system has a long history, which has traditionally been associated with specific dynastic periods. Various individual calendar types have been developed under different names. In terms of historical development, some of the calendar variations are associated with dynastic changes, along a spectrum beginning with a prehistorical/mythological time and running through well-attested historical dynastic periods. Many individuals have been associated with the development of the Chinese calendar, including researchers into the underlying astronomy; furthermore, the development of instruments of observation is historically important. Influences from India, Islam, and the Jesuits also became significant.
Phenology.
Early calendar systems were often closely tied to natural phenomena. Phenology is the study of periodic events in biological life cycles and how these are influenced by seasonal and interannual variations in climate, as well as habitat factors (such as elevation). The plum-rains season (), the rainy season in late spring and early summer, begins on the first "bǐng" day after "Mangzhong" () and ends on the first "wèi" day after Xiaoshu (). The Three Fu () are three periods of hot weather, counted from the first "gēng" day after the summer solstice. The first "fu" () is 10 days long. The mid-"fu" () is 10 or 20 days long. The last "fu" () is 10 days from the first "gēng" day after the beginning of autumn. The Shujiu cold days () are the 81 days after the winter solstice (divided into nine sets of nine days), and are considered the coldest days of the year. Each nine-day unit is known by its order in the set, followed by "nine" (). In traditional Chinese culture, "nine" represents infinity and is the number associated with "Yang". According to one belief, the ninefold accumulation of "Yang" gradually reduces the "Yin", until finally the weather becomes warm.
Names of months.
Lunar months were originally named according to natural phenomena. Current naming conventions use numbers as the month names. Every month is also associated with one of the twelve Earthly Branches.
Chinese astronomy.
The Chinese calendar has been a development involving much observation and calculation of the apparent movements of the Sun, Moon, planets, and stars, as observed from Earth.
Chinese astronomers.
Many Chinese astronomers have contributed to the development of the Chinese calendar. Many were of the scholarly or "shi" class (), including writers of history, such as Sima Qian.
Notable Chinese astronomers who have contributed to the development of the calendar include Gan De, Shi Shen, and Zu Chongzhi.
Technology.
Early technological developments aiding in calendar development include the development of the gnomon. Later technological developments useful to the calendar system include naming, numbering and mapping of the sky, the development of analog computational devices such as the armillary sphere and the water clock, and the establishment of observatories.
Chinese calendar names.
Ancient six calendars.
From the Warring States period (ending in 221 BCE), six especially significant calendar systems are known to have been developed. The modern names for these six ancient calendars arose later in history; they can be translated into English as Huangdi, Yin, Zhou, Xia, Zhuanxu, and Lu.
Calendar variations.
There are various Chinese terms for calendar variations including:
Solar calendars.
The traditional Chinese calendar was developed between 771 BCE and 476 BCE, during the Spring and Autumn period of the Eastern Zhou dynasty. Solar calendars were used before the Zhou dynasty period, along with the basic sexagenary system.
Five-elements calendar.
One version of the solar calendar is the five-elements calendar (), which derives from the Wu Xing. A 365-day year was divided into five phases of 73 days each, with each phase corresponding to one of the Wu Xing elements. A phase began with a governing-element day (), followed by six 12-day weeks (72 days, which is why the phases are also described as 72 days long after their governing day). Each phase consisted of two three-week months, making each year ten months long. Years began on a "jiǎzǐ" () day (and a wood phase), followed by a "bǐngzǐ" day () and a fire phase; a "wùzǐ" () day and an earth phase; a "gēngzǐ" () day and a metal phase; and a "rénzǐ" day () followed by a water phase. Other days were tracked using the Yellow River Map ("He Tu").
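The year structure just described can be made concrete with a short sketch. The following Python snippet is illustrative only, assuming exactly the layout above: five 73-day phases, each one governing-element day followed by six 12-day weeks.

```python
# Illustrative only: enumerate the five-elements year structure described
# above (a 365-day year of five 73-day phases).
ELEMENTS = ["wood", "fire", "earth", "metal", "water"]

def five_elements_year():
    """Yield (day_of_year, element, role) markers for the 365-day year."""
    for i, element in enumerate(ELEMENTS):
        start = 1 + 73 * i                    # phase i begins on this day
        yield start, element, "governing-element day"
        for week in range(6):                 # six 12-day weeks follow
            yield start + 1 + 12 * week, element, f"week {week + 1} begins"

for day, element, role in five_elements_year():
    print(f"day {day:3d}: {element} phase, {role}")
```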
Four-quarters calendar.
Another version is a four-quarters calendar (, or ). The weeks were ten days long, with one month consisting of three weeks. A year had 12 months, with a ten-day week intercalated in summer as needed to keep up with the tropical year. The 10 Heavenly Stems and 12 Earthly Branches were used to mark days.
Balanced calendar.
A third version is the balanced calendar (). A year was 365.25 days, and a month was 29.5 days. After every 16th month, a half-month was intercalated. According to oracle bone records, the Shang dynasty calendar (c. 1600 – c. 1046 BCE) was a balanced calendar with 12 to 14 months in a year; the month after the winter solstice was "Zhēngyuè".
Lunisolar calendars by dynasty.
Six ancient calendars.
Modern historical knowledge and records are limited for the earlier calendars. These calendars are known as the six ancient calendars (), or quarter-remainder calendars (), since all calculate a year as 365+1⁄4 days long. Months begin on the day of the new moon, and a year has 12 or 13 months. Intercalary months (a 13th month) are added to the end of the year. The Qiang and Dai calendars are modern versions of the Zhuanxu calendar, used by mountain peoples.
Zhou dynasty.
The first lunisolar calendar was the Zhou calendar (), introduced under the Zhou dynasty (1046 BCE – 256 BCE). This calendar sets the beginning of the year at the day of the new moon before the winter solstice.
Competing Warring states calendars.
Several competing lunisolar calendars were also introduced as Zhou devolved into the Warring States, especially by states fighting Zhou control during the Warring States period (perhaps 475 BCE – 221 BCE). The state of Lu issued its own Lu calendar (). Jin issued the Xia calendar () with a year beginning on the day of the new moon nearest the March equinox. Qin issued the Zhuanxu calendar (), with a year beginning on the day of the new moon nearest the winter solstice. Song's Yin calendar () began its year on the day of the new moon after the winter solstice.
Qin and early Han dynasties.
After Qin Shi Huang unified China under the Qin dynasty in 221 BCE, the Qin calendar () was introduced. It followed most of the rules governing the Zhuanxu calendar, but the month order was that of the Xia calendar; the year began with month 10 and ended with month 9, analogous to a Gregorian calendar beginning in October and ending in September. The intercalary month, known as the second "Jiǔyuè" (), was placed at the end of the year. The Qin calendar was used going into the Han dynasty.
Han dynasty Tàichū calendar.
Emperor Wu of Han (r. 141 – 87 BCE) introduced reforms in the seventh of the eleven named eras of his reign, Tàichū (), 104 BCE – 101 BCE. His Tàichū Calendar () defined a solar year as 365+385⁄1539 days (365;06:00:14.035), and the lunar month as 29+43⁄81 days (29;12:44:44.444).
Since formula_0, the 19-year cycle used for the 7 additional months was taken as exact, and not as an approximation.
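This identity can be verified with exact rational arithmetic: 19 solar years of the Tàichū calendar equal 19 × 12 + 7 = 235 lunar months, so the cycle closes exactly. A minimal Python check:

```python
from fractions import Fraction

solar_year  = 365 + Fraction(385, 1539)
lunar_month = 29 + Fraction(43, 81)

lhs = solar_year * 19                # 19 solar years
rhs = lunar_month * (19 * 12 + 7)    # 235 lunar months
print(lhs, rhs, lhs == rhs)          # 562120/81 562120/81 True
```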
This calendar introduced the 24 solar terms, dividing the year into 24 equal parts of 15° each. Solar terms were paired, with the 12 combined periods known as climate terms. The first solar term of the period was known as a pre-climate (节气), and the second was a mid-climate (中气). Months were named for the mid-climate to which they were closest, and a month without a mid-climate was an intercalary month.
The Taichu calendar established a framework for traditional calendars, with later calendars adding to the basic formula.
Northern and Southern Dynasties Dàmíng calendar.
The Dàmíng Calendar (), created in the Northern and Southern Dynasties by Zu Chongzhi (429 CE – 500 CE), introduced the equinoxes.
Tang dynasty Wùyín Yuán calendar.
The use of syzygy to determine the lunar month was first described in the Tang dynasty Wùyín Yuán Calendar ().
Yuan dynasty Shòushí calendar.
The Yuan dynasty Shòushí calendar () used spherical trigonometry to find the length of the tropical year. The calendar had a 365.2425-day year, identical to the Gregorian calendar.
Although the Chinese calendar lost its place as the country's official calendar at the beginning of the 20th century, its use has continued. The "Republic of China Calendar" published by the Beiyang government of the Republic of China still listed the dates of the Chinese calendar alongside the Gregorian calendar. In 1929, the Nationalist government tried to ban the traditional Chinese calendar, and the "Kuómín Calendar" it published no longer listed Chinese calendar dates. However, the Chinese people were accustomed to the traditional calendar, and many traditional customs were based on it; the ban failed and was lifted in 1934. The latest Chinese calendar is the "New Edition of Wànniánlì" (revised edition), edited by the Purple Mountain Observatory, People's Republic of China.
Shíxiàn calendar.
The Shíxiàn (or Chongzhen) calendar was in use from 1645 to 1913. During the late Ming dynasty, the emperor appointed Xu Guangqi in 1629 to lead the Shíxiàn calendar reform. Assisted by Jesuits, he translated Western astronomical works and introduced new concepts, such as those of Nicolaus Copernicus, Johannes Kepler, Galileo Galilei, and Tycho Brahe; however, the new calendar was not released before the end of the dynasty. In the early Qing dynasty, Johann Adam Schall von Bell submitted the calendar, which had been prepared under the lead of Xu Guangqi, to the Shunzhi Emperor. The Qing government issued it as the Shíxiàn (seasonal) calendar. In this calendar, the solar terms are 15° each along the ecliptic, and it can be used as a solar calendar. However, the length of the climate term near the perihelion is less than 30 days, so there may be two mid-climate terms in one month. The Shíxiàn calendar therefore changed the mid-climate-term rule to "decide the month in sequence, except for the intercalary month." The present traditional calendar follows the Shíxiàn calendar, except:
Proposals.
To optimize the Chinese calendar, astronomers have proposed a number of changes. Gao Pingzi (; 1888–1970), a Chinese astronomer who co-founded the Purple Mountain Observatory, proposed that month numbers be calculated before the new moon and that solar terms be rounded to the day. Since the intercalary month is determined by the first month without a mid-climate, and the mid-climate time varies by time zone, countries that adopted the calendar but calculate with their own time could vary from the time in China.
Horology.
Horology, or chronometry, refers to the measurement of time. In the context of the Chinese calendar, horology involves the definition and mathematical measurement of elements such as observable astronomical movements or events associated with days, months, years, hours, and so on. These measurements are based upon objective, observable phenomena, and calendar accuracy rests upon the accuracy and precision of those measurements.
The Chinese calendar is lunisolar, similar to the Hindu, Hebrew and ancient Babylonian calendars. It is based in part on objective, observable phenomena and in part on mathematical analysis used to correlate those observations. Lunisolar calendars especially attempt to correlate the solar and lunar cycles, but other considerations can be agricultural and seasonal (phenological), religious, or even political.
Basic horologic definitions include that days begin and end at midnight, and months begin on the day of the new moon. Years start on the second (or third) new moon after the winter solstice. Solar terms govern the beginning, middle, and end of each month. A sexagenary cycle, comprising the heavenly stems () and the earthly branches (), is used as identification alongside each year and month, including intercalary months or leap months. Months are also annotated as either long ( for months with 30 days) or short ( for months with 29 days). There are also other elements of the traditional Chinese calendar.
Day.
Days are Sun-oriented, based upon divisions of the solar year. A day () is considered, both traditionally and currently, to be the time from one midnight to the next. Traditionally, days (including the night-time portion) were divided into 12 double-hours; in modern times the 24-hour system has become standard.
Month.
Months are Moon-oriented. A month () is the time from one new moon to the next; these synodic months are about 29+17⁄32 days long. The date () specifies when a day occurs in the month, with days numbered in sequence from 1 to 29 (or 30). A calendar month () is a month as it occurs within a particular year; some months may be repeated.
Year.
A year () is based upon the time of one revolution of the Earth around the Sun, rounded to whole days. Traditionally, the year is measured from the first day of spring (lunisolar year) or from the winter solstice (solar year). A year is astronomically about 365+31⁄128 days. The calendar year () is the year as authoritatively determined, fixing on which day one year ends and another begins. The year usually begins on the new moon closest to Lichun, the first day of spring; this is typically the second, and sometimes the third, new moon after the winter solstice. A calendar year is 353–355 or 383–385 days long. A related unit is the zodiac (): 1⁄12 of a year, or 30° on the ecliptic. A zodiac is about 30+7⁄16 days.
Solar terms.
"Solar term" (), <templatestyles src="Fraction/styles.css" />1⁄24 year, or 15° on the ecliptic. A solar term is about <templatestyles src="Fraction/styles.css" />15+7⁄32 days.
Planets.
The movements of the Sun, Moon, Mercury, Venus, Mars, Jupiter and Saturn (sometimes known as the seven luminaries) are the references for calendar calculations.
Stars.
Big Dipper.
The Big Dipper is the celestial compass; the direction of its handle indicates (or, as some said, determines) the season and month.
3 Enclosures and 28 Mansions.
The stars are divided into Three Enclosures and 28 Mansions according to their location in the sky relative to Ursa Minor, at the center. Each mansion is named with a character describing the shape of its principal asterism. The Three Enclosures are the Purple Forbidden (), Supreme Palace (), and Heavenly Market () enclosures. The eastern mansions are , , , , , , . Southern mansions are , , , , , , . Western mansions are , , , , , , . Northern mansions are , , , , , , . The moon moves through about one lunar mansion per day, so the 28 mansions were also used to count days. In the Tang dynasty, Yuan Tiangang () matched the 28 mansions, the seven luminaries and the yearly animal signs to yield combinations such as "horn-wood-flood dragon" ().
List of lunar mansions.
The names and determinative stars of the mansions are:
Descriptive mathematics.
Several coding systems are used to avoid ambiguity. The Heavenly Stems form a decimal system. The Earthly Branches, a duodecimal system, mark dual hours ( or ) and climatic terms. The 12 characters progress from the first day with the same branch as the month (the first "Yín" day () of "Zhēngyuè"; the first "Mǎo" day () of "Èryuè"), and count the days of the month.
The stem-branch system is sexagesimal: the Heavenly Stems and Earthly Branches combine to make up 60 stem-branches, which mark days and years. The five Wu Xing elements are assigned to each stem, branch, or stem-branch.
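For years, a common alignment of the sexagenary cycle with the Gregorian year number can be sketched as follows. This is a minimal illustration, using the convention that 1984 began a cycle as a "jiǎzǐ" year; note that a Chinese year begins at Chinese New Year, so January and February dates may still belong to the previous stem-branch year.

```python
STEMS    = ["jiǎ", "yǐ", "bǐng", "dīng", "wù", "jǐ", "gēng", "xīn", "rén", "guǐ"]
BRANCHES = ["zǐ", "chǒu", "yín", "mǎo", "chén", "sì", "wǔ", "wèi",
            "shēn", "yǒu", "xū", "hài"]

def stem_branch(year: int) -> str:
    """Sexagenary name of the Chinese year that begins in `year`."""
    return STEMS[(year - 4) % 10] + BRANCHES[(year - 4) % 12]

print(stem_branch(1984))  # jiǎzǐ (start of a cycle)
print(stem_branch(2021))  # xīnchǒu, the Xīnchǒu year mentioned below
```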
Sexagenary system.
Day.
China has used the Western hour-minute-second system to divide the day since the Qing dynasty. Several era-dependent systems had been in use; systems using multiples of twelve and ten were popular, since they could be easily counted and aligned with the Heavenly Stems and Earthly Branches.
Week.
As early as the Bronze Age Xia dynasty, days were grouped into nine- or ten-day weeks known as "xún" (). Months consisted of three "xún". The first 10 days were the "early xún" (), the middle 10 the "mid xún" (), and the last nine (or 10) days were the "late xún" (). Japan adopted this pattern, with 10-day weeks known as . In Korea, they were known as "sun" ().
The structure of "xún" led to public holidays every five or ten days. Officials of the Han dynasty were legally required to rest every five days (twice a "xún", or 5–6 times a month). The name of these breaks became "huan" (, "wash").
Grouping days into sets of ten is still used today in referring to specific natural events. "Three Fu" (), a 29–30-day period which is the hottest of the year, reflects its three-"xún" length. After the winter solstice, nine sets of nine days were counted to calculate the end of winter.
The seven-day week was adopted from the Hellenistic system by the 4th century CE, although its method of transmission into China is unclear. It was again transmitted to China in the 8th century by Manichaeans via Kangju (a Central Asian kingdom near Samarkand), and is the most-used system in modern China.
Month.
Months are defined by the time between new moons, which averages approximately 29+17⁄32 days. There is no specified length of any particular Chinese month, so the first month could have 29 days (short month, ) in some years and 30 days (long month, ) in other years.
A 12-month year using this system has 354 days, which would drift significantly from the tropical year. To fix this, the traditional Chinese calendar inserts a 13-month year approximately once every three years. The 13-month version has the same alternation of long and short months, but adds a 30-day leap month (). Years with 12 months are called common years, and 13-month years are known as long years.
Although most of the above rules were used until the Tang dynasty, different eras used different systems to keep the lunar and solar years aligned. The synodic month of the Taichu calendar was 29+43⁄81 days long. The 7th-century, Tang-dynasty Wùyín Yuán Calendar was the first to determine month length by the synodic month instead of the cycling method. Since then, month lengths have primarily been determined by observation and prediction.
The days of the month are always written with two characters and numbered beginning with 1. Days one to 10 are written with the day's numeral, preceded by the character "Chū" (); "Chūyī" () is the first day of the month, and "Chūshí" () the 10th. Days 11 to 20 are written as regular Chinese numerals; "Shíwǔ" () is the 15th day of the month, and "Èrshí" () the 20th. Days 21 to 29 are written with the character "Niàn" () before the characters one through nine; "Niànsān" (), for example, is the 23rd day of the month. Day 30 (when applicable) is written as the numeral "Sānshí" ().
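These naming rules are simple enough to state as a short function. The sketch below is illustrative only; it produces the pinyin transcriptions used in this article rather than the underlying characters.

```python
DIGITS = ["yī", "èr", "sān", "sì", "wǔ", "liù", "qī", "bā", "jiǔ", "shí"]

def day_name(day: int) -> str:
    """Traditional name of a lunar day of the month (1-30), in pinyin."""
    if not 1 <= day <= 30:
        raise ValueError("a lunar month has 29 or 30 days")
    if day <= 10:
        return "Chū" + DIGITS[day - 1]       # Chūyī ... Chūshí
    if day < 20:
        return "Shí" + DIGITS[day - 11]      # Shíyī ... Shíjiǔ
    if day == 20:
        return "Èrshí"
    if day < 30:
        return "Niàn" + DIGITS[day - 21]     # Niànyī ... Niànjiǔ
    return "Sānshí"

print(day_name(15), day_name(23))            # Shíwǔ Niànsān
```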
History books use days of the month numbered with the 60 stem-branches: "Tiānshèng" 1st year…"Èryuè"…"Dīngsì", the emperor's funeral was at his temple, and the imperial portrait was installed in Nanjing's "Hongqing Palace".
Because astronomical observation determines month length, dates on the calendar correspond to moon phases. The first day of each month is the new moon. On the seventh or eighth day of each month, the first-quarter moon is visible in the afternoon and early evening. On the 15th or 16th day of each month, the full moon is visible all night. On the 22nd or 23rd day of each month, the last-quarter moon is visible late at night and in the morning.
Since the beginning of the month is determined by when the new moon occurs, other countries using this calendar use their own time standards to calculate it; this results in deviations. The first new moon in 1968 was at 16:29 UTC on 29 January. Because North Vietnam and South Vietnam calculated their Vietnamese calendars with different time standards (South Vietnam used Beijing time), North Vietnam began the Tết holiday on 29 January at 23:29, while South Vietnam began it on 30 January at 00:15. The time difference allowed asynchronous attacks in the Tet Offensive.
Names of months and lunar date conventions.
Current naming conventions use numbers as the month names, although lunar months were originally named according to natural phenomena (phenology). Each month is also associated with one of the twelve Earthly Branches. Correspondences with Gregorian dates are approximate and should be used with caution, since many years have intercalary months.
Though the numbered month names are often used for the corresponding month number in the Gregorian calendar, it is important to realize that the numbered month names are not interchangeable with the Gregorian months when talking about lunar dates.
One may identify the heavenly stem and earthly branch corresponding to a particular day, to its month, and to its year, and so determine the Four Pillars of Destiny associated with it. The most convenient publication to consult for this is the Tung Shing, also referred to as the Chinese Almanac of the year, or the Huangli, which contains the essential information concerning Chinese astrology. Days rotate through a sexagenary cycle marked by the coordination of heavenly stems and earthly branches; hence the Four Pillars of Destiny are also referred to as "Bazi", or "Birth Time Eight Characters", with each pillar consisting of a character for its heavenly stem and another for its earthly branch. Since Huangli days are sexagenary, their order is quite independent of their numeric order in each month, and of their numeric order within a week (referred to as True Animals in relation to the Chinese zodiac). Arriving at the Four Pillars of Destiny of a given date therefore requires painstaking calculation, which rarely outpaces the convenience of simply consulting the Huangli by looking up its Gregorian date.
Solar term.
The solar year (), the time between winter solstices, is divided into 24 solar terms known as jié qì (節氣). Each term is a 15° portion of the ecliptic. These solar terms mark both Western and Chinese seasons, as well as equinoxes, solstices, and other Chinese events. The even solar terms (marked with "Z", for , "Zhongqi") are considered the major terms, while the odd solar terms (marked with "J", for , "Jieqi") are deemed minor. The solar terms qīng míng (清明) on 5 April and dōng zhì (冬至) on 22 December are both celebrated events in China.
Solar year.
The calendar solar year, known as the "suì" (), begins on the December solstice and proceeds through the 24 solar terms. Since the speed of the Sun's apparent motion along the ecliptic is variable, the time between major solar terms is not fixed. This variation results in different solar year lengths. There are generally 11 or 12 complete months, plus two incomplete months around the winter solstice, in a solar year. The complete months are numbered from 0 to 10, and the incomplete months are considered the 11th month. If there are 12 complete months in the solar year, it is known as a leap solar year, or leap "suì".
Due to the inconsistencies in the length of the solar year, different versions of the traditional calendar might have different average solar year lengths. For example, one solar year of the 1st century BCE Tàichū calendar is 365+385⁄1539 (365.25016) days. A solar year of the 13th-century Shòushí calendar is 365+97⁄400 (365.2425) days, identical to the Gregorian calendar. The additional 0.00766 day from the Tàichū calendar leads to a one-day shift every 130.5 years.
Pairs of solar terms are climate terms, or solar months. The first solar term is "pre-climate" (), and the second is "mid-climate" ().
If there are 12 complete months within a solar year, the first month without a mid-climate is the leap, or intercalary, month; in other words, the first month that does not include a major solar term is the leap month. Leap months are numbered with "rùn" (), the character for "intercalary", plus the name of the month they follow. In 2017, the intercalary month after month six was called "Rùn Liùyuè", or "intercalary sixth month" () and written as "6i" or "6+". The next intercalary month (in 2020, after month four) will be called "Rùn Sìyuè" () and written "4i" or "4+".
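The "first month without a mid-climate" rule lends itself to a schematic implementation. The sketch below is illustrative only; it assumes the new-moon dates bounding each month and the mid-climate dates have already been computed astronomically, and the sample dates are hypothetical.

```python
from datetime import date

def leap_month_index(new_moons, mid_climates):
    """Index of the first month containing no mid-climate, else None."""
    for i in range(len(new_moons) - 1):
        start, end = new_moons[i], new_moons[i + 1]
        if not any(start <= z < end for z in mid_climates):
            return i
    return None

# Sample data: three consecutive months, the middle one lacking a mid-climate.
moons   = [date(2017, 6, 24), date(2017, 7, 23), date(2017, 8, 22), date(2017, 9, 20)]
zhongqi = [date(2017, 7, 22), date(2017, 8, 23)]
print(leap_month_index(moons, zhongqi))  # 1 -> the month beginning 23 July is intercalary
```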
Lunisolar year.
The lunisolar year begins with the first spring month, "Zhēngyuè" (), and ends with the last winter month, "Làyuè" (). All other months are named for their number in the month order. See below on the timing of the Chinese New Year.
Years were traditionally numbered by the reign in ancient China, but this was abolished after the founding of the People's Republic of China in 1949. For example, the year from 12 February 2021 to 31 January 2022 was a "Xīnchǒu" year () of 12 months, or 354 days.
The Tang dynasty used the Earthly Branches to mark the months from December 761 to May 762. Over this period, the year began with the winter solstice.
Age reckoning.
In modern China, a person's official age is based on the Gregorian calendar. For traditional use, age is based on the Chinese "Sui" calendar. A child is considered one year old at birth. After each Chinese New Year, one year is added to their traditional age. Their age therefore is the number of Chinese calendar years in which they have lived. Due to the potential for confusion, the age of infants is often given in months instead of years.
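As a sketch, the traditional age can be computed as one (at birth) plus the number of Chinese New Years that have passed since. The New Year dates must come from a lunisolar calendar table; those used below are illustrative.

```python
from datetime import date

def traditional_age(birth, today, new_years):
    """Traditional "sui" age: 1 at birth, +1 at each Chinese New Year."""
    return 1 + sum(1 for ny in new_years if birth < ny <= today)

new_years = [date(2023, 1, 22), date(2024, 2, 10), date(2025, 1, 29)]
print(traditional_age(date(2023, 12, 1), date(2024, 3, 1), new_years))  # 2
```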
After the Gregorian calendar was introduced in China, the Chinese traditional-age was referred to as the "nominal age" () and the Gregorian age was known as the "real age" ().
Year-numbering systems.
Eras.
Ancient China numbered years from an emperor's ascension to the throne or his declaration of a new era name. The first recorded reign title was "Jiànyuán" (), from 140 BCE; the last reign title was "Xuāntǒng" (), from 1908 CE. The era system was abolished in 1912, after which the current or Republican era was used.
Stem-branches.
The 60 stem-branches have been used to mark dates since the Shang dynasty (1600 BCE – 1046 BCE). Astrologers knew that the orbital period of Jupiter is about 12 × 361 = 4332 days, which they divided into 12 years () of 361 days each. The stem-branch system solved the era system's problem of unequal reign lengths.
Chinese New Year.
The date of the Chinese New Year accords with the patterns of the lunisolar calendar and hence is variable from year to year.
The invariant between years is that the winter solstice, Dongzhi, is required to fall in the eleventh month of the year. This means that Chinese New Year will be on the second new moon after the previous winter solstice, unless there is a leap month 11 or 12 in the previous year.
This rule is accurate; however, there are two other mostly (but not completely) accurate rules that are commonly stated:
It has been found that Chinese New Year moves back by either 10, 11, or 12 days in most years. If it falls on or before 31 January, then it moves forward in the next year by either 18, 19, or 20 days.
Chinese lunar date conventions.
Though the numbered month names are often used for the corresponding month number in the Gregorian calendar, it is important to realize that the numbered month names are not interchangeable with the Gregorian months when talking about lunar dates.
Holidays.
Various traditional and religious holidays shared by communities throughout the world use the Chinese (Lunisolar) calendar:
Holidays with the same day and same month.
The Chinese New Year (known as the Spring Festival/春節 in China) is on the first day of the first month and was traditionally called the Yuan Dan (元旦) or Zheng Ri (正日). In Vietnam it is known as Tết Nguyên Đán (). Traditionally it was the most important holiday of the year. It is an official holiday in China, Hong Kong, Macau, Taiwan, Vietnam, Korea, the Philippines, Malaysia, Singapore, Indonesia, and Mauritius. It is also a public holiday in Thailand's Narathiwat, Pattani, Yala and Satun provinces, and is an official public school holiday in New York City.
The Double Third Festival is on the third day of the third month.
The Dragon Boat Festival, or the Duanwu Festival (端午節), is on the fifth day of the fifth month and is an official holiday in China, Hong Kong, Macau, and Taiwan. It is also celebrated in Vietnam, where it is known as Tết Đoan Ngọ (節端午).
The Qixi Festival (七夕節) is celebrated in the evening of the seventh day of the seventh month. It is also celebrated in Vietnam, where it is known as Thất tịch (七夕).
The Double Ninth Festival (重陽節) is celebrated on the ninth day of the ninth month. It is also celebrated in Vietnam where it is known as Tết Trùng Cửu (節重九).
Full moon holidays (holidays on the fifteenth day).
The Lantern Festival is celebrated on the fifteenth day of the first month and was traditionally called the Yuan Xiao (元宵) or Shang Yuan Festival (上元節). In Vietnam, it is known as Tết Thượng Nguyên (節上元).
The Zhong Yuan Festival is celebrated on the fifteenth day of the seventh month. In Vietnam, it is celebrated as Tết Trung Nguyên (中元節) or Lễ Vu Lan (禮盂蘭).
The Mid-Autumn Festival is celebrated on the fifteenth day of the eighth month. In Vietnam, it is celebrated as Tết Trung Thu (節中秋).
The Xia Yuan Festival is celebrated on the fifteenth day of the tenth month. In Vietnam, it is celebrated as Tết Hạ Nguyên (節下元).
Celebrations of the twelfth month.
The Laba Festival is on the eighth day of the twelfth month. It is the enlightenment day of Sakyamuni Buddha and in Vietnam is known as Lễ Vía Phật Thích Ca Thành Đạo.
The Kitchen God Festival is celebrated on the twenty-third day of the twelfth month in northern regions of China and on the twenty-fourth day of the twelfth month in southern regions of China.
Chinese New Year's Eve is also known as the Chuxi Festival and is celebrated on the evening of the last day of the lunar calendar. It is celebrated wherever the lunar calendar is observed.
Celebrations of solar-term holidays.
The Qingming Festival (清明节) is celebrated on the fifteenth day after the Spring Equinox. It is celebrated in Vietnam as Tết Thanh Minh (節清明).
The Dongzhi Festival (冬至) or the Winter Solstice is celebrated as Lễ hội Đông Chí (禮會冬至) in Vietnam.
Religious holidays based on the lunar calendar.
East Asian Mahayana, Daoist, and some Cao Dai holidays and/or vegetarian observances are based on the Lunar Calendar.
Celebrations in Japan.
Many of the above holidays of the lunar calendar are also celebrated in Japan, but since the Meiji era on the similarly numbered dates of the Gregorian calendar.
Double celebrations due to intercalary months.
When there is a corresponding intercalary month, a holiday may be celebrated twice. For example, in the hypothetical situation in which there is an additional intercalary seventh month, the Zhong Yuan Festival would be celebrated in the seventh month, followed by another celebration in the intercalary seventh month. (The next such occasion will be in 2033, the first since the calendar reform of 1645.)
Similar calendars.
Like Chinese characters, variants of the Chinese calendar have been used in different parts of the Sinosphere throughout history: this includes Vietnam, Korea, Singapore, Japan and Ryukyu, Mongolia, and elsewhere.
Outlying areas of China.
Calendars of ethnic groups in mountains and plateaus of southwestern China and grasslands of northern China are based on their phenology and algorithms of traditional calendars of different periods, particularly the Tang and pre-Qin dynasties.
Non-Chinese areas.
Korea, Vietnam, and the Ryukyu Islands adopted the Chinese calendar. In the respective regions, the Chinese calendar has been adapted into the Korean, Vietnamese, and Ryukyuan calendars, with the main difference from the Chinese calendar being the use of different meridians due to geography, leading to some astronomical events — and calendar events based on them — falling on different dates. The traditional Japanese calendar was also derived from the Chinese calendar (based on a Japanese meridian), but Japan abolished its official use in 1873 after Meiji Restoration reforms. Calendars in Mongolia and Tibet have absorbed elements of the traditional Chinese calendar but are not direct descendants of it.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\left(365+\\frac{385}{1539}\\right)\\times19=\\left(29+\\frac{43}{81}\\right)\\times \\left(19\\times 12 + 7 \\right)\n "
}
]
| https://en.wikipedia.org/wiki?curid=6966 |
696619 | Binomial series | Mathematical series
In mathematics, the binomial series is a generalization of the polynomial that comes from a binomial formula expression like formula_0 for a nonnegative integer formula_1. Specifically, the binomial series is the Maclaurin series for the function formula_2, where formula_3 and formula_4. Explicitly,
where the power series on the right-hand side of (1) is expressed in terms of the (generalized) binomial coefficients
formula_5
Note that if α is a nonnegative integer n then the "x""n" + 1 term and all later terms in the series are 0, since each contains a factor of ("n" − "n"). Thus, in this case, the series is finite and gives the algebraic binomial formula.
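As a quick numerical sanity check (a minimal sketch in Python), the partial sums of the series (1) for a non-integer exponent converge to formula_2 inside the unit disc:

```python
def binom(alpha: float, k: int) -> float:
    """Generalized binomial coefficient alpha-choose-k, as defined in (2)."""
    c = 1.0
    for j in range(k):
        c *= (alpha - j) / (j + 1)
    return c

alpha, x = 0.5, 0.3
partial = sum(binom(alpha, k) * x**k for k in range(60))
print(partial, (1 + x) ** alpha)   # both are approximately 1.1401754..., i.e. sqrt(1.3)
```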
Convergence.
Conditions for convergence.
Whether (1) converges depends on the values of the complex numbers α and x. More precisely:
In particular, if α is not a non-negative integer, the situation at the boundary of the disk of convergence, |"x"| = 1, is summarized as follows:
Identities to be used in the proof.
The following hold for any complex number α:
formula_6
Unless formula_7 is a nonnegative integer (in which case the binomial coefficients vanish as formula_8 is larger than formula_7), a useful asymptotic relationship for the binomial coefficients is, in Landau notation:
This is essentially equivalent to Euler's definition of the Gamma function:
formula_9
and implies immediately the coarser bounds
for some positive constants m and M.
Formula (2) for the generalized binomial coefficient can be rewritten as
Proof.
To prove (i) and (v), apply the ratio test and use formula (2) above to show that whenever formula_7 is not a nonnegative integer, the radius of convergence is exactly 1. Part (ii) follows from formula (5), by comparison with the p-series
formula_10
with formula_11. To prove (iii), first use formula (3) to obtain
and then use (ii) and formula (5) again to prove convergence of the right-hand side when formula_12 is assumed. On the other hand, the series does not converge if formula_13 and formula_14, again by formula (5). Alternatively, we may observe that for all formula_15, formula_16. Thus, by formula (6), for all formula_17. This completes the proof of (iii). Turning to (iv), we use identity (7) above with formula_18 and formula_19 in place of formula_7, along with formula (4), to obtain
formula_20
as formula_21. Assertion (iv) now follows from the asymptotic behavior of the sequence formula_22. (Precisely, formula_23
certainly converges to formula_24 if formula_25 and diverges to formula_26 if formula_27. If formula_28, then formula_29 converges if and only if the sequence formula_30 converges formula_31, which is certainly true if formula_32 but false if formula_33: in the latter case the sequence is dense formula_31, due to the fact that formula_34 diverges and formula_35 converges to zero).
Summation of the binomial series.
The usual argument to compute the sum of the binomial series goes as follows. Differentiating term-wise the binomial series within the disk of convergence and using formula (1), one has that the sum of the series is an analytic function solving the ordinary differential equation (1 + "x")"u"′("x") − "αu"("x") = 0 with initial condition "u"(0) = 1.
The unique solution of this problem is the function "u"("x") = (1 + "x")"α". Indeed, multiplying by the integrating factor (1 + "x")−"α"−1 gives
formula_36
so the function (1 + "x")"−α""u"("x") is a constant, which the initial condition tells us is 1. That is, "u"("x") = (1 + "x")"α" is the sum of the binomial series for .
The equality extends to |"x"| = 1 whenever the series converges, as a consequence of Abel's theorem and by continuity of (1 + "x")"α".
Negative binomial series.
Closely related is the "negative binomial series" defined by the Maclaurin series for the function formula_37, where formula_3 and formula_4. Explicitly,
formula_38
which is written in terms of the multiset coefficient
formula_39
When α is a positive integer, several common sequences are apparent. The case "α" = 1 gives the series 1 + "x" + "x"2 + "x"3 + ..., where the coefficient of each term of the series is simply 1. The case "α" = 2 gives the series 1 + 2"x" + 3"x"2 + 4"x"3 + ..., which has the counting numbers as coefficients. The case "α" = 3 gives the series 1 + 3"x" + 6"x"2 + 10"x"3 + ..., which has the triangle numbers as coefficients. The case "α" = 4 gives the series 1 + 4"x" + 10"x"2 + 20"x"3 + ..., which has the tetrahedral numbers as coefficients, and similarly for higher integer values of α.
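These coefficient patterns are exactly the multiset coefficients formula_39, as a short check confirms (a minimal sketch using Python's standard library):

```python
from math import comb

def multiset(alpha: int, k: int) -> int:
    """Multiset coefficient: alpha multichoose k."""
    return comb(alpha + k - 1, k)

for alpha in (1, 2, 3, 4):
    print(alpha, [multiset(alpha, k) for k in range(6)])
# 1 [1, 1, 1, 1, 1, 1]      all ones
# 2 [1, 2, 3, 4, 5, 6]      counting numbers
# 3 [1, 3, 6, 10, 15, 21]   triangular numbers
# 4 [1, 4, 10, 20, 35, 56]  tetrahedral numbers
```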
The negative binomial series includes the case of the geometric series, the power series
formula_40
(which is the negative binomial series when formula_41, convergent in the disc formula_42) and, more generally, series obtained by differentiation of the geometric power series:
formula_43
with formula_44, a positive integer.
History.
The first results concerning binomial series for other than positive-integer exponents were given by Sir Isaac Newton in the study of areas enclosed under certain curves. John Wallis built upon this work by considering expressions of the form "y" = (1 − "x"2)"m" where m is a fraction. He found that (written in modern terms) the successive coefficients "c""k" of (−"x"2)"k" are to be found by multiplying the preceding coefficient by (as in the case of integer exponents), thereby implicitly giving a formula for these coefficients. He explicitly writes the following instances
formula_45
formula_46
formula_47
The binomial series is therefore sometimes referred to as Newton's binomial theorem. Newton gives no proof and is not explicit about the nature of the series. Later, in 1826, Niels Henrik Abel discussed the subject in a paper published in "Crelle's Journal", treating notably questions of convergence.
Footnotes.
Notes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(1+x)^n"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "f(x)=(1+x)^{\\alpha}"
},
{
"math_id": 3,
"text": "\\alpha \\in \\Complex"
},
{
"math_id": 4,
"text": "|x| < 1"
},
{
"math_id": 5,
"text": "\\binom{\\alpha}{k} := \\frac{\\alpha (\\alpha-1) (\\alpha-2) \\cdots (\\alpha-k+1)}{k!}. "
},
{
"math_id": 6,
"text": "{\\alpha \\choose 0} = 1,"
},
{
"math_id": 7,
"text": "\\alpha"
},
{
"math_id": 8,
"text": "k"
},
{
"math_id": 9,
"text": "\\Gamma(z) = \\lim_{k \\to \\infty} \\frac{k! \\; k^z}{z \\; (z+1)\\cdots(z+k)}, "
},
{
"math_id": 10,
"text": " \\sum_{k=1}^\\infty \\; \\frac {1} {k^p}, "
},
{
"math_id": 11,
"text": "p=1+\\operatorname{Re}\\alpha"
},
{
"math_id": 12,
"text": " \\operatorname{Re} \\alpha> - 1 "
},
{
"math_id": 13,
"text": "|x|=1"
},
{
"math_id": 14,
"text": " \\operatorname{Re} \\alpha \\le - 1 "
},
{
"math_id": 15,
"text": "j"
},
{
"math_id": 16,
"text": " \\left |\\frac {\\alpha + 1}{j} - 1 \\right | \\ge 1 - \\frac {\\operatorname{Re} \\alpha + 1}{j} \\ge 1 "
},
{
"math_id": 17,
"text": " k, \\left|{\\alpha \\choose k} \\right| \\ge 1 "
},
{
"math_id": 18,
"text": "x=-1"
},
{
"math_id": 19,
"text": "\\alpha-1"
},
{
"math_id": 20,
"text": "\\sum_{k=0}^n \\; {\\alpha\\choose k} \\; (-1)^k = {\\alpha-1 \\choose n} \\;(-1)^n= \\frac{1} {\\Gamma(-\\alpha+1)n^{\\alpha}}(1+o(1))"
},
{
"math_id": 21,
"text": "n\\to\\infty"
},
{
"math_id": 22,
"text": "n^{-\\alpha} = e^{-\\alpha \\log(n)}"
},
{
"math_id": 23,
"text": " \\left|e^{-\\alpha\\log n}\\right| = e^{-\\operatorname{Re}\\alpha\\, \\log n}"
},
{
"math_id": 24,
"text": "0"
},
{
"math_id": 25,
"text": "\\operatorname{Re}\\alpha>0"
},
{
"math_id": 26,
"text": "+\\infty"
},
{
"math_id": 27,
"text": "\\operatorname{Re}\\alpha<0"
},
{
"math_id": 28,
"text": "\\operatorname{Re}\\alpha=0"
},
{
"math_id": 29,
"text": "n^{-\\alpha} = e^{-i \\operatorname{Im}\\alpha\\log n}"
},
{
"math_id": 30,
"text": " \\operatorname{Im}\\alpha\\log n "
},
{
"math_id": 31,
"text": "\\bmod{2\\pi}"
},
{
"math_id": 32,
"text": "\\alpha=0"
},
{
"math_id": 33,
"text": "\\operatorname{Im}\\alpha \\neq0"
},
{
"math_id": 34,
"text": "\\log n"
},
{
"math_id": 35,
"text": "\\log (n+1)-\\log n"
},
{
"math_id": 36,
"text": "0=(1+x)^{-\\alpha}u'(x) - \\alpha (1+x)^{-\\alpha-1} u(x)= \\big[(1+x)^{-\\alpha}u(x)\\big]'\\,,"
},
{
"math_id": 37,
"text": "g(x)=(1-x)^{-\\alpha}"
},
{
"math_id": 38,
"text": "\\begin{align}\n\\frac{1}{(1 - x)^\\alpha} &= \\sum_{k=0}^{\\infty} \\; \\frac{g^{(k)}(0)}{k!} \\; x^k \\\\\n &= 1 + \\alpha x + \\frac{\\alpha(\\alpha+1)}{2!} x^2 + \\frac{\\alpha(\\alpha+1)(\\alpha+2)}{3!} x^3 + \\cdots,\n \\end{align}"
},
{
"math_id": 39,
"text": "\\left(\\!\\!{\\alpha\\choose k}\\!\\!\\right) := {\\alpha+k-1 \\choose k} = \\frac{\\alpha (\\alpha+1) (\\alpha+2) \\cdots (\\alpha+k-1)}{k!}\\,."
},
{
"math_id": 40,
"text": "\\frac{1}{1-x} = \\sum_{n=0}^\\infty x^n"
},
{
"math_id": 41,
"text": "\\alpha=1"
},
{
"math_id": 42,
"text": "|x|<1"
},
{
"math_id": 43,
"text": "\\frac{1}{(1-x)^n} = \\frac{1}{(n-1)!}\\frac{d^{n-1}}{dx^{n-1}}\\frac{1}{1-x}"
},
{
"math_id": 44,
"text": "\\alpha=n"
},
{
"math_id": 45,
"text": "(1-x^2)^{1/2}=1-\\frac{x^2}2-\\frac{x^4}8-\\frac{x^6}{16}\\cdots"
},
{
"math_id": 46,
"text": "(1-x^2)^{3/2}=1-\\frac{3x^2}2+\\frac{3x^4}8+\\frac{x^6}{16}\\cdots"
},
{
"math_id": 47,
"text": "(1-x^2)^{1/3}=1-\\frac{x^2}3-\\frac{x^4}9-\\frac{5x^6}{81}\\cdots"
}
]
| https://en.wikipedia.org/wiki?curid=696619 |
69664162 | Empirical dynamic modeling |
Empirical dynamic modeling (EDM) is a framework for analysis and prediction of nonlinear dynamical systems. Applications include population dynamics, ecosystem service, medicine, neuroscience, dynamical systems, geophysics, and human-computer interaction. EDM was originally developed by Robert May and George Sugihara. It can be considered a methodology for data modeling, predictive analytics, dynamical system analysis, machine learning and time series analysis.
Description.
Mathematical models have tremendous power to describe observations of real-world systems. They are routinely used to test hypotheses, explain mechanisms and predict future outcomes. However, real-world systems are often nonlinear and multidimensional, in some instances rendering explicit equation-based modeling problematic. Empirical models, which infer patterns and associations from the data instead of using hypothesized equations, represent a natural and flexible framework for modeling complex dynamics.
Donald DeAngelis and Simeon Yurek illustrated that canonical statistical models are ill-posed when applied to nonlinear dynamical systems. A hallmark of nonlinear dynamics is state-dependence: system states are related to previous states governing transition from one state to another. EDM operates in this space, the multidimensional state-space of system dynamics rather than on one-dimensional observational time series. EDM does not presume relationships among states, for example, a functional dependence, but projects future states from localised, neighboring states. EDM is thus a state-space, nearest-neighbors paradigm where system dynamics are inferred from states derived from observational time series. This provides a model-free representation of the system naturally encompassing nonlinear dynamics.
A cornerstone of EDM is recognition that time series observed from a dynamical system can be transformed into higher-dimensional state-spaces by time-delay embedding with Takens's theorem. The state-space models are evaluated based on in-sample fidelity to observations, conventionally with Pearson correlation between predictions and observations.
Methods.
EDM is continuing to evolve. As of 2022, the main algorithms are Simplex projection, Sequential locally weighted global linear maps (S-Map) projection, Multivariate embedding in Simplex or S-Map, Convergent cross mapping (CCM), and Multiview embedding, described below.
Nearest neighbors are found according to:
formula_1
Simplex.
Simplex projection is a nearest neighbor projection. It locates the formula_0 nearest neighbors to the location in the state-space from which a prediction is desired. To minimize the number of free parameters, formula_0 is typically set to formula_2, defining an formula_2-dimensional simplex in the state-space. The prediction is computed as the weighted average of the phase-space simplex projected formula_3 points ahead, with each neighbor weighted according to its distance from the prediction-origin vector in the state-space (nearer neighbors receiving larger weight).
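A compact sketch of Simplex projection in Python with NumPy (illustrative, not a reference implementation): embed a scalar series with time lags, find the formula_2 nearest neighbors of the query state, weight them with the exponential kernel, and average their futures.

```python
import numpy as np

def simplex_forecast(x: np.ndarray, E: int, Tp: int = 1) -> float:
    """Predict x[t + Tp] from the last embedded state of the series x."""
    # Time-delay embedding: row i is (x[t], x[t-1], ..., x[t-E+1]), t = E-1+i.
    states = np.column_stack([x[E - 1 - j:len(x) - j] for j in range(E)])
    query, library = states[-1], states[:-Tp]    # library states have futures
    dist = np.linalg.norm(library - query, axis=1)
    nn = np.argsort(dist)[:E + 1]                # the E+1 nearest neighbors
    w = np.exp(-dist[nn] / max(dist[nn][0], 1e-12))
    futures = x[nn + (E - 1) + Tp]               # neighbor values Tp steps ahead
    return float(np.sum(w * futures) / np.sum(w))

# Example: one-step forecast of the chaotic logistic map.
x = np.empty(500); x[0] = 0.4
for t in range(499):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])
print(simplex_forecast(x[:-1], E=3), x[-1])      # prediction vs. truth
```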
S-Map.
S-Map extends the state-space prediction in Simplex from an average of the formula_2 nearest neighbors to a linear regression fit to all neighbors, localised with an exponential decay kernel. The exponential localisation function is formula_9, where formula_10 is the neighbor distance and formula_11 the mean distance. In this way, depending on the value of formula_12, neighbors close to the prediction origin point have a higher weight than those further from it, such that a local linear approximation to the nonlinear system is reasonable. This localisation ability allows one to identify an optimal local scale, in effect quantifying the degree of state dependence, and hence the nonlinearity of the system.
Another feature of S-Map is that for a properly fit model, the regression coefficients between variables have been shown to approximate the gradient (directional derivative) of variables along the manifold. These Jacobians represent the time-varying interaction strengths between system variables.
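A sketch of the S-Map step for a single query state (illustrative; it follows the weighted least-squares recipe above):

```python
import numpy as np

def smap_forecast(library, futures, query, theta: float) -> float:
    """library: (n, E) states; futures: (n,) values Tp ahead; query: (E,)."""
    dist = np.linalg.norm(library - query, axis=1)
    w = np.exp(-theta * dist / max(dist.mean(), 1e-12))   # localisation kernel
    A = np.column_stack([np.ones(len(library)), library]) * w[:, None]
    b = futures * w
    c, *_ = np.linalg.lstsq(A, b, rcond=None)             # [c0, c1, ..., cE]
    return float(c[0] + c[1:] @ query)
```

With theta = 0 the weights are uniform and the fit reduces to a single global linear map; increasing theta localises the fit, and the fitted coefficients c[1:] approximate the Jacobian entries discussed above.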
Multivariate Embedding.
Multivariate Embedding recognizes that time-delay embeddings are not the only valid state-space construction. In Simplex and S-Map one can generate a state-space from observational vectors, or time-delay embeddings of a single observational time series, or both.
Convergent Cross Mapping.
Convergent cross mapping (CCM) leverages a corollary to the Generalized Takens Theorem that it should be possible to cross predict or "cross map" between variables observed from the same system. Suppose that in some dynamical system involving variables formula_24 and formula_25, formula_24 causes formula_25. Since formula_24 and formula_25 belong to the same dynamical system, their reconstructions (via embeddings) formula_26, and formula_27, also map to the same system.
The causal variable formula_24 leaves a signature on the affected variable formula_25, and consequently, the reconstructed states based on formula_25 can be used to cross predict values of formula_24. CCM leverages this property to infer causality by predicting formula_24 using the formula_27 library of points (or vice versa for the other direction of causality), while assessing improvements in cross-map predictability as larger and larger random samplings of formula_27 are used. If the prediction skill of formula_24 increases and saturates as the entire formula_27 is used, this provides evidence that formula_24 is causally influencing formula_25.
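A schematic sketch of CCM on a toy pair of coupled logistic maps (X driving Y; parameter values are illustrative): embed Y, cross map to X with a simplex-style estimator, and watch prediction skill grow and saturate as the library size increases.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = np.empty(n); Y = np.empty(n); X[0], Y[0] = 0.4, 0.2
for t in range(n - 1):
    X[t + 1] = X[t] * (3.8 - 3.8 * X[t])
    Y[t + 1] = Y[t] * (3.5 - 3.5 * Y[t] - 0.1 * X[t])    # X causes Y

E = 2
My = np.column_stack([Y[E - 1:], Y[:n - E + 1]])         # embedding of Y
targets = X[E - 1:]                                      # contemporaneous X

def cross_map_skill(lib_size: int) -> float:
    lib = rng.choice(len(My), lib_size, replace=False)
    preds = []
    for i in range(len(My)):
        cand = lib[lib != i]                             # exclude the point itself
        d = np.linalg.norm(My[cand] - My[i], axis=1)
        order = np.argsort(d)[:E + 1]
        w = np.exp(-d[order] / max(d[order][0], 1e-12))
        preds.append(np.sum(w * targets[cand[order]]) / np.sum(w))
    return np.corrcoef(preds, targets)[0, 1]

for L in (25, 100, 400):
    print(L, round(cross_map_skill(L), 3))               # skill generally rises with L
```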
Multiview Embedding.
Multiview Embedding is a dimensionality reduction technique in which a large number of state-space time series vectors are combinatorially assessed towards maximal model predictability.
Extensions.
Extensions to EDM techniques include:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "\\text{NN}(y, X, k) = \\| X_{N_i}^{E} - y\\| \\leq \\| X_{N_j}^{E} - y\\| \\text{ if } 1 \\leq i \\leq j \\leq k"
},
{
"math_id": 2,
"text": "E+1"
},
{
"math_id": 3,
"text": "Tp"
},
{
"math_id": 4,
"text": "N_k \\gets \\text{NN}(y, X, k)"
},
{
"math_id": 5,
"text": "d \\gets \\| X_{N_1}^{E} - y\\|"
},
{
"math_id": 6,
"text": "i=1,\\dots,k"
},
{
"math_id": 7,
"text": "w_i \\gets \\exp (-\\| X_{N_i}^{E} - y\\| / d )"
},
{
"math_id": 8,
"text": "\\hat{y} \\gets \\sum_{i = 1}^{k} \\left(w_iX_{N_i+T_p}\\right) / \\sum_{i = 1}^{k} w_i"
},
{
"math_id": 9,
"text": "F(\\theta) = \\text{exp}(-\\theta d/D)"
},
{
"math_id": 10,
"text": "d"
},
{
"math_id": 11,
"text": "D"
},
{
"math_id": 12,
"text": "\\theta"
},
{
"math_id": 13,
"text": "N \\gets \\text{NN}(y, X, k)"
},
{
"math_id": 14,
"text": "D \\gets \\frac{1}{k} \\sum_{i=1}^k \\| X_{N_i}^{E} - y\\|"
},
{
"math_id": 15,
"text": "w_i \\gets \\exp (-\\theta \\| X_{N_i}^{E} - y\\| / D )"
},
{
"math_id": 16,
"text": "W \\gets \\text{diag}(w_i)"
},
{
"math_id": 17,
"text": "A \\gets\n \\begin{bmatrix}\n 1 & X_{N_1} & X_{N_1- 1} & \\dots & X_{N_1 - E + 1} \\\\\n 1 & X_{N_2} & X_{N_2- 1} & \\dots & X_{N_2 - E + 1} \\\\\n \\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n 1 & X_{N_k} & X_{N_k- 1} & \\dots & X_{N_k - E + 1} \n \\end{bmatrix}"
},
{
"math_id": 18,
"text": "A \\gets WA"
},
{
"math_id": 19,
"text": "b \\gets \n \\begin{bmatrix}\n X_{N_1 + T_p} \\\\\n X_{N_2 + T_p} \\\\\n \\vdots \\\\\n X_{N_k + T_p} \n \\end{bmatrix}"
},
{
"math_id": 20,
"text": "b \\gets Wb"
},
{
"math_id": 21,
"text": "\\hat{c} \\gets \\text{argmin}_{c}\\| Ac - b \\|_2^2"
},
{
"math_id": 22,
"text": "\\hat{c}"
},
{
"math_id": 23,
"text": "\\hat{y} \\gets \\hat{c}_0 + \\sum_{i=1}^E\\hat{c}_iy_i"
},
{
"math_id": 24,
"text": "X"
},
{
"math_id": 25,
"text": "Y"
},
{
"math_id": 26,
"text": "M_{x}"
},
{
"math_id": 27,
"text": "M_{y}"
}
]
| https://en.wikipedia.org/wiki?curid=69664162 |
69665405 | Macbeath region | Brief description on Macbeath Regions
In mathematics, a Macbeath region is an explicitly defined region in convex analysis on a bounded convex subset of "d"-dimensional Euclidean space formula_0. The idea was introduced by Alexander Macbeath (1952), and the name was coined by G. Ewald, D. G. Larman and C. A. Rogers in 1970. Macbeath regions have been used to solve certain complex problems in the study of the boundaries of convex bodies. Recently they have been used in the study of convex approximations and other aspects of computational geometry.
Definition.
Let "K" be a bounded convex set in a Euclidean space. Given a point "x" and a scaler λ the λ-scaled the Macbeath region around a point "x" is:
formula_1
Given a scalar "λ", the λ-scaled Macbeath region at "x" is defined as:
formula_2
This can be seen to be the intersection of "K" with the reflection of "K" around "x" scaled by λ.
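For intuition, the definition can be computed directly for convex polygons in the plane. The following minimal sketch assumes the Shapely library (an illustrative dependency choice) and an interior point "x":

from shapely.geometry import Polygon, Point
from shapely.affinity import scale

def macbeath_region(K, x, lam=1.0):
    # Reflect K through x (giving 2x - K), intersect with K, then scale about x.
    reflected = scale(K, xfact=-1, yfact=-1, origin=x)   # 2x - K
    M = K.intersection(reflected)                        # K intersect (2x - K)
    return scale(M, xfact=lam, yfact=lam, origin=x)      # x + lam*((K-x) intersect (x-K))

# Example: a rectangle with an off-centre interior point.
K = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])
M = macbeath_region(K, Point(1, 1), lam=0.5)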
Scaled Macbeath regions are closely related to balls in the Hilbert metric on "K": for formula_5,
formula_6
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\R^d"
},
{
"math_id": 1,
"text": " {M_K}(x)=K \\cap (2x - K) = x + ((K-x)\\cap(x-K)) = \\{k'\\in K| \\exists k \\in K \\text{ and } k'-x=x-k \\}"
},
{
"math_id": 2,
"text": " M_K^{\\lambda}(x)=x + \\lambda ((K-x)\\cap(x-K)) = \\{(1-\\lambda)x+\\lambda k'|k'\\in K, \\exists k\\in K \\text{ and } k'-x=x-k \\}\n"
},
{
"math_id": 3,
"text": "\\epsilon"
},
{
"math_id": 4,
"text": "O(\\log^{\\frac{d+1}{2}}(\\frac{1}{\\epsilon}))"
},
{
"math_id": 5,
"text": "0\\leq\\lambda<1"
},
{
"math_id": 6,
"text": "B_H\\left(x,\\frac{1}{2}\\ln(1+\\lambda)\\right)\\subset M^\\lambda (x) \\subset B_H\\left(x,\\frac{1}{2}\\ln\\frac{1+ \\lambda}{1-\\lambda}\\right) "
},
{
"math_id": 7,
"text": "M_K^{\\lambda}(x)"
},
{
"math_id": 8,
"text": " x,y \\in K"
},
{
"math_id": 9,
"text": "M^{\\frac{1}{2}}(x) \\cap M^{\\frac{1}{2}}(y) \\neq \\empty"
},
{
"math_id": 10,
"text": "M^{1}(y)\\subset M^{5}(x)"
},
{
"math_id": 11,
"text": "R^d"
},
{
"math_id": 12,
"text": "K \\cap H"
},
{
"math_id": 13,
"text": "\\frac{r}{2}"
},
{
"math_id": 14,
"text": "K \\cap H \\subset M^{3d(x)}"
},
{
"math_id": 15,
"text": "K \\subset R^d"
},
{
"math_id": 16,
"text": "\\frac{1}{6d}"
},
{
"math_id": 17,
"text": "C \\subset M^{3d}(x)"
},
{
"math_id": 18,
"text": "\\lambda>0"
},
{
"math_id": 19,
"text": "M^{\\lambda}(x) \\cap K \\subset C^{1+\\lambda}"
},
{
"math_id": 20,
"text": "\\lambda\\leq 1"
},
{
"math_id": 21,
"text": " M^{\\lambda}(x) \\subset C^{1+\\lambda}"
},
{
"math_id": 22,
"text": "C \\cap M'(x) \\neq \\empty"
},
{
"math_id": 23,
"text": "M'(x)\\subset C^2"
},
{
"math_id": 24,
"text": "\\epsilon>0"
},
{
"math_id": 25,
"text": "K\\subset R^d"
},
{
"math_id": 26,
"text": "O\\left(\\frac{1}{\\epsilon^{\\frac{d-1}{2}}}\\right)"
},
{
"math_id": 27,
"text": "R_1,....,R_k"
},
{
"math_id": 28,
"text": "C_1,....,C_k"
},
{
"math_id": 29,
"text": "\\beta"
},
{
"math_id": 30,
"text": "\\lambda"
},
{
"math_id": 31,
"text": "C_i"
},
{
"math_id": 32,
"text": "\\beta\\epsilon"
},
{
"math_id": 33,
"text": "R_i \\subset C_i \\subset R_i^{\\lambda}"
},
{
"math_id": 34,
"text": "R_i \\subset C"
},
{
"math_id": 35,
"text": "C_i^{\\frac{1}{\\beta^2}} \\subset C \\subset C_i"
}
]
| https://en.wikipedia.org/wiki?curid=69665405 |
6966559 | Nanocrystalline material | A nanocrystalline (NC) material is a polycrystalline material with a crystallite size of only a few nanometers. These materials fill the gap between amorphous materials without any long range order and conventional coarse-grained materials. Definitions vary, but nanocrystalline material is commonly defined as a crystallite (grain) size below 100 nm. Grain sizes from 100 to 500 nm are typically considered "ultrafine" grains.
The grain size of a NC sample can be estimated using x-ray diffraction. In materials with very small grain sizes, the diffraction peaks will be broadened. This broadening can be related to a crystallite size using the Scherrer equation (applicable up to ~50 nm), a Williamson-Hall plot, or more sophisticated methods such as the Warren-Averbach method or computer modeling of the diffraction pattern. The crystallite size can be measured directly using transmission electron microscopy.
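As a brief illustration of the Scherrer estimate (not a replacement for the more sophisticated methods mentioned above), the following sketch computes a crystallite size from a peak width; the shape factor of 0.9, the Cu K-alpha wavelength default, and the example peak values are assumptions made for the example.

import numpy as np

def scherrer_size(beta_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size D = K*lambda / (beta*cos(theta)), with beta the
    peak FWHM in radians and theta the Bragg angle."""
    beta = np.radians(beta_deg)
    theta = np.radians(two_theta_deg / 2)
    return K * wavelength_nm / (beta * np.cos(theta))

# e.g. a 0.8 deg-wide peak at 2-theta = 44 deg with Cu K-alpha gives D of about 11 nm
print(scherrer_size(beta_deg=0.8, two_theta_deg=44.0))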
Synthesis.
Nanocrystalline materials can be prepared in several ways. Methods are typically categorized based on the phase of matter the material transitions through before forming the nanocrystalline final product.
Solid-state processing.
Solid-state processes do not involve melting or evaporating the material and are typically done at relatively low temperatures. Examples of solid state processes include mechanical alloying using a high-energy ball mill and certain types of severe plastic deformation processes.
Liquid processing.
Nanocrystalline metals can be produced by rapid solidification from the liquid using a process such as melt spinning. This often produces an amorphous metal, which can be transformed into a nanocrystalline metal by annealing above the crystallization temperature.
Vapor-phase processing.
Thin films of nanocrystalline materials can be produced using vapor deposition processes such as MOCVD.
Solution processing.
Some metals, particularly nickel and nickel alloys, can be made into nanocrystalline foils using electrodeposition.
Mechanical properties.
Nanocrystalline materials show exceptional mechanical properties relative to their coarse-grained varieties. Because the volume fraction of grain boundaries in nanocrystalline materials can be as large as 30%, the mechanical properties of nanocrystalline materials are significantly influenced by this amorphous grain boundary phase. For example, the elastic modulus has been shown to decrease by 30% for nanocrystalline metals and more than 50% for nanocrystalline ionic materials. This is because the amorphous grain boundary regions are less dense than the crystalline grains, and thus have a larger volume per atom, formula_0. Assuming the interatomic potential, formula_1, is the same within the grain boundaries as in the bulk grains, the elastic modulus, formula_2, will be smaller in the grain boundary regions than in the bulk grains. Thus, via the rule of mixtures, a nanocrystalline material will have a lower elastic modulus than its bulk crystalline form.
Nanocrystalline metals.
The exceptional yield strength of nanocrystalline metals is due to grain boundary strengthening, as grain boundaries are extremely effective at blocking the motion of dislocations. Yielding occurs when the stress due to dislocation pileup at a grain boundary becomes sufficient to activate slip of dislocations in the adjacent grain. This critical stress increases as the grain size decreases, and these physics are empirically captured by the Hall-Petch relationship,
formula_3
where formula_4 is the yield stress, formula_5 is a material-specific constant that accounts for the effects of all other strengthening mechanisms, formula_6 is a material-specific constant that describes the magnitude of the metal's response to grain size strengthening, and formula_7 is the average grain size. Additionally, because nanocrystalline grains are too small to contain a significant number of dislocations, nanocrystalline metals undergo negligible amounts of strain-hardening, and nanocrystalline materials can thus be assumed to behave with perfect plasticity.
As the grain size continues to decrease, a critical grain size is reached at which intergranular deformation, i.e. grain boundary sliding, becomes more energetically favorable than intragranular dislocation motion. Below this critical grain size, often referred to as the “reverse” or “inverse” Hall-Petch regime, any further decrease in the grain size weakens the material because an increase in grain boundary area results in increased grain boundary sliding. Chandross & Argibay modeled grain boundary sliding as viscous flow and related the yield strength of the material in this regime to material properties as
formula_8
where formula_9 is the enthalpy of fusion, formula_10 is the atomic volume in the amorphous phase, formula_11 is the melting temperature, and formula_12 is the volume fraction of material in the grains vs the grain boundaries, given by formula_13, where formula_14 is the grain boundary thickness and typically on the order of 1 nm. The maximum strength of a metal is given by the intersection of this line with the Hall-Petch relationship, which typically occurs around a grain size of formula_7 = 10 nm for BCC and FCC metals.
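The crossover between the two regimes can be illustrated numerically. In the following sketch the Hall-Petch constants, the grain-boundary strength scale, and the grain boundary thickness formula_14 = 1 nm are assumed example values, not data for any particular alloy; with these numbers the two curves intersect near formula_7 = 10 nm, consistent with the maximum-strength grain size quoted above.

import numpy as np

def hall_petch(d_nm, sigma0=100.0, K=3500.0):
    # Yield stress (MPa) in the grain boundary strengthening regime.
    return sigma0 + K * d_nm ** -0.5

def gb_sliding(d_nm, tau_amorphous=1650.0, delta_nm=1.0):
    # Viscous-flow estimate: strength scales with the grain volume fraction f_g.
    f_g = (1 - delta_nm / d_nm) ** 3
    return tau_amorphous * f_g

d = np.logspace(0.3, 2, 500)   # grain sizes from about 2 nm to 100 nm
crossover = d[np.argmin(np.abs(hall_petch(d) - gb_sliding(d)))]
print(crossover)               # ~10 nm with these assumed constants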
Due to the large amount of interfacial energy associated with a large volume fraction of grain boundaries, nanocrystalline metals are thermally unstable. In nanocrystalline samples of low-melting point metals (i.e. aluminum, tin, and lead), the grain size of the samples was observed to double from 10 to 20 nm after 24 hours of exposure to ambient temperatures. Although materials with higher melting points are more stable at room temperatures, consolidating nanocrystalline feedstock into a macroscopic component often requires exposing the material to elevated temperatures for extended periods of time, which will result in coarsening of the nanocrystalline microstructure. Thus, thermally stable nanocrystalline alloys are of considerable engineering interest. Experiments have shown that traditional microstructural stabilization techniques, such as grain boundary pinning via solute segregation or increased solute concentrations, can be successful in some alloy systems, such as Pd-Zr and Ni-W.
Nanocrystalline ceramics.
While the mechanical behavior of ceramics is often dominated by flaws, i.e. porosity, instead of grain size, grain-size strengthening is also observed in high-density ceramic specimens. Additionally, nanocrystalline ceramics have been shown to sinter more rapidly than bulk ceramics, leading to higher densities and improved mechanical properties, although extended exposure to the high pressures and elevated temperatures required to sinter the part to full density can result in coarsening of the nanostructure.
The large volume fraction of grain boundaries associated with nanocrystalline materials causes interesting behavior in ceramic systems, such as superplasticity in otherwise brittle ceramics. The large volume fraction of grain boundaries allows for a significant diffusional flow of atoms via Coble creep, analogous to the grain boundary sliding deformation mechanism in nanocrystalline metals. Because the diffusional creep rate scales as formula_15 and linearly with the grain boundary diffusivity, refining the grain size from 10 μm to 10 nm can increase the diffusional creep rate by approximately 11 orders of magnitude. This superplasticity could prove invaluable for the processing of ceramic components, as the material may be converted back into a conventional, coarse-grained material via additional thermal treatment after forming.
Processing.
While the synthesis of nanocrystalline feedstocks in the form of foils, powders, and wires is relatively straightforward, the tendency of nanocrystalline feedstocks to coarsen upon extended exposure to elevated temperatures means that low-temperature and rapid densification techniques are necessary to consolidate these feedstocks into bulk components. A variety of techniques show potential in this respect, such as spark plasma sintering or ultrasonic additive manufacturing, although the synthesis of bulk nanocrystalline components on a commercial scale remains untenable.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Omega"
},
{
"math_id": 1,
"text": "U(\\Omega)"
},
{
"math_id": 2,
"text": "E \\propto \\partial^2 U/\\partial \\Omega^2"
},
{
"math_id": 3,
"text": "\\sigma_y = \\sigma_0 + Kd^{-1/2},"
},
{
"math_id": 4,
"text": "\\sigma_y"
},
{
"math_id": 5,
"text": "\\sigma_0"
},
{
"math_id": 6,
"text": "K"
},
{
"math_id": 7,
"text": "d"
},
{
"math_id": 8,
"text": " \\tau = \\bigg(L\\frac{\\rho_L}{M}\\bigg)\\bigg(1-\\frac{T}{T_m}\\bigg)f_g,"
},
{
"math_id": 9,
"text": "L"
},
{
"math_id": 10,
"text": "\\rho_L/M"
},
{
"math_id": 11,
"text": "T_m"
},
{
"math_id": 12,
"text": "f_g"
},
{
"math_id": 13,
"text": "f_g = (1-\\delta/d)^3"
},
{
"math_id": 14,
"text": "\\delta"
},
{
"math_id": 15,
"text": "d^{-3}"
}
]
| https://en.wikipedia.org/wiki?curid=6966559 |
6967420 | Heinz mean | Mean in mathematics
In mathematics, the Heinz mean (named after E. Heinz) of two non-negative real numbers "A" and "B" was defined by Bhatia as:
formula_0
with 0 ≤ "x" ≤ 1/2.
For different values of "x", this Heinz mean interpolates between the arithmetic ("x" = 0) and geometric ("x" = 1/2) means such that for 0 < "x" < 1/2:
formula_1
The Heinz means appear naturally when symmetrizing formula_2-divergences.
It may also be defined in the same way for positive semidefinite matrices, and satisfies a similar interpolation formula.
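A short numerical check of the interpolation property, together with the matrix version via fractional matrix powers from SciPy; the sample values are arbitrary and chosen only for illustration.

import numpy as np
from scipy.linalg import fractional_matrix_power as mpow

def heinz(A, B, x):
    return (A ** x * B ** (1 - x) + A ** (1 - x) * B ** x) / 2

A, B = 4.0, 9.0
print(heinz(A, B, 0.5), heinz(A, B, 0.25), heinz(A, B, 0.0))
# 6.0 (geometric) < 6.123... < 6.5 (arithmetic)

def heinz_matrix(A, B, x):
    # Same formula with matrix powers, for positive semidefinite A and B.
    return (mpow(A, x) @ mpow(B, 1 - x) + mpow(A, 1 - x) @ mpow(B, x)) / 2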
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\operatorname{H}_x(A, B) = \\frac{A^x B^{1-x} + A^{1-x} B^x}{2},"
},
{
"math_id": 1,
"text": "\\sqrt{A B} = \\operatorname{H}_\\frac{1}{2}(A, B) < \\operatorname{H}_x(A, B) < \\operatorname{H}_0(A, B) = \\frac{A + B}{2}."
},
{
"math_id": 2,
"text": "\\alpha"
}
]
| https://en.wikipedia.org/wiki?curid=6967420 |
69680802 | Recursive largest first algorithm | Algorithm for graph coloring
The Recursive Largest First (RLF) algorithm is a heuristic for the NP-hard graph coloring problem. It was originally proposed by Frank Leighton in 1979.
The RLF algorithm assigns colors to a graph’s vertices by constructing each color class one at a time. It does this by identifying a maximal independent set of vertices in the graph, assigning these to the same color, and then removing these vertices from the graph. These actions are repeated on the remaining subgraph until no vertices remain.
To form high-quality solutions (solutions using few colors), the RLF algorithm uses specialized heuristic rules to try to identify "good quality" independent sets. These heuristics make the RLF algorithm exact for bipartite, cycle, and wheel graphs. In general, however, the algorithm is approximate and may well return solutions that use more colors than the graph’s chromatic number.
Description.
The algorithm can be described by the following three steps. At the end of this process, formula_0 gives a partition of the vertices representing a feasible formula_1-colouring of the graph formula_2.
1. Initialise the empty solution formula_3.
2. Using the heuristic rules, identify a maximal independent set formula_7 in the graph formula_4, where formula_5 is the vertex set and formula_6 is the edge set.
3. Add formula_8 to the solution as a new colour class by setting formula_9, remove the vertices of formula_8 from the graph, and return to Step 2 if any vertices remain.
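A compact Python sketch of this procedure follows. The adjacency-set input format is an illustrative choice, and the selection rules used, starting each colour class with the uncoloured vertex of maximal remaining degree and then repeatedly adding the candidate with the most neighbours among the forbidden vertices, are the heuristics commonly attributed to Leighton.

def rlf(adj):
    """adj: dict mapping each vertex to a set of neighbours."""
    uncoloured = set(adj)
    solution = []                                # the partition of vertices
    while uncoloured:
        candidates = set(uncoloured)
        forbidden = set()                        # vertices adjacent to the class
        # Start with the uncoloured vertex of maximal remaining degree.
        v = max(candidates, key=lambda u: len(adj[u] & uncoloured))
        S = set()
        while True:
            S.add(v)
            forbidden |= adj[v] & candidates
            candidates -= adj[v] | {v}
            if not candidates:
                break
            # Prefer the candidate with most neighbours already forbidden.
            v = max(candidates, key=lambda u: len(adj[u] & forbidden))
        solution.append(S)                       # record the colour class
        uncoloured -= S
    return solution

# Wheel graph from the example below: hub g plus cycle a-b-c-d-e-f.
wheel = {'g': set('abcdef'),
         'a': {'b', 'f', 'g'}, 'b': {'a', 'c', 'g'}, 'c': {'b', 'd', 'g'},
         'd': {'c', 'e', 'g'}, 'e': {'d', 'f', 'g'}, 'f': {'e', 'a', 'g'}}
print(rlf(wheel))   # three colour classes: {g}, {a,c,e}, {b,d,f}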
Example.
Consider the graph formula_4 shown on the right. This is a wheel graph and will therefore be optimally colored by RLF. Executing the algorithm results in the vertices being selected and colored in the following order: first the hub vertex formula_10 is selected and given the first color; then vertices formula_11, formula_12, and formula_13 are given the second color; finally vertices formula_14, formula_15, and formula_16 are given the third color.
This gives the final three-colored solution formula_17.
Performance.
Let formula_18 be the number of vertices in the graph and let formula_19 be the number of edges. Using big O notation, in his original publication Leighton states the complexity of RLF to be formula_20; however, this can be improved upon. Much of the expense of this algorithm is due to Step 2, where vertex selection is made according to the heuristic rules stated above. Indeed, each time a vertex is selected for addition to the independent set formula_8, information regarding the neighbors needs to be recalculated for each uncolored vertex. These calculations can be performed in formula_21 time, meaning that the overall complexity of RLF is formula_22.
If the heuristics of Step 2 are replaced with random selection, then the complexity of this algorithm reduces to formula_23; however, the resultant algorithm will usually return lower quality solutions compared to those of RLF. It will also now be inexact for bipartite, cycle, and wheel graphs.
In an empirical comparison by Lewis in 2021, RLF was shown to produce significantly better vertex colorings than alternative heuristics such as the formula_23 greedy algorithm and the formula_24 DSatur algorithm on random graphs. However, runtimes with RLF were also seen to be higher than these alternatives due to its higher overall complexity.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{S}"
},
{
"math_id": 1,
"text": "|\\mathcal{S}|"
},
{
"math_id": 2,
"text": "G"
},
{
"math_id": 3,
"text": "\\mathcal{S}=\\emptyset"
},
{
"math_id": 4,
"text": "G=(V,E)"
},
{
"math_id": 5,
"text": "V"
},
{
"math_id": 6,
"text": "E"
},
{
"math_id": 7,
"text": "S\\subseteq V"
},
{
"math_id": 8,
"text": "S"
},
{
"math_id": 9,
"text": "\\mathcal{S}=\\mathcal{S}\\cup \\{S\\}"
},
{
"math_id": 10,
"text": "g"
},
{
"math_id": 11,
"text": "a"
},
{
"math_id": 12,
"text": "c"
},
{
"math_id": 13,
"text": "e"
},
{
"math_id": 14,
"text": "b"
},
{
"math_id": 15,
"text": "d"
},
{
"math_id": 16,
"text": "f"
},
{
"math_id": 17,
"text": "\\mathcal{S} = \\{\\{g\\}, \\{a, c, e\\}, \\{b, d, f\\}\\}"
},
{
"math_id": 18,
"text": "n"
},
{
"math_id": 19,
"text": "m"
},
{
"math_id": 20,
"text": "\\mathcal{O}(n^3)"
},
{
"math_id": 21,
"text": "\\mathcal{O}(m)"
},
{
"math_id": 22,
"text": "\\mathcal{O}(mn)"
},
{
"math_id": 23,
"text": "\\mathcal{O}(n+m)"
},
{
"math_id": 24,
"text": "\\mathcal{O}((n + m) \\lg n)"
}
]
| https://en.wikipedia.org/wiki?curid=69680802 |
696817 | Roothaan equations | The Roothaan equations are a representation of the Hartree–Fock equation in a non-orthonormal basis set, which can be of Gaussian or Slater type. They apply to closed-shell molecules or atoms where all molecular orbitals or atomic orbitals, respectively, are doubly occupied. This is generally called Restricted Hartree–Fock theory.
The method was developed independently by Clemens C. J. Roothaan and George G. Hall in 1951, and is thus sometimes called the "Roothaan-Hall equations". The Roothaan equations can be written in a form resembling a generalized eigenvalue problem, although they are not a standard eigenvalue problem because they are nonlinear:
formula_0
where F is the Fock matrix (which depends on the coefficients C due to electron-electron interactions), C is a matrix of coefficients, S is the overlap matrix of the basis functions, and formula_1 is the (diagonal, by convention) matrix of orbital energies. In the case of an orthonormalised basis set the overlap matrix, S, reduces to the identity matrix. These equations are essentially a special case of a Galerkin method applied to the Hartree–Fock equation using a particular basis set.
In contrast to the Hartree–Fock equations - which are integro-differential equations - the Roothaan–Hall equations have a matrix-form. Therefore, they can be solved using standard techniques. | [
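For concreteness, the matrix step solved at each self-consistent-field iteration can be sketched with SciPy's generalized symmetric eigensolver; the 2x2 matrices below are placeholder numbers, not a real molecular system.

import numpy as np
from scipy.linalg import eigh

F = np.array([[-1.5, -0.6],
              [-0.6, -0.8]])     # current Fock matrix (placeholder values)
S = np.array([[ 1.0,  0.4],
              [ 0.4,  1.0]])     # overlap matrix of the basis functions

eps, C = eigh(F, S)              # solve FC = SC(eps) as a generalized problem
assert np.allclose(F @ C, S @ C @ np.diag(eps))   # Roothaan equations hold
assert np.allclose(C.T @ S @ C, np.eye(2))        # orbitals are S-orthonormal
# In a full SCF loop, a new F would be built from C and this step repeated
# until the coefficients stop changing, reflecting the nonlinearity noted above.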
{
"math_id": 0,
"text": "\\mathbf{F} \\mathbf{C} = \\mathbf{S} \\mathbf{C} \\mathbf{\\epsilon}"
},
{
"math_id": 1,
"text": "\\epsilon"
}
]
| https://en.wikipedia.org/wiki?curid=696817 |
69682704 | The Existence of God (book) | 1979 book by Richard Swinburne
The Existence of God is a 1979 book by British philosopher of religion Richard Swinburne, claiming the existence of the Abrahamic God on rational grounds. The argument rests on an updated version of natural theology with biological evolution using scientific inference, mathematical probability theory, such as Bayes' theorem, and of inductive logic. In 2004, a second edition was released under the same title.
Swinburne discusses the intrinsic probability of theism, with an everlastingly omnipotent, omniscient and perfectly free God. He states various reasons for the existence of God, such as cosmological and teleological arguments, arguments from the consciousness of the higher vertebrates including humans, morality, providence, history, miracles and religious experience. Swinburne claims that the occurrence of evil does not diminish the probability of God, and that the hiddenness of God can be explained by his allowing free choice to humans. He concludes that on balance it is more probable than not that God exists, with a probability larger than 0.5, on a scale of 0.0 (impossible) to 1.0 (absolutely sure).
Swinburne summarised the same argument in his later and shorter book "Is There a God?", omitting the use of Bayes' theorem and inductive logic, but including a discussion of multiple universes and cosmological inflation in the 2010 edition.
Arguments in inductive logic.
Central to the argument of Swinburne is the use of inductive logic. He defines a "correct C-inductive argument" as an argument where the premisses merely add to the probability of the conclusion, and a stronger "correct P-inductive argument" when the premisses make the conclusion probable with a probability larger than 1/2.
Probability of God according to theism using Bayes' theorem.
Swinburne applies mathematical conditional probability logic to various hypotheses related to the existence of God
and defines
formula_0 as the available evidence,
formula_1 as the hypothesis to be tested, and
formula_2 as the so-called "tautological" background knowledge.
The notation formula_3 is used for the conditional probability of an event formula_0 occurring given that another event formula_2 occurred previously. This is also termed the posterior probability of formula_0 given formula_2.
The probability of the present evidence formula_0 given background knowledge formula_2 can be written as the sum of the evidence with God existing (formula_4, e and h) and the evidence without God (formula_5, e and not h):
formula_6, with formula_7, and formula_8.
Application of Bayes' theorem to formula_9, the probability of the God hypothesis formula_1 given evidence formula_0 and background knowledge formula_2, results in
formula_10
The probability of a universe of our kind, as evidenced by formula_0, without a single omnipotent god (formula_11), that is formula_12, can be written as the sum of the probabilities of several alternative hypotheses formula_13 without a god, i = 1, 2, 3.
The sum of probabilities becomes formula_17
Swinburne then claims to refute these three hypotheses.
Admittedly, hypothesis formula_16 can explain the present state of affairs in the universe (the evidence formula_0) without the need of a God; that means the probability is 1.0: formula_21.
However, Swinburne estimates that the probability formula_22 given the background knowledge is infinitesimally low.
Then the sum of probabilities of the various hypotheses without God
formula_23 will not exceed
formula_24.
So formula_25, the posterior probability of theism or God formula_1 on the evidence formula_0 considered with background knowledge formula_2, will be 1/2 or more, by a "correct P-inductive argument". Swinburne states that it is impossible to give exact numerical values for the probabilities used.
Swinburne concludes that deductive proofs of God fail, but claims that on the basis of the above P-inductive argument, theism is probably true. He notes that in his calculation the evidence from religious experience and historical evidence of life, death and resurrection of Jesus were ignored: its addition would be sufficient to make theism overall probable with a probability larger than 1/2.
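The Bayesian step in the argument can be stated compactly in code; the input probabilities below are purely illustrative placeholders, not values assigned by Swinburne.

def posterior(prior_h, likelihood_h, likelihood_not_h):
    # P(h|e&k) by Bayes' theorem, with P(~h|k) = 1 - P(h|k).
    num = likelihood_h * prior_h
    den = num + likelihood_not_h * (1 - prior_h)
    return num / den

# With P(h|k) = 0.3, P(e|h&k) = 0.5 and P(e|~h&k) = 0.1:
print(posterior(0.3, 0.5, 0.1))   # ~0.68, i.e. larger than 1/2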
Reception.
In 2005 Joshua Golding reviewed "The Existence of God" and noted that the lack of justification for the afterlife leads to skepticism about whether God exists due to the problem of evil. The principle of credulity cannot be relied on without caution. Golding would prefer a priori proof that God exists, a better inductive argument for God's existence, or an argument assuming for practical purposes, that God exists.
In 2009 Jeremy Gwiazda, a philosopher at The City University of New York argued that Swinburne did not prove his starting point that God is simple and thus likely to exist. The arguments from mathematical simplicity and scientists' preferences both fail.
Gabe Czobel analysed Swinburne's arguments including his use of Bayesian statistics and pointed out errors in reasoning. Even if Swinburne's logic were right, a theist could not derive much consolation from it.
Dutch philosopher Herman Philipse (Utrecht University) debated Swinburne in front of an academic audience at Amsterdam in 2017. He praised Swinburne for attempting a scientific approach to the probability of God's existence, at variance with Dutch theologians who refused rational arguments. A large number of points were raised, for instance Philipse claimed that a religious explanation for the universe presupposes a finite history. A class of cyclical "bouncing universe" theories, which could be tested, features an infinite history of the universe. According to Philipse's 2012 book "God in the Age of Science?" attributing mental properties to a being requires observing its bodily behaviour, so God could not be bodiless. Swinburne replied that the universe itself can be viewed as God's body. According to Philipse, a hypothesis is tested scientifically not only for simplicity, but also for accordance with extensive background knowledge. Furthermore, Bayesian statistics cannot be applied if God is unfathomable.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "e"
},
{
"math_id": 1,
"text": "h"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "P(e|k)"
},
{
"math_id": 4,
"text": "e \\& h"
},
{
"math_id": 5,
"text": "e \\& \\sim h"
},
{
"math_id": 6,
"text": "P(e|k) = P(e \\& h|k) + P(e \\& \\sim h|k)"
},
{
"math_id": 7,
"text": "P(e \\& h|k) = P(h|k) P(e|h \\& k)"
},
{
"math_id": 8,
"text": "P(e \\& \\sim h|k) = P(e| \\sim h\\&k) P(\\sim h|k)"
},
{
"math_id": 9,
"text": "P(h|e \\& k)"
},
{
"math_id": 10,
"text": "P(h|e \\& k) = \\frac{ P(e|h \\& k) P(h|k) }{ P(e|h \\& k)P(h|k) + P(e|\\sim h \\& k) P(\\sim h|k) }\\cdot "
},
{
"math_id": 11,
"text": "\\sim h"
},
{
"math_id": 12,
"text": "P(e| \\sim h \\& k) P(\\sim h|k)"
},
{
"math_id": 13,
"text": "h_i"
},
{
"math_id": 14,
"text": "h_1"
},
{
"math_id": 15,
"text": "h_2"
},
{
"math_id": 16,
"text": "h_3"
},
{
"math_id": 17,
"text": " P(e| \\sim h \\& k) P(\\sim h|k) = P(e| h1 \\& k) P(h_1|k) + .. + P(e|h_3 \\& k) P(h_3|k) "
},
{
"math_id": 18,
"text": " P(e|h \\& k) P(h|k) >> P(e|h_1 \\& k) P(h_1|k)"
},
{
"math_id": 19,
"text": " P(e|h_2 \\& k) < P(e|h \\& k)"
},
{
"math_id": 20,
"text": "P(h_2|k) < P(h|k)."
},
{
"math_id": 21,
"text": "P(e|h_3 \\& k) = 1"
},
{
"math_id": 22,
"text": "P(h_3|k)"
},
{
"math_id": 23,
"text": " P(e|h_1 \\& k) P(h_1|k) + .. + P(e|h_3 \\& k) P(h_3|k) = P(e| \\sim h \\& k) P(\\sim h|k) "
},
{
"math_id": 24,
"text": " P(e|h \\& k)P(h|k) "
},
{
"math_id": 25,
"text": " P(h|e \\& k) "
}
]
| https://en.wikipedia.org/wiki?curid=69682704 |
69684500 | Serena Dipierro | Italian mathematician
Serena Dipierro is an Italian mathematician whose research involves partial differential equations, the regularity of their solutions, their phase transitions, nonlocal operators, and free boundary problems, with applications including population dynamics, quantum mechanics, crystallography, and mathematical finance. She is a professor in the School of Physics, Mathematics and Computing at the University of Western Australia, where she heads the department of mathematics and statistics.
Education and career.
After earning a laurea at the University of Bari in 2006, and a master's degree with Lorenzo D’Ambrosio at the same university in 2008, Dipierro finished a Ph.D. in mathematics at the International School for Advanced Studies in Trieste in 2012. Her dissertation, "Concentration phenomena for singularly perturbed elliptic problems and related topics", was supervised by Andrea Malchiodi.
She was a postdoctoral researcher at the University of Chile and the University of Edinburgh, a Humboldt Fellow, and a faculty member at the University of Melbourne and the University of Milan before taking her present position at the University of Western Australia in 2018.
Book.
With María Medina de la Torre and Enrico Valdinoci, Dipierro is a coauthor of the monograph "Fractional Elliptic Problems with Critical Growth in the Whole of formula_0" (arXiv:1506.01748; Edizioni Della Normale, 2017).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb{R}^n"
}
]
| https://en.wikipedia.org/wiki?curid=69684500 |
69684786 | 1 Samuel 22 | First Book of Samuel chapter
1 Samuel 22 is the twenty-second chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's escape from Saul's repeated attempts to kill him and the massacre of the priests in Nob. This is within a section comprising 1 Samuel 16 to 2 Samuel 5 which records the rise of David as the king of Israel.
Text.
This chapter was originally written in the Hebrew language. It is divided into 23 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 10–11 and 4Q52 (4QSamb; 250 BCE) with extant verses 8–9.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
David in Adullam and Moab (22:1–5).
David was now an outlaw (a status that continues through chapter 26) and hid in Adullam, where he was joined by his family and 'all those who were deprived and embittered', so that he became the leader of a group of 'malcontents'.
Verses 1–2.
1 David therefore departed from there and escaped to the cave of Adullam. So when his brothers and all his father's house heard it, they went down there to him. 2 And everyone who was in distress, everyone who was in debt, and everyone who was discontented gathered to him. So he became captain over them. And there were about four hundred men with him.
Verses 3–4.
3 Then David went from there to Mizpah of Moab; and he said to the king of Moab, "Please let my father and mother come here with you, till I know what God will do for me." 4 So he brought them before the king of Moab, and they dwelt with him all the time that David was in the stronghold.
Due to the uncertainty of his life as an outlaw and how it would affect his family, David sought asylum for his parents in Moab, for several reasons: his ancestral connection with Moab through Ruth, and the likely support from an enemy of Saul, who had defeated Moab in battle (1 Samuel 14:47). This action proved to be correct because Saul was very suspicious of any conspiracy and would kill anyone he suspected of it, including their families (verses 8, 13).
Verse 5.
Then the prophet Gad said to David, “Do not remain in the stronghold; depart, and go into the land of Judah.” So David departed and went into the forest of Hereth.
Massacre of the priests of Nob (22:6–23).
The sequel to the narrative of David's visit to Nob opens with Saul sitting in council at Gibeah (cf. 14:2), accusing the members of his own tribe ('you Benjaminites') of conspiracy and of not disclosing to him the pact between David and Jonathan (verse 8), thereby immediately isolating himself from his own clan. Doeg the Edomite, earlier called the 'chief of the shepherds' and now titled 'in charge of Saul's servants' (KJV: "set over the servants of Saul"), reported Ahimelech's assistance to David in giving him bread and Goliath's sword. Ahimelech, who was called to come before Saul, protested his innocence by claiming that he only treated David as Saul's obedient servant and son-in-law, without knowledge of any change in David's status. Nevertheless, Saul commanded his servants to kill the priests, but they refused to obey, so Saul commanded Doeg, who, being an Edomite, dared to execute the entire priesthood of Nob and commit blood revenge on the whole city (verse 19). One priest, Abiathar son of Ahimelech son of Ahitub, escaped and attached himself to David, thereby fulfilling the prophecy of 2:27–36, as well as securing for David the service of a priest with ephod (including "Urim and Thummim"; cf. 1 Samuel 23:9–12). The priest Abiathar remained with him as high priest until he was eventually banished by Solomon (1 Kings 2:26–27). In the end this narrative contrasts Saul, whose act of reprisal lost for him the service of a priesthood, with David, who had access to YHWH through the only priest left.
Verses 9–10.
9 Then answered Doeg the Edomite, who was set over the servants of Saul, and said, "I saw the son of Jesse going to Nob, to Ahimelech the son of Ahitub. 10 And he inquired of the Lord for him, gave him provisions, and gave him the sword of Goliath the Philistine."
The earlier reference to Doeg the Edomite becomes meaningful in this part of the narrative, which may add to the long-standing animosity between Israel and Edom (Genesis 25:25, 30; Numbers 20:1–21; Judges 3:7–11).
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
Commentaries on Samuel.
<templatestyles src="Refbegin/styles.css" />
General.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=69684786 |
69684794 | 1 Samuel 23 | First Book of Samuel chapter
1 Samuel 23 is the twenty-third chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's escape from Saul's repeated attempts to kill him. This is within a section comprising 1 Samuel 16 to 2 Samuel 5 which records the rise of David as the king of Israel.
Text.
This chapter was originally written in the Hebrew language. It is divided into 29 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q52 (4QSamb; 250 BCE) with extant verses 8–23.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
David saved the city of Keilah (23:1–13).
The narrative describes how David acted like a good king to protect the territory of Israel from a foreign aggressor (cf. 1 Samuel 9:16), although he was on the run from the actual king, Saul. At this time David was shown to have access to YHWH through the oracle (before the arrival of Abiathar and the ephod), so he inquired of YHWH twice, once on his own initiative and a second time to calm his men's uncertainty; David was given an encouraging response and an assurance of divine participation (verse 5). David's next inquiry of YHWH was by means of Abiathar and the ephod after the liberation of Keilah (verse 6), asking two questions: 'Will Saul come to Keilah? Will the inhabitants of Keilah betray him?' (verses 11–12; set out more clearly in 4QSamb than in the Masoretic Text), and obtaining an affirmative answer to both. David and his men immediately left the city, thwarting Saul's plan to capture David easily in a closed-in town such as Keilah (Saul delusionally believed that God had 'given him' into his hand, following the Greek and Targum, in preference to the Masoretic Text, which renders 'made a stranger of him'). Saul mustered a big army as he had done before, but instead of directing it against foreign attackers (1 Samuel 11:7–8; 13:3–4), he misused 'the armies of the living God' (17:26) for his own selfish purpose, to capture David. However, it is clear in the narrative that David had an advantage over Saul: David had access to YHWH, whereas Saul did not (1 Samuel 14:37).
"Then they told David, saying, "Look, the Philistines are fighting against Keilah, and they are robbing the threshing floors.""
David in wilderness strongholds (23:14–29).
David left Keilah with six hundred soldiers (up from 400 men in 1 Samuel 22) to move from place to place, avoiding Saul's pursuit. When David was in Ziph, on the edge of the wilderness of Judah, Jonathan met him to reaffirm the pact between them, Jonathan being content with second place to David, so that David now had the priesthood and the house of Saul behind him. However, as verses 19–23 show, David was still in constant danger, as the local people with whom he stayed (the Ziphites) were willing to deliver him into Saul's hand and provided the necessary information. Although David was acknowledged by Saul as 'cunning' in escaping, he was eventually cornered on reaching 'the wilderness of Maon' (verse 24), south of Hebron, where Saul and his troops were 'closing in' on him from both sides. David was saved at a critical moment because Saul was informed of a Philistine attack that he had to face as the king of Israel, and so had to abandon his plan to capture David.
"Therefore Saul returned from pursuing David, and went against the Philistines; so they called that place the Rock of Escape."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
Commentaries on Samuel.
<templatestyles src="Refbegin/styles.css" />
General.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=69684794 |
69684798 | 1 Samuel 24 | First Book of Samuel chapter
1 Samuel 24 is the twenty-fourth chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's escape from Saul's repeated attempts to kill him. This is within a section comprising 1 Samuel 16 to 2 Samuel 5 which records the rise of David as the king of Israel.
Text.
This chapter was originally written in the Hebrew language. It is divided into 22 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 3–5, 8–10, 14–23.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
David spared Saul (24:1–15).
1 Samuel 23:29 (24:1 in the Hebrew Bible) reports David's move to Engedi in the hilly area around the Dead Sea, with Saul, returning from a battle with the Philistines, in pursuit. The section emphasizes two points: (1) David could have easily killed Saul and thereby seized the kingship, but (2) he resisted the temptation to kill 'the LORD'S anointed', and even prevented his men from harming Saul (verse 7). David elaborated in his speech (verses 8–15) that instead of taking vengeance on Saul (for 'treating him like an insignificant dog or flea'), he duly acknowledged Saul's position as a God-chosen king (verse 8) while entrusting vengeance to God (verse 12). Another similar account of sparing Saul's life is found in 26:1–25.
"And he came to the sheepcotes by the way, where was a cave; and Saul went in to cover his feet: and David and his men remained in the sides of the cave."
David’s oath to Saul (24:16–22).
This section contrasts David's uprightness, in submitting to the will of God and not taking matters into his own hands, with Saul's pitiful figure. All three parts of Saul's speech reflect his weak position: (1) Saul conceded that his actions had been evil and that David was more 'righteous' than he (verse 17); (2) Saul acknowledged that David would become king (cf. Jonathan's words to David at Horesh in 1 Samuel 23:17); (3) Saul pled that David would preserve his name and not cut off his descendants (echoing Jonathan's pact with David concerning the house of Saul in 1 Samuel 20:14–15).
"And David swore this to Saul. Then Saul went home, but David and his men went up to the stronghold."
Verse 22.
Commonly a new king (of a new dynasty) killed all the descendants of the king he replaced to get rid of potential rivals, but David swore an oath not to wipe out Saul's dynasty, an oath he fulfilled by his treatment of Mephibosheth, son of Jonathan (2 Samuel 9). When Saul went home, David, wisely aware of Saul's double-dealing ways, did not follow him but remained a fugitive in the wilderness.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
Commentaries on Samuel.
<templatestyles src="Refbegin/styles.css" />
General.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=69684798 |
69684816 | 1 Samuel 26 | First Book of Samuel chapter
1 Samuel 26 is the twenty-sixth chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's escape from Saul's repeated attempts to kill him. This is within a section comprising 1 Samuel 16 to 2 Samuel 5 which records the rise of David as the king of Israel.
Text.
This chapter was originally written in the Hebrew language. It is divided into 25 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 9–12, 21–24.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
David spares Saul again (26:1–12).
There are many points of similarity between this narrative and the one contained in 1 Samuel 23:19-24 and 1 Samuel 24:1-22, although the multiple differences prove that these are two separate events.
Points of similarity:
Points of difference:
"Now the Ziphites came to Saul at Gibeah, saying, “Is David not hiding in the hill of Hachilah, opposite Jeshimon?” "
Verse 1.
David's return to Maon could be related to some business of Abigail, now David's wife, who may still have controlled some properties in the area. The Ziphites reported David's presence to Saul, probably out of fear that David might attack them for their earlier betrayal of him. The fact that Saul at that time was in Gibeah and only 'arose' after hearing the report suggests that he was no longer pursuing David until the Ziphites excited him with a good opportunity for an ambush.
David reproved Abner as Saul acknowledged his sin against David (26:13–25).
After leaving Saul's camp undetected and standing at a safe distance, David berated Abner for failing to protect the king, while also alluding to the failure to recognize the king-elect (verse 14). However, Saul did recognize David's voice, and became regretful of his own pursuit. In view of his decision to leave Israelite territory, David pled with Saul not to 'let his blood fall to the earth' while in exile. This was actually the last time Saul met David, and as they parted for good, Saul gave his final blessings to David.
"Now therefore, let not my blood fall to the earth away from the presence of the LORD, for the king of Israel has come out to seek a single flea like one who hunts a partridge in the mountains."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
Commentaries on Samuel.
<templatestyles src="Refbegin/styles.css" />
General.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=69684816 |
69684829 | 1 Samuel 27 | First Book of Samuel chapter
1 Samuel 27 is the twenty-seventh chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's escape from Saul's repeated attempts to kill him. This is within a section comprising 1 Samuel 16 to 2 Samuel 5 which records the rise of David as the king of Israel.
Text.
This chapter was originally written in the Hebrew language. It is divided into 12 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 1–2, 8–12.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
David in Gath (27:1–4).
David decided to cross over into Philistine territory to escape from Saul (verse 1), which was immediately achieved (verse 4), and he stayed as a vassal of King Achish of Gath for one year and four months (verse 7). The first time David was in Gath, he had to feign insanity to escape (1 Samuel 21:10–15), but this time, with 600 loyal soldiers and the report of his fallout with Saul, David and his men were well received as a group of mercenaries for the Philistines, a common practice in the ancient Near East as documented in various sources.
"And David arose, and he passed over with the six hundred men that were with him unto Achish, the son of Maoch, king of Gath."
David in Ziklag (27:5–12).
For a brief period he and his army lived 'in the royal city' with Achish (that is, in Gath), but at his own request he later settled in Ziklag, which was presumably given to him by Achish in return for military service and thereafter became crown property of the Judean kings. From Ziklag, David attacked Israel's enemies, the Geshurites, the Girzites, and the Amalekites, while giving Achish the impression that he was attacking enemies of the Philistines. By conquering these prospective enemies and collecting booty, David was in fact making preparations for his kingship.
"Then Achish gave him Ziklag that day. Therefore, Ziklag has belonged to the kings of Judah to this day."
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
Commentaries on Samuel.
<templatestyles src="Refbegin/styles.css" />
General.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=69684829 |
69684841 | 1 Samuel 28 | First Book of Samuel chapter
1 Samuel 28 is the twenty-eighth chapter of the First Book of Samuel in the Old Testament of the Christian Bible or the first part of the Books of Samuel in the Hebrew Bible. According to Jewish tradition the book was attributed to the prophet Samuel, with additions by the prophets Gad and Nathan, but modern scholars view it as a composition of a number of independent texts of various ages from c. 630–540 BCE. This chapter contains the account of David's escape from Saul's repeated attempts to kill him. This is within a section comprising 1 Samuel 16 to 2 Samuel 5 which records the rise of David as the king of Israel.
Text.
This chapter was originally written in the Hebrew language. It is divided into 25 verses.
Textual witnesses.
Some early manuscripts containing the text of this chapter in Hebrew are of the Masoretic Text tradition, which includes the Codex Cairensis (895), Aleppo Codex (10th century), and Codex Leningradensis (1008). Fragments containing parts of this chapter in Hebrew were found among the Dead Sea Scrolls including 4Q51 (4QSama; 100–50 BCE) with extant verses 1–3, 22–25.
Extant ancient manuscripts of a translation into Koine Greek known as the Septuagint (originally was made in the last few centuries BCE) include Codex Vaticanus (B; formula_0B; 4th century) and Codex Alexandrinus (A; formula_0A; 5th century).
The Philistines gather against Israel (28:1–2).
Verses 1–2 continue the story of David's time among the Philistines, which will be picked up again in chapters 29–30. As the Philistines prepared for another war against Israel, David was placed in the awkward position of having to prove his loyalty to Achish by going to fight against his own people.
Saul and the Medium of Endor (28:3–25).
At his camp at Gilboa, facing the large army of the Philistines at Shunem, Saul was in utter fear because he had no access to divine guidance, as described in verses 3–6: Samuel had died, Saul himself had expelled the mediums and spiritists from the land, and YHWH did not answer him by dreams, by Urim, or by prophets.
This caused Saul to turn in desperation to prohibited means of learning the divine will, going against his own laws. Because Endor was located northeast of Shunem, thus behind enemy lines, Saul had to go in disguise and at night. The narrative about Saul's visit to the woman in Endor is 'one of the most bizarre texts in Scripture', as it claims that Samuel's spirit could be called up to speak through witchcraft. It is debatable whether it was really Samuel's spirit or the woman impersonating Samuel, because no new information was given other than what was already known from Samuel's speech long ago. The text does say that the woman "saw a figure coming up", whom Saul assumed to be "Samuel" (verse 14), and that she was in terror (perhaps because she had never had this result before), as well as gaining the knowledge that Saul was the one requesting this (verse 12). The main point of the narrative is to show how Saul was totally cut off from YHWH and failed as a king to protect Israel, as he himself and his heirs would die at the hands of the Philistines.
"Now Samuel had died, and all Israel had lamented for him and buried him in Ramah, in his own city. And Saul had put the mediums and the spiritists out of the land."
Verse 3.
The first sentence is a repetition of 1 Samuel 25:1.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
Commentaries on Samuel.
<templatestyles src="Refbegin/styles.css" />
General.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathfrak{G}"
}
]
| https://en.wikipedia.org/wiki?curid=69684841 |