Bargmann's limit
In quantum mechanics, Bargmann's limit, named for Valentine Bargmann, provides an upper bound on the number $N_\ell$ of bound states with azimuthal quantum number $\ell$ in a system with central potential $V$. It takes the form
$$N_\ell < \frac{1}{2\ell+1} \frac{2m}{\hbar^2} \int_0^\infty r\,|V(r)|\,dr.$$
This limit is the best possible upper bound in the sense that for a given $\ell$, one can always construct a potential $V_\ell$ for which $N_\ell$ is arbitrarily close to it. Note that the Dirac delta function potential attains this limit. After the first proof of this inequality by Valentine Bargmann in 1953, Julian Schwinger presented an alternative way of deriving it in 1961.
Rigorous formulation and proof.
Stated in a formal mathematical way, Bargmann's limit goes as follows. Let $V:\mathbb{R}^3\to\mathbb{R}:\mathbf{r}\mapsto V(r)$ be a spherically symmetric potential that is piecewise continuous in $r$, with $V(r)=O(1/r^a)$ for $r\to0$ and $V(r)=O(1/r^b)$ for $r\to+\infty$, where $a\in(-\infty,2)$ and $b\in(2,+\infty)$. If
$$\int_0^{+\infty}r\,|V(r)|\,dr<+\infty,$$
then the number of bound states $N_\ell$ with azimuthal quantum number $\ell$ for a particle of mass $m$ obeying the corresponding Schrödinger equation is bounded from above by
$$N_\ell<\frac{1}{2\ell+1}\frac{2m}{\hbar^2}\int_0^{+\infty}r\,|V(r)|\,dr.$$
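As a quick numerical illustration (a minimal sketch, not taken from Bargmann's paper: the exponential well, the parameter values, and the units $2m/\hbar^2 = 1$ are assumptions made here for concreteness), the bound can be evaluated for a sample attractive potential $V(r) = -V_0 e^{-r/a}$:

```python
import numpy as np
from scipy.integrate import quad

# Assumed sample potential V(r) = -V0*exp(-r/a), in units where 2m/hbar^2 = 1.
V0, a = 5.0, 1.0
integral, _ = quad(lambda r: r * abs(-V0 * np.exp(-r / a)), 0, np.inf)  # equals V0*a**2 here

for ell in range(3):
    print(f"l = {ell}: N_l < {integral / (2 * ell + 1):.3f}")   # Bargmann's bound on N_l
```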
Although the original proof by Valentine Bargmann is quite technical, the main idea follows from two general theorems on ordinary differential equations, the Sturm Oscillation Theorem and the Sturm-Picone Comparison Theorem. If we denote by $u_{0\ell}$ the wave function subject to the given potential with total energy $E=0$ and azimuthal quantum number $\ell$, the Sturm Oscillation Theorem implies that $N_\ell$ equals the number of nodes of $u_{0\ell}$. From the Sturm-Picone Comparison Theorem, it follows that when subject to a stronger potential $W$ (i.e. $W(r)\leq V(r)$ for all $r\in\mathbb{R}_0^+$), the number of nodes either grows or remains the same. Thus, more specifically, we can replace the potential $V$ by $-|V|$. For the corresponding wave function with total energy $E=0$ and azimuthal quantum number $\ell$, denoted by $\phi_{0\ell}$, the radial Schrödinger equation becomes
$$\frac{d^{2}}{dr^{2}}\phi_{0\ell}(r)-\frac{\ell(\ell+1)}{r^{2}}\phi_{0\ell}(r)=-W(r)\,\phi_{0\ell}(r),$$
with $W=2m|V|/\hbar^2$. By applying variation of parameters, one can obtain the following implicit solution
$$\phi_{0\ell}(r)=r^{\ell+1}-\int_{0}^{r} G(r,\rho)\,\phi_{0\ell}(\rho)\,W(\rho)\,d\rho,$$
where $G(r,\rho)$ is given by
$$G(r,\rho)=\frac{1}{2\ell+1}\left[r\left(\frac{r}{\rho}\right)^{\ell}-\rho\left(\frac{\rho}{r}\right)^{\ell}\right].$$
If we now denote all successive nodes of $\phi_{0\ell}$ by $0=\nu_1<\nu_2<\dots<\nu_{N}$, one can show from the implicit solution above that for consecutive nodes $\nu_{i}$ and $\nu_{i+1}$
$$\frac{2m}{\hbar^2}\int_{\nu_{i}}^{\nu_{i+1}} r\,|V(r)|\,dr>2\ell+1.$$
From this, we can conclude that
$$\frac{2m}{\hbar^2}\int_{0}^{+\infty}r\,|V(r)|\,dr\geq\frac{2m}{\hbar^2}\int_{0}^{\nu_N}r\,|V(r)|\,dr>N(2\ell+1)\geq N_\ell(2\ell+1),$$
proving Bargmann's limit. Note that, as the integral on the left is assumed to be finite, so must be $N$ and $N_\ell$. Furthermore, for a given value of $\ell$, one can always construct a potential $V_\ell$ for which $N_\ell$ is arbitrarily close to Bargmann's limit. The idea to obtain such a potential is to approximate Dirac delta function potentials, as these attain the limit exactly. An example of such a construction can be found in Bargmann's original paper.
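The node-counting argument can also be checked numerically: integrate the zero-energy radial equation outward from the origin, count the sign changes of $\phi_{0\ell}$, and compare with the bound. The sketch below is an illustration only; the potential, the units $2m/\hbar^2 = 1$, and the integration parameters are arbitrary assumptions made here.

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

ell = 0
V0, a = 5.0, 1.0                                  # assumed attractive well V(r) = -V0*exp(-r/a)
W = lambda r: V0 * np.exp(-r / a)                 # W = 2m|V|/hbar^2 with 2m/hbar^2 = 1

def rhs(r, y):
    u, du = y
    return [du, (ell * (ell + 1) / r**2 - W(r)) * u]   # u'' = [l(l+1)/r^2 - W(r)] u

r0, rmax = 1e-6, 60.0
y0 = [r0 ** (ell + 1), (ell + 1) * r0**ell]       # regular solution behaves like r^(l+1) near 0
sol = solve_ivp(rhs, (r0, rmax), y0, max_step=0.05, rtol=1e-8, atol=1e-12)

u = sol.y[0]
nodes = int(np.sum(np.sign(u[:-1]) * np.sign(u[1:]) < 0))   # sign changes = nodes for r > 0

integral, _ = quad(lambda r: r * W(r), 0, np.inf)
print("nodes of the zero-energy solution (bound states for this l):", nodes)
print("Bargmann bound on N_l:", integral / (2 * ell + 1))
```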
https://en.wikipedia.org/wiki?curid=1184376
Geometric spanner
Weighted undirected graph with graph distances linearly bounded w.r.t. Euclidean distances
A geometric spanner, t-spanner graph, or t-spanner was initially introduced as a weighted graph over a set of points (its vertices) for which there is a t-path between any pair of vertices, for a fixed parameter t. A t-path is defined as a path through the graph with weight at most t times the spatial distance between its endpoints. The parameter t is called the stretch factor or dilation factor of the spanner.
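To make the definition concrete, the stretch factor of a given geometric graph can be computed directly by comparing shortest-path distances along the edges with the Euclidean distances. The sketch below is an illustrative $O(n^3)$ check; the function name and the use of Floyd-Warshall are choices made here, not terminology from the spanner literature.

```python
import numpy as np

def stretch_factor(points, edges):
    """Dilation of a geometric graph: the maximum, over all vertex pairs, of
    (shortest-path length along the edges) / (Euclidean distance)."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    euclid = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)

    graph = np.full((n, n), np.inf)
    np.fill_diagonal(graph, 0.0)
    for u, v in edges:                       # undirected edges weighted by their length
        graph[u, v] = graph[v, u] = euclid[u, v]

    for k in range(n):                       # Floyd-Warshall all-pairs shortest paths
        graph = np.minimum(graph, graph[:, k, None] + graph[None, k, :])

    mask = euclid > 0
    return float(np.max(graph[mask] / euclid[mask]))

# A unit square with only its four sides has dilation sqrt(2), realized by the diagonals.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(stretch_factor(square, [(0, 1), (1, 2), (2, 3), (3, 0)]))   # ~1.414
```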
In computational geometry, the concept was first discussed by L.P. Chew in 1986, although the term "spanner" was not used in the original paper.
The notion of graph spanners has long been known in graph theory: t-spanners are spanning subgraphs of graphs with a similar dilation property, where distances between graph vertices are defined in graph-theoretical terms. Therefore, geometric spanners are graph spanners of complete graphs embedded in the plane with edge weights equal to the distances between the embedded vertices in the corresponding metric.
Spanners may be used in computational geometry for solving some proximity problems. They have also found applications in other areas, such as in motion planning, telecommunication networks, network reliability, optimization of roaming in mobile networks, etc.
Different spanners and quality measures.
There are different measures which can be used to analyze the quality of a spanner. The most common measures are edge count, total weight and maximum vertex degree. Asymptotically optimal values for these measures are $O(n)$ edges, $O(\mathrm{MST})$ weight and $O(1)$ maximum degree (here MST denotes the weight of the minimum spanning tree).
Finding a "spanner" in the Euclidean plane with minimal dilation over "n" points with at most "m" edges is known to be NP-hard.
Many spanner algorithms exist which excel in different quality measures. Fast algorithms include the WSPD spanner and the Theta graph, which both construct spanners with a linear number of edges in $O(n \log n)$ time. If better weight and vertex degree are required, the greedy spanner can be computed in near-quadratic time.
The Theta graph.
The "Theta graph" or "$\Theta$-graph" belongs to the family of cone-based spanners. The basic method of construction involves partitioning the space around each vertex into a set of cones, which themselves partition the remaining vertices of the graph. Like Yao graphs, a $\Theta$-graph contains at most one edge per cone; where they differ is how that edge is selected. Whereas Yao graphs will select the nearest vertex according to the metric space of the graph, the $\Theta$-graph defines a fixed ray contained within each cone (conventionally the bisector of the cone) and selects the nearest neighbour with respect to orthogonal projections to that ray.
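A minimal sketch of this construction follows. The cone count k, the sample coordinates, and the function names are illustrative assumptions; the code only shows how edges are selected, and small values of k do not necessarily give a good spanner.

```python
import math

def theta_graph(points, k):
    """Theta-graph sketch: around each point, split the plane into k cones of
    angle 2*pi/k; in every non-empty cone keep a single edge, to the point whose
    orthogonal projection onto the cone's bisector ray is closest."""
    cone = 2 * math.pi / k
    edges = set()
    for i, (px, py) in enumerate(points):
        best = {}                                   # cone index -> (projection, neighbour)
        for j, (qx, qy) in enumerate(points):
            if i == j:
                continue
            dx, dy = qx - px, qy - py
            angle = math.atan2(dy, dx) % (2 * math.pi)
            c = int(angle // cone)
            bisector = (c + 0.5) * cone
            proj = dx * math.cos(bisector) + dy * math.sin(bisector)
            if c not in best or proj < best[c][0]:
                best[c] = (proj, j)
        for _, j in best.values():
            edges.add((min(i, j), max(i, j)))
    return sorted(edges)

pts = [(0, 0), (2, 1), (1, 3), (-1, 2), (3, 3)]
print(theta_graph(pts, k=8))
```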
The greedy spanner.
The "greedy spanner" or "greedy graph" is defined as the graph resulting from repeatedly adding an edge between the closest pair of points without a "t"-path. Algorithms which compute this graph are referred to as greedy spanner algorithms. From the construction it trivially follows that the greedy graph is a "t"-spanner.
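A direct, unoptimized implementation of this definition looks as follows. It is a readable sketch: it recomputes shortest paths with Dijkstra for every candidate pair rather than using the faster published algorithms mentioned below, and the helper names are choices made here.

```python
import heapq, math
from itertools import combinations

def greedy_spanner(points, t):
    """Consider all pairs in order of increasing distance; add an edge only if
    the graph built so far has no t-path between the pair's endpoints."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])
    adj = {i: [] for i in range(n)}

    def graph_dist(src, dst, cutoff):
        # Dijkstra from src, abandoning partial paths longer than cutoff.
        best = {src: 0.0}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                return d
            if d > best.get(u, math.inf):
                continue
            for v, w in adj[u]:
                nd = d + w
                if nd <= cutoff and nd < best.get(v, math.inf):
                    best[v] = nd
                    heapq.heappush(heap, (nd, v))
        return math.inf

    edges = []
    for i, j in sorted(combinations(range(n), 2), key=lambda p: dist(*p)):
        d = dist(i, j)
        if graph_dist(i, j, t * d) > t * d:          # no t-path yet: keep this edge
            edges.append((i, j))
            adj[i].append((j, d))
            adj[j].append((i, d))
    return edges

pts = [(0, 0), (1, 0), (2, 1), (0, 2), (3, 0), (2, 3)]
print(greedy_spanner(pts, t=1.5))
```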
The greedy spanner was first described in the PhD thesis of Gautam Das and in a conference paper and subsequent journal paper by Ingo Althöfer et al. These sources also credited Marshall Bern (unpublished) with the independent discovery of the same construction.
The greedy spanner achieves asymptotically optimal edge count, total weight and maximum vertex degree and also performs best on these measures in practice.
It can be constructed in $O(n^2 \log n)$ time using $O(n^2)$ space.
The Delaunay triangulation.
Chew's main result was that for a set of points in the plane there is a triangulation of this point set such that for any two points there is a path along the edges of the triangulation with length at most $\sqrt{10}$ times the Euclidean distance between the two points. The result was applied in motion planning for finding reasonable approximations of shortest paths among obstacles.
The best upper bound known for the Euclidean Delaunay triangulation is that it is a $1.998$-spanner for its vertices.
The lower bound on its worst-case stretch factor has been increased from $\pi/2 \approx 1.5708$ to just over that, 1.5846.
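These bounds can be probed empirically by building a Delaunay triangulation of random points and measuring the dilation of the resulting graph. The sketch below uses SciPy; it samples a single random instance and proves nothing about worst cases.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(0)
pts = rng.random((200, 2))

# Collect the edges of the Delaunay triangulation (each triangle contributes three).
tri = Delaunay(pts)
edges = set()
for simplex in tri.simplices:
    for k in range(3):
        i, j = sorted((int(simplex[k]), int(simplex[(k + 1) % 3])))
        edges.add((i, j))

n = len(pts)
euclid = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
rows, cols = zip(*edges)
graph = csr_matrix((euclid[list(rows), list(cols)], (rows, cols)), shape=(n, n))

# Graph (shortest-path) distances versus straight-line distances.
sp = shortest_path(graph, directed=False)
mask = euclid > 0
print("observed dilation:", float((sp[mask] / euclid[mask]).max()))  # typically well below 1.998
```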
Well-separated pair decomposition.
A spanner may be constructed from a well-separated pair decomposition in the following way. Construct the graph with the point set $S$ as vertex set and, for each pair $\{A, B\}$ in a WSPD, add an edge from an arbitrary point $a \in A$ to an arbitrary point $b \in B$. Note that the resulting graph has a linear number of edges because a WSPD has a linear number of pairs.
It is possible to obtain an arbitrary value for $t$ by choosing the separation parameter of the well-separated pair decomposition accordingly.
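The sketch below builds a simple split-tree-based WSPD and then adds one edge per pair, as described above. It is an illustration only: the split-tree variant, the helper names, and the relation $s = 4(t+1)/(t-1)$ between the separation parameter and the stretch factor are assumptions made here rather than details taken from this article.

```python
import numpy as np

def split_tree(points, ids=None):
    """Simple split tree: recursively halve the bounding box along its longest side."""
    if ids is None:
        ids = np.arange(len(points))
    pts = points[ids]
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    node = {"ids": ids, "lo": lo, "hi": hi, "children": []}
    if len(ids) > 1:
        axis = int(np.argmax(hi - lo))
        mid = (lo[axis] + hi[axis]) / 2.0
        left, right = ids[pts[:, axis] <= mid], ids[pts[:, axis] > mid]
        if len(left) == 0 or len(right) == 0:        # all points coincide: split arbitrarily
            left, right = ids[:1], ids[1:]
        node["children"] = [split_tree(points, left), split_tree(points, right)]
    return node

def _diam(a):                                        # bounding-box diagonal
    return float(np.linalg.norm(a["hi"] - a["lo"]))

def _boxdist(a, b):                                  # distance between bounding boxes
    gap = np.maximum(0.0, np.maximum(a["lo"] - b["hi"], b["lo"] - a["hi"]))
    return float(np.linalg.norm(gap))

def wspd(points, s):
    """Well-separated pair decomposition of the point set with separation parameter s."""
    points = np.asarray(points, dtype=float)
    pairs = []

    def recurse(u, v):
        if u is v:
            if u["children"]:
                c1, c2 = u["children"]
                recurse(c1, c1); recurse(c2, c2); recurse(c1, c2)
            return
        if _boxdist(u, v) >= s * max(_diam(u), _diam(v)):
            pairs.append((u["ids"], v["ids"]))       # the pair {A, B} is well separated
            return
        if _diam(u) < _diam(v) or not u["children"]:
            u, v = v, u                              # always split the larger node
        for c in u["children"]:
            recurse(c, v)

    root = split_tree(points)
    recurse(root, root)
    return pairs

def wspd_spanner(points, t):
    # Assumed relation between separation and stretch: s = 4(t+1)/(t-1).
    s = 4.0 * (t + 1.0) / (t - 1.0)
    return [(int(A[0]), int(B[0])) for A, B in wspd(points, s)]  # one arbitrary edge per pair

pts = np.random.default_rng(1).random((50, 2))
print(len(wspd_spanner(pts, t=1.5)), "edges for 50 points")
```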
https://en.wikipedia.org/wiki?curid=11844294
Guitar
Fretted string instrument
The guitar is a stringed musical instrument that is usually fretted (with some exceptions) and typically has six or twelve strings. It is usually held flat against the player's body and played by strumming or plucking the strings with the dominant hand, while simultaneously pressing selected strings against frets with the fingers of the opposite hand. A guitar pick may also be used to strike the strings. The sound of the guitar is projected either acoustically, by means of a resonant hollow chamber on the guitar, or amplified by an electronic pickup and an amplifier.
The guitar is classified as a chordophone, meaning the sound is produced by a vibrating string stretched between two fixed points. Historically, a guitar was constructed from wood, with its strings made of catgut. Steel guitar strings were introduced near the end of the nineteenth century in the United States, but nylon and steel strings became mainstream only following World War II. The guitar's ancestors include the gittern, the vihuela, the four-course Renaissance guitar, and the five-course baroque guitar, all of which contributed to the development of the modern six-string instrument.
There are three main types of modern guitar: the classical guitar (Spanish guitar); the steel-string acoustic guitar or electric guitar; and the Hawaiian guitar (played across the player's lap). Traditional acoustic guitars include the flat top guitar (typically with a large sound hole) or the archtop guitar, which is sometimes called a "jazz guitar". The tone of an acoustic guitar is produced by the strings' vibration, amplified by the hollow body of the guitar, which acts as a resonating chamber. The classical Spanish guitar is often played as a solo instrument using a comprehensive fingerstyle technique where each string is plucked individually by the player's fingers, as opposed to being strummed. The term "finger-picking" can also refer to a specific tradition of folk, blues, bluegrass, and country guitar playing in the United States.
Electric guitars, first patented in 1937, use a pickup and amplifier that make the instrument loud enough to be heard and also allow the guitar to be built from a solid block of wood, without needing a resonant chamber. A wide array of electronic effects units became possible, including reverb and distortion (or "overdrive"). Solid-body guitars began to dominate the guitar market during the 1960s and 1970s; they are less prone to unwanted acoustic feedback. As with acoustic guitars, there are a number of types of electric guitars, including hollowbody guitars, archtop guitars (used in jazz guitar, blues and rockabilly) and solid-body guitars, which are widely used in rock music.
The loud, amplified sound and sonic power of the electric guitar played through a guitar amp have played a key role in the development of blues and rock music, both as an accompaniment instrument (playing riffs and chords) and performing guitar solos, and in many rock subgenres, notably heavy metal music and punk rock. The electric guitar has had a major influence on popular culture. The guitar is used in a wide variety of musical genres worldwide. It is recognized as a primary instrument in genres such as blues, bluegrass, country, flamenco, folk, jazz, jota, ska, mariachi, metal, punk, funk, reggae, rock, grunge, soul, acoustic music, disco, new wave, new age, adult contemporary music, and pop, occasionally used as a sample in hip-hop, dubstep, or trap music.
History.
The modern word "guitar" and its antecedents have been applied to a wide variety of chordophones since classical times, sometimes causing confusion. The English word "guitar", the German "Gitarre", and the French "guitare" were all adopted from the Spanish "guitarra", which comes from the Andalusian Arabic "qīthārah" and the Latin "cithara", which in turn came from the Ancient Greek "kithara". This Greek word may also come from the Persian word sihtar. "Kithara" appears in the Bible four times (1 Cor. 14:7, Rev. 5:8, 14:2, and 15:2), and is usually translated into English as "harp".
The origins of the modern guitar are not known. Before the development of the electric guitar and the use of synthetic materials, a guitar was defined as being an instrument having "a long, fretted neck, flat wooden soundboard, ribs, and a flat back, most often with incurved sides." The term is used to refer to a number of chordophones that were developed and used across Europe, beginning in the 12th century and, later, in the Americas. A 3,300-year-old stone carving of a Hittite bard playing a stringed instrument is the oldest iconographic representation of a chordophone, and clay plaques from Babylonia show people playing a lute-like instrument which is similar to the guitar.
Several scholars cite varying influences as antecedents to the modern guitar. Although the development of the earliest "guitar" is lost to the history of medieval Spain, two instruments are commonly claimed as influential predecessors: the four-string oud and its precursor, the European lute; the former was brought to Iberia by the Moors in the 8th century. It has often been assumed that the guitar is a development of the lute, or of the ancient Greek kithara. However, many scholars consider the lute an offshoot or separate line of development which did not influence the evolution of the guitar in any significant way.
At least two instruments called "guitars" were in use in Spain by 1200: the "guitarra latina" (Latin guitar) and the so-called "guitarra morisca" (Moorish guitar). The guitarra morisca had a rounded back, a wide fingerboard, and several sound holes. The guitarra latina had a single sound hole and a narrower neck. By the 14th century the qualifiers "moresca" or "morisca" and "latina" had been dropped, and these two chordophones were simply referred to as guitars.
The Spanish vihuela, called in Italian the "viola da mano", a guitar-like instrument of the 15th and 16th centuries, is widely considered to have been the single most important influence in the development of the baroque guitar. It had six courses (usually), lute-like tuning in fourths and a guitar-like body, although early representations reveal an instrument with a sharply cut waist. It was also larger than the contemporary four-course guitars. By the 16th century, the vihuela's construction had more in common with the modern guitar, with its curved one-piece ribs, than with the viols, and was more like a larger version of the contemporary four-course guitars. The vihuela enjoyed only a relatively short period of popularity in Spain and Italy during an era dominated elsewhere in Europe by the lute; the last surviving published music for the instrument appeared in 1576.
Meanwhile, the five-course baroque guitar, which was documented in Spain from the middle of the 16th century, enjoyed popularity, especially in Spain, Italy and France from the late 16th century to the mid-18th century. In Portugal, the word "viola" referred to the guitar, as "guitarra" meant the "Portuguese guitar", a variety of cittern.
There were many different plucked instruments being invented and used in Europe during the Middle Ages. By the 16th century, most of these forms of guitar had fallen out of use, never to be seen again. However, midway through the 16th century, the five-course guitar was established. It was not a straightforward process. There were two types of five-course guitars, differing in the location of the major third and in the interval pattern. The fifth course can be inferred because the instrument was known to play more than the sixteen notes possible with four. The guitar's strings were tuned in unison, so, in other words, it was tuned by placing a finger on the second fret of the thinnest string and tuning the guitar bottom to top. The strings were a whole octave apart from one another, which is the reason for the different method of tuning. Because it was so different, there was major controversy as to who created the five-course guitar. A literary source, Lope de Vega's Dorotea, gives the credit to the poet and musician Vicente Espinel. This claim was also repeated by Nicolas Doizi de Velasco in 1640; however, it has been refuted by others, who state that Espinel's birth year (1550) makes it impossible for him to be responsible for the tradition. He believed that the tuning was the reason the instrument became known as the Spanish guitar in Italy. Even later in the same century, Gaspar Sanz wrote that other nations such as Italy or France added to the Spanish guitar. All of these nations even imitated the five-course guitar by "recreating" their own.
Finally, c. 1850, the form and structure of the modern guitar were developed by different Spanish makers such as Manuel de Soto y Solares and, perhaps the most important of all guitar makers, Antonio Torres Jurado, who increased the size of the guitar body, altered its proportions, and invented the breakthrough fan-braced pattern. Bracing, the internal pattern of wood reinforcements used to secure the guitar's top and back and prevent the instrument from collapsing under tension, is an important factor in how the guitar sounds. Torres' design greatly improved the volume, tone, and projection of the instrument, and it has remained essentially unchanged since.
Types.
Guitars are often divided into two broad categories: acoustic and electric guitars. Within each category, there are further sub-categories that are nearly endless in quantity and are always evolving. For example, an electric guitar can be purchased in a six-string model (the most common model) or in seven- or twelve-string formats. An instrument's overall design, internal construction and components, wood type or species, hardware, and electronic appointments all add to the abundance of sub-categories and to its unique tonal and functional properties.
Acoustic.
Acoustic guitars form several notable subcategories within the acoustic guitar group: classical and flamenco guitars; steel-string guitars, which include the flat-topped, or "folk", guitar; twelve-string guitars; and the arched-top guitar. The acoustic guitar group also includes unamplified guitars designed to play in different registers, such as the acoustic bass guitar, which has a similar tuning to that of the electric bass guitar.
Renaissance and Baroque.
Renaissance and Baroque guitars are the ancestors of the modern classical and flamenco guitar. They are substantially smaller, more delicate in construction, and generate less volume. The strings are paired in courses as in a modern 12-string guitar, but they only have four or five courses of strings rather than six single strings normally used now. They were more often used as rhythm instruments in ensembles than as solo instruments, and can often be seen in that role in early music performances. (Gaspar Sanz's "Instrucción de Música sobre la Guitarra Española" of 1674 contains his whole output for the solo guitar.) Renaissance and Baroque guitars are easily distinguished, because the Renaissance guitar is very plain and the Baroque guitar is very ornate, with ivory or wood inlays all over the neck and body, and a paper-cutout inverted "wedding cake" inside the hole.
Classical.
Classical guitars, also known as "Spanish" guitars, are typically strung with nylon strings, plucked with the fingers, played in a seated position and are used to play a diversity of musical styles including classical music. The classical guitar's wide, flat neck allows the musician to play scales, arpeggios, and certain chord forms more easily and with less adjacent string interference than on other styles of guitar. Flamenco guitars are very similar in construction, but they are associated with a more percussive tone. In Portugal, the same instrument is often used with steel strings particularly in its role within fado music. The guitar is called viola, or violão in Brazil, where it is often used with an extra seventh string by choro musicians to provide extra bass support.
In Mexico, the popular mariachi band includes a range of guitars, from the small "requinto" to the "guitarrón", a guitar larger than a cello, which is tuned in the bass register. In Colombia, the traditional quartet includes a range of instruments too, from the small "bandola" (used when traveling or in confined rooms or spaces), to the slightly larger tiple, to the full-sized classical guitar. The requinto also appears in other Latin-American countries as a complementary member of the guitar family, with its smaller size and scale, permitting more projection for the playing of single-lined melodies. Modern dimensions of the classical instrument were established by the Spaniard Antonio de Torres Jurado (1817–1892).
Flat-top.
Flat-top guitars with steel strings are similar to the classical guitar; however, the flat-top body size is usually significantly larger than that of a classical guitar, and the instrument has a narrower, reinforced neck and stronger structural design. The robust X-bracing typical of flat-top guitars was developed in the 1840s by German-American luthiers, of whom Christian Friedrich "C. F." Martin is the best known. Originally used on gut-strung instruments, the strength of the system allowed the later guitars to withstand the additional tension of steel strings. Steel strings produce a brighter tone and a louder sound. The acoustic guitar is used in many kinds of music including folk, country, bluegrass, pop, jazz, and blues. Many variations are possible, from the roughly classical-sized OO and Parlour to the large Dreadnought (the most commonly available type) and Jumbo. Ovation makes a modern variation, with a rounded back/side assembly molded from artificial materials.
Archtop.
Archtop guitars are steel-string instruments in which the top (and often the back) of the instrument are carved from a solid billet, into a curved, rather than flat, shape. This violin-like construction is usually credited to the American Orville Gibson. Lloyd Loar of the Gibson Mandolin-Guitar Mfg. Co introduced the violin-inspired F-shaped hole design now usually associated with archtop guitars, after designing a style of mandolin of the same type. The typical archtop guitar has a large, deep, hollow body whose form is much like that of a mandolin or a violin-family instrument. Nowadays, most archtops are equipped with magnetic pickups, and they are therefore both acoustic and electric. F-hole archtop guitars were immediately adopted, upon their release, by both jazz and country musicians, and have remained particularly popular in jazz music, usually with flatwound strings.
Resonator, resophonic or Dobros.
All three principal types of resonator guitars were invented by the Slovak-American John Dopyera (1893–1988) for the National and Dobro (Dopyera Brothers) companies. Similar to the flat top guitar in appearance, but with a body that may be made of brass, nickel-silver, or steel as well as wood, the sound of the resonator guitar is produced by one or more aluminum resonator cones mounted in the middle of the top. The physical principle of the guitar is therefore similar to the loudspeaker.
The original purpose of the resonator was to produce a very loud sound; this purpose has been largely superseded by electrical amplification, but the resonator guitar is still played because of its distinctive tone. Resonator guitars may have either one or three resonator cones. The method of transmitting sound resonance to the cone is either a "biscuit" bridge, made of a small piece of hardwood at the vertex of the cone (Nationals), or a "spider" bridge, made of metal and mounted around the rim of the (inverted) cone (Dobros). Three-cone resonators always use a specialized metal bridge. The type of resonator guitar with a neck with a square cross-section—called "square neck" or "Hawaiian"—is usually played face up, on the lap of the seated player, and often with a metal or glass slide. The round neck resonator guitars are normally played in the same fashion as other guitars, although slides are also often used, especially in blues.
Steel guitar.
A steel guitar is any guitar played while moving a polished steel bar or similar hard object against plucked strings. The bar itself is called a "steel" and is the source of the name "steel guitar". The instrument differs from a conventional guitar in that it does not use frets; conceptually, it is somewhat akin to playing a guitar with one finger (the bar). Known for its portamento capabilities, gliding smoothly over every pitch between notes, the instrument can produce a sinuous crying sound and deep vibrato emulating the human singing voice. Typically, the strings are plucked (not strummed) by the fingers of the dominant hand, while the steel tone bar is pressed lightly against the strings and moved by the opposite hand. The instrument is played while sitting, placed horizontally across the player's knees or otherwise supported. The horizontal playing style is called "Hawaiian style".
Twelve-string.
The twelve-string guitar usually has steel strings, and it is widely used in folk music, blues, and rock and roll. Rather than having only six strings, the 12-string guitar has six courses made up of two strings each, like a mandolin or lute. The highest two courses are tuned in unison, while the others are tuned in octaves. The 12-string guitar is also made in electric forms. The chime-like sound of the 12-string electric guitar was the basis of jangle pop.
Acoustic bass.
The acoustic bass guitar is a bass instrument with a hollow wooden body similar to, though usually somewhat larger than, that of a six-string acoustic guitar. Like the traditional electric bass guitar and the double bass, the acoustic bass guitar commonly has four strings, which are normally tuned E-A-D-G, an octave below the lowest four strings of the six-string guitar, which is the same tuning pitch as an electric bass guitar. It can, more rarely, be found with five or six strings, which provides a wider range of notes to be played with less movement up and down the neck.
Electric.
Electric guitars can have solid, semi-hollow, or hollow bodies; solid bodies produce little sound without amplification. In contrast to a standard acoustic guitar, electric guitars instead rely on electromagnetic pickups, and sometimes piezoelectric pickups, that convert the vibration of the steel strings into signals, which are fed to an amplifier through a patch cable or radio transmitter. The sound is frequently modified by other electronic devices (effects units) or the natural distortion of valves (vacuum tubes) or the pre-amp in the amplifier. There are two main types of magnetic pickups, single- and double-coil (or humbucker), each of which can be passive or active. The electric guitar is used extensively in jazz, blues, R & B, and rock and roll. The first successful magnetic pickup for a guitar was invented by George Beauchamp, and incorporated into the 1931 Ro-Pat-In (later Rickenbacker) "Frying Pan" lap steel; other manufacturers, notably Gibson, soon began to install pickups in archtop models. After World War II the completely solid-body electric was popularized by Gibson in collaboration with Les Paul, and independently by Leo Fender of Fender Music. The lower fretboard action (the height of the strings from the fingerboard), lighter (thinner) strings, and its electrical amplification lend the electric guitar to techniques less frequently used on acoustic guitars. These include tapping, extensive use of legato through pull-offs and hammer-ons (also known as slurs), pinch harmonics, volume swells, and use of a tremolo arm or effects pedals.
Some electric guitar models feature piezoelectric pickups, which function as transducers to provide a sound closer to that of an acoustic guitar with the flip of a switch or knob, rather than switching guitars. Those that combine piezoelectric pickups and magnetic pickups are sometimes known as hybrid guitars.
Hybrids of acoustic and electric guitars are also common. There are also more exotic varieties, such as guitars with two, three, or rarely four necks, all manner of alternate string arrangements, fretless fingerboards (used almost exclusively on bass guitars, meant to emulate the sound of a stand-up bass), 5.1 surround guitar, and such.
Seven-string and eight-string.
Solid-body seven-string guitars were popularized in the 1980s and 1990s. Other artists go a step further, by using an eight-string guitar with two extra low strings. Although the most common seven-string has a low B string, Roger McGuinn (of The Byrds and Rickenbacker) uses an octave G string paired with the regular G string as on a 12-string guitar, allowing him to incorporate chiming 12-string elements in standard six-string playing. In 1982 Uli Jon Roth developed the "Sky Guitar", with a vastly extended number of frets, which was the first guitar to venture into the upper registers of the violin. Roth's seven-string and "Mighty Wing" guitar features a wider octave range.
Electric bass.
The bass guitar (also called an "electric bass", or simply a "bass") is similar in appearance and construction to an electric guitar, but with a longer neck and scale length, and four to six strings. The four-string bass, by far the most common, is usually tuned the same as the double bass, which corresponds to pitches one octave lower than the four lowest pitched strings of a guitar (E, A, D, and G). The bass guitar is a transposing instrument, as it is notated in bass clef an octave higher than it sounds (as is the double bass) to avoid excessive ledger lines being required below the staff. Like the electric guitar, the bass guitar has pickups and it is plugged into an amplifier and speaker for live performances.
Construction.
Handedness.
Modern guitars can be constructed to suit both left- and right-handed players. Typically the dominant hand is used to pluck or strum the strings. This is similar to the violin family of instruments, where the dominant hand controls the bow. Left-handed players usually play a mirror-image instrument manufactured especially for left-handed players. There are other options, some unorthodox, including learning to play a right-handed guitar as if the player were right-handed, or playing an unmodified right-handed guitar reversed. Guitarist Jimi Hendrix played a right-handed guitar strung in reverse (the treble strings and bass strings reversed). The problem with doing this is that it reverses the guitar's saddle angle. The saddle is the strip of material on top of the bridge where the strings rest. It is normally slanted slightly, making the bass strings longer than the treble strings. In part, the reason for this is the difference in the thickness of the strings. Physical properties of the thicker bass strings require them to be slightly longer than the treble strings to correct intonation. Reversing the strings, therefore, reverses the orientation of the saddle, adversely affecting intonation.
Components.
Head.
The headstock is located at the end of the guitar neck farthest from the body. It is fitted with machine heads that adjust the tension of the strings, which in turn affects the pitch. The traditional tuner layout is "3+3", in which each side of the headstock has three tuners (such as on Gibson Les Pauls). In this layout, the headstocks are commonly symmetrical. Many guitars feature other layouts, including six-in-line tuners (featured on Fender Stratocasters) or even "4+2" (e.g. Ernie Ball Music Man). Some guitars (such as Steinbergers) do not have headstocks at all, in which case the tuning machines are located elsewhere, either on the body or the bridge.
The nut is a small strip of bone, plastic, brass, corian, graphite, stainless steel, or other medium-hard material, at the joint where the headstock meets the fretboard. Its grooves guide the strings onto the fretboard, giving consistent lateral string placement. It is one of the endpoints of the strings' vibrating length. It must be accurately cut, or it can contribute to tuning problems due to string slippage or string buzz. To reduce string friction in the nut, which can adversely affect tuning stability, some guitarists fit a roller nut. Some instruments use a zero fret just in front of the nut. In this case the nut is used only for lateral alignment of the strings, the string height and length being dictated by the zero fret.
Neck.
A guitar's frets, fretboard, tuners, headstock, and truss rod, all attached to a long wooden extension, collectively constitute its neck. The wood used to make the fretboard usually differs from the wood in the rest of the neck. The bending stress on the neck is considerable, particularly when heavier gauge strings are used (see Tuning), and the ability of the neck to resist bending (see Truss rod) is important to the guitar's ability to hold a constant pitch during tuning or when strings are fretted. The rigidity of the neck with respect to the body of the guitar is one determinant of a good instrument versus a poor-quality one.
The cross-section of the neck can also vary, from a gentle "C" curve to a more pronounced "V" curve. There are many different types of neck profiles available, giving the guitarist many options. Some aspects to consider in a guitar neck may be the overall width of the fretboard, scale (distance between the frets), the neck wood, the type of neck construction (for example, the neck may be glued in or bolted on), and the shape (profile) of the back of the neck. Other types of material used to make guitar necks are graphite (Steinberger guitars), aluminum (Kramer Guitars, Travis Bean and Veleno guitars), or carbon fiber (Modulus Guitars and ThreeGuitars). Double neck electric guitars have two necks, allowing the musician to quickly switch between guitar sounds.
The neck joint or heel is the point at which the neck is either bolted or glued to the body of the guitar. Almost all acoustic steel-string guitars, with the primary exception of Taylors, have glued (otherwise known as set) necks, while electric guitars are constructed using both types. Most classical guitars have a neck and headblock carved from one piece of wood, known as a "Spanish heel". Commonly used set neck joints include mortise and tenon joints (such as those used by C. F. Martin & Co.), dovetail joints (also used by C. F. Martin on the D-28 and similar models) and Spanish heel neck joints, which are named after the shoe they resemble and commonly found in classical guitars. All three types offer stability.
Bolt-on necks, though they are historically associated with cheaper instruments, do offer greater flexibility in the guitar's set-up, and allow easier access for neck joint maintenance and repairs. Another type of neck, only available for solid-body electric guitars, is the neck-through-body construction. These are designed so that everything from the machine heads down to the bridge is located on the same piece of wood. The sides (also known as wings) of the guitar are then glued to this central piece. Some luthiers prefer this method of construction as they claim it allows better sustain of each note. Some instruments may not have a neck joint at all, having the neck and sides built as one piece and the body built around it.
The fingerboard, also called the fretboard, is a piece of wood embedded with metal frets that comprises the top of the neck. It is flat on classical guitars and slightly curved crosswise on acoustic and electric guitars. The curvature of the fretboard is measured by the fretboard radius, which is the radius of a hypothetical circle of which the fretboard's surface constitutes a segment. The smaller the fretboard radius, the more noticeably curved the fretboard is. Most modern guitars feature a 12" neck radius, while older guitars from the 1960s and 1970s usually feature a 6-8" neck radius. Pinching a string against a fret on the fretboard effectively shortens the vibrating length of the string, producing a higher pitch.
Fretboards are most commonly made of rosewood, ebony, maple, and sometimes manufactured using composite materials such as HPL or resin. See the section "Neck" below for the importance of the length of the fretboard in connection to other dimensions of the guitar. The fingerboard plays an essential role in the treble tone for acoustic guitars. The quality of vibration of the fingerboard is the principal characteristic for generating the best treble tone. For that reason, ebony wood is better, but because of high use, ebony has become rare and extremely expensive. Most guitar manufacturers have adopted rosewood instead of ebony.
Frets.
Almost all guitars have frets, which are metal strips (usually nickel alloy or stainless steel) embedded along the fretboard and located at exact points that divide the scale length in accordance with a specific mathematical formula. The exceptions include fretless bass guitars and very rare fretless guitars. Pressing a string against a fret determines the strings' vibrating length and therefore its resultant pitch. The pitch of each consecutive fret is defined at a half-step interval on the chromatic scale. Standard classical guitars have 19 frets and electric guitars between 21 and 24 frets, although guitars have been made with as many as 27 frets. Frets are laid out to accomplish an equal tempered division of the octave. Each set of twelve frets represents an octave. The twelfth fret divides the scale length exactly into two halves, and the 24th fret position divides one of those halves in half again.
The ratio of the spacing of two consecutive frets is $\sqrt[12]{2}$ (the twelfth root of two). In practice, luthiers determine fret positions using the constant 17.817, an approximation to $1/(1-1/\sqrt[12]{2})$. If the nth fret is a distance x from the bridge, then the distance from the (n+1)th fret to the bridge is x - (x/17.817). Frets are available in several different gauges and can be fitted according to player preference. Among these are "jumbo" frets, which have a much thicker gauge, allowing for use of a slight vibrato technique from pushing the string down harder and softer. "Scalloped" fretboards, where the wood of the fretboard itself is "scooped out" between the frets, allow a dramatic vibrato effect. Fine frets, much flatter, allow a very low string-action, but require that other conditions, such as curvature of the neck, be well-maintained to prevent buzz.
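For example, fret positions for a given scale length can be computed either from the exact twelfth-root-of-two relation or with the 17.817 rule of thumb described above; the two agree to well under a millimetre. This is a small illustrative sketch, with a 648 mm scale length picked arbitrarily.

```python
SCALE = 648.0                      # scale length in mm (nut to saddle); value chosen arbitrarily

def fret_to_bridge_exact(n):
    """Distance from fret n to the bridge using the equal-temperament ratio 2**(1/12)."""
    return SCALE / (2 ** (n / 12))

def fret_to_bridge_rule(n):
    """Same distance using the 17.817 rule: each successive fret sits at
    x - x/17.817, where x is the previous fret's distance to the bridge."""
    x = SCALE
    for _ in range(n):
        x -= x / 17.817
    return x

for n in (1, 5, 12, 24):
    print(n, round(fret_to_bridge_exact(n), 3), round(fret_to_bridge_rule(n), 3))
```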
Truss rod.
The truss rod is a thin, strong metal rod that runs along the inside of the neck. It is used to correct changes to the neck's curvature caused by aging of the neck timbers, changes in humidity, or to compensate for changes in the tension of strings. The tension of the rod and neck assembly is adjusted by a hex nut or an allen-key bolt on the rod, usually located either at the headstock, sometimes under a cover, or just inside the body of the guitar underneath the fretboard and accessible through the sound hole. Some truss rods can only be accessed by removing the neck. The truss rod counteracts the immense amount of tension the strings place on the neck, bringing the neck back to a straighter position. Turning the truss rod clockwise tightens it, counteracting the tension of the strings and straightening the neck or creating a backward bow. Turning the truss rod counter-clockwise loosens it, allowing string tension to act on the neck and creating a forward bow.
Adjusting the truss rod affects the intonation of a guitar as well as the height of the strings from the fingerboard, called the action. Some truss rod systems, called "double action" truss systems, tighten both ways, pushing the neck both forward and backward (standard truss rods can only release to a point beyond which the neck is no longer compressed and pulled backward). The artist and luthier Irving Sloane pointed out, in his book "Steel-String Guitar Construction", that truss rods are intended primarily to remedy concave bowing of the neck, but cannot correct a neck with "back bow" or one that has become twisted. Classical guitars do not require truss rods, as their nylon strings exert a lower tensile force with lesser potential to cause structural problems. However, their necks are often reinforced with a strip of harder wood, such as an ebony strip that runs down the back of a cedar neck. There is no tension adjustment on this form of reinforcement.
Inlays.
Inlays are visual elements set into the exterior surface of a guitar, both for decoration and artistic purposes and, in the case of the markings on the 3rd, 5th, 7th and 12th fret (and in higher octaves), to provide guidance to the performer about the location of frets on the instrument. The typical locations for inlay are on the fretboard, headstock, and on acoustic guitars around the soundhole, known as the rosette. Inlays range from simple plastic dots on the fretboard to intricate works of art covering the entire exterior surface of a guitar (front and back). Some guitar players have used LEDs in the fretboard to produce unique lighting effects onstage. Fretboard inlays are most commonly shaped like dots, diamond shapes, parallelograms, or large blocks in between the frets.
Dots are usually inlaid into the upper edge of the fretboard in the same positions, small enough to be visible only to the player. These usually appear on the odd-numbered frets, but also on the 12th fret (the one-octave mark) instead of the 11th and 13th frets. Some older or high-end instruments have inlays made of mother of pearl, abalone, ivory, colored wood or other exotic materials and designs. Simpler inlays are often made of plastic or painted. High-end classical guitars seldom have fretboard inlays as a well-trained player is expected to know his or her way around the instrument. In addition to fretboard inlay, the headstock and soundhole surround are also frequently inlaid. The manufacturer's logo or a small design is often inlaid into the headstock. Rosette designs vary from simple concentric circles to delicate fretwork mimicking the historic rosette of lutes. Bindings that edge the finger and soundboards are sometimes inlaid. Some instruments have a filler strip running down the length and behind the neck, used for strength or to fill the cavity through which the truss rod was installed in the neck.
Body.
In acoustic guitars, string vibration is transmitted through the bridge and saddle to the body via sound board. The sound board is typically made of tonewoods such as spruce or cedar. Timbers for tonewoods are chosen for both strength and ability to transfer mechanical energy from the strings to the air within the guitar body. Sound is further shaped by the characteristics of the guitar body's resonant cavity. In expensive instruments, the entire body is made of wood. In inexpensive instruments, the back may be made of plastic.
In an acoustic instrument, the body of the guitar is a major determinant of the overall sound quality. The guitar top, or soundboard, is a finely crafted and engineered element made of tonewoods such as spruce and red cedar. This thin piece of wood, often only 2 or 3 mm thick, is strengthened by differing types of internal bracing. Many luthiers consider the top the dominant factor in determining the sound quality. The majority of the instrument's sound is heard through the vibration of the guitar top as the energy of the vibrating strings is transferred to it. The body of an acoustic guitar has a sound hole through which sound projects. The sound hole is usually a round hole in the top of the guitar under the strings. The air inside the body vibrates as the guitar top and body is vibrated by the strings, and the response of the air cavity at different frequencies is characterized, like the rest of the guitar body, by a number of resonance modes at which it responds more strongly.
The top, back and ribs of an acoustic guitar body are very thin (1–2 mm), so a flexible piece of wood called lining is glued into the corners where the rib meets the top and back. This interior reinforcement provides 5 to 20 mm of solid gluing area for these corner joints. Solid linings are often used in classical guitars, while kerfed lining is most often found in steel-string acoustics. Kerfed lining is also called kerfing because it is scored, or "kerfed" (incompletely sawn through), to allow it to bend with the shape of the rib. During final construction, a small section of the outside corners is carved or routed out and filled with binding material on the outside corners and decorative strips of material next to the binding, which is called purfling. This binding serves to seal off the end grain of the top and back. Purfling can also appear on the back of an acoustic guitar, marking the edge joints of the two or three sections of the back. Binding and purfling materials are generally made of either wood or plastic.
Body size, shape and style have changed over time. 19th-century guitars, now known as salon guitars, were smaller than modern instruments. Differing patterns of internal bracing have been used over time by luthiers. Torres, Hauser, Ramirez, Fleta, and C. F. Martin were among the most influential designers of their time. Bracing not only strengthens the top against potential collapse due to the stress exerted by the tensioned strings but also affects the resonance characteristics of the top. The back and sides are made out of a variety of timbers such as mahogany, Indian rosewood and highly regarded Brazilian rosewood ("Dalbergia nigra"). Each one is primarily chosen for their aesthetic effect and can be decorated with inlays and purfling.
Instruments with larger areas for the guitar top were introduced by Martin in an attempt to create greater volume levels. The popularity of the larger "dreadnought" body size amongst acoustic performers is related to the greater sound volume produced.
Most electric guitar bodies are made of wood and include a plastic pickguard. Boards wide enough to use as a solid body are very expensive due to the worldwide depletion of hardwood stock since the 1970s, so the wood is rarely one solid piece. Most bodies are made from two pieces of wood with some of them including a seam running down the center line of the body. The most common woods used for electric guitar body construction include maple, basswood, ash, poplar, alder, and mahogany. Many bodies consist of good-sounding, but inexpensive woods, like ash, with a "top", or thin layer of another, more attractive wood (such as maple with a natural "flame" pattern) glued to the top of the basic wood. Guitars constructed like this are often called "flame tops". The body is usually carved or routed to accept the other elements, such as the bridge, pickup, neck, and other electronic components. Most electrics have a polyurethane or nitrocellulose lacquer finish. Other alternative materials to wood are used in guitar body construction. Some of these include carbon composites, plastic material, such as polycarbonate, and aluminum alloys.
Bridge.
The main purpose of the bridge on an acoustic guitar is to transfer the vibration from the strings to the soundboard, which vibrates the air inside of the guitar, thereby amplifying the sound produced by the strings. On electric and acoustic guitars alike, the bridge holds the strings in place on the body. There are many varied bridge designs. There may be some mechanism for raising or lowering the bridge saddles to adjust the distance between the strings and the fretboard (action), or fine-tuning the intonation of the instrument. Some are spring-loaded and feature a "whammy bar", a removable arm that lets the player modulate the pitch by changing the tension on the strings. The whammy bar is sometimes also called a "tremolo bar". (The effect of rapidly changing pitch is properly called "vibrato". See Tremolo for further discussion of this term.) Some bridges also allow for alternate tunings at the touch of a button.
On almost all modern electric guitars, the bridge has saddles that are adjustable for each string so that intonation stays correct up and down the neck. If the open string is in tune, but sharp or flat when frets are pressed, the bridge saddle position can be adjusted with a screwdriver or hex key to remedy the problem. In general, flat notes are corrected by moving the saddle forward and sharp notes by moving it backward. On an instrument correctly adjusted for intonation, the actual length of each string from the nut to the bridge saddle is slightly, but measurably longer than the scale length of the instrument. This additional length is called compensation, which flattens all notes a bit to compensate for the sharping of all fretted notes caused by stretching the string during fretting.
Saddle.
The saddle of a guitar is the part of the bridge that physically supports the strings. It may be one piece (typically on acoustic guitars) or separate pieces, one for each string (electric guitars and basses). The saddle's basic purpose is to provide the endpoint for the string's vibration at the correct location for proper intonation, and on acoustic guitars to transfer the vibrations through the bridge into the top wood of the guitar. Saddles are typically made of plastic or bone for acoustic guitars, though synthetics and some exotic animal tooth variations (e.g. fossilized tooth, ivory, etc. ) have become popular with some players. Electric guitar saddles are typically metal, though some synthetic saddles are available.
Pickguard.
The pickguard, also known as the scratch plate, is usually a piece of laminated plastic or other material that protects the finish of the top of the guitar from damage due to the use of a plectrum ("pick") or fingernails. Electric guitars sometimes mount pickups and electronics on the pickguard. It is a common feature on steel-string acoustic guitars. Some performance styles that use the guitar as a percussion instrument (tapping the top or sides between notes, etc.), such as flamenco, require that a scratchplate or pickguard be fitted to nylon-string instruments.
Strings.
The standard guitar has six strings, but four-, seven-, eight-, nine-, ten-, eleven-, twelve-, thirteen- and eighteen-string guitars are also available. Classical and flamenco guitars historically used gut strings, but these have been superseded by polymer materials, such as nylon and fluorocarbon. Modern guitar strings are constructed from metal, polymers, or animal or plant product materials. "Steel" strings may be made from alloys incorporating steel, nickel or phosphor bronze. Bass strings for both instruments are wound rather than monofilament.
Pickups and electronics.
Pickups are transducers attached to a guitar that detect (or "pick up") string vibrations and convert the mechanical energy of the string into electrical energy. The resultant electrical signal can then be electronically amplified. The most common type of pickup is electromagnetic in design. These contain magnets that are within a coil, or coils, of copper wire. Such pickups are usually placed directly underneath the guitar strings. Electromagnetic pickups work on the same principles and in a similar manner to an electric generator. The vibration of the strings creates a small electric current in the coils surrounding the magnets. This signal current is carried to a guitar amplifier that drives a loudspeaker.
Traditional electromagnetic pickups are either single-coil or double-coil. Single-coil pickups are susceptible to noise induced by stray electromagnetic fields, usually mains-frequency (60 or 50 hertz) hum. The introduction of the double-coil humbucker in the mid-1950s solved this problem through the use of two coils, one of which is wired in opposite polarity to cancel or "buck" stray fields.
The types and models of pickups used can greatly affect the tone of the guitar. Typically, humbuckers, which are two magnet-coil assemblies attached to each other, are traditionally associated with a heavier sound. Single-coil pickups, one magnet wrapped in copper wire, are used by guitarists seeking a brighter, twangier sound with greater dynamic range.
Modern pickups are tailored to the sound desired. A commonly applied approximation used in the selection of a pickup is that less wire (lower electrical impedance) gives a brighter sound, more wire gives a "fat" tone. Other options include specialized switching that produces coil-splitting, in/out of phase and other effects. Guitar circuits are either active, needing a battery to power their circuit, or, as in most cases, equipped with a passive circuit.
Fender Stratocaster-type guitars generally have three single-coil pickups, while most Gibson Les Paul types have humbucker pickups.
Piezoelectric, or piezo, pickups represent another class of pickup. These employ piezoelectricity to generate the musical signal and are popular in hybrid electro-acoustic guitars. A crystal is located under each string, usually in the saddle. When the string vibrates, the shape of the crystal is distorted, and the stresses associated with this change produce tiny voltages across the crystal that can be amplified and manipulated. Piezo pickups usually require a powered pre-amplifier to lift their output to match that of electromagnetic pickups. Power is typically delivered by an on-board battery.
Most pickup-equipped guitars feature onboard controls, such as volume or tone, or pickup selection. At their simplest, these consist of passive components, such as potentiometers and capacitors, but may also include specialized integrated circuits or other active components requiring batteries for power, for preamplification and signal processing, or even for electronic tuning. In many cases, the electronics have some sort of shielding to prevent pickup of external interference and noise.
Guitars may be shipped or retrofitted with a hexaphonic pickup, which produces a separate output for each string, usually from a discrete piezoelectric or magnetic pickup. This arrangement lets on-board or external electronics process the strings individually for modeling or Musical Instrument Digital Interface (MIDI) conversion. Roland makes "GK" hexaphonic pickups for guitar and bass, and a line of guitar modeling and synthesis products. Line 6's hexaphonic-equipped Variax guitars use on-board electronics to model the sound after various vintage instruments, and vary pitch on individual strings.
MIDI converters use a hexaphonic guitar signal to determine pitch, duration, attack, and decay characteristics. The MIDI sends the note information to an internal or external sound bank device. The resulting sound closely mimics numerous instruments. The MIDI setup can also let the guitar be used as a game controller (i.e., Rock Band Squier) or as an instructional tool, as with the Fretlight Guitar.
Tuning.
Standard.
By the 16th century, the guitar tuning of ADGBE had already been adopted in Western culture; a lower E was later added on the bottom as a sixth string.
The result, known as "standard tuning", has the strings tuned from a low E to a high E, traversing a two-octave range: EADGBE.
This tuning is a series of ascending fourths (and a single major third) from low to high.
The reason for ascending fourths is to accommodate four fingers on four frets up a scale before moving to the next string.
This is musically convenient and physically comfortable, and it eased the transition between fingering chords and playing scales.
If the tuning contained all perfect fourths, the range would be two octaves plus one semitone; the high string would be an F, a dissonant half-step from the low E and much out of place.
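As an illustration (not part of the original article), the interval arithmetic behind these statements can be checked with a few lines of Python, summing the string-to-string intervals in semitones:
# String-to-string intervals in semitones for the two tunings discussed above.
standard_tuning = [5, 5, 5, 4, 5]     # E->A, A->D, D->G, G->B (major third), B->E
all_fourths     = [5, 5, 5, 5, 5]
print(sum(standard_tuning))           # 24 semitones: exactly two octaves, low E to high E
print(sum(all_fourths))               # 25 semitones: two octaves plus one semitone, low E to F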
The pitches are as follows: E2, A2, D3, G3, B3, E4, from the lowest-pitched string to the highest (in scientific pitch notation).
The table below shows a pitch's name found over the six strings of a guitar in standard tuning, from the nut (zero), to the twelfth fret.
For four of the five adjacent string pairs, the note at the 5th fret of one string is the same as the next open string; for example, a 5th-fret note on the sixth string is the same note as the open fifth string. However, between the second and third strings, an irregularity occurs: the 4th-fret note on the third string is equivalent to the open second string.
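A minimal Python sketch (added here for illustration; the note names and fret numbers are the standard ones described above) makes this fret/open-string relationship explicit:
NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
OPEN_STRINGS = ['E', 'A', 'D', 'G', 'B', 'E']            # low to high, standard tuning
def note_at(open_note, fret):
    return NOTES[(NOTES.index(open_note) + fret) % 12]
# The 5th fret matches the next open string, except on the third (G) string, where the 4th fret does.
for lower, upper in zip(OPEN_STRINGS, OPEN_STRINGS[1:]):
    fret = 4 if lower == 'G' else 5
    print(lower, 'fret', fret, '->', note_at(lower, fret), '== open', upper)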
Alternative.
Standard tuning has evolved to provide a good compromise between simple fingering for many chords and the ability to play common scales with reasonable left-hand movement. There are also a variety of commonly used alternative tunings, for example, the classes of "open", "regular", and "dropped" tunings.
"Open tuning" refers to a guitar tuned so that strumming the open strings produces a chord, typically a major chord. The base chord consists of at least 3 notes and may include all the strings or a subset. The tuning is named for the open chord, Open D, open G, and open A are popular tunings. All similar chords in the chromatic scale can then be played by barring a single fret. Open tunings are common in blues music and folk music, and they are used in the playing of slide and bottleneck guitars. Many musicians use open tunings when playing slide guitar.
For the standard tuning, there is exactly one interval of a major third between the second and third strings, and all the other intervals are fourths. The irregularity has a price – chords cannot be shifted around the fretboard in the standard tuning E-A-D-G-B-E, which requires four chord-shapes for the major chords. There are separate chord-forms for chords having their root note on the third, fourth, fifth, and sixth strings.
In contrast, "regular" tunings have equal intervals between the strings, and so they have symmetrical scales all along the fretboard. This makes it simpler to translate chords. For the regular tunings, chords may be moved diagonally around the fretboard. The diagonal movement of chords is especially simple for the regular tunings that are repetitive, in which case chords can be moved vertically: Chords can be moved three strings up (or down) in major-thirds tuning and chords can be moved two strings up (or down) in augmented-fourths tuning. Regular tunings thus appeal to new guitarists and also to jazz-guitarists, whose improvisation is simplified by regular intervals.
On the other hand, some chords are more difficult to play in a regular tuning than in standard tuning. It can be difficult to play conventional chords, especially in augmented-fourths tuning and all-fifths tuning, in which the large spacings require hand stretching. Some chords, which are conventional in folk music, are difficult to play even in all-fourths and major-thirds tunings, which do not require more hand-stretching than standard tuning.
Another class of alternative tunings is called drop tunings, because the tuning "drops down" the lowest string. Dropping down the lowest string a whole tone results in the "drop-D" (or "dropped D") tuning. Its open-string notes DADGBE (from low to high) allow for a deep bass D note, which can be used in keys such as D major, D minor and G major. It simplifies the playing of simple fifths (power chords). Many contemporary rock bands re-tune all strings down, making, for example, Drop-C or Drop-B tunings.
Scordatura.
Many scordatura (alternate tunings) modify the standard tuning of the lute, especially when playing Renaissance music repertoire originally written for that instrument. Some scordatura drop the pitch of one or more strings, giving access to new lower notes. Some scordatura make it easier to play in unusual keys.
Accessories.
Though a guitar may be played on its own, there are a variety of common accessories used for holding and playing the guitar.
Capotasto.
A capo (short for "capotasto") is used to change the pitch of open strings. Capos are clipped onto the fretboard with the aid of spring tension or, in some models, elastic tension. To raise the guitar's pitch by one semitone, the player would clip the capo onto the fretboard just below the first fret. Its use allows players to play in different keys without having to change the chord formations they use. For example, if a folk guitar player wanted to play a song in the key of B Major, they could put a capo on the second fret of the instrument, and then play the song as if it were in the key of A Major, but with the capo the instrument would make the sounds of B Major. This is because, with the capo barring the entire second fret, open chords would all sound two semitones (in other words, one tone) higher in pitch. For example, if a guitarist played an open A Major chord (a very common open chord), it would sound like a B Major chord. All of the other open chords would be similarly modified in pitch. Because of the ease with which they allow guitar players to change keys, they are sometimes referred to with pejorative names, such as "cheaters" or the "hillbilly crutch". Despite this negative viewpoint, another benefit of the capo is that it enables guitarists to obtain the ringing, resonant sound of the common keys (C, G, A, etc.) in "harder" and less-commonly used keys. Classical performers are known to use them to enable modern instruments to match the pitch of historical instruments such as the Renaissance music lute.
Slides.
A slide or a steel is a hard smooth object (a steel bar, round metal or glass bar or cylinder, neck of a bottle) commonly used in country music or blues music, to create a glissando effect made popular in Hawaiian music at the beginning of the 20th century. The slide is pressed against the strings by the non-dominant hand, instead of using player's fingers on frets; the strings are then plucked by the dominant hand. The characteristic use of the slide is to move up to the intended pitch by, as the name implies, sliding up the neck to the desired note. Historically, necks of bottles were often used in blues and country music as improvised slides, giving the name "bottleneck guitar" to a style of blues music. Modern slides are constructed of glass, plastic, ceramic, chrome, brass or steel bars or cylinders, depending on the weight and tone desired. An instrument that is played exclusively in this manner (using a metal bar) is called a steel guitar or pedal steel. In such case, the hard object is called a "steel" instead of a slide, and is the reason for the name "steel guitar". A resonator guitar is a steel guitar built with a metal cone under the strings to make the instrument louder.
Plectrum.
A "guitar pick" or "plectrum" is a small piece of hard material generally held between the thumb and first finger of the picking hand and is used to "pick" the strings. Though most classical players pick with a combination of fingernails and fleshy fingertips, the pick is most often used for electric and steel-string acoustic guitars. Though today they are mainly plastic, variations do exist, such as bone, wood, steel or tortoise shell. Tortoise shell was the most commonly used material in the early days of pick-making, but as tortoises and turtles became endangered, the practice of using their shells for picks or anything else was banned. Tortoise-shell picks made before the ban are often coveted for a supposedly superior tone and ease of use, and their scarcity has made them valuable.
Picks come in many shapes and sizes. Picks vary from the small jazz pick to the large bass pick. The thickness of the pick often determines its use. A thinner pick (between 0.2 and 0.5 mm) is usually used for strumming or rhythm playing, whereas thicker picks (between 0.7 and 1.5+ mm) are usually used for single-note lines or lead playing. The distinctive guitar sound of Billy Gibbons is attributed to using a quarter or peso as a pick. Similarly, Brian May is known to use a sixpence coin as a pick, while noted 1970s and early 1980s session musician David Persons is known for using old credit cards, cut to the correct size, as plectrums.
Thumb picks and finger picks that attach to the fingertips are sometimes employed in finger-picking styles on steel strings. These allow the fingers and thumb to operate independently, whereas a flat pick requires the thumb and one or two fingers to manipulate.
Straps.
A guitar strap is a strip of material with an attachment mechanism on each end, made to hold a guitar via the shoulders at an adjustable length. Guitars have varying accommodations for attaching a strap. The most common are strap buttons, also called strap pins, which are flanged steel posts anchored to the guitar with screws. Two strap buttons come pre-attached to virtually all electric guitars, and many steel-string acoustic guitars. Strap buttons are sometimes replaced with "strap locks", which connect the guitar to the strap more securely.
The lower strap button is usually located at the bottom (bridge end) of the body. The upper strap button is usually located near or at the top (neck end) of the body: on the upper body curve, at the tip of the upper "horn" (on a double cutaway), or at the neck joint (heel). Some electrics, especially those with odd-shaped bodies, have one or both strap buttons on the back of the body. Some Steinberger electric guitars, owing to their minimalist and lightweight design, have both strap buttons at the bottom of the body. Rarely, on some acoustics, the upper strap button is located on the headstock. Some acoustic and classical guitars only have a single strap button at the bottom of the body—the other end must be tied onto the headstock, above the nut and below the machine heads.
Amplifiers, effects and speakers.
Electric guitars and bass guitars have to be used with a guitar amplifier and loudspeaker or a bass amplifier and speaker, respectively, in order to make enough sound to be heard by the performer and audience. Electric guitars and bass guitars almost always use magnetic pickups, which generate an electric signal when the musician plucks, strums or otherwise plays the instrument. The amplifier and speaker strengthen this signal using a power amplifier and a loudspeaker. Acoustic guitars that are equipped with a piezoelectric pickup or microphone can also be plugged into an instrument amplifier, acoustic guitar amp or PA system to make them louder. With electric guitar and bass, the amplifier and speaker are not just used to make the instrument louder; by adjusting the equalizer controls, the preamplifier, and any onboard effects units (reverb, distortion/overdrive, etc.) the player can also modify the tone (also called the timbre or "colour") and sound of the instrument. Acoustic guitar players can also use the amp to change the sound of their instrument, but in general, acoustic guitar amps are used to make the natural acoustic sound of the instrument louder without significantly changing its sound.
Notes and references.
Notes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
Sources.
Books, journals.
<templatestyles src="Div col/styles.css"/> | [
{
"math_id": 0,
"text": "\\sqrt[12]{2}"
}
] | https://en.wikipedia.org/wiki?curid=11846 |
11848801 | Keldysh formalism | In non-equilibrium physics, the Keldysh formalism or Keldysh–Schwinger formalism is a general framework for describing the quantum mechanical evolution of a system in a non-equilibrium state or systems subject to time varying external fields (electrical field, magnetic field etc.). Historically, it was foreshadowed by the work of Julian Schwinger and proposed almost simultaneously by Leonid Keldysh and, separately, Leo Kadanoff and Gordon Baym. It was further developed by later contributors such as O. V. Konstantinov and V. I. Perel.
Extensions to driven-dissipative open quantum systems have been given not only for bosonic systems, but also for fermionic systems.
The Keldysh formalism provides a systematic way to study non-equilibrium systems, usually based on the two-point functions corresponding to excitations in the system. The main mathematical object in the Keldysh formalism is the non-equilibrium Green's function (NEGF), which is a two-point function of particle fields. In this way, it resembles the Matsubara formalism, which is based on equilibrium Green functions in imaginary-time and treats only equilibrium systems.
Time evolution of a quantum system.
Consider a general quantum mechanical system. This system has the Hamiltonian formula_0. Let the initial state of the system be the pure state formula_1. If we now add a time-dependent perturbation to this Hamiltonian, say formula_2, the full Hamiltonian is formula_3 and hence the system will evolve in time under the full Hamiltonian. In this section, we will see how time evolution actually works in quantum mechanics.
Consider a Hermitian operator formula_4. In the Heisenberg picture of quantum mechanics, this operator is time-dependent and the state is not. The expectation value of the operator formula_5 is given by
formula_6
where, due to time evolution of operators in the Heisenberg picture, formula_7. The time-evolution unitary operator formula_8 is the time-ordered exponential of an integral, formula_9 (Note that if the Hamiltonian at one time commutes with the Hamiltonian at different times, then this can be simplified to formula_10.)
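As an illustrative aside, the time-ordered exponential can be approximated numerically by an ordered product of short-time propagators. The following Python sketch does this for a small, made-up two-level Hamiltonian; the Hamiltonian, drive and parameters are arbitrary assumptions chosen only to demonstrate the construction:
import numpy as np
from scipy.linalg import expm
# Pauli matrices for a two-level toy system (hbar = 1 throughout; all numbers are arbitrary).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
def H(t):
    # H(t) = H_0 + H'(t): a static splitting plus an illustrative time-dependent drive.
    return 0.5 * sz + 0.3 * np.cos(2.0 * t) * sx
def U(t2, t1, steps=2000):
    # Time-ordered exponential approximated by an ordered product of short-time propagators;
    # factors for later times are applied on the left, as time ordering requires.
    dt = (t2 - t1) / steps
    u = np.eye(2, dtype=complex)
    for k in range(steps):
        u = expm(-1j * H(t1 + (k + 0.5) * dt) * dt) @ u
    return u
u = U(5.0, 0.0)
print(np.allclose(u.conj().T @ u, np.eye(2)))   # unitarity check: True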
For perturbative quantum mechanics and quantum field theory, it is often more convenient to use the interaction picture. The interaction picture operator is
formula_11
where formula_12. Then, defining formula_13 we have
formula_14
Since the time-evolution unitary operators satisfy formula_15, the above expression can be rewritten as
formula_16,
or with formula_17 replaced by any time value greater than formula_18.
Path ordering on the Keldysh contour.
We can write the above expression more succinctly by, purely formally, replacing each operator formula_19 with a contour-ordered operator formula_20 , such that formula_21 parametrizes the contour path on the time axis starting at formula_22, proceeding to formula_23, and then returning to formula_22. This path is known as the Keldysh contour. formula_24 has the same operator action as formula_19 (where formula_18 is the time value corresponding to formula_21) but also has the additional information of formula_21 (that is, strictly speaking formula_25 if formula_26, even if for the corresponding times formula_27).
Then we can introduce notation of path ordering on this contour, by defining formula_28, where formula_29 is a permutation such that formula_30, and the plus and minus signs are for bosonic and fermionic operators respectively. Note that this is a generalization of time ordering.
With this notation, the above time evolution is written as
formula_31
where formula_21 corresponds to the time formula_18 on the forward branch of the Keldysh contour, and the integral over formula_32 goes over the entire Keldysh contour. For the rest of this article, as is conventional, we will usually simply use the notation formula_19 for formula_24, where formula_18 is the time corresponding to formula_21, and whether formula_21 is on the forward or reverse branch is inferred from context.
Keldysh diagrammatic technique for Green's functions.
The non-equilibrium Green's function is defined as formula_33.
Or, in the interaction picture, formula_34 . We can expand the exponential as a Taylor series to obtain the perturbation series
formula_35.
This is the same procedure as in equilibrium diagrammatic perturbation theory, but with the important difference that both forward and reverse contour branches are included.
If, as is often the case, formula_36 is a polynomial or series as a function of the elementary fields formula_37, we can organize this perturbation series into monomial terms and apply all possible Wick pairings to the fields in each monomial, obtaining a summation of Feynman diagrams. However, the edges of the Feynman diagram correspond to different propagators depending on whether the paired operators come from the forward or reverse branches. Namely,
formula_38
formula_39
formula_40
formula_41
where the anti-time ordering formula_42 orders operators in the opposite way as time ordering and the formula_43 sign in formula_44 is for bosonic or fermionic fields. Note that formula_45 is the propagator used in ordinary ground state theory.
Thus, Feynman diagrams for correlation functions can be drawn and their values computed the same way as in ground state theory, except with the following modifications to the Feynman rules: Each internal vertex of the diagram is labeled with either formula_46 or formula_47, while external vertices are labelled with formula_48. Then each (unrenormalized) edge directed from a vertex formula_49 (with position formula_50, time formula_51 and sign formula_52) to a vertex formula_53 (with position formula_54, time formula_55 and sign formula_56) corresponds to the propagator formula_57. Then the diagram values for each choice of formula_43 signs (there are formula_58 such choices, where formula_59 is the number of internal vertices) are all added up to find the total value of the diagram.
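The combinatorial bookkeeping of these modified rules — evaluating a diagram once for each of the formula_58 sign assignments and summing — can be sketched as follows; the propagator values below are placeholders, not results of any physical calculation:
from itertools import product
# Placeholder propagator table standing in for G_0^{s_a s_b}; the numbers are arbitrary.
G0 = {('+', '+'): 0.5, ('+', '-'): 0.2, ('-', '+'): 0.3, ('-', '-'): 0.4}
def diagram_value(edges, n_internal):
    # edges: (a, b) vertex-index pairs; vertices with index >= n_internal are external and fixed to '-'.
    total = 0.0
    for signs in product('+-', repeat=n_internal):
        label = lambda i: signs[i] if i < n_internal else '-'
        term = 1.0
        for a, b in edges:
            term *= G0[(label(a), label(b))]
        total += term
    return total
# A toy diagram with two internal vertices (0, 1) and one external vertex (2): 2**2 = 4 terms are summed.
print(diagram_value([(2, 0), (0, 1), (1, 2)], n_internal=2))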
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H_0"
},
{
"math_id": 1,
"text": "|n \\rangle"
},
{
"math_id": 2,
"text": "H'(t)"
},
{
"math_id": 3,
"text": "H(t) = H_0+H'(t)"
},
{
"math_id": 4,
"text": "\\mathcal{O}"
},
{
"math_id": 5,
"text": "\\mathcal{O}(t)"
},
{
"math_id": 6,
"text": "\\begin{align}\n\\langle \\mathcal{O}(t) \\rangle &= \\langle n | {U}^{\\dagger}(t,0) \\, \\mathcal{O}(0) \\, U(t,0) | n \\rangle\\\\\n\\end{align}"
},
{
"math_id": 7,
"text": "\\mathcal{O}(t) = U^{\\dagger}(t,0) \\mathcal{O}(0) U(t, 0)"
},
{
"math_id": 8,
"text": "U(t_2, t_1)"
},
{
"math_id": 9,
"text": "U(t_2,t_1)=T(e^{-i\\int_{t_1}^{t_2} H(t') dt'})."
},
{
"math_id": 10,
"text": "U(t_2,t_1)=e^{-i\\int_{t_1}^{t_2} H(t') dt'}"
},
{
"math_id": 11,
"text": "\\begin{align}\n \\mathcal{O_I}(t) &= {U_0}^{\\dagger}(t,0) \\, \\mathcal{O}(0) \\, U_0(t,0),\n\\end{align}"
},
{
"math_id": 12,
"text": " U_0(t_1,t_2) = e^{-iH_0(t_1-t_2)}\n"
},
{
"math_id": 13,
"text": " S(t_1,t_2) = U_0^{\\dagger}(t_1,t_2)U(t_1, t_2),"
},
{
"math_id": 14,
"text": "\\begin{align}\n\\langle \\mathcal{O}(t) \\rangle &= \\langle n | {S}^{\\dagger}(t,0) \\mathcal{O_I}(t) S(t,0) | n \\rangle\\\\\n\\end{align}"
},
{
"math_id": 15,
"text": "U(t_3, t_2) U(t_2, t_1) = U(t_3, t_1)"
},
{
"math_id": 16,
"text": "\\begin{align}\n\\langle \\mathcal{O}(t) \\rangle &= \\langle n | {S}^{\\dagger}(\\infty,0) S(\\infty, t) \\mathcal{O_I}(t) \\, S(t,0) | n \\rangle\\\\\n\\end{align}"
},
{
"math_id": 17,
"text": "\\infty"
},
{
"math_id": 18,
"text": "t"
},
{
"math_id": 19,
"text": "X(t)"
},
{
"math_id": 20,
"text": "X(c)\n"
},
{
"math_id": 21,
"text": "c"
},
{
"math_id": 22,
"text": "t=0"
},
{
"math_id": 23,
"text": "t=\\infty"
},
{
"math_id": 24,
"text": "X(c)"
},
{
"math_id": 25,
"text": "X(c_1) \\neq X(c_2) "
},
{
"math_id": 26,
"text": "c_1 \\neq c_2 "
},
{
"math_id": 27,
"text": "X(t_1) = X(t_2) "
},
{
"math_id": 28,
"text": "\\mathcal{T_c} ( X^{(1)}(c_1) X^{(2)}(c_2)\\ldots X^{(n)}(c_n) ) = (\\pm 1)^{\\sigma}X^{(\\sigma(1))}(c_{\\sigma(1)}) X^{(\\sigma(2))}(c_{\\sigma(2)})\\ldots X^{(\\sigma(n))}(c_{\\sigma(n)}) "
},
{
"math_id": 29,
"text": "\\sigma"
},
{
"math_id": 30,
"text": "c_{\\sigma(1)} < c_{\\sigma(2)} < \\ldots c_{\\sigma(n)} "
},
{
"math_id": 31,
"text": "\\begin{align}\n\\langle \\mathcal{O}(t) \\rangle &= \\langle n | \\mathcal{T_c}( \\mathcal{O(c)} e^{-i\\int dc' H'(c')}) | n \\rangle\n\\end{align}"
},
{
"math_id": 32,
"text": "c'"
},
{
"math_id": 33,
"text": "\\begin{align}\niG(x_1, t_1, x_2, t_2)= \\langle n | T \\psi(x_1,t_1) \\psi(x_2,t_2) | n \\rangle\n\\end{align}"
},
{
"math_id": 34,
"text": "\\begin{align}\niG(x_1, t_1, x_2, t_2)= \\langle n | \\mathcal{T_c} (e^{-i\\int_c H'(t')dt'} \\psi(x_1,t_1) \\psi(x_2,t_2)) | n \\rangle\n\\end{align}"
},
{
"math_id": 35,
"text": "\\sum_{j=0}^{\\infty}\\langle n | \\mathcal{T_c} ((-i\\int_t (H'(t', +)+ H'(t',-) )dt')^j \\psi(x_1,t_1) \\psi(x_2,t_2)) | n \\rangle / j! "
},
{
"math_id": 36,
"text": " H' "
},
{
"math_id": 37,
"text": "\\psi"
},
{
"math_id": 38,
"text": " \\langle n | \\mathcal{T_c} \\psi (x_1, t_1, +) \\psi (x_2, t_2, +)|n \\rangle \\equiv G_0^{++}(x_1,t_1 , x_2, t_2)= \\langle n|\\mathcal{T}\\psi (x_1,t_1) \\psi (x_2,t_2)|n \\rangle "
},
{
"math_id": 39,
"text": " \\langle n | \\mathcal{T_c} \\psi (x_1, t_1, +) \\psi (x_2, t_2, -)|n \\rangle \\equiv G_0^{+-}(x_1,t_1 , x_2, t_2)= \\langle n|\\psi (x_1,t_1) \\psi (x_2,t_2)|n \\rangle "
},
{
"math_id": 40,
"text": " \\langle n | \\mathcal{T_c} \\psi (x_1, t_1, -) \\psi (x_2, t_2, +)|n \\rangle \\equiv G_0^{-+}(x_1,t_1 , x_2, t_2)= \\pm \\langle n| \\psi (x_2,t_2)\\psi (x_1,t_1)|n \\rangle "
},
{
"math_id": 41,
"text": " \\langle n | \\mathcal{T_c} \\psi (x_1, t_1, -) \\psi (x_2, t_2, -)|n \\rangle \\equiv G_0^{--}(x_1,t_1 , x_2, t_2)= \\langle n|\\mathcal{\\overline{T}}\\psi (x_1,t_1) \\psi (x_2,t_2)|n \\rangle "
},
{
"math_id": 42,
"text": "\\mathcal{\\overline{T}}"
},
{
"math_id": 43,
"text": " \\pm "
},
{
"math_id": 44,
"text": " G_0^{-+} "
},
{
"math_id": 45,
"text": "G_0^{--}"
},
{
"math_id": 46,
"text": " + "
},
{
"math_id": 47,
"text": " - "
},
{
"math_id": 48,
"text": "-"
},
{
"math_id": 49,
"text": " a "
},
{
"math_id": 50,
"text": "x_a"
},
{
"math_id": 51,
"text": "t_a"
},
{
"math_id": 52,
"text": " s_a"
},
{
"math_id": 53,
"text": "b"
},
{
"math_id": 54,
"text": "x_b"
},
{
"math_id": 55,
"text": "t_b"
},
{
"math_id": 56,
"text": "s_b"
},
{
"math_id": 57,
"text": "G_0^{s_as_b}(x_a,t_a , x_b, t_b)"
},
{
"math_id": 58,
"text": "2^{v}"
},
{
"math_id": 59,
"text": "v"
}
] | https://en.wikipedia.org/wiki?curid=11848801 |
1185139 | Resolved sideband cooling | Resolved sideband cooling is a laser cooling technique allowing cooling of tightly bound atoms and ions beyond the Doppler cooling limit, potentially to their motional ground state. Aside from the curiosity of having a particle at zero point energy, such preparation of a particle in a definite state with high probability (initialization) is an essential part of state manipulation experiments in quantum optics and quantum computing.
Historical notes.
As of the writing of this article, the scheme behind what we refer to as "resolved sideband cooling" today is attributed to D.J. Wineland and H. Dehmelt, in their article "Proposed formula_0 laser fluorescence spectroscopy on Tl+ mono-ion oscillator III (sideband cooling)." The clarification is important as at the time of the latter article, the term also designated what we call today Doppler cooling, which was experimentally realized with atomic ion clouds in 1978 by W. Neuhauser and independently by D.J. Wineland. An experiment that demonstrates resolved sideband cooling unequivocally in its contemporary meaning is that of Diedrich et al. Similarly unequivocal realization with non-Rydberg neutral atoms was demonstrated in 1998 by S. E. Hamann et al. via Raman cooling.
Conceptual description.
Resolved sideband cooling is a laser cooling technique that can be used to cool strongly trapped atoms to the quantum ground state of their motion. The atoms are usually precooled using the Doppler laser cooling. Subsequently, the resolved sideband cooling is used to cool the atoms beyond the Doppler cooling limit.
A cold trapped atom can be treated to a good approximation as a quantum mechanical harmonic oscillator. If the spontaneous decay rate is much smaller than the vibrational frequency of the atom in the trap, the energy levels of the system will be an evenly spaced frequency ladder, with adjacent levels spaced by an energy formula_1. Each level is denoted by a motional quantum number n, which describes the amount of motional energy present at that level. These motional quanta can be understood in the same way as for the quantum harmonic oscillator. A ladder of levels will be available for each internal state of the atom. For example, in the figure at right both the ground (g) and excited (e) states have their own ladder of vibrational levels.
Consider a two-level atom whose ground state is denoted by "g" and whose excited state is denoted by "e". Efficient laser cooling occurs when the frequency of the laser beam is tuned to the red sideband, i.e.
formula_2,
where formula_3 is the internal atomic transition frequency corresponding to at transition between "g" and "e" and formula_4 is the harmonic oscillation frequency of the atom. In this case the atom undergoes the transition
formula_5,
where formula_6 represents the state of an ion whose internal atomic state is "a" and the motional state is "m".
If the recoil energy of the atom is negligible compared with the vibrational quantum energy, subsequent spontaneous emission occurs predominantly at the carrier frequency. This means that the vibrational quantum number remains constant. This transition is
formula_7
The overall effect of one of these cycles is to reduce the vibrational quantum number of the atom by one. To cool to the ground state, this cycle is repeated many times until formula_8 is reached with a high probability.
Theoretical basis.
The core process that provides the cooling assumes a two level system that is well localized compared to the wavelength (formula_9) of the transition (Lamb-Dicke regime), such as a trapped and sufficiently cooled ion or atom. Modeling the system as a harmonic oscillator interacting with a classical monochromatic electromagnetic field yields (in the rotating wave approximation) the Hamiltonian
formula_10
with
formula_11
formula_12
and where
formula_13 is the number operator
formula_4 is the frequency spacing of the oscillator
formula_14 is the Rabi frequency due to the atom-light interaction
formula_15 is the laser detuning from formula_16
formula_17 is the laser wave vector
That is, incidentally, the Jaynes-Cummings Hamiltonian used to describe the phenomenon of an atom coupled to a cavity in cavity QED. The absorption(emission) of photons by the atom is then governed by the off-diagonal elements, with probability of a transition between vibrational states formula_18 proportional to formula_19, and for each formula_13 there is a manifold, formula_20, coupled to its neighbors with strength proportional to formula_21. Three such manifolds are shown in the picture.
If the formula_16 transition linewidth formula_22 satisfies formula_23, a sufficiently narrow laser can be tuned to a red sideband, formula_24. For an atom starting at formula_25, the most probable transition will be to formula_26. This process is depicted by arrow "1" in the picture. In the Lamb-Dicke regime, the spontaneously emitted photon (depicted by arrow "2") will be, on average, at frequency formula_16, and the net effect of such a cycle, on average, will be the removal of formula_27 motional quanta. After some cycles, the average phonon number is formula_28, where formula_29 is the ratio of the intensities of the red to blue formula_27-th sidebands. In practice, this process is normally done on the first motional sideband formula_30 for optimal efficiency. Repeating the process many times while ensuring that spontaneous emission occurs provides cooling to formula_31. A more rigorous mathematical treatment is given in Turchette et al. and Wineland et al. Specific treatment of cooling multiple ions can be found in Morigi et al.
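For orientation, the following short Python snippet evaluates the quoted limits for illustrative (assumed) numbers; the linewidth, trap frequency and sideband ratio are not taken from any of the cited experiments:
import math
gamma = 2 * math.pi * 20e3    # assumed transition linewidth (rad/s)
nu    = 2 * math.pi * 2e6     # assumed trap frequency (rad/s), so the sidebands are resolved
print((gamma / nu) ** 2)      # ~1e-4: mean phonon number near the ground-state limit
R, q = 0.01, 1                # assumed red-to-blue sideband intensity ratio, first sideband
print(R ** (1 / q) / (1 - R ** (1 / q)))   # ~0.0101 average phonons after many cycles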
Experimental implementations.
For resolved sideband cooling to be effective, the process needs to start at sufficiently low formula_32. To that end, the particle is usually first cooled to the Doppler limit, then some sideband cooling cycles are applied, and finally, a measurement is taken or state manipulation is carried out. A more or less direct application of this scheme was demonstrated by Diedrich et al. with the caveat that the narrow quadrupole transition used for cooling connects the ground state to a long-lived state, and the latter had to be pumped out to achieve optimal cooling efficiency. It is not uncommon, however, that additional steps are needed in the process, due to the atomic structure of the cooled species. Examples of that are the cooling of Ca+ ions and the Raman sideband cooling of Cs atoms.
Example: cooling of Ca+ ions.
The energy levels relevant to the cooling scheme for Ca+ ions are the S1/2, P1/2, P3/2, D3/2, and D5/2, which are additionally split by a static magnetic field into their Zeeman manifolds. Doppler cooling is applied on the dipole S1/2 - P1/2 transition (397 nm); however, there is about a 6% probability of spontaneous decay to the long-lived D3/2 state, so that state is simultaneously pumped out (at 866 nm) to improve Doppler cooling. Sideband cooling is performed on the narrow quadrupole transition S1/2 - D5/2 (729 nm); however, the long-lived D5/2 state needs to be pumped out to the short-lived P3/2 state (at 854 nm) to recycle the ion to the ground S1/2 state and maintain cooling performance. One possible implementation was carried out by Leibfried et al. and a similar one is detailed by Roos. For each data point in the 729 nm absorption spectrum, a few hundred iterations of the following are executed:
Variations of this scheme relaxing the requirements or improving the results are being investigated/used by several ion-trapping groups.
Example: Raman sideband cooling of Cs atoms.
A Raman transition replaces the one-photon transition used in the sideband cooling scheme above by a two-photon process via a virtual level. In the Cs cooling experiment carried out by Hamann et al., trapping is provided by an isotropic optical lattice in a magnetic field, which also provides Raman coupling to the red sideband of the Zeeman manifolds. The process followed is:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "10^{14}\\delta\\nu/\\nu"
},
{
"math_id": 1,
"text": " \\hbar \\nu "
},
{
"math_id": 2,
"text": "\\omega = \\omega_{0} - \\nu"
},
{
"math_id": 3,
"text": "\\omega_{0}"
},
{
"math_id": 4,
"text": "\\nu"
},
{
"math_id": 5,
"text": "\\vert g, n \\rangle \\rightarrow \\vert e, n-1 \\rangle"
},
{
"math_id": 6,
"text": "\\vert a, m \\rangle"
},
{
"math_id": 7,
"text": "\\vert e, n-1 \\rangle \\rightarrow \\vert g, n-1 \\rangle."
},
{
"math_id": 8,
"text": "\\vert g,n=0 \\rangle"
},
{
"math_id": 9,
"text": "2\\pi c/\\omega_0"
},
{
"math_id": 10,
"text": "H = H_{HO}+H_{AL}"
},
{
"math_id": 11,
"text": "H_{HO} = \\hbar\\nu\\left(n+\\frac 1 2\\right)"
},
{
"math_id": 12,
"text": "H_{AL} = -\\hbar\\Delta\\left|e\\right\\rangle\\left\\langle e\\right|+\\hbar\\frac \\Omega 2 \\left(\\left|e\\right\\rangle\\left\\langle g\\right|e^{i\\mathbf k\\cdot\\mathbf r}+\\left|g\\right\\rangle\\left\\langle e\\right|e^{-i\\mathbf k\\cdot\\mathbf r}\\right)"
},
{
"math_id": 13,
"text": "n"
},
{
"math_id": 14,
"text": "\\Omega"
},
{
"math_id": 15,
"text": "\\Delta"
},
{
"math_id": 16,
"text": "\\omega_0"
},
{
"math_id": 17,
"text": "\\mathbf k"
},
{
"math_id": 18,
"text": "m, n"
},
{
"math_id": 19,
"text": "\\left |\\left\\langle m\\right|e^{i\\mathbf k\\cdot\\mathbf r}\\left|n\\right\\rangle\\right |^2"
},
{
"math_id": 20,
"text": "\\{\\left|g, n\\right\\rangle, \\left|e, n\\right\\rangle\\}"
},
{
"math_id": 21,
"text": "\\left|\\left\\langle m\\right|e^{i\\mathbf k\\cdot\\mathbf r}\\left|n\\right\\rangle\\right|"
},
{
"math_id": 22,
"text": "\\Gamma"
},
{
"math_id": 23,
"text": "\\Gamma\\ll\\nu"
},
{
"math_id": 24,
"text": "\\omega_0-q\\nu, q\\in\\{1,2,3,..\\}"
},
{
"math_id": 25,
"text": "\\left|g, n\\right\\rangle"
},
{
"math_id": 26,
"text": "\\left|e, n-q\\right\\rangle"
},
{
"math_id": 27,
"text": "q"
},
{
"math_id": 28,
"text": "\\bar n = \\frac {R_q^{1/q}}{1-R_q^{1/q}}"
},
{
"math_id": 29,
"text": "R_q"
},
{
"math_id": 30,
"text": " q=1 "
},
{
"math_id": 31,
"text": "\\bar n \\approx (\\Gamma/\\nu)^2\\ll 1"
},
{
"math_id": 32,
"text": "\\bar n"
},
{
"math_id": 33,
"text": "\\sigma^-"
},
{
"math_id": 34,
"text": "10^6"
}
] | https://en.wikipedia.org/wiki?curid=1185139 |
11854075 | Mathieu wavelet | The Mathieu equation is a linear second-order differential equation with periodic coefficients. The French mathematician, E. Léonard Mathieu, first introduced this family of differential equations, nowadays termed Mathieu equations, in his “Memoir on vibrations of an elliptic membrane” in 1868. "Mathieu functions are applicable to a wide variety of physical phenomena, e.g., diffraction, amplitude distortion, inverted pendulum, stability of a floating body, radio frequency quadrupole, and vibration in a medium with modulated density"
Elliptic-cylinder wavelets.
This is a wide family of wavelet systems that provides a multiresolution analysis. The magnitude of the detail and smoothing filters corresponds to first-kind Mathieu functions with odd characteristic exponent. The number of notches of these filters can be easily designed by choosing the characteristic exponent. Elliptic-cylinder wavelets derived by this method possess potential application in the fields of optics and electromagnetism due to their symmetry.
Mathieu differential equations.
Mathieu's equation is related to the wave equation for the elliptic cylinder. In 1868, the French mathematician Émile Léonard Mathieu introduced a family of differential equations nowadays termed Mathieu equations.
Given formula_0, the Mathieu equation is given by
formula_1
The Mathieu equation is a linear second-order differential equation with periodic coefficients. For "q" = 0, it reduces to the well-known harmonic oscillator, "a" being the square of the frequency.
The solution of the Mathieu equation is the elliptic-cylinder harmonic, known as Mathieu functions. They have long been applied on a broad scope of wave-guide problems involving elliptical geometry, including:
Mathieu functions: cosine-elliptic and sine-elliptic functions.
In general, the solutions of Mathieu equation are not periodic. However, for a given "q", periodic solutions exist for infinitely many special values (eigenvalues) of "a". For several physically relevant solutions "y" must be periodic of period formula_3 or formula_4. It is convenient to distinguish even and odd periodic solutions, which are termed Mathieu functions of first kind.
One of four simpler types can be considered: Periodic solution (formula_3 or formula_4) symmetry (even or odd).
For formula_5, the only periodic solutions "y" corresponding to any characteristic value formula_6 or formula_7 have the following notations:
formula_8
formula_9
"ce" and "se" are abbreviations for cosine-elliptic and sine-elliptic, respectively.
where the sums are taken over even (respectively odd) values of "m" if the period of "y" is formula_3 (respectively formula_4).
Given "r", we denote henceforth formula_10 by formula_11, for short.
Interesting relationships are found when formula_12, formula_13:
formula_14
formula_15
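These limits can be checked numerically, for instance with SciPy's Mathieu-function routines (a sketch added for illustration; note that scipy.special.mathieu_cem and mathieu_sem take the angle in degrees and return the function value together with its derivative):
import numpy as np
from scipy.special import mathieu_cem, mathieu_sem
r, q = 3, 1e-3                          # small q to probe the limit
w_deg = np.linspace(0.0, 360.0, 13)     # angles in degrees, as SciPy expects
w_rad = np.deg2rad(w_deg)
ce, _ = mathieu_cem(r, q, w_deg)        # (value, derivative)
se, _ = mathieu_sem(r, q, w_deg)
print(np.allclose(ce, np.cos(r * w_rad), atol=1e-3))   # True: ce_r -> cos(r w)
print(np.allclose(se, np.sin(r * w_rad), atol=1e-3))   # True: se_r -> sin(r w)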
Figure 1 shows two illustrative waveforms of elliptic cosines, whose shape strongly depends on the parameters formula_2 and "q".
Multiresolution analysis filters and Mathieu's equation.
Wavelets are denoted by formula_16 and scaling functions by formula_17, with corresponding spectra formula_18 and formula_19, respectively.
The equation formula_20, which is known as the "dilation" or "refinement equation", is the chief relation determining a Multiresolution Analysis (MRA).
formula_21 is the transfer function of the smoothing filter.
formula_22 is the transfer function of the detail filter.
The transfer function of the "detail filter" of a Mathieu wavelet is
formula_23
The transfer function of the "smoothing filter" of a Mathieu wavelet is
formula_24
The characteristic exponent formula_2 should be chosen so as to guarantee suitable initial conditions, i.e. formula_25 and formula_26, which are compatible with wavelet filter requirements. Therefore, formula_2 must be odd.
The magnitude of the transfer function corresponds exactly to the modulus of an elliptic-sine:
Examples of filter transfer functions for a Mathieu MRA are shown in figure 2. The value of "a" is adjusted to an "eigenvalue" in each case, leading to a periodic solution. Such solutions present formula_2 zeroes in the interval formula_27.
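As an illustration, the magnitude of the detail filter can be evaluated directly from formula_23 with SciPy's ce routine, and it satisfies the conditions formula_25 and formula_26 quoted above; the choice of characteristic exponent 5 and q = 5 below is an arbitrary example:
import numpy as np
from scipy.special import mathieu_cem
nu, q = 5, 5.0                                     # arbitrary odd characteristic exponent and parameter
omega = np.linspace(0.0, np.pi, 201)
arg_deg = np.rad2deg(np.abs(omega - np.pi) / 2)    # ce is even in its argument
ce, _ = mathieu_cem(nu, q, arg_deg)
ce0, _ = mathieu_cem(nu, q, 0.0)
G_mag = np.abs(ce) / np.abs(ce0)                   # |G_nu(omega)| = |ce_nu((omega-pi)/2, q)| / |ce_nu(0, q)|
print(G_mag[0], G_mag[-1])                         # ~0 at omega = 0 and exactly 1 at omega = pi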
The "G" and "H" filter coefficients of Mathieu MRA can be expressed in terms of the values formula_28 of the Mathieu function as:
formula_29
formula_30
There exist recurrence relations among the coefficients:
formula_31
formula_32
for formula_33, "m" odd.
It is straightforward to show that formula_34, formula_35.
Normalising conditions are formula_36 and formula_37.
Waveform of Mathieu wavelets.
Mathieu wavelets can be derived from the lowpass reconstruction filter by the cascade algorithm. Infinite impulse response (IIR) filters should be used, since the Mathieu wavelet has no compact support. Figure 3 shows the emerging pattern that progressively looks like the wavelet's shape. Depending on the parameters "a" and "q", some waveforms (e.g. fig. 3b) can present a somewhat unusual shape.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a \\in \\mathbb{R}, q \\in \\mathbb{C}"
},
{
"math_id": 1,
"text": "\\frac {d^2 y} {dw^2} +(a-2q \\cos 2w )y=0."
},
{
"math_id": 2,
"text": "\\nu"
},
{
"math_id": 3,
"text": "\\pi"
},
{
"math_id": 4,
"text": "2\\pi"
},
{
"math_id": 5,
"text": "q \\ne 0"
},
{
"math_id": 6,
"text": "a=a_r(q)"
},
{
"math_id": 7,
"text": "a=b_r(q)"
},
{
"math_id": 8,
"text": "ce_r(\\omega,q)= \\sum_m A_{r,m} \\cos {m \\omega}\\text{ for }a = a_r(q)"
},
{
"math_id": 9,
"text": "se_r(\\omega,q)= \\sum_m A_{r,m} \\sin {m \\omega}\\text{ for }a = b_r(q)"
},
{
"math_id": 10,
"text": "A_{r,m}"
},
{
"math_id": 11,
"text": "A_m"
},
{
"math_id": 12,
"text": "q \\to 0"
},
{
"math_id": 13,
"text": "r \\ne 0"
},
{
"math_id": 14,
"text": "\\lim_{q \\to 0} ce_r(\\omega,q)= \\cos {r \\omega}"
},
{
"math_id": 15,
"text": "\\lim_{q \\to 0} se_r(\\omega,q)= \\sin {r \\omega}"
},
{
"math_id": 16,
"text": "\\psi(t)"
},
{
"math_id": 17,
"text": "\\phi(t)"
},
{
"math_id": 18,
"text": "\\Psi(\\omega)"
},
{
"math_id": 19,
"text": "\\Phi(\\omega)"
},
{
"math_id": 20,
"text": "\\phi(t)= \\sqrt {2} \\sum_{n \\in Z} h_n \\phi(2t-n)"
},
{
"math_id": 21,
"text": "H(\\omega)= \\frac {1} {\\sqrt 2} \\sum_{k \\in Z} h_k e^{j \\omega k}"
},
{
"math_id": 22,
"text": "G(\\omega)= \\frac {1} {\\sqrt 2} \\sum_{k \\in Z} g_k e^{j \\omega k}"
},
{
"math_id": 23,
"text": "G_{\\nu}(\\omega)=e^{j(\\nu-2)[ \\frac {\\omega - \\pi} {2}]}. \\frac {ce_{\\nu} ( \\frac {\\omega-\\pi} {2},q)} {{ce_{\\nu}(0,q)}}."
},
{
"math_id": 24,
"text": "H_{\\nu}(\\omega)=-e^{j\\nu [ \\frac {\\omega} {2}]}. \\frac {ce_{\\nu}( \\frac {\\omega} {2},q)} {{ce_{\\nu}(0,q)}}."
},
{
"math_id": 25,
"text": "G_{\\nu}(0)=0"
},
{
"math_id": 26,
"text": "G_{\\nu}(\\pi)=1"
},
{
"math_id": 27,
"text": "0 \\le |\\omega | \\le \\pi"
},
{
"math_id": 28,
"text": "\\{ A_{2 l +1} \\}_{l \\in Z}"
},
{
"math_id": 29,
"text": "\\frac {h_l} {\\sqrt{2}}=- \\frac {A_{|2l+1|}/2} {ce_{\\nu}(0,q)}"
},
{
"math_id": 30,
"text": "\\frac {g_l} {\\sqrt{2}}=(-1)^l \\frac {A_{|2l-3|}/2} {ce_{\\nu}(0,q)}"
},
{
"math_id": 31,
"text": "(a-1-q)A_1-qA_3 =0"
},
{
"math_id": 32,
"text": "(a-m^2)A_m-q(A_{m-2}+A_{m+2} =0"
},
{
"math_id": 33,
"text": "m \\ge 3"
},
{
"math_id": 34,
"text": "h_{-l}=h_{|l|-1}"
},
{
"math_id": 35,
"text": " \\forall l>0"
},
{
"math_id": 36,
"text": "\\sum_{k=- \\infty}^{k=+ \\infty} {h_k =-1}"
},
{
"math_id": 37,
"text": "\\sum_{k=- \\infty}^{k=+ \\infty} {(-1)^k h_k =0}"
}
] | https://en.wikipedia.org/wiki?curid=11854075 |
1185498 | Fugacity | Effective partial pressure
In chemical thermodynamics, the fugacity of a real gas is an effective partial pressure which replaces the mechanical partial pressure in an accurate computation of chemical equilibrium. It is equal to the pressure of an ideal gas which has the same temperature and molar Gibbs free energy as the real gas.
Fugacities are determined experimentally or estimated from various models such as a Van der Waals gas that are closer to reality than an ideal gas. The real gas pressure and fugacity are related through the dimensionless fugacity coefficient
formula_0
For an ideal gas, fugacity and pressure are equal, and so "φ" = 1. Taken at the same temperature and pressure, the difference between the molar Gibbs free energies of a real gas and the corresponding ideal gas is equal to "RT" ln "φ".
The fugacity is closely related to the thermodynamic activity. For a gas, the activity is simply the fugacity divided by a reference pressure to give a dimensionless quantity. This reference pressure is called the standard state and normally chosen as 1 atmosphere or 1 bar.
Accurate calculations of chemical equilibrium for real gases should use the fugacity rather than the pressure. The thermodynamic condition for chemical equilibrium is that the total chemical potential of reactants is equal to that of products. If the chemical potential of each gas is expressed as a function of fugacity, the equilibrium condition may be transformed into the familiar reaction quotient form (or law of mass action) except that the pressures are replaced by fugacities.
For a condensed phase (liquid or solid) in equilibrium with its vapor phase, the chemical potential is equal to that of the vapor, and therefore the fugacity is equal to the fugacity of the vapor. This fugacity is approximately equal to the vapor pressure when the vapor pressure is not too high.
Pure substance.
Fugacity is closely related to the chemical potential "μ". In a pure substance, "μ" is equal to the Gibbs energy "G"m for a mole of the substance, and
formula_1
where "T" and "P" are the temperature and pressure, "V"m is the volume per mole and "S"m is the entropy per mole.
Gas.
For an ideal gas the equation of state can be written as
formula_2
where "R" is the ideal gas constant. The differential change of the chemical potential between two states of slightly different pressures but equal temperature (i.e., d"T" = 0) is given by
formula_3where ln p is the natural logarithm of p.
For real gases the equation of state will depart from the simpler one, and the result above derived for an ideal gas will only be a good approximation provided that (a) the typical size of the molecule is negligible compared to the average distance between the individual molecules, and (b)
the short range behavior of the inter-molecular potential can be neglected, i.e., when the molecules can be considered to rebound elastically off each other during molecular collisions. In other words, real gases behave like ideal gases at low pressures and high temperatures. At moderately high pressures, attractive interactions between molecules reduce the pressure compared to the ideal gas law; and at very high pressures, the sizes of the molecules are no longer negligible and repulsive forces between molecules increases the pressure. At low temperatures, molecules are more likely to stick together instead of rebounding elastically.
The ideal gas law can still be used to describe the behavior of a real gas if the pressure is replaced by a "fugacity" "f", defined so that
formula_4
and
formula_5
That is, at low pressures "f" is the same as the pressure, so it has the same units as pressure. The ratio
formula_6
is called the "fugacity coefficient".
If a reference state is denoted by a zero superscript, then integrating the equation for the chemical potential gives
formula_7
Note this can also be expressed with formula_8, a dimensionless quantity, called the "activity".
Numerical example: Nitrogen gas (N2) at 0 °C and a pressure of "P" = 100 atmospheres (atm) has a fugacity of "f" = 97.03 atm. This means that the molar Gibbs energy of real nitrogen at a pressure of 100 atm is equal to the molar Gibbs energy of nitrogen as an ideal gas at 97.03 atm. The fugacity coefficient is 97.03 atm / 100 atm = 0.9703.
The contribution of nonideality to the molar Gibbs energy of a real gas is equal to "RT" ln "φ". For nitrogen at 100 atm, "G"m = "G"m,id + "RT" ln 0.9703, which is less than the ideal value "G"m,id because of intermolecular attractive forces. Finally, the activity is just 97.03 without units.
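For the record, the arithmetic of this example is trivial to reproduce (Python, values copied from the example above):
from math import log
R, T = 8.314, 273.15            # J/(mol*K), 0 degrees Celsius
P, f = 100.0, 97.03             # atm
phi = f / P
print(phi)                      # 0.9703
print(R * T * log(phi))         # about -68 J/mol: the nonideal contribution RT ln(phi) to G_m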
Condensed phase.
The fugacity of a condensed phase (liquid or solid) is defined the same way as for a gas:
formula_9
and
formula_10
It is difficult to measure fugacity in a condensed phase directly; but if the condensed phase is "saturated" (in equilibrium with the vapor phase), the chemical potentials of the two phases are equal ("μ"c = "μ"g). Combined with the above definition, this implies that
formula_11
When calculating the fugacity of the compressed phase, one can generally assume the volume is constant. At constant temperature, the change in fugacity as the pressure goes from the saturation pressure "P"sat to "P" is
formula_12
This fraction is known as the Poynting factor. Using "f"sat = "φ"sat "P"sat, where "φ"sat is the fugacity coefficient,
formula_13
This equation allows the fugacity to be calculated using tabulated values for saturated vapor pressure. Often the pressure is low enough for the vapor phase to be considered an ideal gas, so the fugacity coefficient is approximately equal to 1.
Unless pressures are very high, the Poynting factor is usually small and the exponential term is near 1. Frequently, the fugacity of the pure liquid is used as a reference state when defining and using mixture activity coefficients.
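A rough numerical illustration (with approximate, assumed property values for liquid water, not taken from this article) shows how small the correction typically is:
from math import exp
R, T  = 8.314, 298.15     # J/(mol*K), 25 degrees Celsius
V_m   = 1.807e-5          # approximate molar volume of liquid water, m^3/mol (assumed)
P_sat = 3.17e3            # approximate saturation pressure at 25 degrees Celsius, Pa (assumed)
P     = 1.0e7             # 10 MPa
print(exp(V_m * (P - P_sat) / (R * T)))   # ~1.08: the fugacity rises only about 8% above f_sat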
Mixture.
The fugacity is most useful in mixtures. It does not add any new information compared to the chemical potential, but it has computational advantages. As the molar fraction of a component goes to zero, the chemical potential diverges but the fugacity goes to zero. In addition, there are natural reference states for fugacity (for example, an ideal gas makes a natural reference state for gas mixtures since the fugacity and pressure converge at low pressure).
Gases.
In a mixture of gases, the fugacity of each component "i" has a similar definition, with partial molar quantities instead of molar quantities (e.g., "G""i" instead of "G"m and "V""i" instead of "V"m):
formula_14
and
formula_15
where "Pi" is the partial pressure of component "i". The partial pressures obey Dalton's law:
formula_16
where P is the total pressure and "yi" is the mole fraction of the component (so the partial pressures add up to the total pressure). The fugacities commonly obey a similar law called the Lewis and Randall rule:
formula_17
where "f"* is the fugacity that component "i" would have if the entire gas had that composition at the same temperature and pressure. Both laws are expressions of an assumption that the gases behave independently.
Liquids.
In a liquid mixture, the fugacity of each component is equal to that of a vapor component in equilibrium with the liquid. In an ideal solution, the fugacities obey the Lewis-Randall rule:
formula_18
where "xi" is the mole fraction in the liquid and "f" is the fugacity of the pure liquid phase. This is a good approximation when the component molecules have similar size, shape and polarity.
In a dilute solution with two components, the component with the larger molar fraction (the solvent) may still obey Raoult's law even if the other component (the solute) has different properties. That is because its molecules experience essentially the same environment that they do in the absence of the solute. By contrast, each solute molecule is surrounded by solvent molecules, so it obeys a different law known as Henry's law. By Henry's law, the fugacity of the solute is proportional to its concentration. The constant of proportionality (a measured Henry's constant) depends on whether the concentration is represented by the mole fraction, molality or molarity.
Temperature and pressure dependence.
The pressure dependence of fugacity (at constant temperature) is given by
formula_19
and is always positive.
The temperature dependence at constant pressure is
formula_20
where Δ"H"m is the change in molar enthalpy as the gas expands, liquid vaporizes, or solid sublimates into a vacuum. Also, if the pressure is "P"0, then
formula_21
Since the temperature and entropy are positive, "T" ln("f"/"P"0) decreases with increasing temperature.
Measurement.
The fugacity can be deduced from measurements of volume as a function of pressure at constant temperature. In that case,
formula_22
This integral can also be calculated using an equation of state.
The integral can be recast in an alternative form using the compressibility factor
formula_23
Then
formula_24
This is useful because of the theorem of corresponding states: If the pressure and temperature at the critical point of the gas are "P"c and "T"c, we can define reduced properties "P"r = "P"/"P"c and "T"r = "T"/"T"c. Then, to a good approximation, most gases have the same value of "Z" for the same reduced temperature and pressure. However, in geochemical applications, this principle ceases to be accurate at pressures where metamorphism occurs.
For a gas obeying the van der Waals equation, the explicit formula for the fugacity coefficient is
formula_25
This formula is based on the molar volume. Since the pressure and the molar volume are related through the equation of state; a typical procedure would be to choose a volume, calculate the corresponding pressure, and then evaluate the right-hand side of the equation.
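A minimal Python sketch of that procedure, implementing the expression above literally, might look as follows; the van der Waals constants used are approximate textbook values for nitrogen and are given here only as an assumed example:
import math
def vdw_pressure_and_ln_phi(V_m, T, a, b, R=8.314):
    # Pressure from the van der Waals equation of state, then the quoted expression for RT ln(phi).
    P = R * T / (V_m - b) - a / V_m**2
    rt_ln_phi = (R * T * b / (V_m - b)
                 - 2 * a / V_m
                 - R * T * math.log(1 - a * (V_m - b) / (R * T * V_m**2)))
    return P, rt_ln_phi / (R * T)
a, b = 0.1370, 3.87e-5            # assumed van der Waals constants for N2, SI units
P, ln_phi = vdw_pressure_and_ln_phi(V_m=2.0e-4, T=273.15, a=a, b=b)
print(P / 101325, math.exp(ln_phi))   # roughly 105 atm and phi ~ 0.92 for these assumed inputs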
History.
The word "fugacity" is derived from the Latin "fugere", to flee. In the sense of an "escaping tendency", it was introduced to thermodynamics in 1901 by the American chemist Gilbert N. Lewis and popularized in an influential textbook by Lewis and Merle Randall, "Thermodynamics and the Free Energy of Chemical Substances", in 1923. The "escaping tendency" referred to the flow of matter between phases and played a similar role to that of temperature in heat flow.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\varphi = \\frac{f}{P}."
},
{
"math_id": 1,
"text": "d\\mu = dG_\\mathrm{m} = -S_\\mathrm{m}dT + V_\\mathrm{m}dP,"
},
{
"math_id": 2,
"text": "V_\\mathrm{m}^\\text{ideal} = \\frac{RT}{P},"
},
{
"math_id": 3,
"text": "d\\mu = V_\\mathrm{m}dP = RT \\, \\frac{dP}{P} = R T\\, d \\ln P,"
},
{
"math_id": 4,
"text": "d\\mu = R T \\,d \\ln f"
},
{
"math_id": 5,
"text": " \\lim_{P\\to 0} \\frac{f}{P} = 1."
},
{
"math_id": 6,
"text": " \\varphi = \\frac{f}{P}"
},
{
"math_id": 7,
"text": "\\mu - \\mu^0 = R T \\,\\ln \\frac{f}{P^0},"
},
{
"math_id": 8,
"text": "a = f/P^0"
},
{
"math_id": 9,
"text": "d\\mu_\\mathrm{c} = R T \\,d \\ln f_\\mathrm{c}"
},
{
"math_id": 10,
"text": " \\lim_{P\\to 0} \\frac{f_\\mathrm{c}}{P} = 1."
},
{
"math_id": 11,
"text": "f_\\mathrm{c} = f_\\mathrm{g}."
},
{
"math_id": 12,
"text": "\\ln\\frac{f}{f_\\mathrm{sat}} = \\frac{V_\\mathrm{m}}{R T}\\int_{P_\\mathrm{sat}}^P dp = \\frac{V\\left(P-P_\\mathrm{sat}\\right)}{R T}."
},
{
"math_id": 13,
"text": "f = \\varphi_\\mathrm{sat}P_\\mathrm{sat}\\exp\\left(\\frac{V\\left(P-P_\\mathrm{sat}\\right)}{R T}\\right)."
},
{
"math_id": 14,
"text": "dG_i = R T \\,d \\ln f_i"
},
{
"math_id": 15,
"text": " \\lim_{P\\to 0} \\frac{f_i}{P_i} = 1,"
},
{
"math_id": 16,
"text": "P_i = y_i P,"
},
{
"math_id": 17,
"text": "f_i = y_i f^*_i,"
},
{
"math_id": 18,
"text": "f_i = x_i f^*_i,"
},
{
"math_id": 19,
"text": " \\left(\\frac{\\partial \\ln f}{\\partial P}\\right)_T = \\frac{V_\\mathrm{m}}{R T}"
},
{
"math_id": 20,
"text": "\\left(\\frac{\\partial \\ln f}{\\partial T}\\right)_P = \\frac{\\Delta H_\\mathrm{m}}{R T^2},"
},
{
"math_id": 21,
"text": "\\left(\\frac{\\partial \\left(T \\ln \\frac{f}{P^0} \\right)}{\\partial T}\\right)_P = -\\frac{S_\\mathrm{m}}{R} < 0."
},
{
"math_id": 22,
"text": " \\ln\\varphi = \\frac{1}{R T}\\int_0^p \\left(V_m - V_\\mathrm{m}^\\mathrm{ideal}\\right) d P."
},
{
"math_id": 23,
"text": "Z = \\frac{P V_\\mathrm{m}}{R T}."
},
{
"math_id": 24,
"text": " \\ln\\varphi = \\int_0^P \\left(\\frac{Z-1}{P}\\right) d P."
},
{
"math_id": 25,
"text": "RT \\ln \\varphi = \\frac{RTb}{V_\\mathrm{m}-b} - \\frac{2a}{V_\\mathrm{m}} - RT \\ln \\left ( 1 - \\frac{a(V_\\mathrm{m}-b)}{RTV_\\mathrm{m}^2}\\right )"
}
] | https://en.wikipedia.org/wiki?curid=1185498 |
1185709 | Science of value | Philosophical theory
The science of value, or value science, is a creation of philosopher Robert S. Hartman, which attempts to formally elucidate value theory using both formal and symbolic logic.
Fundamentals.
The fundamental principle, which functions as an axiom, and can be stated in symbolic logic, is that "a thing is good insofar as it exemplifies its concept". To put it another way, "a thing is good if it has all its descriptive properties." This means, according to Hartman, that the good thing has a name, that the name has a meaning defined by a set of properties, and that the thing possesses all of the properties in the set. A thing is bad if it does not fulfill its description.
He introduces three basic dimensions of value, "systemic", "extrinsic" and "intrinsic" for sets of properties—"perfection" is to "systemic value" what "goodness" is to "extrinsic value" and what "uniqueness" is to "intrinsic value"—each with their own cardinality: finite, formula_0 and formula_1. In practice, the terms "good" and "bad" apply to finite sets of properties, since this is the only case where there is a ratio between the total number of desired properties and the number of such properties possessed by some object being valued. (In the case where the number of properties is countably infinite, the "extrinsic" dimension of value, the "exposition" as well as the mere definition of a specific concept is taken into consideration.)
Hartman quantifies this notion by the principle that "each property of the thing is worth as much as each other property, depending on the level of abstraction". Hence, if a thing has "n" properties, each of them—if on the same level of abstraction—is proportionally worth "n"−1.
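Read as a toy computation (an illustration added here, not Hartman's own formalism), the principle can be expressed in a few lines of Python:
def goodness(concept_properties, thing_properties):
    # Fraction of the concept's descriptive properties the thing possesses (each worth 1/n).
    n = len(concept_properties)
    return sum(1 / n for p in concept_properties if p in thing_properties)
chair_concept = {'has a seat', 'has a back', 'has legs', 'supports a sitter'}
print(goodness(chair_concept, chair_concept))                # 1.0: fulfils its concept ("good")
print(goodness(chair_concept, {'has a seat', 'has legs'}))   # 0.5: lacks half its properties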
Infinite sets of properties.
Hartman goes on to consider infinite sets of properties. Hartman claims that "according to a theorem of transfinite mathematics, any collection of material objects is at most denumerably infinite". This is not, in fact, a theorem of mathematics. But, according to Hartman, people are capable of a denumerably infinite set of predicates, intended in as many ways, which he gives as formula_1. As this yields a notional cardinality of the continuum, Hartman advises that when setting out to describe a person, a continuum of properties would be most fitting and appropriate to bear in mind. This is the cardinality of "intrinsic value" in Hartman's system.
Although they play no role in ordinary mathematics, Hartman deploys the notion of aleph number reciprocals, as a sort of infinitesimal proportion. This, he contends goes to zero in the limit as the uncountable cardinals become larger. In Hartman's calculus, for example, the assurance in a Dear John letter, that "we will always be friends" has axiological value formula_2, whereas taking a metaphor literally would be slightly preferable, the reification having a value of formula_3.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\aleph_0"
},
{
"math_id": 1,
"text": "\\aleph_1"
},
{
"math_id": 2,
"text": "\\frac{_1}{\\aleph_2}"
},
{
"math_id": 3,
"text": "\\frac{_1}{\\aleph_1}"
}
] | https://en.wikipedia.org/wiki?curid=1185709 |
11857532 | Collision problem | Theoretical problem
The r-to-1 collision problem is an important theoretical problem in complexity theory, quantum computing, and computational mathematics. The collision problem most often refers to the 2-to-1 version: given formula_0 even and a function formula_1, we are promised that f is either 1-to-1 or 2-to-1. We are only allowed to make queries about the value of formula_2 for any formula_3. The problem then asks how many such queries we need to make to determine with certainty whether f is 1-to-1 or 2-to-1.
Classical solutions.
Deterministic.
Solving the 2-to-1 version deterministically requires formula_4 queries, and in general distinguishing r-to-1 functions from 1-to-1 functions requires formula_5 queries.
This is a straightforward application of the pigeonhole principle: if a function is r-to-1, then after formula_5 queries we are guaranteed to have found a collision. If a function is 1-to-1, then no collision exists. Thus, formula_5 queries suffice. If we are unlucky, then the first formula_6 queries could return distinct answers, so formula_5 queries are also necessary.
Randomized.
If we allow randomness, the problem is easier. By the birthday paradox, if we choose (distinct) queries at random, then with high probability we find a collision in any fixed 2-to-1 function after formula_7 queries.
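A short classical sketch of this randomized strategy (the helper name and the toy 2-to-1 function are invented for illustration; a full analysis would also bound the failure probability):
import random
def finds_collision(f, n, num_queries):
    # Query f at num_queries distinct random points; report whether two answers collide.
    # A 1-to-1 function never collides; a 2-to-1 function collides with high probability
    # once num_queries grows like sqrt(n).
    points = random.sample(range(1, n + 1), num_queries)
    seen = set()
    for x in points:
        y = f(x)
        if y in seen:
            return True        # two distinct inputs mapped to the same output
        seen.add(y)
    return False
two_to_one = lambda x: (x + 1) // 2    # a toy 2-to-1 function on {1, ..., 8}
print(finds_collision(two_to_one, 8, 6))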
Quantum solution.
The BHT algorithm, which uses Grover's algorithm, solves this problem optimally by only making formula_8 queries to "f". | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "f:\\,\\{1,\\ldots,n\\}\\rightarrow\\{1,\\ldots,n\\}"
},
{
"math_id": 2,
"text": "f(i)"
},
{
"math_id": 3,
"text": "i\\in\\{1,\\ldots,n\\}"
},
{
"math_id": 4,
"text": "\\frac{n}{2}+1"
},
{
"math_id": 5,
"text": "\\frac{n}{r} + 1"
},
{
"math_id": 6,
"text": "n/r"
},
{
"math_id": 7,
"text": "\\Theta(\\sqrt{n})"
},
{
"math_id": 8,
"text": "O(n^{1/3})"
}
] | https://en.wikipedia.org/wiki?curid=11857532 |
1185756 | Eightfold way (physics) | Classification scheme for hadrons
In physics, the eightfold way is an organizational scheme for a class of subatomic particles known as hadrons that led to the development of the quark model. Both the American physicist Murray Gell-Mann and the Israeli physicist Yuval Ne'eman independently and simultaneously proposed the idea in 1961.
The name comes from Gell-Mann's (1961) paper and is an allusion to the Noble Eightfold Path of Buddhism.
Background.
By 1947, physicists believed that they had a good understanding of what the smallest bits of matter were. There were electrons, protons, neutrons, and photons (the components that make up the vast part of everyday experience such as atoms and light), along with a handful of unstable (i.e., radioactively decaying) exotic particles needed to explain cosmic-ray observations, such as pions and muons, as well as the hypothesized neutrino. In addition, the discovery of the positron suggested there could be anti-particles for each of them. It was known that a "strong interaction" must exist to overcome electrostatic repulsion in atomic nuclei. Not all particles are influenced by this strong force, but those that are, are dubbed "hadrons", which are now further classified as mesons (middle mass) and baryons (heavy weight).
But the discovery of the neutral kaon in late 1947 and the subsequent discovery of a positively charged kaon in 1949 extended the meson family in an unexpected way, and in 1950 the lambda particle did the same for the baryon family. These particles decay much more slowly than they are produced, a hint that there are two different physical processes involved. This was first suggested by Abraham Pais in 1952. In 1953, Murray Gell-Mann and a collaboration in Japan, Tadao Nakano with Kazuhiko Nishijima, independently suggested a new conserved quantity now known as "strangeness" during their attempts to understand the growing collection of known particles.
The trend of discovering new mesons and baryons would continue through the 1950s as the number of known "elementary" particles ballooned. Physicists were interested in understanding hadron-hadron interactions via the strong interaction. The concept of isospin, introduced in 1932 by Werner Heisenberg shortly after the discovery of the neutron, was used to group some hadrons together into "multiplets" but no successful scientific theory as yet covered the hadrons as a whole. This was the beginning of a chaotic period in particle physics that has become known as the "particle zoo" era. The eightfold way represented a step out of this confusion and towards the quark model, which proved to be the solution.
Organization.
Group representation theory is the mathematical underpinning behind the eightfold way but this rather technical mathematics is not needed to understand how it helps organize particles. Particles are sorted into groups as mesons or baryons. Within each group, they are further separated by their spin angular momentum. Symmetrical patterns appear when these groups of particles have their strangeness plotted against their electric charge. (This is the most common way to make these plots today but originally physicists used an equivalent pair of properties called "hypercharge" and "isotopic spin", the latter of which is now known as "isospin".) The symmetry in these patterns is a hint of the underlying symmetry of the strong interaction between the particles themselves. In the plots below, points representing particles that lie along the same horizontal line share the same strangeness, s, while those on the same left-leaning diagonals share the same electric charge, q (given as multiples of the elementary charge).
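As an illustrative sketch of such a plot (the charge and strangeness assignments below are the standard values for the spin-1/2 baryon octet discussed later; the grouping code itself is only an example and not part of the original scheme), the particles can be tabulated and collected into the horizontal rows of the diagram:
# (particle, electric charge q in units of e, strangeness s), spin-1/2 baryon octet
baryon_octet = [
    ("p", +1, 0), ("n", 0, 0),
    ("Sigma+", +1, -1), ("Sigma0", 0, -1), ("Lambda", 0, -1), ("Sigma-", -1, -1),
    ("Xi0", 0, -2), ("Xi-", -1, -2),
]
rows = {}
for name, q, s in baryon_octet:
    rows.setdefault(s, []).append((q, name))      # one row of the diagram per strangeness value
for s in sorted(rows, reverse=True):
    members = ", ".join(f"{name} (q={q:+d})" for q, name in sorted(rows[s], reverse=True))
    print(f"s = {s:d}: {members}")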
Mesons.
In the original eightfold way, the mesons were organized into octets and singlets. This is one of the finer points of differences between the eightfold way and the quark model it inspired, which suggests the mesons should be grouped into nonets (groups of nine).
Meson octet.
The eightfold way organizes eight of the lowest spin-0 mesons into an octet: the three pions, the four kaons, and the eta meson.
Diametrically opposite particles in the diagram are anti-particles of one-another while particles in the center are their own anti-particle.
Meson singlet.
The chargeless, strangeless eta prime meson was originally classified by itself as a singlet.
Under the quark model later developed, it is better viewed as part of a meson nonet, as previously mentioned.
Baryons.
Baryon octet.
The eightfold way organizes the spin-1/2 baryons into an octet. They consist of the proton, the neutron, the lambda baryon, the three sigma baryons, and the two xi baryons.
Baryon decuplet.
The organizational principles of the eightfold way also apply to the spin-3/2 baryons, forming a decuplet.
However, one of the particles of this decuplet had never been previously observed when the eightfold way was proposed. Gell-Mann called this particle the Ω− (omega minus) and predicted in 1962 that it would have a strangeness of −3, an electric charge of −1 and a mass near 1680 MeV/c². In 1964, a particle closely matching these predictions was discovered by a particle accelerator group at Brookhaven. Gell-Mann received the 1969 Nobel Prize in Physics for his work on the theory of elementary particles.
Historical development.
Development.
Historically, quarks were motivated by an understanding of flavour symmetry. First, it was noticed (1961) that groups of particles were related to each other in a way that matched the representation theory of SU(3). From that, it was inferred that there is an approximate symmetry of the universe which is parametrized by the group SU(3). Finally (1964), this led to the discovery of three light quarks (up, down, and strange) interchanged by these SU(3) transformations.
Modern interpretation.
The eightfold way may be understood in modern terms as a consequence of flavor symmetries between various kinds of quarks. Since the strong nuclear force affects quarks the same way regardless of their flavor, replacing one flavor of quark with another in a hadron should not alter its mass very much, provided the respective quark masses are smaller than the strong interaction scale—which holds for the three light quarks. Mathematically, this replacement may be described by elements of the SU(3) group. The octets and other hadron arrangements are representations of this group.
Flavor symmetry.
SU(3).
There is an abstract three-dimensional vector space:
formula_0
and the laws of physics are "approximately" invariant under applying a determinant-1 unitary transformation to this space (sometimes called a "flavour rotation"):
formula_1
Here, SU(3) refers to the Lie group of 3×3 unitary matrices with determinant 1 (special unitary group). For example, the flavour rotation
formula_2
is a transformation that simultaneously turns all the up quarks in the universe into down quarks and vice versa. More specifically, these flavour rotations are exact symmetries if "only" strong force interactions are looked at, but they are not truly exact symmetries of the universe because the three quarks have different masses and different electroweak interactions.
This approximate symmetry is called "flavour symmetry", or more specifically "flavour SU(3) symmetry".
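The example rotation above can be checked numerically; the following sketch (using NumPy, purely illustrative) verifies that it is special unitary and that it exchanges the up and down directions while leaving the strange quark untouched:
import numpy as np
up, down, strange = np.eye(3)          # quark basis vectors from the text
A = np.array([[ 0.0, 1.0, 0.0],        # the example flavour rotation
              [-1.0, 0.0, 0.0],
              [ 0.0, 0.0, 1.0]])
assert np.allclose(A.conj().T @ A, np.eye(3))   # unitary
assert np.isclose(np.linalg.det(A), 1.0)        # determinant 1, so A is in SU(3)
print(A @ up)       # [ 0. -1.  0.]  (proportional to the down direction)
print(A @ down)     # [ 1.  0.  0.]  (the up direction)
print(A @ strange)  # [ 0.  0.  1.]  (unchanged)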
Connection to representation theory.
Assume we have a certain particle—for example, a proton—in a quantum state formula_3. If we apply one of the flavour rotations "A" to our particle, it enters a new quantum state which we can call formula_4. Depending on "A", this new state might be a proton, or a neutron, or a superposition of a proton and a neutron, or various other possibilities. The set of all possible quantum states spans a vector space.
Representation theory is a mathematical theory that describes the situation where elements of a group (here, the flavour rotations "A" in the group SU(3)) are automorphisms of a vector space (here, the set of all possible quantum states that you get from flavour-rotating a proton). Therefore, by studying the representation theory of SU(3), we can learn the possibilities for what the vector space is and how it is affected by flavour symmetry.
Since the flavour rotations "A" are approximate, not exact, symmetries, each orthogonal state in the vector space corresponds to a different particle species. In the example above, when a proton is transformed by every possible flavour rotation "A", it turns out that it moves around an 8-dimensional vector space. Those 8 dimensions correspond to the 8 particles in the so-called "baryon octet" (the proton, the neutron, the three sigma baryons, the two xi baryons, and the lambda baryon). This corresponds to an 8-dimensional ("octet") representation of the group SU(3). Since "A" is an approximate symmetry, all the particles in this octet have similar mass.
Every Lie group has a corresponding Lie algebra, and each group representation of the Lie group can be mapped to a corresponding Lie algebra representation on the same vector space. The Lie algebra formula_5(3) can be written as the set of 3×3 traceless Hermitian matrices. Physicists generally discuss the representation theory of the Lie algebra formula_5(3) instead of the Lie group SU(3), since the former is simpler and the two are ultimately equivalent.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{up quark} \\rightarrow \\begin{pmatrix} 1 \\\\ 0 \\\\ 0 \\end{pmatrix}, \\qquad \\text{down quark} \\rightarrow \\begin{pmatrix} 0 \\\\ 1 \\\\ 0 \\end{pmatrix}, \\qquad \\text{strange quark} \\rightarrow \\begin{pmatrix} 0 \\\\ 0 \\\\ 1 \\end{pmatrix}"
},
{
"math_id": 1,
"text": "\\begin{pmatrix} x \\\\ y \\\\ z \\end{pmatrix} \\mapsto A \\begin{pmatrix} x \\\\ y \\\\ z \\end{pmatrix}, \\quad \\text{where } A \\text{ is in } SU(3)"
},
{
"math_id": 2,
"text": "A=\\begin{pmatrix} ~\\;~0&1&0 \\\\ -1&0&0 \\\\ ~\\;~0&0&1 \\end{pmatrix}"
},
{
"math_id": 3,
"text": "|\\psi\\rangle"
},
{
"math_id": 4,
"text": "A|\\psi\\rangle"
},
{
"math_id": 5,
"text": "\\mathfrak{su}"
}
] | https://en.wikipedia.org/wiki?curid=1185756 |
1185840 | Private information retrieval | Information retrieval using cryptography
In cryptography, a private information retrieval (PIR) protocol is a protocol that allows a user to retrieve an item from a server in possession of a database without revealing which item is retrieved. PIR is a weaker version of 1-out-of-"n" oblivious transfer, where it is also required that the user should not get information about other database items.
One trivial, but very inefficient way to achieve PIR is for the server to send an entire copy of the database to the user. In fact, this is the only possible protocol (in the classical or the quantum setting) that gives the user information theoretic privacy for their query in a single-server setting. There are two ways to address this problem: make the server computationally bounded or assume that there are multiple non-cooperating servers, each having a copy of the database.
The problem was introduced in 1995 by Chor, Goldreich, Kushilevitz and Sudan in the information-theoretic setting and in 1997 by Kushilevitz and Ostrovsky in the computational setting. Since then, very efficient solutions have been discovered. Single database (computationally private) PIR can be achieved with constant (amortized) communication and k-database (information theoretic) PIR can be done with formula_0 communication.
Advances in computational PIR.
The first single-database computational PIR scheme to achieve communication complexity less than formula_1 was created in 1997 by Kushilevitz and Ostrovsky and achieved communication complexity of formula_2 for any formula_3, where formula_1 is the number of bits in the database. The security of their scheme was based on the well-studied Quadratic residuosity problem. In 1999, Christian Cachin, Silvio Micali and Markus Stadler achieved poly-logarithmic communication complexity. The security of their system is based on the Phi-hiding assumption. In 2004, Helger Lipmaa achieved log-squared communication complexity formula_4, where formula_5 is the length of the strings and formula_6 is the security parameter. The security of his system reduces to the semantic security of a length-flexible additively homomorphic cryptosystem like the Damgård–Jurik cryptosystem. In 2005 Craig Gentry and Zulfikar Ramzan achieved log-squared communication complexity which retrieves log-square (consecutive) bits of the database. The security of their scheme is also based on a variant of the Phi-hiding assumption. The communication rate was finally brought down to formula_7 by Aggelos Kiayias, Nikos Leonardos, Helger Lipmaa, Kateryna Pavlyk, Qiang Tang, in 2015.
All previous sublinear-communication computational PIR protocols required linear computational complexity of formula_8 public-key operations. In 2009, Helger Lipmaa designed a computational PIR protocol with communication complexity formula_4 and worst-case computation of formula_9 public-key operations. Amortization techniques that retrieve non-consecutive bits have been considered by Yuval Ishai, Eyal Kushilevitz, Rafail Ostrovsky and Amit Sahai.
As shown by Ostrovsky and Skeith, the schemes by Kushilevitz and Ostrovsky and Lipmaa use similar ideas based on homomorphic encryption. The Kushilevitz and Ostrovsky protocol is based on the Goldwasser–Micali cryptosystem while the protocol by Lipmaa is based on the Damgård–Jurik cryptosystem.
Advances in information theoretic PIR.
Achieving information theoretic security requires the assumption that there are multiple non-cooperating servers, each having a copy of the database. Without this assumption, any information-theoretically secure PIR protocol requires an amount of communication that is at least the size of the database "n". Multi-server PIR protocols tolerant of non-responsive or malicious/colluding servers are called "robust" or "Byzantine robust" respectively. These issues were first considered by Beimel and Stahl (2002). An ℓ-server system that can operate where only "k" of the servers respond, ν of the servers respond incorrectly, and which can withstand up to "t" colluding servers without revealing the client's query is called ""t"-private ν-Byzantine robust "k"-out-of-ℓ PIR" [DGH 2012]. In 2012, C. Devet, I. Goldberg, and N. Heninger (DGH 2012) proposed an optimally robust scheme that is Byzantine-robust to formula_10 which is the theoretical maximum value. It is based on an earlier protocol of Goldberg that uses Shamir's Secret Sharing to hide the query. Goldberg has released a C++ implementation on SourceForge.
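To make the multi-server idea concrete, the following toy sketch implements a textbook-style two-server XOR scheme (honest-but-curious servers, one-bit records, linear communication; all function names are invented for illustration):
import secrets
def server_answer(db_bits, query_set):
    # Each server simply XORs the bits at the queried indices.
    answer = 0
    for j in query_set:
        answer ^= db_bits[j]
    return answer
def retrieve_bit(db_bits, i):
    n = len(db_bits)
    S1 = {j for j in range(n) if secrets.randbits(1)}   # uniformly random subset: reveals nothing about i
    S2 = S1 ^ {i}                                       # same subset with index i toggled
    a1 = server_answer(db_bits, S1)                     # query sent to server 1
    a2 = server_answer(db_bits, S2)                     # query sent to server 2 (non-cooperating)
    return a1 ^ a2                                      # every common index cancels, leaving bit i
db = [1, 0, 1, 1, 0, 0, 1, 0]
print(retrieve_bit(db, 5))   # prints db[5] = 0; neither server alone learns i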
Relation to other cryptographic primitives.
One-way functions are necessary, but not known to be sufficient, for nontrivial (i.e., with sublinear communication) single database computationally private information retrieval. In fact, such a protocol was proved by Giovanni Di Crescenzo, Tal Malkin and Rafail Ostrovsky to imply oblivious transfer (see below).
Oblivious transfer, also called symmetric PIR, is PIR with the additional restriction that the user may not learn any item other than the one she requested. It is termed symmetric because both the user and the database have a privacy requirement.
Collision-resistant cryptographic hash functions are implied by any one-round computational PIR scheme, as shown by Ishai, Kushilevitz and Ostrovsky.
PIR variations.
The basic motivation for Private Information Retrieval is a family of two-party protocols in which one of the parties (the "sender") owns a database, and the other party (the "receiver") wants to query it with certain privacy restrictions and warranties. So, as a result of the protocol, if the "receiver" wants the "i-th" value in the database he must learn the "i-th" entry, but the "sender" must learn nothing about "i". In a general PIR protocol, a computationally unbounded "sender" can learn nothing about "i", so privacy is theoretically preserved. Since the PIR problem was posed, different approaches to its solution have been pursued and some variations were proposed.
A CPIR (Computationally Private Information Retrieval) protocol is similar to a PIR protocol: the "receiver" retrieves an element chosen by him from the sender's database, so that the "sender" obtains no knowledge about which element was transferred. The only difference is that privacy is safeguarded against a polynomially bounded sender.
A CSPIR (Computationally Symmetric Private Information Retrieval) protocol is used in a similar scenario in which a CPIR protocol is used. If the "sender" owns a database, and the "receiver" wants to get the "i-th" value in this database, at the end of the execution of a SPIR protocol, the "receiver" should have learned nothing about values in the database other than the "i-th" one.
PIR implementations.
Numerous Computational PIR and Information theoretic PIR schemes in the literature have been implemented. Here is an incomplete list:
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "n^{O\\left(\\frac{\\log \\log k}{k \\log k}\\right)}"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "n^\\epsilon"
},
{
"math_id": 3,
"text": "\\epsilon"
},
{
"math_id": 4,
"text": "O(\\ell \\log n+k \\log^2 n)"
},
{
"math_id": 5,
"text": "\\ell"
},
{
"math_id": 6,
"text": "k"
},
{
"math_id": 7,
"text": " 1 "
},
{
"math_id": 8,
"text": "\\Omega (n)"
},
{
"math_id": 9,
"text": "O (n / \\log n)"
},
{
"math_id": 10,
"text": "\\nu < k-t-1"
},
{
"math_id": 11,
"text": "O(n^{1/(2k-1)})"
}
] | https://en.wikipedia.org/wiki?curid=1185840 |
1186249 | Guard (computer science) | Concept in computer science
In computer programming, a guard is a Boolean expression that must evaluate to true if the execution of the program is to continue in the branch in question. Regardless of which programming language is used, a guard clause, guard code, or guard statement is a check of integrity preconditions used to avoid errors during execution.
Uses.
A typical example is checking that a reference about to be processed is not null, which avoids null-pointer failures.
Other uses include using a Boolean field for idempotence (so subsequent calls are nops), as in the dispose pattern.
public string Foo(string username)
{
    if (username == null) {
        throw new ArgumentNullException(nameof(username), "Username is null.");
    }
    // Rest of the method code follows here...
}
Flatter code with less nesting.
The guard provides an early exit from a subroutine, and is a commonly used deviation from structured programming, removing one level of nesting and resulting in flatter code: replacing codice_0 with codice_1.
Using guard clauses can be a refactoring technique to improve code. In general, less nesting is good, as it simplifies the code and reduces cognitive burden.
For example, in Python:
def f_noguard(x):
    if isinstance(x, int):
        #code
        #code
        #code
        return x + 1
    else:
        return None

def f_guard(x):
    if not isinstance(x, int):
        return None
    #code
    #code
    #code
    return x + 1
Another example, written in C:
// This function has no guard clause
int funcNoGuard(int x) {
    if (x >= 0) {
        //code
        //code
        //code
        return x + 1;
    } else {
        return 0;
    }
}

// Equivalent function with a guard clause
int funcGuard(int x) {
    if (x < 0) {
        return 0;
    }
    //code
    //code
    //code
    return x + 1;
}
Terminology.
The term is used with specific meaning in APL, Haskell, Clean, Erlang, occam, Promela, OCaml, Swift, Python from version 3.10, and Scala programming languages. In Mathematica, guards are called "constraints". Guards are the fundamental concept in Guarded Command Language, a language in formal methods. Guards can be used to augment pattern matching with the possibility to skip a pattern even if the structure matches. Boolean expressions in conditional statements usually also fit this definition of a guard although they are called "conditions".
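For instance, in Python (version 3.10 or later) a guard can be attached to a case pattern of a match statement; a small illustrative example:
def describe(point):
    match point:
        case (x, y) if x == y:              # guard: pattern matches only when x == y
            return "on the diagonal"
        case (x, y) if x == 0 or y == 0:    # guards are tried top to bottom
            return "on an axis"
        case (x, y):
            return "elsewhere in the plane"
        case _:
            return "not a point"
print(describe((3, 3)))   # on the diagonal
print(describe((0, 7)))   # on an axis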
Mathematics.
In the following Haskell example, the guards occur between each pair of "|" and "=":
f x
  | x > 0     = 1
  | otherwise = 0
This is similar to the respective mathematical notation:
formula_0
In this case the guards are in the "if" and "otherwise" clauses.
Multiple guards.
If there are several parallel guards, they are normally tried in a top-to-bottom order, and the branch of the first to pass is chosen. Guards in a list of cases are typically parallel.
However, in Haskell list comprehensions the guards are in series, and if any of them fails, the list element is not produced. This would be the same as combining the separate guards with logical AND, except that there can be other list comprehension clauses among the guards.
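An analogous "guards in series" behaviour can be sketched in Python comprehensions, where each successive if clause must pass for an element to be produced (illustrative example):
values = [10, 0, 3, -2, 8]
# The second guard is only reached when the first one holds, so division by zero is avoided.
selected = [100 // x for x in values if x != 0 if 100 // x > 10]
print(selected)   # [33, 12] -- elements failing either guard are skipped rather than defaulted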
Evolution.
A simple conditional expression, already present in CPL in 1963, has a guard on first sub-expression, and another sub-expression to use in case the first one cannot be used. Some common ways to write this:
(x>0) -> 1/x; 0
x>0 ? 1/x : 0
If the second sub-expression can be a further simple conditional expression, we can give more alternatives to try before the last "fall-through":
(x>0) -> 1/x; (x<0) -> -1/x; 0
In 1966 ISWIM had a form of conditional expression without an obligatory fall-through case, thus separating guard from the concept of choosing either-or. In the case of ISWIM, if none of the alternatives could be used, the value was to be "undefined", which was defined to never compute into a value.
KRC, a "miniaturized version" of SASL (1976), was one of the first programming languages to use the term "guard". Its function definitions could have several clauses, and the one to apply was chosen based on the guards that followed each clause:
fac n = 1, n = 0
= n * fac (n-1), n > 0
Use of guard clauses, and the term "guard clause", dates at least to Smalltalk practice in the 1990s, as codified by Kent Beck.
In 1996, Dyalog APL adopted an alternative pure functional style in which the guard is the only control structure. This example, in APL, computes the parity of the input number:
parity←{
    2∣⍵ : 'odd'
    'even'
}
Pattern guard.
In addition to a guard attached to a pattern, pattern guard can refer to the use of pattern matching in the context of a guard. In effect, a match of the pattern is taken to mean pass. This meaning was introduced in a proposal for Haskell by Simon Peyton Jones titled A new view of guards in April 1997 and was used in the implementation of the proposal. The feature provides the ability to use patterns in the guards of a pattern.
An example in extended Haskell:
clunky env var1 var2
  | Just val1 <- lookup env var1
  , Just val2 <- lookup env var2
  = val1 + val2
-- ...other equations for clunky...
This would read: "Clunky for an environment and two variables, "in case the lookups of the variables from the environment produce values", is the sum of the values. ..." As in list comprehensions, the guards are in series, and if any of them fails the branch is not taken.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nf(x) = \\left\\{ \\begin{matrix}\n 1 & \\mbox{if } x>0 \\\\\n 0 & \\mbox{otherwise}\n \\end{matrix}\n \\right.\n"
}
] | https://en.wikipedia.org/wiki?curid=1186249 |
11864519 | Approximate Bayesian computation | Computational method in Bayesian statistics
Approximate Bayesian computation (ABC) constitutes a class of computational methods rooted in Bayesian statistics that can be used to estimate the posterior distributions of model parameters.
In all model-based statistical inference, the likelihood function is of central importance, since it expresses the probability of the observed data under a particular statistical model, and thus quantifies the support data lend to particular values of parameters and to choices among different models. For simple models, an analytical formula for the likelihood function can typically be derived. However, for more complex models, an analytical formula might be elusive or the likelihood function might be computationally very costly to evaluate.
ABC methods bypass the evaluation of the likelihood function. In this way, ABC methods widen the realm of models for which statistical inference can be considered. ABC methods are mathematically well-founded, but they inevitably make assumptions and approximations whose impact needs to be carefully assessed. Furthermore, the wider application domain of ABC exacerbates the challenges of parameter estimation and model selection.
ABC has rapidly gained popularity in recent years, in particular for the analysis of complex problems arising in the biological sciences, e.g. in population genetics, ecology, epidemiology, systems biology, and in radio propagation.
History.
The first ABC-related ideas date back to the 1980s. Donald Rubin, when discussing the interpretation of Bayesian statements in 1984, described a hypothetical sampling mechanism that yields a sample from the posterior distribution. This scheme was more of a conceptual thought experiment to demonstrate what type of manipulations are done when inferring the posterior distributions of parameters. The description of the sampling mechanism coincides exactly with that of the ABC-rejection scheme, and this article can be considered to be the first to describe approximate Bayesian computation. However, a two-stage quincunx was constructed by Francis Galton in the late 1800s that can be seen as a physical implementation of an ABC-rejection scheme for a single unknown (parameter) and a single observation. Another prescient point was made by Rubin when he argued that in Bayesian inference, applied statisticians should not settle for analytically tractable models only, but instead consider computational methods that allow them to estimate the posterior distribution of interest. This way, a wider range of models can be considered. These arguments are particularly relevant in the context of ABC.
In 1984, Peter Diggle and Richard Gratton suggested using a systematic simulation scheme to approximate the likelihood function in situations where its analytic form is intractable. Their method was based on defining a grid in the parameter space and using it to approximate the likelihood by running several simulations for each grid point. The approximation was then improved by applying smoothing techniques to the outcomes of the simulations. While the idea of using simulation for hypothesis testing was not new, Diggle and Gratton seemingly introduced the first procedure using simulation to do statistical inference under a circumstance where the likelihood is intractable.
Although Diggle and Gratton's approach had opened a new frontier, their method was not yet exactly identical to what is now known as ABC, as it aimed at approximating the likelihood rather than the posterior distribution. An article by Simon Tavaré and co-authors was the first to propose an ABC algorithm for posterior inference. In their seminal work, inference about the genealogy of DNA sequence data was considered, and in particular the problem of inferring the posterior distribution of the time to the most recent common ancestor of the sampled individuals. Such inference is analytically intractable for many demographic models, but the authors presented ways of simulating coalescent trees under the putative models. A sample from the posterior of model parameters was obtained by accepting/rejecting proposals based on comparing the number of segregating sites in the synthetic and real data. This work was followed by an applied study on modeling the variation in the human Y chromosome by Jonathan K. Pritchard and co-authors using the ABC method. Finally, the term approximate Bayesian computation was established by Mark Beaumont and co-authors, extending further the ABC methodology and discussing the suitability of the ABC approach more specifically for problems in population genetics. Since then, ABC has spread to applications outside population genetics, such as systems biology, epidemiology, and phylogeography.
Approximate Bayesian computation can be understood as a kind of Bayesian version of indirect inference.
Several efficient Monte Carlo based approaches have been developed to perform sampling from the ABC posterior distribution for purposes of estimation and prediction problems. A popular choice is the SMC Samplers algorithm, adapted to the ABC context in the SMC-ABC method.
Method.
Motivation.
A common incarnation of Bayes’ theorem relates the conditional probability (or density) of a particular parameter value formula_0 given data formula_1 to the probability of formula_1 given formula_0 by the rule
formula_2,
where formula_3 denotes the posterior, formula_4 the likelihood, formula_5 the prior, and formula_6 the evidence (also referred to as the marginal likelihood or the prior predictive probability of the data). Note that the denominator formula_6 normalizes the total probability of the posterior density formula_3 to one, and can therefore be computed from that normalization requirement.
The prior represents beliefs or knowledge (such as physical constraints) about formula_0 before formula_1 is available. Since the prior narrows down uncertainty, the posterior estimates have less variance, but might be biased. For convenience, the prior is often specified by choosing a particular distribution among a set of well-known and tractable families of distributions, such that both the evaluation of prior probabilities and random generation of values of formula_0 are relatively straightforward. For certain kinds of models, it is more pragmatic to specify the prior formula_5 using a factorization of the joint distribution of all the elements of formula_0 in terms of a sequence of their conditional distributions. If one is only interested in the relative posterior plausibilities of different values of formula_0, the evidence formula_6 can be ignored, as it constitutes a normalising constant, which cancels for any ratio of posterior probabilities. It remains, however, necessary to evaluate the likelihood formula_4 and the prior formula_5. For numerous applications, it is computationally expensive, or even completely infeasible, to evaluate the likelihood, which motivates the use of ABC to circumvent this issue.
The ABC rejection algorithm.
All ABC-based methods approximate the likelihood function by simulations, the outcomes of which are compared with the observed data. More specifically, with the ABC rejection algorithm — the most basic form of ABC — a set of parameter points is first sampled from the prior distribution. Given a sampled parameter point formula_7, a data set formula_8 is then simulated under the statistical model formula_9 specified by formula_7. If the generated formula_8 is too different from the observed data formula_1, the sampled parameter value is discarded. In precise terms, formula_8 is accepted with tolerance formula_10 if:
formula_11,
where the distance measure formula_12 determines the level of discrepancy between formula_8 and formula_1 based on a given metric (e.g. Euclidean distance). A strictly positive tolerance is usually necessary, since the probability that the simulation outcome coincides exactly with the data (event formula_13) is negligible for all but trivial applications of ABC, which would in practice lead to rejection of nearly all sampled parameter points. The outcome of the ABC rejection algorithm is a sample of parameter values approximately distributed according to the desired posterior distribution, and, crucially, obtained without the need to explicitly evaluate the likelihood function.
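A minimal sketch of the rejection scheme just described (the Gaussian toy model, the prior bounds and the tolerance are invented purely for illustration):
import random
import statistics
def abc_rejection(observed, simulate, sample_prior, distance, tolerance, num_draws):
    # Keep exactly those prior draws whose simulated data fall within the tolerance.
    accepted = []
    for _ in range(num_draws):
        theta = sample_prior()
        if distance(simulate(theta), observed) <= tolerance:
            accepted.append(theta)
    return accepted
# Toy model: a single observation from Normal(theta, 1)
posterior_sample = abc_rejection(
    observed=2.3,
    simulate=lambda t: random.gauss(t, 1.0),          # draws D-hat given theta
    sample_prior=lambda: random.uniform(-5.0, 5.0),   # uniform prior on theta
    distance=lambda a, b: abs(a - b),                 # rho(D-hat, D)
    tolerance=0.2,                                    # epsilon
    num_draws=50000,
)
print(len(posterior_sample), statistics.mean(posterior_sample))   # centred near 2.3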
Summary statistics.
The probability of generating a data set formula_8 with a small distance to formula_1 typically decreases as the dimensionality of the data increases. This leads to a substantial decrease in the computational efficiency of the above basic ABC rejection algorithm. A common approach to lessen this problem is to replace formula_1 with a set of lower-dimensional summary statistics formula_14, which are selected to capture the relevant information in formula_1. The acceptance criterion in ABC rejection algorithm becomes:
formula_15.
If the summary statistics are sufficient with respect to the model parameters formula_0, the efficiency increase obtained in this way does not introduce any error. Indeed, by definition, sufficiency implies that all information in formula_1 about formula_0 is captured by formula_14.
As elaborated below, it is typically impossible, outside the exponential family of distributions, to identify a finite-dimensional set of sufficient statistics. Nevertheless, informative but possibly insufficient summary statistics are often used in applications where inference is performed with ABC methods.
Example.
An illustrative example is a bistable system that can be characterized by a hidden Markov model (HMM) subject to measurement noise. Such models are employed for many biological systems: They have, for example, been used in development, cell signaling, activation/deactivation, logical processing and non-equilibrium thermodynamics. For instance, the behavior of the Sonic hedgehog (Shh) transcription factor in "Drosophila melanogaster" can be modeled with an HMM. The (biological) dynamical model consists of two states: A and B. If the probability of a transition from one state to the other is defined as formula_0 in both directions, then the probability to remain in the same state at each time step is formula_16. The probability to measure the state correctly is formula_17 (and conversely, the probability of an incorrect measurement is formula_18).
Due to the conditional dependencies between states at different time points, calculation of the likelihood of time series data is somewhat tedious, which illustrates the motivation to use ABC. A computational issue for basic ABC is the large dimensionality of the data in an application like this. The dimensionality can be reduced using the summary statistic formula_19, which is the frequency of switches between the two states. The absolute difference is used as a distance measure formula_20 with tolerance formula_21. The posterior inference about the parameter formula_0 can be done following the five steps presented below.
Step 1: Assume that the observed data form the state sequence AAAABAABBAAAAAABAAAA, which is generated using formula_22 and formula_23. The associated summary statistic—the number of switches between the states in the experimental data—is formula_24.
Step 2: Assuming nothing is known about formula_0, a uniform prior in the interval formula_25 is employed. The parameter formula_17 is assumed to be known and fixed to the data-generating value formula_23, but it could in general also be estimated from the observations. A total of formula_26 parameter points are drawn from the prior, and the model is simulated for each of the parameter points formula_27, which results in formula_26 sequences of simulated data. In this example, formula_28. In practice, formula_26 would need to be much larger to obtain an appropriate approximation.
Step 3: The summary statistic is computed for each sequence of simulated data formula_29.
Step 4: The distance between the observed and simulated transition frequencies formula_30 is computed for all parameter points. Parameter points for which the distance is smaller than or equal to formula_31 are accepted as approximate samples from the posterior.
Step 5: The posterior distribution is approximated with the accepted parameter points. The posterior distribution should have a non-negligible probability for parameter values in a region around the true value of formula_0 in the system if the data are sufficiently informative. In this example, the posterior probability mass is evenly split between the values 0.08 and 0.43.
The posterior probabilities are obtained via ABC with large formula_26 by utilizing the summary statistic (with formula_32 and formula_33) and the full data sequence (with formula_32). These are compared with the true posterior, which can be computed exactly and efficiently using the forward algorithm for hidden Markov models. The summary statistic utilized in this example is not sufficient, as the deviation from the theoretical posterior is significant even under the stringent requirement of formula_32. A much longer observed data sequence would be needed to obtain a posterior concentrated around formula_34, the true value of formula_0.
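The toy example can be reproduced with a short simulation. In the sketch below the measurement accuracy is fixed at 0.8 and the tolerance at 2 purely for illustration (standing in for the fixed values referred to above), and all function names are invented:
import random
def simulate_sequence(theta, gamma, length=20):
    # Two-state system: switch with probability theta, measure correctly with probability gamma.
    state = random.choice("AB")
    observed = []
    for _ in range(length):
        if random.random() < theta:
            state = "B" if state == "A" else "A"
        measured = state if random.random() < gamma else ("B" if state == "A" else "A")
        observed.append(measured)
    return "".join(observed)
def num_switches(seq):
    # Summary statistic: how often consecutive measurements differ.
    return sum(1 for a, b in zip(seq, seq[1:]) if a != b)
observed = "AAAABAABBAAAAAABAAAA"          # Step 1: the observed data (6 switches)
target = num_switches(observed)
accepted = []
for _ in range(100000):
    theta = random.uniform(0.0, 1.0)       # Step 2: draw theta from the uniform prior
    simulated = simulate_sequence(theta, gamma=0.8)
    if abs(num_switches(simulated) - target) <= 2:   # Steps 3-4: summary, distance, tolerance
        accepted.append(theta)
print(len(accepted), sum(accepted) / len(accepted))  # Step 5: accepted draws approximate the posterior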
This example application of ABC uses simplifications for illustrative purposes. More realistic applications of ABC are available in a growing number of peer-reviewed articles.
Model comparison with ABC.
Outside of parameter estimation, the ABC framework can be used to compute the posterior probabilities of different candidate models. In such applications, one possibility is to use rejection sampling in a hierarchical manner. First, a model is sampled from the prior distribution for the models. Then, parameters are sampled from the prior distribution assigned to that model. Finally, a simulation is performed as in single-model ABC. The relative acceptance frequencies for the different models now approximate the posterior distribution for these models. Again, computational improvements for ABC in the space of models have been proposed, such as constructing a particle filter in the joint space of models and parameters.
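A sketch of this hierarchical rejection scheme (two invented toy models with a uniform model prior; everything here is illustrative):
import random
def abc_model_choice(observed_summary, models, distance, tolerance, num_draws):
    # models maps a model name to a pair (sample_prior, simulate_summary).
    counts = {name: 0 for name in models}
    for _ in range(num_draws):
        name = random.choice(list(models))            # sample a model from its (uniform) prior
        sample_prior, simulate = models[name]
        theta = sample_prior()                        # sample parameters from that model's prior
        if distance(simulate(theta), observed_summary) <= tolerance:
            counts[name] += 1                         # acceptance frequencies approximate model posteriors
    total = sum(counts.values()) or 1
    return {name: c / total for name, c in counts.items()}
# Two toy models for the mean of 50 unit-variance Gaussian observations
mean50 = lambda mu: sum(random.gauss(mu, 1.0) for _ in range(50)) / 50
models = {
    "M1": (lambda: random.uniform(0.0, 1.0), mean50),
    "M2": (lambda: random.uniform(1.0, 2.0), mean50),
}
print(abc_model_choice(0.7, models, lambda a, b: abs(a - b), 0.05, 50000))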
Once the posterior probabilities of the models have been estimated, one can make full use of the techniques of Bayesian model comparison. For instance, to compare the relative plausibilities of two models formula_35 and formula_36, one can compute their posterior ratio, which is related to the Bayes factor formula_37:
formula_38.
If the model priors are equal—that is, formula_39—the Bayes factor equals the posterior ratio.
In practice, as discussed below, these measures can be highly sensitive to the choice of parameter prior distributions and summary statistics, and thus conclusions of model comparison should be drawn with caution.
Pitfalls and remedies.
As for all statistical methods, a number of assumptions and approximations are inherently required for the application of ABC-based methods to real modeling problems. For example, setting the tolerance parameter formula_40 to zero ensures an exact result, but typically makes computations prohibitively expensive. Thus, values of formula_31 larger than zero are used in practice, which introduces a bias. Likewise, sufficient statistics are typically not available and instead, other summary statistics are used, which introduces an additional bias due to the loss of information. Additional sources of bias (for example, in the context of model selection) may be more subtle.
At the same time, some of the criticisms that have been directed at the ABC methods, in particular within the field of phylogeography, are not specific to ABC and apply to all Bayesian methods or even all statistical methods (e.g., the choice of prior distribution and parameter ranges). However, because of the ability of ABC-methods to handle much more complex models, some of these general pitfalls are of particular relevance in the context of ABC analyses.
This section discusses these potential risks and reviews possible ways to address them.
Approximation of the posterior.
A non-negligible formula_31 comes with the price that one samples from formula_41 instead of the true posterior formula_3. With a sufficiently small tolerance, and a sensible distance measure, the resulting distribution formula_41 should often approximate the actual target distribution formula_3 reasonably well. On the other hand, a tolerance that is large enough that every point in the parameter space becomes accepted will yield a replica of the prior distribution. There are empirical studies of the difference between formula_41 and formula_3 as a function of formula_31, and theoretical results for an upper formula_31-dependent bound for the error in parameter estimates. The accuracy of the posterior (defined as the expected quadratic loss) delivered by ABC as a function of formula_31 has also been investigated. However, the convergence of the distributions when formula_31 approaches zero, and how it depends on the distance measure used, is an important topic that has yet to be investigated in greater detail. In particular, it remains difficult to disentangle errors introduced by this approximation from errors due to model mis-specification.
As an attempt to correct some of the error due to a non-zero formula_31, the use of local linear weighted regression with ABC to reduce the variance of the posterior estimates has been suggested. The method assigns weights to the parameters according to how well simulated summaries adhere to the observed ones and performs linear regression between the summaries and the weighted parameters in the vicinity of observed summaries. The obtained regression coefficients are used to correct sampled parameters in the direction of observed summaries. An improvement was suggested in the form of nonlinear regression using a feed-forward neural network model. However, it has been shown that the posterior distributions obtained with these approaches are not always consistent with the prior distribution, which led to a reformulation of the regression adjustment that respects the prior distribution.
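A simplified sketch of such a regression adjustment for one parameter and one summary statistic (illustrative only; it uses an Epanechnikov-style kernel for the weights and is not a faithful reimplementation of the published method):
import numpy as np
def regression_adjust(thetas, summaries, observed_summary, bandwidth):
    # Weight accepted draws by how close their summaries are to the observed one,
    # fit a weighted linear regression of theta on the summary, and shift each draw
    # to the value predicted at the observed summary.
    thetas = np.asarray(thetas, dtype=float)
    s = np.asarray(summaries, dtype=float) - observed_summary
    d = np.abs(s)
    w = np.where(d < bandwidth, 1.0 - (d / bandwidth) ** 2, 0.0)   # Epanechnikov-style weights
    X = np.column_stack([np.ones_like(s), s])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * thetas))
    adjusted = thetas - beta[1] * s                                # local linear correction
    return adjusted[w > 0], w[w > 0]
rng = np.random.default_rng(0)
theta_acc = rng.uniform(1.5, 2.5, size=500)                 # hypothetical accepted draws
summ_acc = theta_acc + rng.normal(0.0, 0.2, size=500)       # their simulated summaries
adj, weights = regression_adjust(theta_acc, summ_acc, observed_summary=2.0, bandwidth=0.5)
print(np.average(adj, weights=weights))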
Finally, statistical inference using ABC with a non-zero tolerance formula_31 is not inherently flawed: under the assumption of measurement errors, the optimal formula_31 can in fact be shown to be not zero. Indeed, the bias caused by a non-zero tolerance can be characterized and compensated by introducing a specific form of noise to the summary statistics. Asymptotic consistency for such "noisy ABC" has been established, together with formulas for the asymptotic variance of the parameter estimates for a fixed tolerance.
Choice and sufficiency of summary statistics.
Summary statistics may be used to increase the acceptance rate of ABC for high-dimensional data. Low-dimensional sufficient statistics are optimal for this purpose, as they capture all relevant information present in the data in the simplest possible form. However, low-dimensional sufficient statistics are typically unattainable for statistical models where ABC-based inference is most relevant, and consequently, some heuristic is usually necessary to identify useful low-dimensional summary statistics. The use of a set of poorly chosen summary statistics will often lead to inflated credible intervals due to the implied loss of information, which can also bias the discrimination between models. A review of methods for choosing summary statistics is available, which may provide valuable guidance in practice.
One approach to capture most of the information present in data would be to use many statistics, but the accuracy and stability of ABC appears to decrease rapidly with an increasing number of summary statistics. Instead, a better strategy is to focus on the relevant statistics only—relevancy depending on the whole inference problem, on the model used, and on the data at hand.
An algorithm has been proposed for identifying a representative subset of summary statistics, by iteratively assessing whether an additional statistic introduces a meaningful modification of the posterior. One of the challenges here is that a large ABC approximation error may heavily influence the conclusions about the usefulness of a statistic at any stage of the procedure. Another method decomposes into two main steps. First, a reference approximation of the posterior is constructed by minimizing the entropy. Sets of candidate summaries are then evaluated by comparing the ABC-approximated posteriors with the reference posterior.
With both of these strategies, a subset of statistics is selected from a large set of candidate statistics. Instead, the partial least squares regression approach uses information from all the candidate statistics, each being weighted appropriately. Recently, a method for constructing summaries in a semi-automatic manner has attained a considerable interest. This method is based on the observation that the optimal choice of summary statistics, when minimizing the quadratic loss of the parameter point estimates, can be obtained through the posterior mean of the parameters, which is approximated by performing a linear regression based on the simulated data. Summary statistics for model selection have been obtained using multinomial logistic regression on simulated data, treating competing models as the label to predict.
Methods for the identification of summary statistics that could also simultaneously assess the influence on the approximation of the posterior would be of substantial value. This is because the choice of summary statistics and the choice of tolerance constitute two sources of error in the resulting posterior distribution. These errors may corrupt the ranking of models and may also lead to incorrect model predictions.
Bayes factor with ABC and summary statistics.
It has been shown that the combination of insufficient summary statistics and ABC for model selection can be problematic. Indeed, if one lets the Bayes factor based on the summary statistic formula_14 be denoted by formula_42, the relation between formula_37 and formula_42 takes the form:
formula_43.
Thus, a summary statistic formula_14 is sufficient for comparing two models formula_35 and formula_36 if and only if:
formula_44,
which results in that formula_45. It is also clear from the equation above that there might be a huge difference between formula_37 and formula_42 if the condition is not satisfied, as can be demonstrated by toy examples. Crucially, it was shown that sufficiency for formula_35 or formula_36 alone, or for both models, does not guarantee sufficiency for ranking the models. However, it was also shown that any sufficient summary statistic for a model formula_9 in which both formula_35 and formula_36 are nested is valid for ranking the nested models.
The computation of Bayes factors on formula_14 may therefore be misleading for model selection purposes, unless the ratio between the Bayes factors on formula_1 and formula_14 would be available, or at least could be approximated reasonably well. Alternatively, necessary and sufficient conditions on summary statistics for a consistent Bayesian model choice have recently been derived, which can provide useful guidance.
However, this issue is only relevant for model selection when the dimension of the data has been reduced. ABC-based inference, in which the actual data sets are directly compared—as is the case for some systems biology applications (e.g., see )—circumvents this problem.
Indispensable quality controls.
As the above discussion makes clear, any ABC analysis requires choices and trade-offs that can have a considerable impact on its outcomes. Specifically, the choice of competing models/hypotheses, the number of simulations, the choice of summary statistics, or the acceptance threshold cannot currently be based on general rules, but the effect of these choices should be evaluated and tested in each study.
A number of heuristic approaches to the quality control of ABC have been proposed, such as the quantification of the fraction of parameter variance explained by the summary statistics. A common class of methods aims at assessing whether or not the inference yields valid results, regardless of the actually observed data. For instance, given a set of parameter values, which are typically drawn from the prior or the posterior distributions for a model, one can generate a large number of artificial datasets. In this way, the quality and robustness of ABC inference can be assessed in a controlled setting, by gauging how well the chosen ABC inference method recovers the true parameter values, and also models if multiple structurally different models are considered simultaneously.
Another class of methods assesses whether the inference was successful in light of the given observed data, for example, by comparing the posterior predictive distribution of summary statistics to the summary statistics observed. Beyond that, cross-validation techniques and predictive checks represent promising future strategies to evaluate the stability and out-of-sample predictive validity of ABC inferences. This is particularly important when modeling large data sets, because then the posterior support of a particular model can appear overwhelmingly conclusive, even if all proposed models in fact are poor representations of the stochastic system underlying the observation data. Out-of-sample predictive checks can reveal potential systematic biases within a model and provide clues on to how to improve its structure or parametrization.
Fundamentally novel approaches for model choice that incorporate quality control as an integral step in the process have recently been proposed. ABC allows, by construction, estimation of the discrepancies between the observed data and the model predictions, with respect to a comprehensive set of statistics. These statistics are not necessarily the same as those used in the acceptance criterion. The resulting discrepancy distributions have been used for selecting models that are in agreement with many aspects of the data simultaneously, and model inconsistency is detected from conflicting and co-dependent summaries. Another quality-control-based method for model selection employs ABC to approximate the effective number of model parameters and the deviance of the posterior predictive distributions of summaries and parameters. The deviance information criterion is then used as measure of model fit. It has also been shown that the models preferred based on this criterion can conflict with those supported by Bayes factors. For this reason, it is useful to combine different methods for model selection to obtain correct conclusions.
Quality controls are achievable and indeed performed in many ABC-based works, but for certain problems, the assessment of the impact of the method-related parameters can be challenging. However, the rapidly increasing use of ABC can be expected to provide a more thorough understanding of the limitations and applicability of the method.
General risks in statistical inference exacerbated in ABC.
This section reviews risks that are strictly speaking not specific to ABC, but also relevant for other statistical methods as well. However, the flexibility offered by ABC to analyze very complex models makes them highly relevant to discuss here.
Prior distribution and parameter ranges.
The specification of the range and the prior distribution of parameters strongly benefits from previous knowledge about the properties of the system. One criticism has been that in some studies the "parameter ranges and distributions are only guessed based upon the subjective opinion of the investigators", which is connected to classical objections to Bayesian approaches.
With any computational method, it is typically necessary to constrain the investigated parameter ranges. The parameter ranges should if possible be defined based on known properties of the studied system, but may for practical applications necessitate an educated guess. However, theoretical results regarding objective priors are available, which may for example be based on the principle of indifference or the principle of maximum entropy. On the other hand, automated or semi-automated methods for choosing a prior distribution often yield improper densities. As most ABC procedures require generating samples from the prior, improper priors are not directly applicable to ABC.
One should also keep the purpose of the analysis in mind when choosing the prior distribution. In principle, uninformative and flat priors, that exaggerate our subjective ignorance about the parameters, may still yield reasonable parameter estimates. However, Bayes factors are highly sensitive to the prior distribution of parameters. Conclusions on model choice based on Bayes factor can be misleading unless the sensitivity of conclusions to the choice of priors is carefully considered.
Small number of models.
Model-based methods have been criticized for not exhaustively covering the hypothesis space. Indeed, model-based studies often revolve around a small number of models, and due to the high computational cost to evaluate a single model in some instances, it may then be difficult to cover a large part of the hypothesis space.
An upper limit to the number of considered candidate models is typically set by the substantial effort required to define the models and to choose between many alternative options. There is no commonly accepted ABC-specific procedure for model construction, so experience and prior knowledge are used instead. Although more robust procedures for "a priori" model choice and formulation would be beneficial, there is no one-size-fits-all strategy for model development in statistics: sensible characterization of complex systems will always necessitate a great deal of detective work and use of expert knowledge from the problem domain.
Some opponents of ABC contend that since only few models—subjectively chosen and probably all wrong—can be realistically considered, ABC analyses provide only limited insight. However, there is an important distinction between identifying a plausible null hypothesis and assessing the relative fit of alternative hypotheses. Since useful null hypotheses, that potentially hold true, can extremely seldom be put forward in the context of complex models, predictive ability of statistical models as explanations of complex phenomena is far more important than the test of a statistical null hypothesis in this context. It is also common to average over the investigated models, weighted based on their relative plausibility, to infer model features (e.g., parameter values) and to make predictions.
Large datasets.
Large data sets may constitute a computational bottleneck for model-based methods. It was, for example, pointed out that in some ABC-based analyses, part of the data have to be omitted. A number of authors have argued that large data sets are not a practical limitation, although the severity of this issue depends strongly on the characteristics of the models. Several aspects of a modeling problem can contribute to the computational complexity, such as the sample size, number of observed variables or features, time or spatial resolution, etc. However, with increasing computing power, this issue will potentially be less important.
Instead of sampling parameters for each simulation from the prior, it has been proposed alternatively to combine the Metropolis-Hastings algorithm with ABC, which was reported to result in a higher acceptance rate than for plain ABC. Naturally, such an approach inherits the general burdens of MCMC methods, such as the difficulty to assess convergence, correlation among the samples from the posterior, and relatively poor parallelizability.
Likewise, the ideas of sequential Monte Carlo (SMC) and population Monte Carlo (PMC) methods have been adapted to the ABC setting. The general idea is to iteratively approach the posterior from the prior through a sequence of target distributions. An advantage of such methods, compared to ABC-MCMC, is that the samples from the resulting posterior are independent. In addition, with sequential methods the tolerance levels need not be specified prior to the analysis, but are adjusted adaptively.
It is relatively straightforward to parallelize a number of steps in ABC algorithms based on rejection sampling and sequential Monte Carlo methods. It has also been demonstrated that parallel algorithms may yield significant speedups for MCMC-based inference in phylogenetics, which may be a tractable approach also for ABC-based methods. Yet an adequate model for a complex system is very likely to require intensive computation irrespectively of the chosen method of inference, and it is up to the user to select a method that is suitable for the particular application in question.
Curse of dimensionality.
High-dimensional data sets and high-dimensional parameter spaces can require an extremely large number of parameter points to be simulated in ABC-based studies to obtain a reasonable level of accuracy for the posterior inferences. In such situations, the computational cost is severely increased and may in the worst case render the computational analysis intractable. These are examples of well-known phenomena, which are usually referred to with the umbrella term curse of dimensionality.
To assess how severely the dimensionality of a data set affects the analysis within the context of ABC, analytical formulas have been derived for the error of the ABC estimators as functions of the dimension of the summary statistics. In addition, Blum and François have investigated how the dimension of the summary statistics is related to the mean squared error for different correction adjustments to the error of ABC estimators. It was also argued that dimension reduction techniques are useful to avoid the curse-of-dimensionality, due to a potentially lower-dimensional underlying structure of summary statistics. Motivated by minimizing the quadratic loss of ABC estimators, Fearnhead and Prangle have proposed a scheme to project (possibly high-dimensional) data into estimates of the parameter posterior means; these means, now having the same dimension as the parameters, are then used as summary statistics for ABC.
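The following sketch illustrates the regression idea behind such projection-based summaries under stated assumptions: a toy two-parameter Gaussian simulator, a handful of hand-picked raw statistics, and plain linear least squares standing in for the regression step. It is meant only to show how simulated (parameter, data) pairs can be turned into low-dimensional summaries, one per parameter.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, n=100):
    """Toy simulator: normal data whose mean and log-standard-deviation are the parameters."""
    mu, log_sd = theta
    return rng.normal(mu, np.exp(log_sd), size=n)

def raw_features(x):
    """A higher-dimensional set of 'raw' statistics of a data set."""
    return np.concatenate([[x.mean(), x.std(), np.median(x)], np.percentile(x, [10, 90])])

# Pilot stage: simulate (theta, data) pairs from the prior and fit a linear
# regression from raw features to parameters.  The fitted mapping is then
# used as a low-dimensional summary statistic (one component per parameter).
n_pilot = 5000
thetas = np.column_stack([rng.normal(0, 2, n_pilot), rng.normal(0, 1, n_pilot)])
X = np.array([raw_features(simulate(t)) for t in thetas])
X1 = np.column_stack([np.ones(n_pilot), X])              # add an intercept column
coef, *_ = np.linalg.lstsq(X1, thetas, rcond=None)       # least-squares fit

def summary(x):
    """Projected summary: regression-based estimate of the parameter posterior means."""
    return np.concatenate([[1.0], raw_features(x)]) @ coef

observed = rng.normal(1.5, 0.8, size=100)
print(summary(observed))   # two numbers, one per parameter, usable as ABC summaries
```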
ABC can be used for inference in high-dimensional parameter spaces, although one should account for the possibility of overfitting (e.g., via dedicated model selection methods). However, the probability of accepting the simulated values for the parameters under a given tolerance with the ABC rejection algorithm typically decreases exponentially with increasing dimensionality of the parameter space (due to the global acceptance criterion). Although no computational method (based on ABC or not) seems able to break the curse of dimensionality, methods have recently been developed to handle high-dimensional parameter spaces under certain assumptions (e.g., based on polynomial approximation on sparse grids, which could potentially greatly reduce the simulation times for ABC). However, the applicability of such methods is problem dependent, and the difficulty of exploring parameter spaces should in general not be underestimated. For example, the introduction of deterministic global parameter estimation led to reports that the global optima obtained in several previous studies of low-dimensional problems were incorrect. For certain problems, it might therefore be difficult to know whether the model is incorrect or, as discussed above, whether the explored region of the parameter space is inappropriate. More pragmatic approaches are to cut the scope of the problem through model reduction, discretisation of variables, and the use of canonical models such as noisy models, which exploit information on the conditional independence between variables.
Software.
A number of software packages are currently available for application of ABC to particular classes of statistical models.
The suitability of individual software packages depends on the specific application at hand, the computer system environment, and the algorithms required.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\theta"
},
{
"math_id": 1,
"text": "D"
},
{
"math_id": 2,
"text": "p(\\theta|D) = \\frac{p(D|\\theta)p(\\theta)}{p(D)}"
},
{
"math_id": 3,
"text": "p(\\theta|D)"
},
{
"math_id": 4,
"text": "p(D|\\theta)"
},
{
"math_id": 5,
"text": "p(\\theta)"
},
{
"math_id": 6,
"text": "p(D)"
},
{
"math_id": 7,
"text": "\\hat{\\theta}"
},
{
"math_id": 8,
"text": "\\hat{D}"
},
{
"math_id": 9,
"text": "M"
},
{
"math_id": 10,
"text": "\\epsilon \\ge 0"
},
{
"math_id": 11,
"text": "\\rho (\\hat{D},D)\\le\\epsilon"
},
{
"math_id": 12,
"text": "\\rho(\\hat{D},D)"
},
{
"math_id": 13,
"text": "\\hat{D}=D"
},
{
"math_id": 14,
"text": "S(D)"
},
{
"math_id": 15,
"text": "\\rho(S(\\hat{D}),S(D))\\le\\epsilon"
},
{
"math_id": 16,
"text": "{1-\\theta}"
},
{
"math_id": 17,
"text": "\\gamma"
},
{
"math_id": 18,
"text": "{1-\\gamma}"
},
{
"math_id": 19,
"text": "S"
},
{
"math_id": 20,
"text": "\\rho(\\cdot,\\cdot)"
},
{
"math_id": 21,
"text": "\\epsilon=2"
},
{
"math_id": 22,
"text": "\\theta=0.25"
},
{
"math_id": 23,
"text": "\\gamma=0.8"
},
{
"math_id": 24,
"text": "\\omega_E=6"
},
{
"math_id": 25,
"text": "[0,1]"
},
{
"math_id": 26,
"text": "n"
},
{
"math_id": 27,
"text": "\\theta_i: \\text{ } i = 1,\\ldots, n"
},
{
"math_id": 28,
"text": "n=5"
},
{
"math_id": 29,
"text": "\\omega_{S,i}: \\text{ } i = 1,\\ldots,n"
},
{
"math_id": 30,
"text": "\\rho(\\omega_{S,i}, \\omega_E) = |\\omega_{S,i} - \\omega_{E}|"
},
{
"math_id": 31,
"text": "\\epsilon"
},
{
"math_id": 32,
"text": "\\epsilon = 0 "
},
{
"math_id": 33,
"text": "\\epsilon = 2 "
},
{
"math_id": 34,
"text": "\\theta = 0.25"
},
{
"math_id": 35,
"text": "M_1"
},
{
"math_id": 36,
"text": "M_2"
},
{
"math_id": 37,
"text": "B_{1,2}"
},
{
"math_id": 38,
"text": "\\frac{p(M_1 |D)}{p(M_2|D)}=\\frac{p(D|M_1)}{p(D|M_2)}\\frac{p(M_1)}{p(M_2)} = B_{1,2}\\frac{p(M_1)}{p(M_2)}"
},
{
"math_id": 39,
"text": "p(M_1)=p(M_2)"
},
{
"math_id": 40,
"text": "\\epsilon "
},
{
"math_id": 41,
"text": "p(\\theta|\\rho(\\hat{D},D)\\le\\epsilon)"
},
{
"math_id": 42,
"text": "B_{1,2}^s"
},
{
"math_id": 43,
"text": "B_{1,2}=\\frac{p(D|M_1)}{p(D|M_2)}=\\frac{p(D|S(D),M_1)}{p(D|S(D),M_2)} \\frac{p(S(D)|M_1)}{p(S(D)|M_2)}=\\frac{p(D|S(D),M_1)}{p(D|S(D),M_2)} B_{1,2}^s"
},
{
"math_id": 44,
"text": "p(D|S(D),M_1)=p(D|S(D),M_2)"
},
{
"math_id": 45,
"text": "B_{1,2}=B_{1,2}^s"
}
] | https://en.wikipedia.org/wiki?curid=11864519 |
11864935 | Haar-like feature | Haar-like features are digital image features used in object recognition. They owe their name to their intuitive similarity with Haar wavelets and were used in the first real-time face detector.
Historically, working with only image intensities (i.e., the RGB pixel values at each and every pixel of an image) made the task of feature calculation computationally expensive. A publication by Papageorgiou et al. discussed working with an alternate feature set based on Haar wavelets instead of the usual image intensities. Paul Viola and Michael Jones adapted the idea of using Haar wavelets and developed the so-called Haar-like features. A Haar-like feature considers adjacent rectangular regions at a specific location in a detection window, sums up the pixel intensities in each region and calculates the difference between these sums. This difference is then used to categorize subsections of an image.
For example, with a human face, it is a common observation that among all faces the region of the eyes is darker than the region of the cheeks. Therefore, a common Haar feature for face detection is a set of two adjacent rectangles that lie above the eye and the cheek region. The position of these rectangles is defined relative to a detection window that acts like a bounding box to the target object (the face in this case).
In the detection phase of the Viola–Jones object detection framework, a window of the target size is moved over the input image, and for each subsection of the image the Haar-like feature is calculated. This difference is then compared to a learned threshold that separates non-objects from objects. Because such a Haar-like feature is only a weak learner or classifier (its detection quality is slightly better than random guessing) a large number of Haar-like features are necessary to describe an object with sufficient accuracy. In the Viola–Jones object detection framework, the Haar-like features are therefore organized in something called a "classifier cascade" to form a strong learner or classifier.
The key advantage of a Haar-like feature over most other features is its calculation speed. Due to the use of "integral images", a Haar-like feature of any size can be calculated in constant time (approximately 60 microprocessor instructions for a 2-rectangle feature).
Rectangular Haar-like features.
A simple rectangular Haar-like feature can be defined as the difference of the sums of pixels within two adjacent rectangular areas, which can be at any position and scale within the original image. This feature type is called a "2-rectangle feature". Viola and Jones also defined 3-rectangle features and 4-rectangle features. The values indicate certain characteristics of a particular area of the image. Each feature type can indicate the existence (or absence) of certain characteristics in the image, such as edges or changes in texture. For example, a 2-rectangle feature can indicate where the border lies between a dark region and a light region.
Fast computation of Haar-like features.
One of the contributions of Viola and Jones was to use summed-area tables, which they called "integral images". Integral images can be defined as two-dimensional lookup tables in the form of a matrix with the same size as the original image. Each element of the integral image contains the sum of all pixels located in the upper-left region of the original image (relative to the element's position). This allows the sum over any rectangular area of the image, at any position or scale, to be computed using only four lookups:
formula_0
where points formula_1 belong to the integral image formula_2, as shown in the figure.
Each Haar-like feature may need more than four lookups, depending on how it was defined. Viola and Jones's 2-rectangle features need six lookups, 3-rectangle features need eight lookups, and 4-rectangle features need nine lookups.
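A minimal sketch of the integral-image computation and the four-lookup rectangle sum is given below; the 24×24 random image, the feature position, and the sign convention of the 2-rectangle feature are illustrative assumptions.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: each entry holds the sum of all pixels above and to
    the left of it (inclusive), padded with a zero row/column so rectangle
    sums need no boundary checks."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, r, c, h, w):
    """Sum of the h-by-w rectangle with top-left corner (r, c), using the
    four-lookup identity sum = I(C) + I(A) - I(B) - I(D)."""
    return ii[r + h, c + w] + ii[r, c] - ii[r, c + w] - ii[r + h, c]

def two_rect_feature(ii, r, c, h, w):
    """Horizontal 2-rectangle Haar-like feature: sum of the top half minus
    sum of the bottom half (the sign convention is arbitrary)."""
    top = rect_sum(ii, r, c, h // 2, w)
    bottom = rect_sum(ii, r + h // 2, c, h // 2, w)
    return top - bottom

img = np.random.default_rng(0).integers(0, 256, size=(24, 24))
ii = integral_image(img)
print(two_rect_feature(ii, 4, 4, 8, 12))
```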
Tilted Haar-like features.
Lienhart and Maydt introduced the concept of a tilted (45°) Haar-like feature. This was used to increase the dimensionality of the set of features in an attempt to improve the detection of objects in images. This was successful, as some of these features are able to describe the object in a better way. For example, a 2-rectangle tilted Haar-like feature can indicate the existence of an edge at 45°.
Messom and Barczak extended the idea to a generic rotated Haar-like feature. Although the idea is sound mathematically, practical problems prevent the use of Haar-like features at any angle. In order to be fast, detection algorithms use low resolution images introducing rounding errors. For this reason rotated Haar-like features are not commonly used. | [
{
"math_id": 0,
"text": " \\text{sum} = I(C) + I(A) - I(B) - I(D). \\, "
},
{
"math_id": 1,
"text": "A, B, C, D"
},
{
"math_id": 2,
"text": "I"
}
] | https://en.wikipedia.org/wiki?curid=11864935 |
11866 | Global Positioning System | American satellite-based radio navigation service
The Global Positioning System (GPS), originally Navstar GPS, is a satellite-based radio navigation system owned by the United States government and operated by the United States Space Force. It is one of the global navigation satellite systems (GNSS) that provide geolocation and time information to a GPS receiver anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites. It does not require the user to transmit any data, and operates independently of any telephone or Internet reception, though these technologies can enhance the usefulness of the GPS positioning information. It provides critical positioning capabilities to military, civil, and commercial users around the world. Although the United States government created, controls and maintains the GPS system, it is freely accessible to anyone with a GPS receiver.
Overview.
The GPS project was started by the U.S. Department of Defense in 1973. The first prototype spacecraft was launched in 1978 and the full constellation of 24 satellites became operational in 1993.
After Korean Air Lines Flight 007 was shot down when it mistakenly entered Soviet airspace, President Ronald Reagan announced that the GPS system would be made available for civilian use as of September 16, 1983; however, initially this civilian use was limited to an average accuracy of roughly 100 meters by use of Selective Availability (SA), a deliberate error introduced into the GPS data (which military receivers could correct for).
As civilian GPS usage grew, there was increasing pressure to remove this error. The SA system was temporarily disabled during the Gulf War, as a shortage of military GPS units meant that many US soldiers were using civilian GPS units sent from home. In the 1990s, Differential GPS systems from the US Coast Guard, Federal Aviation Administration, and similar agencies in other countries began to broadcast local GPS corrections, reducing the effect of both SA degradation and atmospheric effects (that military receivers also corrected for). The US military had also developed methods to perform local GPS jamming, meaning that the ability to globally degrade the system was no longer necessary. As a result, President Bill Clinton ordered that Selective Availability be disabled on May 1, 2000; and, in 2007, the US government announced that the next generation of GPS satellites would not include the feature at all.
Advances in technology and new demands on the existing system have now led to efforts to modernize the GPS and implement the next generation of GPS Block III satellites and the Next Generation Operational Control System (OCX), which was authorized by the U.S. Congress in 2000. When Selective Availability was discontinued, GPS was accurate to about 5 meters. GPS receivers that use the L5 band can have much higher accuracy, of about 30 centimeters, while those for high-end applications such as engineering and land surveying are accurate to within a few centimeters and can even provide sub-millimeter accuracy with long-term measurements. Consumer devices such as smartphones can be accurate to about 5 meters or better when used with assistive services like Wi-Fi positioning.
As of July 2023, 18 GPS satellites broadcast L5 signals, which are considered pre-operational prior to being broadcast by a full complement of 24 satellites in 2027.
History.
The GPS project was launched in the United States in 1973 to overcome the limitations of previous navigation systems, combining ideas from several predecessors, including classified engineering design studies from the 1960s. The U.S. Department of Defense developed the system, which originally used 24 satellites, for use by the United States military; it became fully operational in 1993. Civilian use was allowed from the 1980s. Roger L. Easton of the Naval Research Laboratory, Ivan A. Getting of The Aerospace Corporation, and Bradford Parkinson of the Applied Physics Laboratory are credited with inventing it. The work of Gladys West on the creation of the mathematical geodetic Earth model is credited as instrumental in the development of computational techniques for detecting satellite positions with the precision needed for GPS.
The design of GPS is based partly on similar ground-based radio-navigation systems, such as LORAN and the Decca Navigator, developed in the early 1940s.
In 1955, Friedwardt Winterberg proposed a test of general relativity—detecting time slowing in a strong gravitational field using accurate atomic clocks placed in orbit inside artificial satellites. Special and general relativity predict that the clocks on GPS satellites, as observed by those on Earth, run 38 microseconds faster per day than those on the Earth. The design of GPS corrects for this difference, because without doing so, GPS-calculated positions would accumulate errors of roughly 10 kilometers per day.
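A back-of-the-envelope check of the 38-microsecond figure can be made from standard constants, assuming a circular orbit at the nominal GPS orbital radius and ignoring eccentricity and higher-order terms:

```python
import math

GM = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
c = 299_792_458.0        # speed of light, m/s
R_earth = 6.371e6        # mean Earth radius, m
r_orbit = 2.656e7        # nominal GPS orbital radius, m
day = 86_400.0           # seconds per day

# General-relativistic (gravitational) blueshift: satellite clocks run fast.
grav = GM / c**2 * (1.0 / R_earth - 1.0 / r_orbit)

# Special-relativistic time dilation from orbital speed: clocks run slow.
v = math.sqrt(GM / r_orbit)          # circular-orbit speed, roughly 3.9 km/s
vel = -v**2 / (2.0 * c**2)

print(f"gravitational: {grav * day * 1e6:+.1f} us/day")   # about +45.7
print(f"velocity:      {vel * day * 1e6:+.1f} us/day")    # about -7.2
print(f"net:           {(grav + vel) * day * 1e6:+.1f} us/day")  # about +38
```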
Predecessors.
When the Soviet Union launched its first artificial satellite (Sputnik 1) in 1957, two American physicists, William Guier and George Weiffenbach, at Johns Hopkins University's Applied Physics Laboratory (APL) decided to monitor its radio transmissions. Within hours they realized that, because of the Doppler effect, they could pinpoint where the satellite was along its orbit. The Director of the APL gave them access to their UNIVAC to do the heavy calculations required.
Early the next year, Frank McClure, the deputy director of the APL, asked Guier and Weiffenbach to investigate the inverse problem: pinpointing the user's location, given the satellite's. (At the time, the Navy was developing the submarine-launched Polaris missile, which required them to know the submarine's location.) This led them and APL to develop the TRANSIT system. In 1959, ARPA (renamed DARPA in 1972) also played a role in TRANSIT.
TRANSIT was first successfully tested in 1960. It used a constellation of five satellites and could provide a navigational fix approximately once per hour.
In 1967, the U.S. Navy developed the Timation satellite, which proved the feasibility of placing accurate clocks in space, a technology required for GPS.
In the 1970s, the ground-based OMEGA navigation system, based on phase comparison of signal transmission from pairs of stations, became the first worldwide radio navigation system. Limitations of these systems drove the need for a more universal navigation solution with greater accuracy.
Although there were wide needs for accurate navigation in military and civilian sectors, almost none of those was seen as justification for the billions of dollars it would cost in research, development, deployment, and operation of a constellation of navigation satellites. During the Cold War arms race, the nuclear threat to the existence of the United States was the one need that did justify this cost in the view of the United States Congress. This deterrent effect is why GPS was funded. It is also the reason for the ultra-secrecy at that time. The nuclear triad consisted of the United States Navy's submarine-launched ballistic missiles (SLBMs) along with United States Air Force (USAF) strategic bombers and intercontinental ballistic missiles (ICBMs). Considered vital to the nuclear deterrence posture, accurate determination of the SLBM launch position was a force multiplier.
Precise navigation would enable United States ballistic missile submarines to get an accurate fix of their positions before they launched their SLBMs. The USAF, with two thirds of the nuclear triad, also had requirements for a more accurate and reliable navigation system. The U.S. Navy and U.S. Air Force were developing their own technologies in parallel to solve what was essentially the same problem.
To increase the survivability of ICBMs, there was a proposal to use mobile launch platforms (comparable to the Soviet SS-24 and SS-25) and so the need to fix the launch position had similarity to the SLBM situation.
In 1960, the Air Force proposed a radio-navigation system called MOSAIC (MObile System for Accurate ICBM Control) that was essentially a 3-D LORAN. A follow-on study, Project 57, was performed in 1963 and it was "in this study that the GPS concept was born". That same year, the concept was pursued as Project 621B, which had "many of the attributes that you now see in GPS" and promised increased accuracy for Air Force bombers as well as ICBMs.
Updates from the Navy TRANSIT system were too slow for the high speeds of Air Force operation. The Naval Research Laboratory (NRL) continued making advances with their Timation (Time Navigation) satellites, first launched in 1967, second launched in 1969, with the third in 1974 carrying the first atomic clock into orbit and the fourth launched in 1977.
Another important predecessor to GPS came from a different branch of the United States military. In 1964, the United States Army orbited its first Sequential Collation of Range (SECOR) satellite used for geodetic surveying. The SECOR system included three ground-based transmitters at known locations that would send signals to the satellite transponder in orbit. A fourth ground-based station, at an undetermined position, could then use those signals to fix its location precisely. The last SECOR satellite was launched in 1969.
Development.
With these parallel developments in the 1960s, it was realized that a superior system could be developed by synthesizing the best technologies from 621B, Transit, Timation, and SECOR in a multi-service program. Satellite orbital position errors, induced by variations in the gravity field and radar refraction among others, had to be resolved. A team led by Harold L Jury of Pan Am Aerospace Division in Florida from 1970 to 1973, used real-time data assimilation and recursive estimation to do so, reducing systematic and residual errors to a manageable level to permit accurate navigation.
During Labor Day weekend in 1973, a meeting of about twelve military officers at the Pentagon discussed the creation of a "Defense Navigation Satellite System (DNSS)". It was at this meeting that the real synthesis that became GPS was created. Later that year, the DNSS program was named "Navstar." Navstar is often erroneously considered an acronym for "NAVigation System using Timing And Ranging" but was never considered as such by the GPS Joint Program Office (TRW may have once advocated for a different navigational system that used that acronym). With the individual satellites being associated with the name Navstar (as with the predecessors Transit and Timation), a more fully encompassing name was used to identify the constellation of Navstar satellites, "Navstar-GPS". Ten "Block I" prototype satellites were launched between 1978 and 1985 (an additional unit was destroyed in a launch failure).
The effect of the ionosphere on radio transmission was investigated in a geophysics laboratory of Air Force Cambridge Research Laboratory, renamed to Air Force Geophysical Research Lab (AFGRL) in 1974. AFGRL developed the Klobuchar model for computing ionospheric corrections to GPS location. Of note is work done by Australian space scientist Elizabeth Essex-Cohen at AFGRL in 1974. She was concerned with the curving of the paths of radio waves (atmospheric refraction) traversing the ionosphere from NavSTAR satellites.
After Korean Air Lines Flight 007, a Boeing 747 carrying 269 people, was shot down by a Soviet interceptor aircraft after straying in prohibited airspace because of navigational errors, in the vicinity of Sakhalin and Moneron Islands, President Ronald Reagan issued a directive making GPS freely available for civilian use, once it was sufficiently developed, as a common good. The first Block II satellite was launched on February 14, 1989, and the 24th satellite was launched in 1994. The GPS program cost at this point, not including the cost of the user equipment but including the costs of the satellite launches, has been estimated at US$5 billion.
Initially, the highest-quality signal was reserved for military use, and the signal available for civilian use was intentionally degraded, in a policy known as Selective Availability. This changed on May 1, 2000, with President Bill Clinton signing a policy directive to turn off Selective Availability to provide the same accuracy to civilians that was afforded to the military. The directive was proposed by the U.S. Secretary of Defense, William Perry, in view of the widespread growth of differential GPS services by private industry to improve civilian accuracy. Moreover, the U.S. military was developing technologies to deny GPS service to potential adversaries on a regional basis. Selective Availability was removed from the GPS architecture beginning with GPS-III.
Since its deployment, the U.S. has implemented several improvements to the GPS service, including new signals for civil use and increased accuracy and integrity for all users, all the while maintaining compatibility with existing GPS equipment. Modernization of the satellite system has been an ongoing initiative by the U.S. Department of Defense through a series of satellite acquisitions to meet the growing needs of the military, civilians, and the commercial market.
As of early 2015, high-quality Standard Positioning Service (SPS) GPS receivers provided horizontal accuracy of better than about 3.5 meters, although many factors such as receiver and antenna quality and atmospheric issues can affect this accuracy.
GPS is owned and operated by the United States government as a national resource. The Department of Defense is the steward of GPS. The "Interagency GPS Executive Board (IGEB)" oversaw GPS policy matters from 1996 to 2004. After that, the National Space-Based Positioning, Navigation and Timing Executive Committee was established by presidential directive in 2004 to advise and coordinate federal departments and agencies on matters concerning the GPS and related systems. The executive committee is chaired jointly by the Deputy Secretaries of Defense and Transportation. Its membership includes equivalent-level officials from the Departments of State, Commerce, and Homeland Security, the Joint Chiefs of Staff and NASA. Components of the executive office of the president participate as observers to the executive committee, and the FCC chairman participates as a liaison.
The U.S. Department of Defense is required by law to "maintain a Standard Positioning Service (as defined in the federal radio navigation plan and the standard positioning service signal specification) that will be available on a continuous, worldwide basis" and "develop measures to prevent hostile use of GPS and its augmentations without unduly disrupting or degrading civilian uses".
Awards.
On February 10, 1993, the National Aeronautic Association selected the GPS Team as winners of the 1992 Robert J. Collier Trophy, the US's most prestigious aviation award. This team combines researchers from the Naval Research Laboratory, the USAF, the Aerospace Corporation, Rockwell International Corporation, and IBM Federal Systems Company. The citation honors them "for the most significant development for safe and efficient navigation and surveillance of air and spacecraft since the introduction of radio navigation 50 years ago".
Two GPS developers received the National Academy of Engineering Charles Stark Draper Prize for 2003:
GPS developer Roger L. Easton received the National Medal of Technology on February 13, 2006.
Francis X. Kane (Col. USAF, ret.) was inducted into the U.S. Air Force Space and Missile Pioneers Hall of Fame at Lackland A.F.B., San Antonio, Texas, March 2, 2010, for his role in space technology development and the engineering design concept of GPS conducted as part of Project 621B.
In 1998, GPS technology was inducted into the Space Foundation Space Technology Hall of Fame.
On October 4, 2011, the International Astronautical Federation (IAF) awarded the Global Positioning System (GPS) its 60th Anniversary Award, nominated by IAF member, the American Institute for Aeronautics and Astronautics (AIAA). The IAF Honors and Awards Committee recognized the uniqueness of the GPS program and the exemplary role it has played in building international collaboration for the benefit of humanity.
On December 6, 2018, Gladys West was inducted into the Air Force Space and Missile Pioneers Hall of Fame in recognition of her work on an extremely accurate geodetic Earth model, which was ultimately used to determine the orbit of the GPS constellation.
On February 12, 2019, four founding members of the project were awarded the Queen Elizabeth Prize for Engineering with the chair of the awarding board stating: "Engineering is the foundation of civilisation; there is no other foundation; it makes things happen. And that's exactly what today's Laureates have done – they've made things happen. They've re-written, in a major way, the infrastructure of our world."
Principles.
The GPS satellites carry very stable atomic clocks that are synchronized with one another and with the reference atomic clocks at the ground control stations; any drift of the clocks aboard the satellites from the reference time maintained on the ground stations is corrected regularly. Since the speed of radio waves (speed of light) is constant and independent of the satellite speed, the time delay between when the satellite transmits a signal and the ground station receives it is proportional to the distance from the satellite to the ground station. With the distance information collected from multiple ground stations, the location coordinates of any satellite at any time can be calculated with great precision.
Each GPS satellite carries an accurate record of its own position and time, and broadcasts that data continuously. Based on data received from multiple GPS satellites, an end user's GPS receiver can calculate its own four-dimensional position in spacetime; however, at a minimum, four satellites must be in view of the receiver for it to compute the four unknown quantities (three position coordinates and the deviation of its own clock from satellite time).
More detailed description.
Each GPS satellite continually broadcasts a signal (carrier wave with modulation) that includes the time of transmission (TOT) of the signal and the satellite's position at that time.
Conceptually, the receiver measures the TOAs (according to its own clock) of four satellite signals. From the TOAs and the TOTs, the receiver forms four time-of-flight (TOF) values which, when multiplied by the speed of light, are approximately equal to the receiver–satellite ranges plus the receiver's clock offset from GPS time multiplied by the speed of light; these quantities are called pseudoranges. The receiver then computes its three-dimensional position and clock deviation from the four TOFs.
In practice the receiver position (in three dimensional Cartesian coordinates with origin at the Earth's center) and the offset of the receiver clock relative to the GPS time are computed simultaneously, using the navigation equations to process the TOFs.
The receiver's Earth-centered solution location is usually converted to latitude, longitude and height relative to an ellipsoidal Earth model. The height may then be further converted to height relative to the geoid, which is essentially mean sea level. These coordinates may be displayed, such as on a moving map display, or recorded or used by some other system, such as a vehicle guidance system.
User-satellite geometry.
Although usually not formed explicitly in the receiver processing, the conceptual time differences of arrival (TDOAs) define the measurement geometry. Each TDOA corresponds to a hyperboloid of revolution (see Multilateration). The line connecting the two satellites involved (and its extensions) forms the axis of the hyperboloid. The receiver is located at the point where three hyperboloids intersect.
It is sometimes incorrectly said that the user location is at the intersection of three spheres. While simpler to visualize, this is the case only if the receiver has a clock synchronized with the satellite clocks (i.e., the receiver measures true ranges to the satellites rather than range differences). There are marked performance benefits to the user carrying a clock synchronized with the satellites. Foremost is that only three satellites are needed to compute a position solution. If it were an essential part of the GPS concept that all users needed to carry a synchronized clock, a smaller number of satellites could be deployed, but the cost and complexity of the user equipment would increase.
Receiver in continuous operation.
The description above is representative of a receiver start-up situation. Most receivers have a track algorithm, sometimes called a "tracker", that combines sets of satellite measurements collected at different times—in effect, taking advantage of the fact that successive receiver positions are usually close to each other. After a set of measurements are processed, the tracker predicts the receiver location corresponding to the next set of satellite measurements. When the new measurements are collected, the receiver uses a weighting scheme to combine the new measurements with the tracker prediction. In general, a tracker can (a) improve receiver position and time accuracy, (b) reject bad measurements, and (c) estimate receiver speed and direction.
The disadvantage of a tracker is that changes in speed or direction can be computed only with a delay, and that derived direction becomes inaccurate when the distance traveled between two position measurements drops below or near the random error of position measurement. GPS units can use measurements of the Doppler shift of the signals received to compute velocity accurately. More advanced navigation systems use additional sensors like a compass or an inertial navigation system to complement GPS.
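As one simple, hypothetical instance of such a prediction–correction weighting scheme, the one-dimensional alpha–beta filter below blends each new measurement with a prediction formed from the previous position and velocity estimates; the fixed gains and the toy trajectory are illustrative assumptions, not a description of any particular receiver's tracker.

```python
import numpy as np

def alpha_beta_track(measurements, dt=1.0, alpha=0.5, beta=0.1):
    """One-dimensional alpha-beta tracker: predict the next position from the
    current position/velocity estimate, then blend the prediction with the
    new measurement using fixed gains alpha (position) and beta (velocity)."""
    x, v = measurements[0], 0.0          # initial state
    track = []
    for z in measurements[1:]:
        x_pred = x + v * dt              # predict from the previous state
        residual = z - x_pred            # innovation: measurement minus prediction
        x = x_pred + alpha * residual    # weighted position update
        v = v + (beta / dt) * residual   # weighted velocity update
        track.append((x, v))
    return track

# Noisy positions of a receiver moving at about 2 m/s along one axis.
rng = np.random.default_rng(0)
truth = 2.0 * np.arange(30)
noisy = truth + rng.normal(0, 3.0, size=30)
for pos, vel in alpha_beta_track(noisy)[-3:]:
    print(f"position {pos:6.1f} m, speed {vel:4.2f} m/s")
```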
Non-navigation applications.
GPS requires four or more satellites to be visible for accurate navigation. The solution of the navigation equations gives the position of the receiver along with the difference between the time kept by the receiver's on-board clock and the true time-of-day, thereby eliminating the need for a more precise and possibly impractical receiver based clock. Applications for GPS such as time transfer, traffic signal timing, and synchronization of cell phone base stations, make use of this cheap and highly accurate timing. Some GPS applications use this time for display, or, other than for the basic position calculations, do not use it at all.
Although four satellites are required for normal operation, fewer are needed in special cases. If one variable is already known, a receiver can determine its position using only three satellites. For example, a ship on the open ocean usually has a known elevation close to 0 m, and the elevation of an aircraft may be known. Some GPS receivers may use additional clues or assumptions such as reusing the last known altitude, dead reckoning, inertial navigation, or including information from the vehicle computer, to give a (possibly degraded) position when fewer than four satellites are visible.
Structure.
The current GPS consists of three major segments. These are the space segment, a control segment, and a user segment. The U.S. Space Force develops, maintains, and operates the space and control segments. GPS satellites broadcast signals from space, and each GPS receiver uses these signals to calculate its three-dimensional location (latitude, longitude, and altitude) and the current time.
Space segment.
The space segment (SS) is composed of 24 to 32 satellites, or Space Vehicles (SV), in medium Earth orbit, and also includes the payload adapters to the boosters required to launch them into orbit. The GPS design originally called for 24 SVs, eight each in three approximately circular orbits, but this was modified to six orbital planes with four satellites each. The six orbit planes have approximately 55° inclination (tilt relative to the Earth's equator) and are separated by 60° right ascension of the ascending node (angle along the equator from a reference point to the orbit's intersection). The orbital period is one-half of a sidereal day, "i.e.", 11 hours and 58 minutes, so that the satellites pass over the same locations or almost the same locations every day. The orbits are arranged so that at least six satellites are always within line of sight from everywhere on the Earth's surface (see animation at right). The result of this objective is that the four satellites are not evenly spaced (90°) apart within each orbit. In general terms, the angular difference between satellites in each orbit is 30°, 105°, 120°, and 105° apart, which sum to 360°.
Orbiting at an altitude of approximately 20,200 km (an orbital radius of approximately 26,600 km), each SV makes two complete orbits each sidereal day, repeating the same ground track each day. This was very helpful during development because even with only four satellites, correct alignment means all four are visible from one spot for a few hours each day. For military operations, the ground track repeat can be used to ensure good coverage in combat zones.
As of 2019, there are 31 satellites in the GPS constellation, 27 of which are in use at a given time with the rest allocated as stand-bys. A 32nd was launched in 2018, but as of July 2019 is still in evaluation. More decommissioned satellites are in orbit and available as spares. The additional satellites improve the precision of GPS receiver calculations by providing redundant measurements. With the increased number of satellites, the constellation was changed to a nonuniform arrangement. Such an arrangement was shown to improve accuracy but also improves reliability and availability of the system, relative to a uniform system, when multiple satellites fail. With the expanded constellation, nine satellites are usually visible at any time from any point on the Earth with a clear horizon, ensuring considerable redundancy over the minimum four satellites needed for a position.
Control segment.
The control segment (CS) is composed of a master control station (MCS), an alternate master control station, dedicated and shared ground antennas, and monitor stations.
The MCS can also access Satellite Control Network (SCN) ground antennas (for additional command and control capability) and NGA (National Geospatial-Intelligence Agency) monitor stations. The flight paths of the satellites are tracked by dedicated U.S. Space Force monitoring stations in Hawaii, Kwajalein Atoll, Ascension Island, Diego Garcia, Colorado Springs, Colorado and Cape Canaveral, along with shared NGA monitor stations operated in England, Argentina, Ecuador, Bahrain, Australia and Washington DC. The tracking information is sent to the MCS at Schriever Space Force Base ESE of Colorado Springs, which is operated by the 2nd Space Operations Squadron (2 SOPS) of the U.S. Space Force. Then 2 SOPS contacts each GPS satellite regularly with a navigational update using dedicated or shared (AFSCN) ground antennas (GPS dedicated ground antennas are located at Kwajalein, Ascension Island, Diego Garcia, and Cape Canaveral). These updates synchronize the atomic clocks on board the satellites to within a few nanoseconds of each other, and adjust the ephemeris of each satellite's internal orbital model. The updates are created by a Kalman filter that uses inputs from the ground monitoring stations, space weather information, and various other inputs.
When a satellite's orbit is being adjusted, the satellite is marked "unhealthy", so receivers do not use it. After the maneuver, engineers track the new orbit from the ground, upload the new ephemeris, and mark the satellite healthy again.
The operation control segment (OCS) currently serves as the control segment of record. It provides the operational capability that supports GPS users and keeps the GPS operational and performing within specification.
OCS successfully replaced the legacy 1970s-era mainframe computer at Schriever Air Force Base in September 2007. After installation, the system helped enable upgrades and provide a foundation for a new security architecture that supported U.S. armed forces.
OCS will continue to be the ground control system of record until the new segment, Next Generation GPS Operation Control System (OCX), is fully developed and functional. The US Department of Defense has claimed that the new capabilities provided by OCX will be the cornerstone for revolutionizing GPS's mission capabilities, enabling U.S. Space Force to greatly enhance GPS operational services to U.S. combat forces, civil partners and myriad domestic and international users. The GPS OCX program also will reduce cost, schedule and technical risk. It is designed to provide 50% sustainment cost savings through efficient software architecture and Performance-Based Logistics. In addition, GPS OCX is expected to cost millions less than the cost to upgrade OCS while providing four times the capability.
The GPS OCX program represents a critical part of GPS modernization and provides significant information assurance improvements over the current GPS OCS program.
On September 14, 2011, the U.S. Air Force announced the completion of GPS OCX Preliminary Design Review and confirmed that the OCX program was ready for the next phase of development. The GPS OCX program missed major milestones and pushed its launch into 2021, five years past the original deadline. According to the Government Accountability Office in 2019, the 2021 deadline looked shaky.
The project remained delayed in 2023, and was (as of June 2023) 73% over its original estimated budget. In late 2023, Frank Calvelli, the assistant secretary of the Air Force for space acquisitions and integration, stated that the project was estimated to go live some time during the summer of 2024.
User segment.
The user segment (US) is composed of hundreds of thousands of U.S. and allied military users of the secure GPS Precise Positioning Service, and tens of millions of civil, commercial and scientific users of the Standard Positioning Service. In general, GPS receivers are composed of an antenna, tuned to the frequencies transmitted by the satellites, receiver-processors, and a highly stable clock (often a crystal oscillator). They may also include a display for providing location and speed information to the user.
GPS receivers may include an input for differential corrections, using the RTCM SC-104 format. This is typically in the form of an RS-232 port at 4,800 bit/s speed. Data is actually sent at a much lower rate, which limits the accuracy of the signal sent using RTCM. Receivers with internal DGPS receivers can outperform those using external RTCM data. As of 2006, even low-cost units commonly include Wide Area Augmentation System (WAAS) receivers.
Many GPS receivers can relay position data to a PC or other device using the NMEA 0183 protocol. Although this protocol is officially defined by the National Marine Electronics Association (NMEA), references to this protocol have been compiled from public records, allowing open source tools like gpsd to read the protocol without violating intellectual property laws. Other proprietary protocols exist as well, such as the SiRF and MTK protocols. Receivers can interface with other devices using methods including a serial connection, USB, or Bluetooth.
Applications.
While originally a military project, GPS is considered a dual-use technology, meaning it has significant civilian applications as well.
GPS has become a widely deployed and useful tool for commerce, scientific uses, tracking, and surveillance. GPS's accurate time facilitates everyday activities such as banking, mobile phone operations, and even the control of power grids by allowing well synchronized hand-off switching.
Civilian.
Many civilian applications use one or more of GPS's three basic components: absolute location, relative movement, and time transfer.
Restrictions on civilian use.
The U.S. government controls the export of some civilian receivers. All GPS receivers capable of functioning above approximately 18 km (60,000 ft) above sea level and faster than 1,000 knots (about 515 m/s), or designed or modified for use with unmanned missiles and aircraft, are classified as munitions (weapons)—which means they require State Department export licenses. This rule applies even to otherwise purely civilian units that only receive the L1 frequency and the C/A (Coarse/Acquisition) code.
Disabling operation above these limits exempts the receiver from classification as a munition. Vendor interpretations differ. The rule refers to operation at both the target altitude and speed, but some receivers stop operating even when stationary. This has caused problems with some amateur radio balloon launches that regularly reach about 30 km (100,000 ft).
These limits only apply to units or components exported from the United States. A growing trade in various components exists, including GPS units from other countries. These are expressly sold as ITAR-free.
Military.
As of 2009, military GPS applications include navigation, target tracking, missile and projectile guidance, search and rescue, and reconnaissance.
GPS-type navigation was first used in war in the 1991 Persian Gulf War, before GPS was fully developed in 1995, to assist Coalition Forces in navigating and performing maneuvers. The war also demonstrated the vulnerability of GPS to jamming, when Iraqi forces installed jamming devices on likely targets that emitted radio noise, disrupting reception of the weak GPS signal.
GPS's vulnerability to jamming is a threat that continues to grow as jamming equipment and experience grows. GPS signals have been reported to have been jammed many times over the years for military purposes. Russia seems to have several objectives for this approach, such as intimidating neighbors while undermining confidence in their reliance on American systems, promoting their GLONASS alternative, disrupting Western military exercises, and protecting assets from drones. China uses jamming to discourage US surveillance aircraft near the contested Spratly Islands. North Korea has mounted several major jamming operations near its border with South Korea and offshore, disrupting flights, shipping and fishing operations. The Iranian Armed Forces disrupted GPS on the civilian airliner operating Flight PS752 when they shot down the aircraft.
In the Russo-Ukrainian War, GPS-guided munitions provided to Ukraine by NATO countries experienced significant failure rates as a result of Russian electronic warfare. The rate at which Excalibur artillery shells hit their targets dropped from 70% to 6% as Russia adapted its electronic warfare activities.
Timekeeping.
Leap seconds.
While most clocks derive their time from Coordinated Universal Time (UTC), the atomic clocks on the satellites are set to "GPS time". The difference is that GPS time is not corrected to match the rotation of the Earth, so it does not contain new leap seconds or other corrections that are periodically added to UTC. GPS time was set to match UTC in 1980, but has since diverged. The lack of corrections means that GPS time remains at a constant offset with International Atomic Time (TAI) (TAI - GPS = 19 seconds). Periodic corrections are performed to the on-board clocks to keep them synchronized with ground clocks.
The GPS navigation message includes the difference between GPS time and UTC. As of 2017, GPS time is 18 seconds ahead of UTC because of the leap second added to UTC on December 31, 2016. Receivers subtract this offset from GPS time to calculate UTC and specific time zone values. New GPS units may not show the correct UTC time until after receiving the UTC offset message. The GPS-UTC offset field can accommodate 255 leap seconds (eight bits).
Accuracy.
GPS time is theoretically accurate to about 14 nanoseconds, due to the clock drift relative to International Atomic Time that the atomic clocks in GPS transmitters experience. Most receivers lose some accuracy in their interpretation of the signals and are only accurate to about 100 nanoseconds.
Relativistic corrections.
The GPS implements two major corrections to its time signals for relativistic effects: one for relative velocity of satellite and receiver, using the special theory of relativity, and one for the difference in gravitational potential between satellite and receiver, using general relativity. The acceleration of the satellite could also be computed independently as a correction, depending on purpose, but normally the effect is already dealt with in the first two corrections.
Format.
As opposed to the year, month, and day format of the Gregorian calendar, the GPS date is expressed as a week number and a seconds-into-week number. The week number is transmitted as a ten-bit field in the C/A and P(Y) navigation messages, and so it becomes zero again every 1,024 weeks (19.6 years). GPS week zero started at 00:00:00 UTC (00:00:19 TAI) on January 6, 1980, and the week number became zero again for the first time at 23:59:47 UTC on August 21, 1999 (00:00:19 TAI on August 22, 1999). It happened the second time at 23:59:42 UTC on April 6, 2019. To determine the current Gregorian date, a GPS receiver must be provided with the approximate date (to within 3,584 days) to correctly translate the GPS date signal. To address this concern in the future the modernized GPS civil navigation (CNAV) message will use a 13-bit field that only repeats every 8,192 weeks (157 years), thus lasting until 2137 (157 years after GPS week zero).
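A sketch of how a receiver might resolve the 1,024-week ambiguity from an externally supplied approximate date is shown below; the candidate-search approach, the choice of reference date, and the omission of leap seconds are simplifying assumptions for illustration.

```python
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)   # start of GPS week 0

def resolve_gps_date(week_10bit, seconds_of_week, approx_date):
    """Resolve the 1,024-week ambiguity of the legacy 10-bit week number by
    choosing the rollover epoch that places the result closest to an
    externally supplied approximate date (e.g. a firmware build date).
    Leap seconds are ignored here, so the result is in the GPS timescale."""
    candidates = []
    for rollover in range(0, 4):                         # covers 1980 to roughly 2058
        weeks = rollover * 1024 + week_10bit
        candidates.append(GPS_EPOCH + timedelta(weeks=weeks, seconds=seconds_of_week))
    return min(candidates, key=lambda t: abs(t - approx_date))

# Week 0 was broadcast again on April 6, 2019; an approximate date disambiguates it.
print(resolve_gps_date(0, 0, approx_date=datetime(2019, 6, 1, tzinfo=timezone.utc)))
# -> 2019-04-07 00:00:00+00:00 (expressed in the GPS timescale, not UTC)
```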
Communication.
The navigational signals transmitted by GPS satellites encode a variety of information including satellite positions, the state of the internal clocks, and the health of the network. These signals are transmitted on two separate carrier frequencies that are common to all satellites in the network. Two different encodings are used: a public encoding that enables lower resolution navigation, and an encrypted encoding used by the U.S. military.
Message format.
Each GPS satellite continuously broadcasts a "navigation message" on L1 (C/A and P/Y) and L2 (P/Y) frequencies at a rate of 50 bits per second (see bitrate). Each complete message takes 750 seconds (12.5 minutes) to transmit. The message structure has a basic format of a 1500-bit-long frame made up of five subframes, each subframe being 300 bits (6 seconds) long. Subframes 4 and 5 are subcommutated 25 times each, so that a complete data message requires the transmission of 25 full frames. Each subframe consists of ten words, each 30 bits long. Thus, with 300 bits in a subframe times 5 subframes in a frame times 25 frames in a message, each message is 37,500 bits long. At a transmission rate of 50 bit/s, this gives 750 seconds to transmit an entire almanac message. Each 30-second frame begins precisely on the minute or half-minute as indicated by the atomic clock on each satellite.
The first subframe of each frame encodes the week number and the time within the week, as well as the data about the health of the satellite. The second and the third subframes contain the "ephemeris" – the precise orbit for the satellite. The fourth and fifth subframes contain the "almanac", which contains coarse orbit and status information for up to 32 satellites in the constellation as well as data related to error correction. Thus, to obtain an accurate satellite location from this transmitted message, the receiver must demodulate the message from each satellite it includes in its solution for 18 to 30 seconds. To collect all transmitted almanacs, the receiver must demodulate the message for 732 to 750 seconds, or about 12.5 minutes.
All satellites broadcast at the same frequencies, encoding signals using unique code-division multiple access (CDMA) so receivers can distinguish individual satellites from each other. The system uses two distinct CDMA encoding types: the coarse/acquisition (C/A) code, which is accessible by the general public, and the precise (P(Y)) code, which is encrypted so that only the U.S. military and other NATO nations who have been given access to the encryption code can access it.
The ephemeris is updated every 2 hours and is sufficiently stable for 4 hours, with provisions for updates every 6 hours or longer in non-nominal conditions. The almanac is updated typically every 24 hours. Additionally, data for a few weeks following is uploaded in case of transmission updates that delay data upload.
Satellite frequencies.
All satellites broadcast at the same two frequencies, 1.57542 GHz (L1 signal) and 1.2276 GHz (L2 signal). The satellite network uses a CDMA spread-spectrum technique where the low-bitrate message data is encoded with a high-rate pseudo-random (PRN) sequence that is different for each satellite. The receiver must be aware of the PRN codes for each satellite to reconstruct the actual message data. The C/A code, for civilian use, transmits data at 1.023 million chips per second, whereas the P code, for U.S. military use, transmits at 10.23 million chips per second. The actual internal reference of the satellites is 10.22999999543 MHz to compensate for relativistic effects that make observers on the Earth perceive a different time reference with respect to the transmitters in orbit. The L1 carrier is modulated by both the C/A and P codes, while the L2 carrier is only modulated by the P code. The P code can be encrypted as a so-called P(Y) code that is only available to military equipment with a proper decryption key. Both the C/A and P(Y) codes impart the precise time-of-day to the user.
The L3 signal at a frequency of 1.38105 GHz is used to transmit data from the satellites to ground stations. This data is used by the United States Nuclear Detonation (NUDET) Detection System (USNDS) to detect, locate, and report nuclear detonations (NUDETs) in the Earth's atmosphere and near space. One usage is the enforcement of nuclear test ban treaties.
The L4 band at 1.379913 GHz is being studied for additional ionospheric correction.
The L5 frequency band at 1.17645 GHz was added in the process of GPS modernization. This frequency falls into an internationally protected range for aeronautical navigation, promising little or no interference under all circumstances. The first Block IIF satellite that provides this signal was launched in May 2010. On February 5, 2016, the 12th and final Block IIF satellite was launched. The L5 consists of two carrier components that are in phase quadrature with each other. Each carrier component is bi-phase shift key (BPSK) modulated by a separate bit train. "L5, the third civil GPS signal, will eventually support safety-of-life applications for aviation and provide improved availability and accuracy."
In 2011, a conditional waiver was granted to LightSquared to operate a terrestrial broadband service near the L1 band. Although LightSquared had applied for a license to operate in the 1525 to 1559 MHz band as early as 2003 and it was put out for public comment, the FCC asked LightSquared to form a study group with the GPS community to test GPS receivers and identify issues that might arise due to the larger signal power from the LightSquared terrestrial network. The GPS community had not objected to the LightSquared (formerly MSV and SkyTerra) applications until November 2010, when LightSquared applied for a modification to its Ancillary Terrestrial Component (ATC) authorization. This filing (SAT-MOD-20101118-00239) amounted to a request to run several orders of magnitude more power in the same frequency band for terrestrial base stations, essentially repurposing what was supposed to be a "quiet neighborhood" for signals from space as the equivalent of a cellular network. Testing in the first half of 2011 demonstrated that the impact of the lower 10 MHz of spectrum is minimal to GPS devices (less than 1% of the total GPS devices are affected). The upper 10 MHz intended for use by LightSquared may have some impact on GPS devices. There is some concern that this may seriously degrade the GPS signal for many consumer uses. "Aviation Week" magazine reported that the latest testing (June 2011) confirmed "significant jamming" of GPS by LightSquared's system.
Demodulation and decoding.
Because all of the satellite signals are modulated onto the same L1 carrier frequency, the signals must be separated after demodulation. This is done by assigning each satellite a unique binary sequence known as a Gold code. The signals are decoded after demodulation using addition of the Gold codes corresponding to the satellites monitored by the receiver.
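The correlation principle behind this code-division separation can be sketched as follows; random ±1 sequences are used as stand-ins for real Gold codes (which are generated from two 10-bit linear-feedback shift registers), so the example shows only why correlating with the right code isolates one satellite's data bit.

```python
import numpy as np

rng = np.random.default_rng(0)
CODE_LEN = 1023                      # same length as one C/A code period

# Stand-ins for Gold codes: one pseudo-random +/-1 sequence per satellite.
codes = {prn: rng.choice([-1.0, 1.0], size=CODE_LEN) for prn in range(1, 6)}

# Composite received signal: three satellites transmit one data bit each on
# the same carrier, plus noise.
data_bits = {1: +1, 3: -1, 4: +1}
received = sum(bit * codes[prn] for prn, bit in data_bits.items())
received = received + rng.normal(0, 2.0, size=CODE_LEN)

# Despreading: correlate the received signal with each known code.  A large
# positive or negative correlation reveals both the presence of that
# satellite's signal and the sign of its data bit; absent satellites give
# correlations near zero.
for prn, code in codes.items():
    corr = np.dot(received, code) / CODE_LEN
    print(f"PRN {prn}: correlation {corr:+.2f}")
```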
If the almanac information has previously been acquired, the receiver picks the satellites to listen for by their PRNs, unique numbers in the range 1 through 32. If the almanac information is not in memory, the receiver enters a search mode until a lock is obtained on one of the satellites. To obtain a lock, it is necessary that there be an unobstructed line of sight from the receiver to the satellite. The receiver can then acquire the almanac and determine the satellites it should listen for. As it detects each satellite's signal, it identifies it by its distinct C/A code pattern. There can be a delay of up to 30 seconds before the first estimate of position because of the need to read the ephemeris data.
Processing of the navigation message enables the determination of the time of transmission and the satellite position at this time. For more information see Demodulation and Decoding, Advanced.
Navigation equations.
Problem statement.
The receiver uses messages received from satellites to determine the satellite positions and time sent. The "x, y," and "z" components of satellite position and the time sent ("s") are designated as ["xi, yi, zi, si"] where the subscript "i" denotes the satellite and has the value 1, 2, ..., "n", where "n" ≥ 4. When the time of message reception indicated by the on-board receiver clock is formula_0, the true reception time is formula_1, where "b" is the receiver's clock bias from the much more accurate GPS clocks employed by the satellites. The receiver clock bias is the same for all received satellite signals (assuming the satellite clocks are all perfectly synchronized). The message's transit time is formula_2, where "si" is the satellite time. Assuming the message traveled at the speed of light, "c", the distance traveled is formula_3.
For n satellites, the equations to satisfy are:
formula_4
where "di" is the geometric distance or range between receiver and satellite "i" (the values without subscripts are the "x, y," and "z" components of receiver position):
formula_5
Defining "pseudoranges" as formula_6, we see they are biased versions of the true range:
formula_7.
Since the equations have four unknowns ["x, y, z, b"]—the three components of GPS receiver position and the clock bias—signals from at least four satellites are necessary to attempt solving these equations. They can be solved by algebraic or numerical methods. Existence and uniqueness of GPS solutions are discussed by Abell and Chaffee. When "n" is greater than four, this system is overdetermined and a fitting method must be used.
The amount of error in the results varies with the received satellites' locations in the sky, since certain configurations (when the received satellites are close together in the sky) cause larger errors. Receivers usually calculate a running estimate of the error in the calculated position. This is done by multiplying the basic resolution of the receiver by quantities called the geometric dilution of precision (GDOP) factors, calculated from the relative sky directions of the satellites used. The receiver location is expressed in a specific coordinate system, such as latitude and longitude using the WGS 84 geodetic datum or a country-specific system.
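A sketch of one common way these dilution-of-precision factors are computed from the satellite geometry (the coordinates below are hypothetical, chosen only to make the example runnable):
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical ECEF coordinates in meters: a receiver and four satellites.
receiver = np.array([0.0, 0.0, 6_371_000.0])
sats = np.array([
    [15_600_000.0,  7_540_000.0, 20_140_000.0],
    [18_760_000.0,  2_750_000.0, 18_610_000.0],
    [17_610_000.0, 14_630_000.0, 13_480_000.0],
    [19_170_000.0,    610_000.0, 18_390_000.0],
])

# Geometry matrix: unit line-of-sight vectors plus a column for the clock term.
diff = sats - receiver
ranges = np.linalg.norm(diff, axis=1)
A = np.hstack([diff / ranges[:, None], np.ones((len(sats), 1))])

Q = np.linalg.inv(A.T @ A)           # dilution-of-precision matrix
gdop = np.sqrt(np.trace(Q))          # geometric DOP
pdop = np.sqrt(np.trace(Q[:3, :3]))  # position DOP
tdop = np.sqrt(Q[3, 3])              # time DOP
print(f"GDOP = {gdop:.2f}, PDOP = {pdop:.2f}, TDOP = {tdop:.2f}")
</syntaxhighlight>
Multiplying factors of this kind by the receiver's basic range accuracy gives the running error estimate described above.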
Geometric interpretation.
The GPS equations can be solved by numerical and analytical methods. Geometrical interpretations can enhance the understanding of these solution methods.
Spheres.
The measured ranges, called pseudoranges, contain clock errors. In a simplified idealization in which the receiver clock is synchronized with the satellite clocks, so that the pseudoranges equal the true ranges, these ranges represent the radii of spheres, each centered on one of the transmitting satellites. The solution for the position of the receiver is then at the intersection of the surfaces of these spheres; see trilateration (more generally, true-range multilateration). Signals from at least three satellites are required, and their three spheres would typically intersect at two points. One of the points is the location of the receiver, and the other moves rapidly in successive measurements and would not usually be on Earth's surface.
In practice, there are many sources of inaccuracy besides clock bias, including random errors as well as the potential for precision loss from subtracting numbers close to each other if the centers of the spheres are relatively close together. This means that the position calculated from three satellites alone is unlikely to be accurate enough. Data from more satellites can help because of the tendency for random errors to cancel out and also by giving a larger spread between the sphere centers. But at the same time, more spheres will not generally intersect at one point. Therefore, a near intersection gets computed, typically via least squares. The more signals available, the better the approximation is likely to be.
Hyperboloids.
If the pseudorange between the receiver and satellite "i" and the pseudorange between the receiver and satellite "j" are subtracted, "pi" − "pj", the common receiver clock bias ("b") cancels out, resulting in a difference of distances "di" − "dj". The locus of points having a constant difference in distance to two points (here, two satellites) is a hyperbola on a plane and a hyperboloid of revolution (more specifically, a two-sheeted hyperboloid) in 3D space (see Multilateration). Thus, from four pseudorange measurements, the receiver can be placed at the intersection of the surfaces of three hyperboloids each with foci at a pair of satellites. With additional satellites, the multiple intersections are not necessarily unique, and a best-fitting solution is sought instead.
Inscribed sphere.
The receiver position can be interpreted as the center of an inscribed sphere (insphere) of radius "bc", given by the receiver clock bias "b" (scaled by the speed of light "c"). The insphere is located so that it is internally tangent to the circumscribing spheres, which are centered at the GPS satellites and whose radii equal the measured pseudoranges "p"i. This configuration is distinct from the one described above, in which the spheres' radii were the unbiased or geometric ranges "d"i.
Hypercones.
The clock in the receiver is usually not of the same quality as the ones in the satellites and will not be accurately synchronized to them. This produces pseudoranges with large differences compared to the true distances to the satellites. Therefore, in practice, the time difference between the receiver clock and the satellite time is defined as an unknown clock bias "b". The equations are then solved simultaneously for the receiver position and the clock bias. The solution space ["x, y, z, b"] can be seen as a four-dimensional spacetime, and signals from at least four satellites are needed. In that case each of the equations describes a hypercone (or spherical cone), with the cusp located at the satellite, and the base a sphere around the satellite. The receiver is at the intersection of four or more of such hypercones.
Solution methods.
Least squares.
When more than four satellites are available, the calculation can use the four best, or more than four simultaneously (up to all visible satellites), depending on the number of receiver channels, processing capability, and geometric dilution of precision (GDOP).
Using more than four involves an over-determined system of equations with no unique solution; such a system can be solved by a least-squares or weighted least squares method.
formula_8
Iterative.
Both the equations for four satellites, or the least squares equations for more than four, are non-linear and need special solution methods. A common approach is by iteration on a linearized form of the equations, such as the Gauss–Newton algorithm.
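A minimal sketch of this iterative approach (synthetic, noise-free pseudoranges generated from hypothetical satellite positions, a hypothetical receiver position, and a hypothetical clock bias; not taken from any actual receiver implementation): each step linearizes the pseudorange equations around the current estimate and solves the resulting normal equations.
<syntaxhighlight lang="python">
import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Hypothetical satellite ECEF positions (meters) and true receiver state.
sats = np.array([
    [15_600_000.0,  7_540_000.0, 20_140_000.0],
    [18_760_000.0,  2_750_000.0, 18_610_000.0],
    [17_610_000.0, 14_630_000.0, 13_480_000.0],
    [19_170_000.0,    610_000.0, 18_390_000.0],
])
true_pos = np.array([3_901_000.0, 2_252_000.0, 4_505_000.0])
true_bias = 2.6e-6  # receiver clock bias, seconds

# Synthetic pseudoranges p_i = d_i + b*c (no measurement noise, for clarity).
pseudoranges = np.linalg.norm(sats - true_pos, axis=1) + true_bias * C

def residuals(state):
    pos, bias = state[:3], state[3]
    return np.linalg.norm(sats - pos, axis=1) + bias * C - pseudoranges

def jacobian(state):
    diff = state[:3] - sats
    ranges = np.linalg.norm(diff, axis=1)
    return np.hstack([diff / ranges[:, None], np.full((len(sats), 1), C)])

state = np.array([0.0, 0.0, 6_371_000.0, 0.0])   # crude initial guess
for _ in range(10):                              # Gauss-Newton iterations
    J, r = jacobian(state), residuals(state)
    state -= np.linalg.solve(J.T @ J, J.T @ r)

print("position error (m):", np.linalg.norm(state[:3] - true_pos))
print("estimated clock bias (s):", state[3])
</syntaxhighlight>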
The GPS was initially developed assuming use of a numerical least-squares solution method—i.e., before closed-form solutions were found.
Closed-form.
One closed-form solution to the above set of equations was developed by S. Bancroft. Its properties are well known; in particular, proponents claim it is superior in low-GDOP situations, compared to iterative least squares methods.
Bancroft's method is algebraic, as opposed to numerical, and can be used for four or more satellites. When four satellites are used, the key steps are inversion of a 4x4 matrix and solution of a single-variable quadratic equation. Bancroft's method provides one or two solutions for the unknown quantities. When there are two (usually the case), only one is a near-Earth sensible solution.
When a receiver uses more than four satellites for a solution, Bancroft uses the generalized inverse (i.e., the pseudoinverse) to find a solution. A case has been made that iterative methods, such as the Gauss–Newton algorithm approach for solving over-determined non-linear least squares problems, generally provide more accurate solutions.
Leick et al. (2015) states that "Bancroft's (1985) solution is a very early, if not the first, closed-form solution."
Other closed-form solutions were published afterwards, although their adoption in practice is unclear.
Error sources and analysis.
GPS error analysis examines error sources in GPS results and the expected size of those errors. GPS makes corrections for receiver clock errors and other effects, but some residual errors remain uncorrected. Error sources include signal arrival time measurements, numerical calculations, atmospheric effects (ionospheric/tropospheric delays), ephemeris and clock data, multipath signals, and natural and artificial interference. Magnitude of residual errors from these sources depends on geometric dilution of precision. Artificial errors may result from jamming devices and threaten ships and aircraft or from intentional signal degradation through selective availability, which limited accuracy to ≈ , but has been switched off since May 1, 2000.
Regulatory spectrum issues concerning GPS receivers.
In the United States, GPS receivers are regulated under the Federal Communications Commission's (FCC) Part 15 rules. As indicated in the manuals of GPS-enabled devices sold in the United States, as a Part 15 device, it "must accept any interference received, including interference that may cause undesired operation". With respect to GPS devices in particular, the FCC states that GPS receiver manufacturers "must use receivers that reasonably discriminate against reception of signals outside their allocated spectrum". For the last 30 years, GPS receivers have operated next to the Mobile Satellite Service band, and have discriminated against reception of mobile satellite services, such as Inmarsat, without any issue.
The spectrum allocated for GPS L1 use by the FCC is 1559 to 1610 MHz, while the spectrum allocated for satellite-to-ground use owned by LightSquared is the Mobile Satellite Service band. Since 1996, the FCC has authorized licensed use of the spectrum neighboring the GPS band of 1525 to 1559 MHz to the Virginia company LightSquared. On March 1, 2001, the FCC received an application from LightSquared's predecessor, Motient Services, to use their allocated frequencies for an integrated satellite-terrestrial service. In 2002, the U.S. GPS Industry Council came to an out-of-band-emissions (OOBE) agreement with LightSquared to prevent LightSquared's ground-based stations from emitting transmissions into the neighboring GPS band of 1559 to 1610 MHz. In 2004, the FCC adopted the OOBE agreement in its authorization for LightSquared to deploy a ground-based network ancillary to their satellite system – known as the Ancillary Tower Components (ATCs) – "We will authorize MSS ATC subject to conditions that ensure that the added terrestrial component remains ancillary to the principal MSS offering. We do not intend, nor will we permit, the terrestrial component to become a stand-alone service." This authorization was reviewed and approved by the U.S. Interdepartment Radio Advisory Committee, which includes the U.S. Department of Agriculture, U.S. Space Force, U.S. Army, U.S. Coast Guard, Federal Aviation Administration, National Aeronautics and Space Administration (NASA), U.S. Department of the Interior, and U.S. Department of Transportation.
In January 2011, the FCC conditionally authorized LightSquared's wholesale customers—such as Best Buy, Sharp, and C Spire—to only purchase an integrated satellite-ground-based service from LightSquared and re-sell that integrated service on devices that are equipped to only use the ground-based signal using LightSquared's allocated frequencies of 1525 to 1559 MHz. In December 2010, GPS receiver manufacturers expressed concerns to the FCC that LightSquared's signal would interfere with GPS receiver devices, although the FCC's policy considerations leading up to the January 2011 order did not pertain to any proposed changes to the maximum number of ground-based LightSquared stations or the maximum power at which these stations could operate. The January 2011 order makes final authorization contingent upon studies of GPS interference issues carried out by a LightSquared-led working group along with GPS industry and Federal agency participation. On February 14, 2012, the FCC initiated proceedings to vacate LightSquared's Conditional Waiver Order based on the NTIA's conclusion that there was currently no practical way to mitigate potential GPS interference.
GPS receiver manufacturers design GPS receivers to use spectrum beyond the GPS-allocated band. In some cases, GPS receivers are designed to use up to 400 MHz of spectrum in either direction of the L1 frequency of 1575.42 MHz, because mobile satellite services in those regions are broadcasting from space to ground, and at power levels commensurate with mobile satellite services. As regulated under the FCC's Part 15 rules, GPS receivers are not warranted protection from signals outside GPS-allocated spectrum. This is why GPS operates next to the Mobile Satellite Service band, and also why the Mobile Satellite Service band operates next to GPS. The symbiotic relationship of spectrum allocation ensures that users of both bands are able to operate cooperatively and freely.
The FCC adopted rules in February 2003 that allowed Mobile Satellite Service (MSS) licensees such as LightSquared to construct a small number of ancillary ground-based towers in their licensed spectrum to "promote more efficient use of terrestrial wireless spectrum". In those 2003 rules, the FCC stated: "As a preliminary matter, terrestrial [Commercial Mobile Radio Service ('CMRS')] and MSS ATC are expected to have different prices, coverage, product acceptance and distribution; therefore, the two services appear, at best, to be imperfect substitutes for one another that would be operating in predominantly different market segments ... MSS ATC is unlikely to compete directly with terrestrial CMRS for the same customer base...". In 2004, the FCC clarified that the ground-based towers would be ancillary, noting: "We will authorize MSS ATC subject to conditions that ensure that the added terrestrial component remains ancillary to the principal MSS offering. We do not intend, nor will we permit, the terrestrial component to become a stand-alone service." In July 2010, the FCC stated that it expected LightSquared to use its authority to offer an integrated satellite-terrestrial service to "provide mobile broadband services similar to those provided by terrestrial mobile providers and enhance competition in the mobile broadband sector". GPS receiver manufacturers have argued that LightSquared's licensed spectrum of 1525 to 1559 MHz was never envisioned as being used for high-speed wireless broadband based on the 2003 and 2004 FCC ATC rulings making clear that the Ancillary Tower Component (ATC) would be, in fact, ancillary to the primary satellite component. To build public support of efforts to continue the 2004 FCC authorization of LightSquared's ancillary terrestrial component vs. a simple ground-based LTE service in the Mobile Satellite Service band, GPS receiver manufacturer Trimble Navigation Ltd. formed the "Coalition To Save Our GPS".
The FCC and LightSquared have each made public commitments to solve the GPS interference issue before the network is allowed to operate. According to Chris Dancy of the Aircraft Owners and Pilots Association, airline pilots with the type of systems that would be affected "may go off course and not even realize it". The problems could also affect the Federal Aviation Administration upgrade to the air traffic control system, United States Defense Department guidance, and local emergency services including 911.
On February 14, 2012, the FCC moved to bar LightSquared's planned national broadband network after being informed by the National Telecommunications and Information Administration (NTIA), the federal agency that coordinates spectrum uses for the military and other federal government entities, that "there is no practical way to mitigate potential interference at this time". LightSquared is challenging the FCC's action.
Similar systems.
Following the United States' deployment of GPS, other countries have also developed their own satellite navigation systems. These systems include:
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\tilde{t}_i"
},
{
"math_id": 1,
"text": "t_i = \\tilde{t}_i - b"
},
{
"math_id": 2,
"text": "\\tilde{t}_i - b - s_i"
},
{
"math_id": 3,
"text": "\\left(\\tilde{t}_i - b - s_i\\right) c"
},
{
"math_id": 4,
"text": "d_i = \\left( \\tilde{t}_i - b - s_i \\right)c, \\; i=1,2,\\dots,n"
},
{
"math_id": 5,
"text": "d_i = \\sqrt{(x-x_i)^2 + (y-y_i)^2 + (z-z_i)^2}"
},
{
"math_id": 6,
"text": " p_i = \\left ( \\tilde{t}_i - s_i \\right )c"
},
{
"math_id": 7,
"text": "p_i = d_i + bc, \\;i=1,2,...,n"
},
{
"math_id": 8,
"text": "\\left( \\hat{x},\\hat{y},\\hat{z},\\hat{b} \\right) = \\underset{\\left( x,y,z,b \\right)}{\\arg \\min} \\sum_i \\left( \\sqrt{(x-x_i)^2 + (y-y_i)^2 + (z-z_i)^2} + bc - p_i \\right)^2"
}
] | https://en.wikipedia.org/wiki?curid=11866 |
11866035 | Cottrell equation | Equation in electrochemistry
In electrochemistry, the Cottrell equation describes the change in electric current with respect to time in a controlled potential experiment, such as chronoamperometry. Specifically it describes the current response when the potential is a step function in time. It was derived by Frederick Gardner Cottrell in 1903. For a simple redox event, such as the ferrocene/ferrocenium couple, the current measured depends on the rate at which the analyte diffuses to the electrode. That is, the current is said to be "diffusion controlled". The Cottrell equation describes the case for an electrode that is planar but can also be derived for spherical, cylindrical, and rectangular geometries by using the corresponding Laplace operator and boundary conditions in conjunction with Fick's second law of diffusion.
formula_0
where,
i = current, in units of A
n = number of electrons (to reduce/oxidize one molecule of analyte j, for example)
F = Faraday constant, 96485 C/mol
A = area of the (planar) electrode in cm2
formula_1 = initial concentration of the reducible analyte formula_2 in mol/cm3;
Dj = diffusion coefficient for species j in cm2/s
t = time in s.
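As a numerical illustration of the equation and the symbols listed above (the parameter values are arbitrary but representative, not from any particular experiment), the predicted current can be evaluated directly, and the product i·t1/2 stays constant for a purely diffusion-controlled step:
<syntaxhighlight lang="python">
import numpy as np

# Illustrative values only: a one-electron couple at a 0.01 cm^2 planar electrode.
n = 1                # electrons transferred per molecule of analyte
F = 96485.0          # Faraday constant, C/mol
A = 0.01             # electrode area, cm^2
c0 = 1.0e-6          # bulk concentration, mol/cm^3 (equivalent to 1 mM)
D = 1.0e-5           # diffusion coefficient, cm^2/s

t = np.array([0.1, 0.5, 1.0, 5.0, 10.0])               # time, s
i = n * F * A * c0 * np.sqrt(D) / np.sqrt(np.pi * t)   # Cottrell current, A

for tk, ik in zip(t, i):
    print(f"t = {tk:5.1f} s   i = {ik:.3e} A   i*sqrt(t) = {ik * np.sqrt(tk):.3e}")
</syntaxhighlight>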
Deviations from linearity in the plot of i vs. "t"–1/2 sometimes indicate that the redox event is associated with other processes, such as association of a ligand, dissociation of a ligand, or a change in geometry. Deviations from linearity can be expected at very short time scales due to non-ideality in the potential step. At long time scales, buildup of the diffusion layer causes a shift from a linearly dominated to a radially dominated diffusion regime, which causes another deviation from linearity.
In practice, the Cottrell equation simplifies to formula_3 where k is the collection of constants for a given system (n, F, A, cj0, Dj). | [
{
"math_id": 0,
"text": " i = \\frac {nFAc_{j}^{0}\\sqrt{D_{j}}}{\\sqrt{\\pi t}} "
},
{
"math_id": 1,
"text": " c_j^0 "
},
{
"math_id": 2,
"text": " j "
},
{
"math_id": 3,
"text": "i = kt^{-1/2},"
}
] | https://en.wikipedia.org/wiki?curid=11866035 |
1186804 | Initial condition | Parameter in differential equations and dynamical systems
In mathematics and particularly in dynamic systems, an initial condition, in some contexts called a seed value, is a value of an evolving variable at some point in time designated as the initial time (typically denoted "t" = 0). For a system of order "k" (the number of time lags in discrete time, or the order of the largest derivative in continuous time) and dimension "n" (that is, with "n" different evolving variables, which together can be denoted by an "n"-dimensional coordinate vector), generally "nk" initial conditions are needed in order to trace the system's variables forward through time.
In both differential equations in continuous time and difference equations in discrete time, initial conditions affect the value of the dynamic variables (state variables) at any future time. In continuous time, the problem of finding a closed form solution for the state variables as a function of time and of the initial conditions is called the initial value problem. A corresponding problem exists for discrete time situations. While a closed form solution is not always possible to obtain, future values of a discrete time system can be found by iterating forward one time period per iteration, though rounding error may make this impractical over long horizons.
Linear system.
Discrete time.
A linear matrix difference equation of the homogeneous (having no constant term) form formula_0 has closed form solution formula_1 predicated on the vector formula_2 of initial conditions on the individual variables that are stacked into the vector; formula_2 is called the vector of initial conditions or simply the initial condition, and contains "nk" pieces of information, "n" being the dimension of the vector "X" and "k" = 1 being the number of time lags in the system. The initial conditions in this linear system do not affect the qualitative nature of the future behavior of the state variable "X"; that behavior is stable or unstable based on the eigenvalues of the matrix "A" but not based on the initial conditions.
Alternatively, a dynamic process in a single variable "x" having multiple time lags is
formula_3
Here the dimension is "n" = 1 and the order is "k", so the necessary number of initial conditions to trace the system through time, either iteratively or via closed form solution, is "nk" = "k". Again the initial conditions do not affect the qualitative nature of the variable's long-term evolution. The solution of this equation is found by using its characteristic equation formula_4 to obtain the latter's "k" solutions, which are the characteristic values formula_5 for use in the solution equation
formula_6
Here the constants formula_7 are found by solving a system of "k" different equations based on this equation, each using one of "k" different values of "t" for which the specific initial condition formula_8 is known.
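A short sketch of this procedure for a familiar special case (the Fibonacci recurrence, chosen only as an example): the characteristic roots are computed numerically, the "k" constants are obtained from the "k" initial conditions, and the closed form is checked against direct iteration.
<syntaxhighlight lang="python">
import numpy as np

# Example: x_t = a1*x_{t-1} + a2*x_{t-2} with a1 = a2 = 1 (the Fibonacci
# recurrence), so the order is k = 2 and k initial conditions are needed.
a = [1.0, 1.0]            # coefficients a1, ..., ak
x_init = [0.0, 1.0]       # initial conditions x_0, ..., x_{k-1}
k = len(a)

# Characteristic equation: lambda^k - a1*lambda^(k-1) - ... - ak = 0.
lam = np.roots([1.0] + [-ai for ai in a])

# Constants c_i from x_t = sum_i c_i * lam_i^t at t = 0, ..., k-1.
M = np.array([[r**t for r in lam] for t in range(k)])
c = np.linalg.solve(M, x_init)

# Closed-form solution versus direct iteration of the recurrence.
x = list(x_init)
for t in range(k, 10):
    x.append(sum(a[j] * x[t - 1 - j] for j in range(k)))
closed = [float(np.real((c * lam**t).sum())) for t in range(10)]

print("iterated:   ", [round(v, 6) for v in x])
print("closed form:", [round(v, 6) for v in closed])
</syntaxhighlight>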
Continuous time.
A differential equation system of the first order with "n" variables stacked in a vector "X" is
formula_9
Its behavior through time can be traced with a closed form solution conditional on an initial condition vector formula_2. The number of required initial pieces of information is the dimension "n" of the system times the order "k" = 1 of the system, or "n". The initial conditions do not affect the qualitative behavior (stable or unstable) of the system.
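A minimal sketch of the closed-form solution in this first-order case, realized with the matrix exponential (the matrix "A" below is an arbitrary stable example chosen for illustration):
<syntaxhighlight lang="python">
import numpy as np
from scipy.linalg import expm

# A hypothetical 2x2 system dX/dt = AX; the eigenvalues of A are -1 +/- 2i,
# so every initial condition spirals into the origin.
A = np.array([[-1.0, -2.0],
              [ 2.0, -1.0]])
X0 = np.array([1.0, 0.0])            # the n = 2 initial conditions

for t in [0.0, 0.5, 1.0, 2.0]:
    Xt = expm(A * t) @ X0            # closed-form solution X(t) = e^(At) X0
    print(f"t = {t:3.1f}   X(t) = {Xt}")

# The qualitative (stable/unstable) behavior is set by the eigenvalues of A,
# not by the initial condition X0.
print("eigenvalues of A:", np.linalg.eigvals(A))
</syntaxhighlight>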
A single "k"th order linear equation in a single variable "x" is
formula_10
Here the number of initial conditions necessary for obtaining a closed form solution is the dimension "n" = 1 times the order "k", or simply "k". In this case the "k" initial pieces of information will typically not be different values of the variable "x" at different points in time, but rather the values of "x" and its first "k" – 1 derivatives, all at some point in time such as time zero. The initial conditions do not affect the qualitative nature of the system's behavior. The characteristic equation of this dynamic equation is formula_11 whose solutions are the characteristic values formula_12 these are used in the solution equation
formula_13
This equation and its first "k" – 1 derivatives form a system of "k" equations that can be solved for the "k" parameters formula_14 given the known initial conditions on "x" and its "k" – 1 derivatives' values at some time "t".
Nonlinear systems.
Nonlinear systems can exhibit a substantially richer variety of behavior than linear systems can. In particular, the initial conditions can affect whether the system diverges to infinity or whether it converges to one or another attractor of the system. Each attractor, a (possibly disconnected) region of values that some dynamic paths approach but never leave, has a (possibly disconnected) basin of attraction such that state variables with initial conditions in that basin (and nowhere else) will evolve toward that attractor. Even nearby initial conditions could be in basins of attraction of different attractors (see for example Newton's method#Basins of attraction).
Moreover, in those nonlinear systems showing chaotic behavior, the evolution of the variables exhibits sensitive dependence on initial conditions: the iterated values of any two very nearby points on the same strange attractor, while each remaining on the attractor, will diverge from each other over time. Thus even on a single attractor the precise values of the initial conditions make a substantial difference for the future positions of the iterates. This feature makes accurate simulation of future values difficult, and impossible over long horizons, because stating the initial conditions with exact precision is seldom possible and because rounding error is inevitable after even only a few iterations from an exact initial condition.
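A small illustration of this sensitivity using the logistic map x → 4x(1 − x), a standard chaotic example (the starting values are arbitrary): two trajectories that begin 10−10 apart become completely decorrelated within a few dozen iterations.
<syntaxhighlight lang="python">
# Two trajectories of the chaotic logistic map x -> 4x(1-x), started 1e-10 apart.
x, y = 0.2, 0.2 + 1e-10
for step in range(1, 61):
    x, y = 4.0 * x * (1.0 - x), 4.0 * y * (1.0 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x = {x:.6f}  y = {y:.6f}  |x - y| = {abs(x - y):.2e}")
</syntaxhighlight>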
Empirical laws and initial conditions.
<templatestyles src="Template:Blockquote/styles.css" />Every empirical law has the disquieting quality that one does not know its limitations. We have seen that there are regularities in the events in the world around us which can be formulated in terms of mathematical concepts with an uncanny accuracy. There are, on the other hand, aspects of the world concerning which we do not believe in the existence of any accurate regularities. We call these initial conditions.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X_{t+1}=AX_t"
},
{
"math_id": 1,
"text": "X_t=A^tX_0"
},
{
"math_id": 2,
"text": "X_0"
},
{
"math_id": 3,
"text": "x_t=a_1x_{t-1} +a_2x_{t-2}+\\cdots +a_kx_{t-k}."
},
{
"math_id": 4,
"text": "\\lambda^k-a_1\\lambda^{k-1} -a_2\\lambda^{k-2}-\\cdots -a_{k-1}\\lambda-a_k=0"
},
{
"math_id": 5,
"text": "\\lambda_1, \\dots , \\lambda_k,"
},
{
"math_id": 6,
"text": "x_t=c_1\\lambda _1^t+\\cdots + c_k\\lambda _k^t."
},
{
"math_id": 7,
"text": "c_1, \\dots , c_k"
},
{
"math_id": 8,
"text": "x_t"
},
{
"math_id": 9,
"text": "\\frac{dX}{dt}=AX."
},
{
"math_id": 10,
"text": "\\frac{d^{k}x}{dt^k}+a_{k-1}\\frac{d^{k-1}x}{dt^{k-1}}+\\cdots +a_1\\frac{dx}{dt} +a_0x=0."
},
{
"math_id": 11,
"text": "\\lambda^k+a_{k-1}\\lambda^{k-1}+\\cdots +a_1\\lambda +a_0=0,"
},
{
"math_id": 12,
"text": "\\lambda_1,\\dots , \\lambda_k;"
},
{
"math_id": 13,
"text": "x(t)=c_1e^{\\lambda_1t}+\\cdots + c_ke^{\\lambda_kt}."
},
{
"math_id": 14,
"text": "c_1, \\dots , c_k,"
}
] | https://en.wikipedia.org/wiki?curid=1186804 |
1186886 | Linear system of divisors | In algebraic geometry, a linear system of divisors is an algebraic generalization of the geometric notion of a family of curves; the dimension of the linear system corresponds to the number of parameters of the family.
These arose first in the form of a "linear system" of algebraic curves in the projective plane. It assumed a more general form, through gradual generalisation, so that one could speak of linear equivalence of divisors "D" on a general scheme or even a ringed space formula_0.
Linear systems of dimension 1, 2, or 3 are called a pencil, a net, or a web, respectively.
A map determined by a linear system is sometimes called the Kodaira map.
Definitions.
Given a general variety formula_1, two divisors formula_2 are linearly equivalent if
formula_3
for some non-zero rational function formula_4 on formula_1, or in other words a non-zero element formula_4 of the function field formula_5. Here formula_6 denotes the divisor of zeroes and poles of the function formula_4.
Note that if formula_1 has singular points, the notion of 'divisor' is inherently ambiguous (Cartier divisors, Weil divisors: see divisor (algebraic geometry)). The definition in that case is usually said with greater care (using invertible sheaves or holomorphic line bundles); see below.
A complete linear system on formula_1 is defined as the set of all effective divisors linearly equivalent to some given divisor formula_7. It is denoted formula_8. Let formula_9 be the line bundle associated to formula_10. In the case that formula_1 is a nonsingular projective variety, the set formula_8 is in natural bijection with formula_11 by associating the element formula_12 of formula_8 to the set of non-zero multiples of formula_4 (this is well defined since two non-zero rational functions have the same divisor if and only if they are non-zero multiples of each other). A complete linear system formula_8 is therefore a projective space.
A linear system formula_13 is then a projective subspace of a complete linear system, so it corresponds to a vector subspace "W" of formula_14 The dimension of the linear system formula_13 is its dimension as a projective space. Hence formula_15.
Linear systems can also be introduced by means of the line bundle or invertible sheaf language. In those terms, divisors formula_10 (Cartier divisors, to be precise) correspond to line bundles, and linear equivalence of two divisors means that the corresponding line bundles are isomorphic.
Examples.
Linear equivalence.
Consider the line bundle formula_16 on formula_17 whose sections formula_18 define quadric surfaces. The associated divisor formula_19 is linearly equivalent to any other divisor defined by the vanishing locus of some formula_20, using the rational function formula_21 (Proposition 7.2). For example, the divisor formula_10 associated to the vanishing locus of formula_22 is linearly equivalent to the divisor formula_23 associated to the vanishing locus of formula_24. Then, there is the equivalence of divisors formula_25
Linear systems on curves.
One of the important complete linear systems on an algebraic curve formula_26 of genus formula_27 is given by the complete linear system associated with the canonical divisor formula_28, denoted formula_29. This definition follows from proposition II.7.7 of Hartshorne since every effective divisor in the linear system comes from the zeros of some section of formula_30.
Hyperelliptic curves.
One application of linear systems is in the classification of algebraic curves. A hyperelliptic curve is a curve formula_26 with a degree formula_31 morphism formula_32. For the case formula_33 all curves are hyperelliptic: the Riemann–Roch theorem then gives that the degree of formula_34 is formula_35 and that formula_36, hence there is a degree formula_31 map to formula_37.
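Spelling out the Riemann–Roch step just cited (a short worked version, writing ℓ(D) for the dimension of the space of sections and taking D to be the canonical divisor on a curve of genus g = 2):
<syntaxhighlight lang="latex">
\[
\ell(K_C) - \ell(K_C - K_C) = \deg K_C + 1 - g
\;\Longrightarrow\;
\ell(K_C) - 1 = (2g - 2) + 1 - g ,
\]
\[
\text{so for } g = 2:\qquad \ell(K_C) = g = 2, \qquad \deg K_C = 2g - 2 = 2 ,
\]
% hence the canonical system |K_C| is a pencil of degree 2 and gives
% the 2:1 map  C -> P^1 = P(H^0(C, \omega_C)).
</syntaxhighlight>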
gdr.
A formula_38 is a linear system formula_13 on a curve formula_26 which is of degree formula_39 and dimension formula_40. For example, hyperelliptic curves have a formula_41 which is induced by the formula_42-map formula_43. In fact, hyperelliptic curves have a unique formula_41, by proposition 5.3. Another closely related set of examples consists of curves with a formula_44; these are called trigonal curves. In fact, any curve has a formula_45 for formula_46.
Linear systems of hypersurfaces in a projective space.
Consider the line bundle formula_47 over formula_48. If we take global sections formula_49, then we can take its projectivization formula_50. This is isomorphic to formula_51 where
formula_52
Then, using any embedding formula_53 we can construct a linear system of dimension formula_54.
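As a worked instance of this count (using the standard coordinates on the projective plane, n = 2): for conics, d = 2 and
<syntaxhighlight lang="latex">
\[
N = \binom{n+d}{n} - 1 = \binom{4}{2} - 1 = 5 ,
\]
% corresponding to the six degree-2 monomials in
\[
a_0 x^2 + a_1 y^2 + a_2 z^2 + a_3 xy + a_4 xz + a_5 yz = 0 ,
\]
% so the plane conics form a complete linear system isomorphic to P^5.
</syntaxhighlight>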
Characteristic linear system of a family of curves.
The characteristic linear system of a family of curves on an algebraic surface "Y" for a curve "C" in the family is a linear system formed by the curves in the family that are infinitely near "C".
In modern terms, it is a subsystem of the linear system associated to the normal bundle to formula_55. Note that a characteristic system need not be complete; in fact, the question of completeness was studied extensively by the Italian school without a satisfactory conclusion; nowadays, the Kodaira–Spencer theory can be used to answer the question of completeness.
Other examples.
The Cayley–Bacharach theorem is a property of a pencil of cubics, which states that the base locus satisfies an "8 implies 9" property: any cubic containing 8 of the points necessarily contains the 9th.
Linear systems in birational geometry.
In general linear systems became a basic tool of birational geometry as practised by the Italian school of algebraic geometry. The technical demands became quite stringent; later developments clarified a number of issues. The computation of the relevant dimensions — the Riemann–Roch problem as it can be called — can be better phrased in terms of homological algebra. The effect of working on varieties with singular points is to show up a difference between Weil divisors (in the free abelian group generated by codimension-one subvarieties), and Cartier divisors coming from sections of invertible sheaves.
The Italian school liked to reduce the geometry on an algebraic surface to that of linear systems cut out by surfaces in three-space; Zariski wrote his celebrated book "Algebraic Surfaces" to try to pull together the methods, involving "linear systems with fixed base points". There was a controversy, one of the final issues in the conflict between 'old' and 'new' points of view in algebraic geometry, over Henri Poincaré's characteristic linear system of an algebraic family of curves on an algebraic surface.
Base locus.
The base locus of a linear system of divisors on a variety refers to the subvariety of points 'common' to all divisors in the linear system. Geometrically, this corresponds to the common intersection of the varieties. Linear systems may or may not have a base locus – for example, the pencil of affine lines formula_56 has no common intersection, but given two (nondegenerate) conics in the complex projective plane, they intersect in four points (counting with multiplicity) and thus the pencil they define has these points as base locus.
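A concrete instance of the second case (the coordinates and equations are chosen only for illustration): the pencil spanned by the two plane conics x² − yz and y² − xz over the complex numbers has base locus
<syntaxhighlight lang="latex">
\[
\{\, x^2 - yz = 0 \,\} \cap \{\, y^2 - xz = 0 \,\}
  = \{\, [0:0:1],\ [1:1:1],\ [\omega:\omega^2:1],\ [\omega^2:\omega:1] \,\},
\qquad \omega = e^{2\pi i/3},
\]
% four points, as predicted by Bezout's theorem for two conics, and every
% member  s(x^2 - yz) + t(y^2 - xz)  of the pencil passes through all four.
</syntaxhighlight>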
More precisely, suppose that formula_8 is a complete linear system of divisors on some variety formula_1. Consider the intersection
formula_57
where formula_58 denotes the support of a divisor, and the intersection is taken over all effective divisors formula_59 in the linear system. This is the base locus of formula_8 (as a set, at least: there may be more subtle scheme-theoretic considerations as to what the structure sheaf of formula_60 should be).
One application of the notion of base locus is to nefness of a Cartier divisor class (i.e. complete linear system). Suppose formula_8 is such a class on a variety formula_1, and formula_26 an irreducible curve on formula_1. If formula_26 is not contained in the base locus of formula_8, then there exists some divisor formula_61 in the class which does not contain formula_26, and so intersects it properly. Basic facts from intersection theory then tell us that we must have formula_62. The conclusion is that to check nefness of a divisor class, it suffices to compute the intersection number with curves contained in the base locus of the class. So, roughly speaking, the 'smaller' the base locus, the 'more likely' it is that the class is nef.
In the modern formulation of algebraic geometry, a complete linear system formula_8 of (Cartier) divisors on a variety formula_1 is viewed as a line bundle formula_63 on formula_1. From this viewpoint, the base locus formula_64 is the set of common zeroes of all sections of formula_63. A simple consequence is that the bundle is globally generated if and only if the base locus is empty.
The notion of the base locus still makes sense for a non-complete linear system as well: the base locus of it is still the intersection of the supports of all the effective divisors in the system.
Example.
Consider the Lefschetz pencil formula_65 given by two generic sections formula_66, so that formula_67 is given by the scheme formula_68 This has an associated linear system of divisors, since the vanishing locus of each polynomial formula_69, for a fixed formula_70, is a divisor in formula_48. The base locus of this system of divisors is the scheme given by the common vanishing locus of formula_71, so formula_72
A map determined by a linear system.
Each linear system on an algebraic variety determines a morphism from the complement of the base locus to a projective space of dimension of the system, as follows. (In a sense, the converse is also true; see the section below)
Let "L" be a line bundle on an algebraic variety "X" and formula_73 a finite-dimensional vector subspace. For the sake of clarity, we first consider the case when "V" is base-point-free; in other words, the natural map formula_74 is surjective (here, "k" = the base field). Or equivalently, formula_75 is surjective. Hence, writing formula_76 for the trivial vector bundle and passing the surjection to the relative Proj, there is a closed immersion:
formula_77
where formula_78 on the right expresses the invariance of the projective bundle under a twist by a line bundle. Following "i" by a projection, one obtains the map:
formula_79
When the base locus of "V" is not empty, the above discussion still goes through with formula_80 in the direct sum replaced by an ideal sheaf defining the base locus and "X" replaced by the blow-up formula_81 of it along the (scheme-theoretic) base locus "B". Precisely, as above, there is a surjection formula_82, where formula_83 is the ideal sheaf of "B"; this gives rise to
formula_84
Since formula_85 an open subset of formula_81, one obtains the map:
formula_86
Finally, when a basis of "V" is chosen, the above discussion becomes more down-to-earth (and that is the style used in Hartshorne, Algebraic Geometry).
Linear system determined by a map to a projective space.
Each morphism from an algebraic variety to a projective space determines a base-point-free linear system on the variety; because of this, a base-point-free linear system and a map to a projective space are often used interchangeably.
For a closed immersion formula_87 of algebraic varieties there is a pullback of a linear system formula_88 on formula_1 to formula_89, defined as formula_90 (page 158).
O(1) on a projective variety.
A projective variety formula_1 embedded in formula_91 has a natural linear system determining a map to projective space from formula_92. This sends a point formula_93 to its corresponding point formula_94.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "(X, \\mathcal{O}_X)"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "D,E \\in \\text{Div}(X)"
},
{
"math_id": 3,
"text": "E = D + (f)\\ "
},
{
"math_id": 4,
"text": "f"
},
{
"math_id": 5,
"text": "k(X)"
},
{
"math_id": 6,
"text": "(f)"
},
{
"math_id": 7,
"text": "D \\in \\text{Div}(X)"
},
{
"math_id": 8,
"text": "|D|"
},
{
"math_id": 9,
"text": "\\mathcal{L}"
},
{
"math_id": 10,
"text": "D"
},
{
"math_id": 11,
"text": " (\\Gamma(X,\\mathcal{L}) \\smallsetminus \\{0\\})/k^\\ast, "
},
{
"math_id": 12,
"text": "E = D + (f)"
},
{
"math_id": 13,
"text": " \\mathfrak{d} "
},
{
"math_id": 14,
"text": " \\Gamma(X,\\mathcal{L}). "
},
{
"math_id": 15,
"text": " \\dim \\mathfrak{d} = \\dim W - 1 "
},
{
"math_id": 16,
"text": "\\mathcal{O}(2)"
},
{
"math_id": 17,
"text": "\\mathbb{P}^3"
},
{
"math_id": 18,
"text": "s \\in \\Gamma(\\mathbb{P}^3,\\mathcal{O}(2))"
},
{
"math_id": 19,
"text": "D_s = Z(s)"
},
{
"math_id": 20,
"text": "t \\in \\Gamma(\\mathbb{P}^3,\\mathcal{O}(2)) "
},
{
"math_id": 21,
"text": "\\left(t/s\\right)"
},
{
"math_id": 22,
"text": "x^2 + y^2 + z^2 + w^2"
},
{
"math_id": 23,
"text": "E"
},
{
"math_id": 24,
"text": "xy"
},
{
"math_id": 25,
"text": "D = E + \\left( \\frac{x^2 + y^2 + z^2 + w^2}{xy} \\right)"
},
{
"math_id": 26,
"text": "C"
},
{
"math_id": 27,
"text": "g"
},
{
"math_id": 28,
"text": "K"
},
{
"math_id": 29,
"text": "|K| = \\mathbb{P}(H^0(C,\\omega_C))"
},
{
"math_id": 30,
"text": "\\omega_C"
},
{
"math_id": 31,
"text": "2"
},
{
"math_id": 32,
"text": "f:C \\to \\mathbb{P}^1"
},
{
"math_id": 33,
"text": "g=2"
},
{
"math_id": 34,
"text": "K_C"
},
{
"math_id": 35,
"text": "2g - 2 = 2"
},
{
"math_id": 36,
"text": "h^0(K_C) = 2"
},
{
"math_id": 37,
"text": "\\mathbb{P}^1 = \\mathbb{P}(H^0(C,\\omega_C))"
},
{
"math_id": 38,
"text": "g^r_d"
},
{
"math_id": 39,
"text": "d"
},
{
"math_id": 40,
"text": "r"
},
{
"math_id": 41,
"text": "g^1_2"
},
{
"math_id": 42,
"text": "2:1"
},
{
"math_id": 43,
"text": "C \\to \\mathbb P^1"
},
{
"math_id": 44,
"text": "g_1^3"
},
{
"math_id": 45,
"text": "g^d_1"
},
{
"math_id": 46,
"text": "d \\geq (1/2)g + 1"
},
{
"math_id": 47,
"text": "\\mathcal{O}(d)"
},
{
"math_id": 48,
"text": "\\mathbb{P}^n"
},
{
"math_id": 49,
"text": "V = \\Gamma(\\mathcal{O}(d))"
},
{
"math_id": 50,
"text": "\\mathbb{P}(V)"
},
{
"math_id": 51,
"text": "\\mathbb{P}^N"
},
{
"math_id": 52,
"text": "N = \\binom{n+d}{n} - 1"
},
{
"math_id": 53,
"text": "\\mathbb{P}^k \\to \\mathbb{P}^N"
},
{
"math_id": 54,
"text": "k"
},
{
"math_id": 55,
"text": "C \\hookrightarrow Y"
},
{
"math_id": 56,
"text": "x=a"
},
{
"math_id": 57,
"text": "\\operatorname{Bl}(|D|) := \\bigcap_{D_\\text{eff} \\in |D|} \\operatorname{Supp} D_\\text{eff} \\ "
},
{
"math_id": 58,
"text": "\\operatorname{Supp}"
},
{
"math_id": 59,
"text": "D_\\text{eff}"
},
{
"math_id": 60,
"text": "\\operatorname{Bl}"
},
{
"math_id": 61,
"text": "\\tilde D"
},
{
"math_id": 62,
"text": "|D| \\cdot C \\geq 0"
},
{
"math_id": 63,
"text": "\\mathcal{O}(D)"
},
{
"math_id": 64,
"text": "\\operatorname{Bl}(|D|)"
},
{
"math_id": 65,
"text": "p:\\mathfrak{X} \\to \\mathbb{P}^1"
},
{
"math_id": 66,
"text": "f,g \\in \\Gamma(\\mathbb{P}^n,\\mathcal{O}(d))"
},
{
"math_id": 67,
"text": "\\mathfrak{X}"
},
{
"math_id": 68,
"text": "\\mathfrak{X} =\\text{Proj}\\left( \\frac{k[s,t][x_0,\\ldots,x_n]}{(sf + tg)} \\right)"
},
{
"math_id": 69,
"text": "s_0f + t_0g"
},
{
"math_id": 70,
"text": "[s_0:t_0] \\in \\mathbb{P}^1"
},
{
"math_id": 71,
"text": "f,g"
},
{
"math_id": 72,
"text": "\\text{Bl}(\\mathfrak{X}) = \\text{Proj}\\left(\n\\frac{\n k[s,t][x_0,\\ldots,x_n]\n}{\n (f,g)\n}\n\\right)"
},
{
"math_id": 73,
"text": "V \\subset \\Gamma(X, L)"
},
{
"math_id": 74,
"text": "V \\otimes_k \\mathcal{O}_X \\to L"
},
{
"math_id": 75,
"text": "\\operatorname{Sym}((V \\otimes_k \\mathcal{O}_X) \\otimes_{\\mathcal{O}_X} L^{-1}) \\to \\bigoplus_{n=0}^{\\infty} \\mathcal{O}_X"
},
{
"math_id": 76,
"text": "V_X = V \\times X"
},
{
"math_id": 77,
"text": "i: X \\hookrightarrow \\mathbb{P}(V_X^* \\otimes L) \\simeq \\mathbb{P}(V_X^*) = \\mathbb{P}(V^*) \\times X"
},
{
"math_id": 78,
"text": "\\simeq"
},
{
"math_id": 79,
"text": "f: X \\to \\mathbb{P}(V^*)."
},
{
"math_id": 80,
"text": "\\mathcal{O}_X"
},
{
"math_id": 81,
"text": "\\widetilde{X}"
},
{
"math_id": 82,
"text": "\\operatorname{Sym}((V \\otimes_k \\mathcal{O}_X) \\otimes_{\\mathcal{O}_X} L^{-1}) \\to \\bigoplus_{n=0}^{\\infty} \\mathcal{I}^n"
},
{
"math_id": 83,
"text": "\\mathcal{I}"
},
{
"math_id": 84,
"text": "i: \\widetilde{X} \\hookrightarrow \\mathbb{P}(V^*) \\times X."
},
{
"math_id": 85,
"text": "X - B \\simeq"
},
{
"math_id": 86,
"text": "f: X - B \\to \\mathbb{P}(V^*)."
},
{
"math_id": 87,
"text": "f: Y \\hookrightarrow X"
},
{
"math_id": 88,
"text": "\\mathfrak{d}"
},
{
"math_id": 89,
"text": "Y"
},
{
"math_id": 90,
"text": "f^{-1}(\\mathfrak{d}) = \\{ f^{-1}(D) | D \\in \\mathfrak{d} \\}"
},
{
"math_id": 91,
"text": "\\mathbb{P}^r"
},
{
"math_id": 92,
"text": "\\mathcal{O}_X(1) = \\mathcal{O}_X \\otimes_{\\mathcal{O}_{\\mathbb{P}^r}} \\mathcal{O}_{\\mathbb{P}^r}(1)"
},
{
"math_id": 93,
"text": "x \\in X"
},
{
"math_id": 94,
"text": "[x_0:\\cdots:x_r] \\in \\mathbb{P}^r "
}
] | https://en.wikipedia.org/wiki?curid=1186886 |
1186896 | Ample line bundle | In mathematics, a distinctive feature of algebraic geometry is that some line bundles on a projective variety can be considered "positive", while others are "negative" (or a mixture of the two). The most important notion of positivity is that of an ample line bundle, although there are several related classes of line bundles. Roughly speaking, positivity properties of a line bundle are related to having many global sections. Understanding the ample line bundles on a given variety "X" amounts to understanding the different ways of mapping "X" into projective space. In view of the correspondence between line bundles and divisors (built from codimension-1 subvarieties), there is an equivalent notion of an ample divisor.
In more detail, a line bundle is called basepoint-free if it has enough sections to give a morphism to projective space. A line bundle is semi-ample if some positive power of it is basepoint-free; semi-ampleness is a kind of "nonnegativity". More strongly, a line bundle on a complete variety "X" is very ample if it has enough sections to give a closed immersion (or "embedding") of "X" into projective space. A line bundle is ample if some positive power is very ample.
An ample line bundle on a projective variety "X" has positive degree on every curve in "X". The converse is not quite true, but there are corrected versions of the converse, the Nakai–Moishezon and Kleiman criteria for ampleness.
Introduction.
Pullback of a line bundle and hyperplane divisors.
Given a morphism formula_0 of schemes, a vector bundle formula_1 (or more generally a coherent sheaf on "Y") has a pullback to "X", formula_2 where the projection formula_3 is the projection on the first coordinate (see Sheaf of modules#Operations). The pullback of a vector bundle is a vector bundle of the same rank. In particular, the pullback of a line bundle is a line bundle. (Briefly, the fiber of formula_4 at a point "x" in "X" is the fiber of "E" at "f"("x").)
The notions described in this article are related to this construction in the case of a morphism to projective space
formula_5
with "E" = "O"(1) the line bundle on projective space whose global sections are the homogeneous polynomials of degree 1 (that is, linear functions) in variables formula_6. The line bundle "O"(1) can also be described as the line bundle associated to a hyperplane in formula_7 (because the zero set of a section of "O"(1) is a hyperplane). If "f" is a closed immersion, for example, it follows that the pullback formula_8 is the line bundle on "X" associated to a hyperplane section (the intersection of "X" with a hyperplane in formula_9).
Basepoint-free line bundles.
Let "X" be a scheme over a field "k" (for example, an algebraic variety) with a line bundle "L". (A line bundle may also be called an invertible sheaf.) Let formula_10 be elements of the "k"-vector space formula_11 of global sections of "L". The zero set of each section is a closed subset of "X"; let "U" be the open subset of points at which at least one of formula_12 is not zero. Then these sections define a morphism
formula_13
In more detail: for each point "x" of "U", the fiber of "L" over "x" is a 1-dimensional vector space over the residue field "k"("x"). Choosing a basis for this fiber makes formula_14 into a sequence of "n"+1 numbers, not all zero, and hence a point in projective space. Changing the choice of basis scales all the numbers by the same nonzero constant, and so the point in projective space is independent of the choice.
Moreover, this morphism has the property that the restriction of "L" to "U" is isomorphic to the pullback formula_8.
The base locus of a line bundle "L" on a scheme "X" is the intersection of the zero sets of all global sections of "L". A line bundle "L" is called basepoint-free if its base locus is empty. That is, for every point "x" of "X" there is a global section of "L" which is nonzero at "x". If "X" is proper over a field "k", then the vector space formula_11 of global sections has finite dimension; the dimension is called formula_15. So a basepoint-free line bundle "L" determines a morphism formula_16 over "k", where formula_17, given by choosing a basis for formula_11. Without making a choice, this can be described as the morphism
formula_18
from "X" to the space of hyperplanes in formula_11, canonically associated to the basepoint-free line bundle "L". This morphism has the property that "L" is the pullback formula_8.
Conversely, for any morphism "f" from a scheme "X" to projective space formula_9 over "k", the pullback line bundle formula_8 is basepoint-free. Indeed, "O"(1) is basepoint-free on formula_9, because for every point "y" in formula_9 there is a hyperplane not containing "y". Therefore, for every point "x" in "X", there is a section "s" of "O"(1) over formula_9 that is not zero at "f"("x"), and the pullback of "s" is a global section of formula_8 that is not zero at "x". In short, basepoint-free line bundles are exactly those that can be expressed as the pullback of "O"(1) by some morphism to projective space.
Nef, globally generated, semi-ample.
The degree of a line bundle "L" on a proper curve "C" over "k" is defined as the degree of the divisor ("s") of any nonzero rational section "s" of "L". The coefficients of this divisor are positive at points where "s" vanishes and negative where "s" has a pole. Therefore, any line bundle "L" on a curve "C" such that formula_19 has nonnegative degree (because sections of "L" over "C", as opposed to rational sections, have no poles). In particular, every basepoint-free line bundle on a curve has nonnegative degree. As a result, a basepoint-free line bundle "L" on any proper scheme "X" over a field is nef, meaning that "L" has nonnegative degree on every (irreducible) curve in "X".
More generally, a sheaf "F" of formula_20-modules on a scheme "X" is said to be globally generated if there is a set "I" of global sections formula_21 such that the corresponding morphism
formula_22
of sheaves is surjective. A line bundle is globally generated if and only if it is basepoint-free.
For example, every quasi-coherent sheaf on an affine scheme is globally generated. Analogously, in complex geometry, Cartan's theorem A says that every coherent sheaf on a Stein manifold is globally generated.
A line bundle "L" on a proper scheme over a field is semi-ample if there is a positive integer "r" such that the tensor power formula_23 is basepoint-free. A semi-ample line bundle is nef (by the corresponding fact for basepoint-free line bundles).
Very ample line bundles.
A line bundle "L" on a proper scheme "X" over a field "k" is said to be very ample if it is basepoint-free and the associated morphism
formula_24
is a closed immersion. Here formula_17. Equivalently, "L" is very ample if "X" can be embedded into projective space of some dimension over "k" in such a way that "L" is the restriction of the line bundle "O"(1) to "X". The latter definition is used to define very ampleness for a line bundle on a proper scheme over any commutative ring.
The name "very ample" was introduced by Alexander Grothendieck in 1961. Various names had been used earlier in the context of linear systems of divisors.
For a very ample line bundle "L" on a proper scheme "X" over a field with associated morphism "f", the degree of "L" on a curve "C" in "X" is the degree of "f"("C") as a curve in formula_9. So "L" has positive degree on every curve in "X" (because every subvariety of projective space has positive degree).
Definitions.
Ample invertible sheaves on quasi-compact schemes.
Ample line bundles are used most often on proper schemes, but they can be defined in much wider generality.
Let "X" be a scheme, and let formula_25 be an invertible sheaf on "X". For each formula_26, let formula_27 denote the ideal sheaf of the reduced subscheme supported only at "x". For formula_28, define
formula_29
Equivalently, if formula_30 denotes the residue field at "x" (considered as a skyscraper sheaf supported at "x"), then
formula_31
where formula_32 is the image of "s" in the tensor product.
Fix formula_28. For every "s", the restriction formula_33 is a free formula_34-module trivialized by the restriction of "s", meaning the multiplication-by-s morphism formula_35 is an isomorphism. The set formula_36 is always open, and the inclusion morphism formula_37 is an affine morphism. Despite this, formula_36 need not be an affine scheme. For example, if formula_38, then formula_39 is open in itself and affine over itself but generally not affine.
Assume "X" is quasi-compact. Then formula_25 is ample if, for every formula_26, there exists an formula_40 and an formula_41 such that formula_42 and formula_36 is an affine scheme. For example, the trivial line bundle formula_34 is ample if and only if "X" is quasi-affine.
In general, it is not true that every formula_36 is affine. For example, if formula_43 for some point "O", and if formula_25 is the restriction of formula_44 to "X", then formula_25 and formula_44 have the same global sections, and the non-vanishing locus of a section of formula_25 is affine if and only if the corresponding section of formula_44 contains "O".
It is necessary to allow powers of formula_25 in the definition. In fact, for every "N", it is possible that formula_36 is non-affine for every formula_41 with formula_45. Indeed, suppose "Z" is a finite set of points in formula_46, formula_47, and formula_48. The vanishing loci of the sections of formula_49 are plane curves of degree "N". By taking "Z" to be a sufficiently large set of points in general position, we may ensure that no plane curve of degree "N" (and hence any lower degree) contains all the points of "Z". In particular their non-vanishing loci are all non-affine.
Define formula_50. Let formula_51 denote the structural morphism. There is a natural isomorphism between formula_34-algebra homomorphisms formula_52 and endomorphisms of the graded ring "S". The identity endomorphism of "S" corresponds to a homomorphism formula_53. Applying the formula_54 functor produces a morphism from an open subscheme of "X", denoted formula_55, to formula_56.
The basic characterization of ample invertible sheaves states that if "X" is a quasi-compact quasi-separated scheme and formula_25 is an invertible sheaf on "X", then the following assertions are equivalent:
On proper schemes.
When "X" is separated and finite type over an affine scheme, an invertible sheaf formula_25 is ample if and only if there exists a positive integer "r" such that the tensor power formula_70 is very ample. In particular, a proper scheme over "R" has an ample line bundle if and only if it is projective over "R". Often, this characterization is taken as the definition of ampleness.
The rest of this article will concentrate on ampleness on proper schemes over a field, as this is the most important case. An ample line bundle on a proper scheme "X" over a field has positive degree on every curve in "X", by the corresponding statement for very ample line bundles.
A Cartier divisor "D" on a proper scheme "X" over a field "k" is said to be ample if the corresponding line bundle "O"("D") is ample. (For example, if "X" is smooth over "k", then a Cartier divisor can be identified with a finite linear combination of closed codimension-1 subvarieties of "X" with integer coefficients.)
Weakening the notion of "very ample" to "ample" gives a flexible concept with a wide variety of different characterizations. A first point is that tensoring high powers of an ample line bundle with any coherent sheaf whatsoever gives a sheaf with many global sections. More precisely, a line bundle "L" on a proper scheme "X" over a field (or more generally over a Noetherian ring) is ample if and only if for every coherent sheaf "F" on "X", there is an integer "s" such that the sheaf formula_71 is globally generated for all formula_72. Here "s" may depend on "F".
Another characterization of ampleness, known as the Cartan–Serre–Grothendieck theorem, is in terms of coherent sheaf cohomology. Namely, a line bundle "L" on a proper scheme "X" over a field (or more generally over a Noetherian ring) is ample if and only if for every coherent sheaf "F" on "X", there is an integer "s" such that
formula_73
for all formula_74 and all formula_72. In particular, high powers of an ample line bundle kill cohomology in positive degrees. This implication is called the Serre vanishing theorem, proved by Jean-Pierre Serre in his 1955 paper Faisceaux algébriques cohérents.
formula_78
by
formula_79
This is a closed immersion for formula_80, with image a rational normal curve of degree "d" in formula_81. Therefore, "O"("d") is basepoint-free if and only if formula_77, and very ample if and only if formula_80. It follows that "O"("d") is ample if and only if formula_80.
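For concreteness (assuming the morphism referred to above is the standard one defined by the degree-"d" monomials on the projective line), the case "d" = 2 reads
<syntaxhighlight lang="latex">
\[
[x : y] \;\longmapsto\; [x^2 : xy : y^2] \in \mathbf{P}^2 ,
\]
% a closed immersion whose image is the smooth conic  z_0 z_2 = z_1^2,
% i.e. the rational normal curve of degree 2.
</syntaxhighlight>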
Criteria for ampleness of line bundles.
Intersection theory.
To determine whether a given line bundle on a projective variety "X" is ample, the following "numerical criteria" (in terms of intersection numbers) are often the most useful. It is equivalent to ask when a Cartier divisor "D" on "X" is ample, meaning that the associated line bundle "O"("D") is ample. The intersection number formula_88 can be defined as the degree of the line bundle "O"("D") restricted to "C". In the other direction, for a line bundle "L" on a projective variety, the first Chern class formula_89 means the associated Cartier divisor (defined up to linear equivalence), the divisor of any nonzero rational section of "L".
On a smooth projective curve "X" over an algebraically closed field "k", a line bundle "L" is very ample if and only if formula_90 for all "k"-rational points "x","y" in "X". Let "g" be the genus of "X". By the Riemann–Roch theorem, every line bundle of degree at least 2"g" + 1 satisfies this condition and hence is very ample. As a result, a line bundle on a curve is ample if and only if it has positive degree.
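A sketch of why degree at least 2"g" + 1 suffices, assuming the very-ampleness criterion above is the condition that h0("X", "L"(−"x"−"y")) = h0("X", "L") − 2 for all points "x", "y":
<syntaxhighlight lang="latex">
\[
\deg L \ge 2g+1 \;\Longrightarrow\; \deg L(-x-y) \ge 2g-1 > 2g-2 = \deg K_X ,
\]
% so  h^1(X, L) = h^1(X, L(-x-y)) = 0,  and Riemann--Roch gives
\[
h^0(X, L(-x-y)) = (\deg L - 2) + 1 - g = h^0(X, L) - 2
\]
% for every pair of points x, y, which is the required condition.
</syntaxhighlight>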
For example, the canonical bundle formula_91 of a curve "X" has degree 2"g" − 2, and so it is ample if and only if formula_92. The curves with ample canonical bundle form an important class; for example, over the complex numbers, these are the curves with a metric of negative curvature. The canonical bundle is very ample if and only if formula_92 and the curve is not hyperelliptic.
The Nakai–Moishezon criterion (named for Yoshikazu Nakai (1963) and Boris Moishezon (1964)) states that a line bundle "L" on a proper scheme "X" over a field is ample if and only if formula_93 for every (irreducible) closed subvariety "Y" of "X" ("Y" is not allowed to be a point). In terms of divisors, a Cartier divisor "D" is ample if and only if formula_94 for every (nonzero-dimensional) subvariety "Y" of "X". For "X" a curve, this says that a divisor is ample if and only if it has positive degree. For "X" a surface, the criterion says that a divisor "D" is ample if and only if its self-intersection number formula_95 is positive and every curve "C" on "X" has formula_96.
Kleiman's criterion.
To state Kleiman's criterion (1966), let "X" be a projective scheme over a field. Let formula_97 be the real vector space of 1-cycles (real linear combinations of curves in "X") modulo numerical equivalence, meaning that two 1-cycles "A" and "B" are equal in formula_97 if and only if every line bundle has the same degree on "A" and on "B". By the Néron–Severi theorem, the real vector space formula_97 has finite dimension. Kleiman's criterion states that a line bundle "L" on "X" is ample if and only if "L" has positive degree on every nonzero element "C" of the closure of the cone of curves NE("X") in formula_97. (This is slightly stronger than saying that "L" has positive degree on every curve.) Equivalently, a line bundle is ample if and only if its class in the dual vector space formula_98 is in the interior of the nef cone.
Kleiman's criterion fails in general for proper (rather than projective) schemes "X" over a field, although it holds if "X" is smooth or more generally Q-factorial.
A line bundle on a projective variety is called strictly nef if it has positive degree on every curve. Masayoshi Nagata and David Mumford constructed line bundles on smooth projective surfaces that are strictly nef but not ample. This shows that the condition formula_99 cannot be omitted in the Nakai–Moishezon criterion, and it is necessary to use the closure of NE("X") rather than NE("X") in Kleiman's criterion. Every nef line bundle on a surface has formula_100, and Nagata and Mumford's examples have formula_101.
C. S. Seshadri showed that a line bundle "L" on a proper scheme over an algebraically closed field is ample if and only if there is a positive real number ε such that deg("L"|"C") ≥ ε"m"("C") for all (irreducible) curves "C" in "X", where "m"("C") is the maximum of the multiplicities at the points of "C".
Several characterizations of ampleness hold more generally for line bundles on a proper algebraic space over a field "k". In particular, the Nakai-Moishezon criterion is valid in that generality. The Cartan-Serre-Grothendieck criterion holds even more generally, for a proper algebraic space over a Noetherian ring "R". (If a proper algebraic space over "R" has an ample line bundle, then it is in fact a projective scheme over "R".) Kleiman's criterion fails for proper algebraic spaces "X" over a field, even if "X" is smooth.
Openness of ampleness.
On a projective scheme "X" over a field, Kleiman's criterion implies that ampleness is an open condition on the class of an R-divisor (an R-linear combination of Cartier divisors) in formula_98, with its topology based on the topology of the real numbers. (An R-divisor is defined to be ample if it can be written as a positive linear combination of ample Cartier divisors.) An elementary special case is: for an ample divisor "H" and any divisor "E", there is a positive real number "b" such that formula_102 is ample for all real numbers "a" of absolute value less than "b". In terms of divisors with integer coefficients (or line bundles), this means that "nH" + "E" is ample for all sufficiently large positive integers "n".
Ampleness is also an open condition in a quite different sense, when the variety or line bundle is varied in an algebraic family. Namely, let formula_103 be a proper morphism of schemes, and let "L" be a line bundle on "X". Then the set of points "y" in "Y" such that "L" is ample on the fiber formula_104 is open (in the Zariski topology). More strongly, if "L" is ample on one fiber formula_104, then there is an affine open neighborhood "U" of "y" such that "L" is ample on formula_105 over "U".
Kleiman's other characterizations of ampleness.
Kleiman also proved the following characterizations of ampleness, which can be viewed as intermediate steps between the definition of ampleness and numerical criteria. Namely, for a line bundle "L" on a proper scheme "X" over a field, the following are equivalent:
"L" is ample;
for every (irreducible) closed subvariety formula_106 of positive dimension, there is a positive integer "r" and a section formula_107 which is not identically zero but vanishes at some point of "Y";
for every (irreducible) closed subvariety formula_106 of positive dimension, formula_108 as formula_109.
Generalizations.
Ample vector bundles.
Robin Hartshorne defined a vector bundle "F" on a projective scheme "X" over a field to be ample if the line bundle formula_110 on the space formula_111 of hyperplanes in "F" is ample.
Several properties of ample line bundles extend to ample vector bundles. For example, a vector bundle "F" is ample if and only if high symmetric powers of "F" kill the cohomology formula_112 of coherent sheaves for all formula_74. Also, the Chern class formula_113 of an ample vector bundle has positive degree on every "r"-dimensional subvariety of "X", for formula_114.
Big line bundles.
A useful weakening of ampleness, notably in birational geometry, is the notion of a big line bundle. A line bundle "L" on a projective variety "X" of dimension "n" over a field is said to be big if there is a positive real number "a" and a positive integer formula_115 such that formula_116 for all formula_117. This is the maximum possible growth rate for the spaces of sections of powers of "L", in the sense that for every line bundle "L" on "X" there is a positive number "b" with formula_118 for all "j" > 0.
There are several other characterizations of big line bundles. First, a line bundle is big if and only if there is a positive integer "r" such that the rational map from "X" to formula_119 given by the sections of formula_23 is birational onto its image. Also, a line bundle "L" is big if and only if it has a positive tensor power which is the tensor product of an ample line bundle "A" and an effective line bundle "B" (meaning that formula_120). Finally, a line bundle is big if and only if its class in formula_98 is in the interior of the cone of effective divisors.
Bigness can be viewed as a birationally invariant analog of ampleness. For example, if formula_103 is a dominant rational map between smooth projective varieties of the same dimension, then the pullback of a big line bundle on "Y" is big on "X". (At first sight, the pullback is only a line bundle on the open subset of "X" where "f" is a morphism, but this extends uniquely to a line bundle on all of "X".) For ample line bundles, one can only say that the pullback of an ample line bundle by a finite morphism is ample.
Example: Let "X" be the blow-up of the projective plane formula_85 at a point over the complex numbers. Let "H" be the pullback to "X" of a line on formula_85, and let "E" be the exceptional curve of the blow-up formula_121. Then the divisor "H" + "E" is big but not ample (or even nef) on "X", because
formula_122
This negativity also implies that the base locus of "H" + "E" (or of any positive multiple) contains the curve "E". In fact, this base locus is equal to "E".
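These intersection numbers can be checked by a short computation in the basis "H", "E" of divisor classes on the blow-up of formula_85 at a point, using the standard facts that "H"·"H" = 1, "H"·"E" = 0 and "E"·"E" = −1. The Python sketch below only illustrates this arithmetic.
# Intersection form on the blow-up X of P^2 at a point, in the basis (H, E):
# H.H = 1, H.E = 0, E.E = -1 (standard facts about the blow-up of a smooth point).
def intersect(D1, D2):
    (a1, b1), (a2, b2) = D1, D2
    return a1 * a2 - b1 * b2

H, E = (1, 0), (0, 1)
D = (1, 1)  # the divisor H + E

print("(H+E).E =", intersect(D, E))   # -1 < 0, so H + E is not nef, hence not ample
print("(H+E).H =", intersect(D, H))   #  1
print("(H+E)^2 =", intersect(D, D))   #  0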
Relative ampleness.
Given a quasi-compact morphism of schemes formula_123, an invertible sheaf "L" on "X" is said to be ample relative to "f" or "f"-ample if the following equivalent conditions are met:
1. For each open affine subset formula_124, the restriction of "L" to formula_105 is ample in the usual sense.
2. There is an open immersion formula_125 induced by the natural map formula_126.
The condition 2 says (roughly) that "X" can be openly compactified to a projective scheme with formula_127 (not just to a proper scheme).
Notes.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "f\\colon X \\to Y"
},
{
"math_id": 1,
"text": " p \\colon E \\to Y"
},
{
"math_id": 2,
"text": "f^*E = \\{ (x,e) \\in X \\times E,\\; f(x) = p(e) \\}"
},
{
"math_id": 3,
"text": " p' \\colon f^*E \\to X "
},
{
"math_id": 4,
"text": "f^*E"
},
{
"math_id": 5,
"text": "f\\colon X \\to \\mathbb P^n, "
},
{
"math_id": 6,
"text": "x_0,\\ldots,x_n"
},
{
"math_id": 7,
"text": "\\mathbb P^n"
},
{
"math_id": 8,
"text": "f^*O(1)"
},
{
"math_id": 9,
"text": "\\mathbb{P}^n"
},
{
"math_id": 10,
"text": "a_0,...,a_n"
},
{
"math_id": 11,
"text": "H^0(X,L)"
},
{
"math_id": 12,
"text": "a_0,\\ldots,a_n"
},
{
"math_id": 13,
"text": "f\\colon U\\to \\mathbb{P}^{n}_k,\\ x \\mapsto [a_0(x),\\ldots,a_n(x)]."
},
{
"math_id": 14,
"text": "a_0(x),\\ldots,a_n(x)"
},
{
"math_id": 15,
"text": "h^0(X,L)"
},
{
"math_id": 16,
"text": "f\\colon X\\to \\mathbb{P}^n"
},
{
"math_id": 17,
"text": "n=h^0(X,L)-1"
},
{
"math_id": 18,
"text": "f\\colon X\\to \\mathbb{P}(H^0(X,L))"
},
{
"math_id": 19,
"text": "H^0(C,L)\\neq 0"
},
{
"math_id": 20,
"text": "O_X"
},
{
"math_id": 21,
"text": "s_i\\in H^0(X,F)"
},
{
"math_id": 22,
"text": "\\bigoplus_{i\\in I}O_X\\to F"
},
{
"math_id": 23,
"text": "L^{\\otimes r}"
},
{
"math_id": 24,
"text": "f\\colon X\\to\\mathbb{P}^n_k"
},
{
"math_id": 25,
"text": "\\mathcal{L}"
},
{
"math_id": 26,
"text": "x \\in X"
},
{
"math_id": 27,
"text": "\\mathfrak{m}_x"
},
{
"math_id": 28,
"text": "s \\in \\Gamma(X, \\mathcal{L})"
},
{
"math_id": 29,
"text": "X_s = \\{x \\in X \\colon s_x \\not\\in \\mathfrak{m}_x\\mathcal{L}_x\\}."
},
{
"math_id": 30,
"text": "\\kappa(x)"
},
{
"math_id": 31,
"text": "X_s = \\{x \\in X \\colon \\bar s_x \\neq 0 \\in \\kappa(x) \\otimes \\mathcal{L}_x\\},"
},
{
"math_id": 32,
"text": "\\bar s_x"
},
{
"math_id": 33,
"text": "\\mathcal{L}|_{X_s}"
},
{
"math_id": 34,
"text": "\\mathcal{O}_X"
},
{
"math_id": 35,
"text": "\\mathcal{O}_{X_s} \\to \\mathcal{L}|_{X_s}"
},
{
"math_id": 36,
"text": "X_s"
},
{
"math_id": 37,
"text": "X_s \\to X"
},
{
"math_id": 38,
"text": "s = 1 \\in \\Gamma(X, \\mathcal{O}_X)"
},
{
"math_id": 39,
"text": "X_s = X"
},
{
"math_id": 40,
"text": "n \\ge 1"
},
{
"math_id": 41,
"text": "s \\in \\Gamma(X, \\mathcal{L}^{\\otimes n})"
},
{
"math_id": 42,
"text": "x \\in X_s"
},
{
"math_id": 43,
"text": "X = \\mathbf{P}^2 \\setminus \\{O\\}"
},
{
"math_id": 44,
"text": "\\mathcal{O}_{\\mathbf{P}^2}(1)"
},
{
"math_id": 45,
"text": "n \\le N"
},
{
"math_id": 46,
"text": "\\mathbf{P}^2"
},
{
"math_id": 47,
"text": "X = \\mathbf{P}^2 \\setminus Z"
},
{
"math_id": 48,
"text": "\\mathcal{L} = \\mathcal{O}_{\\mathbf{P}^2}(1)|_X"
},
{
"math_id": 49,
"text": "\\mathcal{L}^{\\otimes N}"
},
{
"math_id": 50,
"text": "\\textstyle S = \\bigoplus_{n \\ge 0} \\Gamma(X, \\mathcal{L}^{\\otimes n})"
},
{
"math_id": 51,
"text": "p \\colon X \\to \\operatorname{Spec} \\mathbf{Z}"
},
{
"math_id": 52,
"text": "\\textstyle p^*(\\tilde S) \\to \\bigoplus_{n \\ge 0} \\mathcal{L}^{\\otimes n}"
},
{
"math_id": 53,
"text": "\\varepsilon"
},
{
"math_id": 54,
"text": "\\operatorname{Proj}"
},
{
"math_id": 55,
"text": "G(\\varepsilon)"
},
{
"math_id": 56,
"text": "\\operatorname{Proj} S"
},
{
"math_id": 57,
"text": "n \\ge 0"
},
{
"math_id": 58,
"text": "G(\\varepsilon) = X"
},
{
"math_id": 59,
"text": "G(\\varepsilon) \\to \\operatorname{Proj} S"
},
{
"math_id": 60,
"text": "\\mathcal{F}"
},
{
"math_id": 61,
"text": "\\bigoplus_{n \\ge 0} \\Gamma(X, \\mathcal{F} \\otimes_{\\mathcal{O}_X} \\mathcal{L}^{\\otimes n}) \\otimes_{\\mathbf{Z}} \\mathcal{L}^{\\otimes{-n}} \\to \\mathcal{F}"
},
{
"math_id": 62,
"text": "\\mathcal{J}"
},
{
"math_id": 63,
"text": "\\bigoplus_{n \\ge 0} \\Gamma(X, \\mathcal{J} \\otimes_{\\mathcal{O}_X} \\mathcal{L}^{\\otimes n}) \\otimes_{\\mathbf{Z}} \\mathcal{L}^{\\otimes{-n}} \\to \\mathcal{J}"
},
{
"math_id": 64,
"text": "n_0"
},
{
"math_id": 65,
"text": "n \\ge n_0"
},
{
"math_id": 66,
"text": "\\mathcal{F} \\otimes \\mathcal{L}^{\\otimes n}"
},
{
"math_id": 67,
"text": "n > 0"
},
{
"math_id": 68,
"text": "k > 0"
},
{
"math_id": 69,
"text": "\\mathcal{L}^{\\otimes(-n)} \\otimes \\mathcal{O}_X^k"
},
{
"math_id": 70,
"text": "\\mathcal{L}^{\\otimes r}"
},
{
"math_id": 71,
"text": "F\\otimes L^{\\otimes r}"
},
{
"math_id": 72,
"text": "r\\geq s"
},
{
"math_id": 73,
"text": "H^i(X,F\\otimes L^{\\otimes r})=0"
},
{
"math_id": 74,
"text": "i>0"
},
{
"math_id": 75,
"text": "L=f^*O(1)"
},
{
"math_id": 76,
"text": "\\mathbb{P}^1_{\\C}"
},
{
"math_id": 77,
"text": "d\\geq 0"
},
{
"math_id": 78,
"text": "\\mathbb{P}^1\\to\\mathbb{P}^{d}"
},
{
"math_id": 79,
"text": "[x,y]\\mapsto [x^d,x^{d-1}y,\\ldots,y^d]."
},
{
"math_id": 80,
"text": "d\\geq 1"
},
{
"math_id": 81,
"text": "\\mathbb{P}^d"
},
{
"math_id": 82,
"text": "d\\geq 3"
},
{
"math_id": 83,
"text": "\\mathbb{P}^{d-1}"
},
{
"math_id": 84,
"text": "X\\to\\mathbb{P}^1"
},
{
"math_id": 85,
"text": "\\mathbb{P}^2"
},
{
"math_id": 86,
"text": "L=O(2p-q)"
},
{
"math_id": 87,
"text": "H^0(X,L)=0"
},
{
"math_id": 88,
"text": "D\\cdot C"
},
{
"math_id": 89,
"text": "c_1(L)"
},
{
"math_id": 90,
"text": "h^0(X,L\\otimes O(-x-y))=h^0(X,L)-2"
},
{
"math_id": 91,
"text": "K_X"
},
{
"math_id": 92,
"text": "g\\geq 2"
},
{
"math_id": 93,
"text": "\\int_Y c_1(L)^{\\text{dim}(Y)}>0"
},
{
"math_id": 94,
"text": "D^{\\text{dim}(Y)}\\cdot Y>0"
},
{
"math_id": 95,
"text": "D^2"
},
{
"math_id": 96,
"text": "D\\cdot C>0"
},
{
"math_id": 97,
"text": "N_1(X)"
},
{
"math_id": 98,
"text": "N^1(X)"
},
{
"math_id": 99,
"text": "c_1(L)^2>0"
},
{
"math_id": 100,
"text": "c_1(L)^2\\geq 0"
},
{
"math_id": 101,
"text": "c_1(L)^2=0"
},
{
"math_id": 102,
"text": "H+aE"
},
{
"math_id": 103,
"text": "f\\colon X\\to Y"
},
{
"math_id": 104,
"text": "X_y"
},
{
"math_id": 105,
"text": "f^{-1}(U)"
},
{
"math_id": 106,
"text": "Y\\sub X"
},
{
"math_id": 107,
"text": "s\\in H^0(Y,\\mathcal L^{\\otimes r})"
},
{
"math_id": 108,
"text": "\\chi(Y,\\mathcal L^{\\otimes r})\\to\\infty"
},
{
"math_id": 109,
"text": " r\\to \\infty"
},
{
"math_id": 110,
"text": "\\mathcal{O}(1)"
},
{
"math_id": 111,
"text": "\\mathbb{P}(F)"
},
{
"math_id": 112,
"text": "H^i"
},
{
"math_id": 113,
"text": "c_r(F)"
},
{
"math_id": 114,
"text": "1\\leq r\\leq \\text{rank}(F)"
},
{
"math_id": 115,
"text": "j_0"
},
{
"math_id": 116,
"text": "h^0(X,L^{\\otimes j})\\geq aj^n"
},
{
"math_id": 117,
"text": "j\\geq j_0"
},
{
"math_id": 118,
"text": "h^0(X,L^{\\otimes j})\\leq bj^n"
},
{
"math_id": 119,
"text": "\\mathbb P(H^0(X,L^{\\otimes r}))"
},
{
"math_id": 120,
"text": "H^0(X,B)\\neq 0"
},
{
"math_id": 121,
"text": "\\pi\\colon X\\to\\mathbb{P}^2"
},
{
"math_id": 122,
"text": "(H+E)\\cdot E=E^2=-1<0."
},
{
"math_id": 123,
"text": "f : X \\to S"
},
{
"math_id": 124,
"text": "U \\subset S"
},
{
"math_id": 125,
"text": "X \\hookrightarrow \\operatorname{Proj}_S(\\mathcal{R}), \\, \\mathcal{R} := f_*\\left( \\bigoplus_0^{\\infty} L^{\\otimes n} \\right)"
},
{
"math_id": 126,
"text": "f^* \\mathcal{R} \\to \\bigoplus_0^{\\infty} L^{\\otimes n}"
},
{
"math_id": 127,
"text": "\\mathcal{O}(1)= L"
}
] | https://en.wikipedia.org/wiki?curid=1186896 |
11869942 | Clutter (radar) | Unwanted echoes
Clutter is the unwanted return (echoes) in electronic systems, particularly in reference to radars. Such echoes are typically returned from ground, sea, rain, animals/insects, chaff and atmospheric turbulence, and can cause serious performance issues with radar systems. What one person considers to be unwanted clutter, another may consider to be a wanted target. However, targets usually refer to point scatterers and clutter to extended scatterers (covering many range, angle, and Doppler cells). The clutter may fill a volume (such as rain) or be confined to a surface (like land). A knowledge of the volume or surface area illuminated is required to estimate the echo per unit volume, η, or echo per unit surface area, σ° (the radar backscatter coefficient).
Causes.
Clutter may be caused by man-made objects such as buildings and — intentionally — by radar countermeasures such as chaff. Other causes include natural objects such as terrain features, sea, precipitation, hail spike, dust storms, birds, turbulence in the atmospheric circulation, and meteor trails. Radar clutter can also be caused by other atmospheric phenomena, such as disturbances in the ionosphere caused by geomagnetic storms or other space weather events. This phenomenon is especially apparent near the geomagnetic poles, where the action of the solar wind on the earth’s magnetosphere produces convection patterns in the ionospheric plasma. Radar clutter can degrade the ability of over-the-horizon radar to detect targets. Clutter may also originate from multipath echoes from valid targets caused by ground reflection, atmospheric ducting or ionospheric reflection/refraction (e.g., anomalous propagation). This clutter type is especially bothersome since it appears to move and behave like common targets of interest, such as aircraft or weather balloons.
Clutter-limited or noise-limited radar.
Electromagnetic signals processed by a radar receiver consist of three main components: useful signal (e.g., echoes from aircraft), clutter, and noise. The total signal competing with the target return is thus clutter plus noise. In practice there is often either no clutter or clutter dominates and the noise can be ignored. In the first case, the radar is said to be noise-limited, while in the second it is clutter-limited.
Volume clutter.
Rain, hail, snow and chaff are examples of volume clutter. For example, suppose an airborne target, at range formula_0, is within a rainstorm. What is the effect on the detectability of the target?
First find the magnitude of the clutter return. Assume that the clutter fills the cell containing the target, that scatterers are statistically independent and that the scatterers are uniformly distributed through the volume. The clutter volume illuminated by a pulse can be calculated from the beam widths and the pulse duration, Figure 1. If "c" is the speed of light and formula_1 is the time duration of the transmitted pulse then the pulse returning from a target is equivalent to a physical extent of "c"formula_1, as is the return from any individual element of the clutter. The azimuth and elevation half-beamwidths are formula_2 and formula_3 respectively, and the cell illuminated at a range formula_0 is assumed to have an elliptical cross section.
The volume of the illuminated cell is thus:
formula_4
For small angles this simplifies to:
formula_5
The clutter is assumed to be a large number of independent scatterers that fill the cell containing the target uniformly. The clutter return from the volume is calculated as for the normal radar equation but the radar cross section is replaced by the product of the volume backscatter coefficient, formula_6, and the clutter cell volume as derived above. The clutter return is then
formula_7
where formula_8 is the transmitted power, formula_9 is the gain of the transmitting antenna, and formula_10 is the effective aperture (area) of the receiving antenna.
A correction must be made to allow for the fact that the illumination of the clutter is not uniform across the beamwidth. In practice the beam shape will approximate to a sinc function which itself approximates to a Gaussian function. The correction factor is found by integrating across the beam width the Gaussian approximation of the antenna. The corrected back scattered power is
formula_11
A number of simplifying substitutions can be made.
The receiving antenna aperture is related to its gain by:
formula_12
and the antenna gain is related to the two beamwidths by:
formula_13
The same antenna is generally used both for transmission and reception thus the received clutter power is:
formula_14
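As a numerical illustration, the simplified expression above can be evaluated directly. In the Python sketch below all parameter values are assumptions chosen only for illustration, and the logarithm of 2 is taken as the natural logarithm, consistent with the 2 ln 2 beam-shape correction.
import math

def volume_clutter_power(P_t, G, wavelength, tau, eta, R):
    # C = P_t * G * lambda^2 * c * tau * eta / (1024 * ln(2) * R^2), as above.
    c = 3.0e8  # speed of light, m/s
    return P_t * G * wavelength**2 * c * tau * eta / (1024 * math.log(2) * R**2)

# Illustrative (assumed) parameter values, not taken from the article:
C = volume_clutter_power(P_t=100e3,        # transmitted power, W
                         G=1000.0,         # antenna gain (linear, about 30 dB)
                         wavelength=0.03,  # m (X-band)
                         tau=1e-6,         # pulse duration, s
                         eta=1e-8,         # volume backscatter coefficient, m^2/m^3
                         R=10e3)           # range, m
print("clutter return C = %.3e W (%.1f dBm)" % (C, 10 * math.log10(C / 1e-3)))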
If the Clutter Return Power is greater than the System Noise Power then the Radar is clutter limited and the Signal to Clutter Ratio must be equal to or greater than the Minimum Signal to Noise Ratio for the target to be detectable.
From the radar equation the return from the target itself will be
formula_15
with a resulting expression for the signal to clutter ratio of
formula_16
The implication is that when the radar is noise limited the variation of signal to noise ratio is an inverse fourth power. Halving the distance will cause the signal to noise ratio to increase (improve) by a factor of 16. When the radar is volume clutter limited, however, the variation is an inverse square law and halving the distance will cause the signal to clutter to improve by only 4 times.
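The two scaling laws can be illustrated with a trivial calculation (the range values below are arbitrary; only their ratio matters).
# Range-scaling comparison (constants omitted; only ratios between ranges matter).
R1, R2 = 10e3, 5e3          # halving the range, illustrative values in metres

snr_improvement = (R1 / R2) ** 4   # noise-limited: S/N goes as 1/R^4
scr_improvement = (R1 / R2) ** 2   # volume-clutter-limited: S/C goes as 1/R^2

print("S/N improves by a factor of", snr_improvement)   # 16.0
print("S/C improves by a factor of", scr_improvement)   # 4.0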
Since
formula_13
it follows that
formula_17
Clearly narrow beamwidths and short pulses are required to reduce the effect of clutter by reducing the volume of the clutter cell. If pulse compression is used then the appropriate pulse duration to be used in the calculation is that of the compressed pulse, not the transmitted pulse.
Problems in calculating signal to volume clutter ratio.
A problem with volume clutter, e.g. rain, is that the volume illuminated may not be completely filled, in which case the fraction filled must be known, and the scatterers may not be uniformly distributed. Consider a beam 10° in elevation. At a range of 10 km the beam could cover from ground level to a height of 1750 metres. There could be rain at ground level but the top of the beam could be above cloud level. In the part of the beam containing rain the rainfall rate will not be constant. One would need to know how the rain was distributed to make any accurate assessment of the clutter and the signal to clutter ratio. All that can be expected from the equation is an estimate to the nearest 5 or 10 dB.
Surface clutter.
The surface clutter return depends upon the nature of the surface, its roughness, the grazing angle (angle the beam makes with the surface), the frequency and the polarisation. The reflected signal is the phasor sum of a large number of individual returns from a variety of sources, some of them capable of movement (leaves, rain drops, ripples) and some of them stationary (pylons, buildings, tree trunks). Individual samples of clutter vary from one resolution cell to another (spatial variation) and vary with time for a given cell (temporal variation).
Beam filling.
For a target close to the Earth's surface such that the earth and target are in the same range resolution cell, one of two conditions is possible. The most common case is when the beam intersects the surface at such an angle that the area illuminated at any one time is only a fraction of the surface intersected by the beam as illustrated in Figure 2.
Pulse length limited case.
For the pulse length limited case the area illuminated depends upon the azimuth width of the beam and the length of the pulse, measured along the surface. The illuminated patch has a width in azimuth of
formula_18.
The length measured along the surface is
formula_19.
The area illuminated by the radar is then given by
formula_20
For 'small' beamwidths this approximates to
formula_21
The clutter return is then
formula_22 Watts
Substituting for the illuminated area formula_23
formula_24 Watts
where formula_25 is the back scatter coefficient of the clutter.
Converting formula_26 to degrees and putting in the numerical values gives
formula_27 Watts
The expression for the target return remains unchanged thus the signal to clutter ratio is
formula_28
This simplifies to
formula_29
In the case of surface clutter the signal to clutter now varies inversely with R. Halving the distance only causes a doubling of the ratio (a factor of two improvement).
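The rounded constant in the simplified expression and a typical evaluation can be checked with a short Python sketch; all parameter values below are arbitrary assumptions, and the azimuth beamwidth is taken in degrees, consistent with the form of the constant derived above.
import math

# Check the simplified constant: 1 / (1300 * (4*pi)^3) is approximately 3.9e-7,
# which the simplified expression rounds to 4e-7.
print("constant = %.2e" % (1.0 / (1300 * (4 * math.pi) ** 3)))

def surface_scr(sigma, sigma0, R, tau, theta_deg, psi_rad):
    # S/C = 4e-7 * cos(psi) / (R * tau * theta_deg) * (sigma / sigma0)
    return 4e-7 * math.cos(psi_rad) / (R * tau * theta_deg) * (sigma / sigma0)

# Illustrative (assumed) values:
sc = surface_scr(sigma=1.0,        # target radar cross section, m^2
                 sigma0=0.01,      # surface backscatter coefficient
                 R=10e3,           # range, m
                 tau=1e-6,         # pulse duration, s
                 theta_deg=1.5,    # azimuth beamwidth, degrees
                 psi_rad=math.radians(1.0))  # grazing angle
print("S/C = %.2e (%.1f dB)" % (sc, 10 * math.log10(sc)))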
Problems in calculating clutter for the pulse length limited case.
There are a number of problems in calculating the signal to clutter ratio. The clutter in the main beam is extended over a range of grazing angles and the backscatter coefficient depends upon grazing angle. Clutter will appear in the antenna sidelobes, which again will involve a range of grazing angles and may even involve clutter of a different nature.
Beam width limited case.
The calculation is similar to the previous examples, in this case the illuminated area is
formula_30
which for small beamwidths simplifies to
formula_31
The clutter return is as before
formula_22 Watts
Substituting for the illuminated area formula_23
formula_32 Watts
This can be simplified to:
formula_33 Watts
Converting formula_26 to degrees
formula_34 Watts
The target return remains unchanged thus
formula_35
Which simplifies to
formula_36
As in the case of Volume Clutter the Signal to clutter ratio follows an inverse square law.
General problems in calculating surface clutter.
The most significant general problem is that the backscatter coefficient cannot usually be calculated and must be measured, and the validity of measurements taken at one location under one set of conditions is questionable when they are applied to a different location under different conditions. Various empirical formulae and graphs exist which enable an estimate to be made, but the results need to be used with caution.
Clutter folding.
Clutter folding is a term used in describing "clutter" seen by radar systems. It becomes a problem when the range extent of the clutter seen by the radar exceeds the unambiguous range set by the radar's pulse repetition interval: the radar then no longer provides adequate clutter suppression, and the clutter "folds" back in range. The solution to this problem is usually to add fill pulses to each coherent dwell of the radar, increasing the range over which clutter suppression is applied by the system.
The tradeoff for doing this is that adding fill pulses will degrade the performance, due to wasted transmitter power and a longer dwell time.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R"
},
{
"math_id": 1,
"text": "\\tau"
},
{
"math_id": 2,
"text": "\\theta/2"
},
{
"math_id": 3,
"text": "\\phi/2"
},
{
"math_id": 4,
"text": "\\ V_m=\\pi R \\tan(\\theta/2)R \\tan(\\phi/2)(c\\tau/2)"
},
{
"math_id": 5,
"text": "\\ V_m\\approx\\frac{\\pi}{4}(R\\theta)(R\\phi)(c\\tau/2)"
},
{
"math_id": 6,
"text": "\\eta"
},
{
"math_id": 7,
"text": "\\ C=\\frac{P_tG_tA_r}{(4\\pi)^2R^4}\\frac{\\pi}{4}(R\\theta)(R\\phi)(c\\tau/2)\\eta"
},
{
"math_id": 8,
"text": "P_t"
},
{
"math_id": 9,
"text": "G_t"
},
{
"math_id": 10,
"text": "A_r"
},
{
"math_id": 11,
"text": "\\ C=\\frac{P_tG_tA_r}{2\\log2(4\\pi)^2R^4}\\frac{\\pi}{4}(R\\theta)(R\\phi)(c\\tau/2)\\eta"
},
{
"math_id": 12,
"text": "\\ A_r=\\frac{G\\lambda^2}{4\\pi}"
},
{
"math_id": 13,
"text": "\\ G=\\frac{\\pi^2}{\\theta\\phi}"
},
{
"math_id": 14,
"text": "\\ C=\\frac{P_tG\\lambda^2}{1024(\\log2)R^2}c\\tau\\eta"
},
{
"math_id": 15,
"text": "\\ S=\\frac{P_tG^2\\lambda^2}{(4\\pi)^3R^4}\\sigma"
},
{
"math_id": 16,
"text": "\\ \\frac{S}{C} = \\frac{1024(\\log2)G\\sigma}{(4\\pi)^3R^2c\\tau\\eta}"
},
{
"math_id": 17,
"text": "\\ \\frac{S}{C} = \\frac{16(\\log2)\\sigma}{\\pi R^2\\theta\\phi c\\tau\\eta}"
},
{
"math_id": 18,
"text": "\\ 2R\\tan\\theta /2"
},
{
"math_id": 19,
"text": "\\ (c\\tau/2)\\sec\\psi"
},
{
"math_id": 20,
"text": "\\ A = 2R(c\\tau/2)(\\tan\\theta/2)\\sec\\psi"
},
{
"math_id": 21,
"text": "\\ A = R(c\\tau/2)\\theta\\sec\\psi"
},
{
"math_id": 22,
"text": "\\ C=\\frac{P_tG^2\\lambda^2}{(4\\pi)^3R^4}A\\sigma^o"
},
{
"math_id": 23,
"text": "A"
},
{
"math_id": 24,
"text": "\\ C=\\frac{c}{2^7\\pi^3}\\frac{P_tG^2\\lambda^2}{R^3}\\tau\\theta\\sec\\psi\\sigma^o"
},
{
"math_id": 25,
"text": "\\sigma^o"
},
{
"math_id": 26,
"text": "\\theta"
},
{
"math_id": 27,
"text": "\\ C=1300\\frac{P_tG^2\\lambda^2}{R^3}\\tau\\theta^o\\sec\\psi\\sigma^o"
},
{
"math_id": 28,
"text": "\\ \\frac{S}{C}=\\frac{1}{1300}\\frac{R^3}{P_tG^2\\lambda^2}\\frac{1}{\\tau\\theta\\sec\\psi\\sigma^o}\\frac{P_tG^2\\lambda^2}{(4\\pi)^3R^4}\\sigma"
},
{
"math_id": 29,
"text": "\\ \\frac{S}{C}=4\\times10^{-7}\\frac{\\cos\\psi}{R\\tau\\theta}\\frac{\\sigma}{\\sigma^o}"
},
{
"math_id": 30,
"text": "\\ A=\\pi R^2\\tan^2\\theta/2"
},
{
"math_id": 31,
"text": "\\ A \\approx\\pi R^2\\theta^2/4"
},
{
"math_id": 32,
"text": "\\ C=\\frac{P_tG^2\\lambda^2}{(4\\pi)^3R^4}\\pi R^2(\\theta/2)^2\\sigma^o"
},
{
"math_id": 33,
"text": "\\ C=\\frac{P_tG^2\\lambda^2}{4^4\\pi^2R^2}\\theta^2\\sigma^o"
},
{
"math_id": 34,
"text": "\\ C=\\frac{P_tG^2\\lambda^2}{4^4R^2}(\\theta^o/180)^2\\sigma^o"
},
{
"math_id": 35,
"text": "\\ \\frac{S}{C}=\\frac{4^4R^2}{P_tG^2\\lambda^2}(180/\\theta^o)^2\\frac{1}{\\sigma^o}\\frac{P_tG^2\\lambda^2}{(4\\pi)^3R^4}\\sigma"
},
{
"math_id": 36,
"text": "\\ \\frac{S}{C}=5.25\\times 10^4\\frac{1}{\\theta^{o2}R^2}\\frac{\\sigma}{\\sigma^o}"
}
] | https://en.wikipedia.org/wiki?curid=11869942 |
11871026 | Logrank test | Hypothesis test to compare the survival distributions of two samples
The logrank test, or log-rank test, is a hypothesis test to compare the survival distributions of two samples. It is a nonparametric test and appropriate to use when the data are right skewed and censored (technically, the censoring must be non-informative). It is widely used in clinical trials to establish the efficacy of a new treatment in comparison with a control treatment when the measurement is the time to event (such as the time from initial treatment to a heart attack). The test is sometimes called the Mantel–Cox test. The logrank test can also be viewed as a time-stratified Cochran–Mantel–Haenszel test.
The test was first proposed by Nathan Mantel and was named the "logrank test" by Richard and Julian Peto.
Definition.
The logrank test statistic compares estimates of the hazard functions of the two groups at each observed event time. It is constructed by computing the observed and expected number of events in one of the groups at each observed event time and then adding these to obtain an overall summary across all time points where there is an event.
Consider two groups of patients, e.g., treatment vs. control. Let formula_0 be the distinct times of observed events in either group. Let formula_1 and formula_2 be the number of subjects "at risk" (who have not yet had an event or been censored) at the start of period formula_3 in the groups, respectively. Let formula_4 and formula_5 be the observed number of events in the groups at time formula_3. Finally, define formula_6 and formula_7.
The null hypothesis is that the two groups have identical hazard functions, formula_8. Hence, under formula_9, for each group formula_10, formula_11 follows a hypergeometric distribution with parameters formula_12, formula_13, formula_14. This distribution has expected value formula_15 and variance formula_16.
For all formula_17, the logrank statistic compares formula_11 to its expectation formula_18 under formula_9. It is defined as
formula_19 (for formula_20 or formula_21)
By the central limit theorem, the distribution of each formula_22 converges to that of a standard normal distribution as formula_23 approaches infinity and therefore can be approximated by the standard normal distribution for a sufficiently large formula_23. An improved approximation can be obtained by equating this quantity to Pearson type I or II (beta) distributions with matching first four moments, as described in Appendix B of the Peto and Peto paper.
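A minimal Python sketch of this calculation is given below; it follows the observed, expected and variance terms defined above, and the at-risk and event counts are small made-up numbers used only for illustration.
import math

def logrank_z(n1, n2, o1, o2):
    # Logrank statistic for group 1, summing O - E and V over the distinct event times.
    num, var = 0.0, 0.0
    for N1, N2, O1, O2 in zip(n1, n2, o1, o2):
        N, O = N1 + N2, O1 + O2
        E1 = O * N1 / N
        V1 = E1 * ((N - O) / N) * ((N - N1) / (N - 1)) if N > 1 else 0.0
        num += O1 - E1
        var += V1
    return num / math.sqrt(var)

# Toy data (assumed): at-risk counts and observed events at 5 distinct event times.
n1 = [10, 9, 8, 8, 7]   # at risk, group 1
n2 = [10, 10, 9, 7, 6]  # at risk, group 2
o1 = [1, 1, 0, 1, 0]    # events, group 1
o2 = [0, 1, 2, 1, 1]    # events, group 2

print("Z = %.3f" % logrank_z(n1, n2, o1, o2))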
Asymptotic distribution.
If the two groups have the same survival function, the logrank statistic is approximately standard normal. A one-sided level formula_24 test will reject the null hypothesis if formula_25 where formula_26 is the upper formula_24 quantile of the standard normal distribution. If the hazard ratio is formula_27, there are formula_28 total subjects, formula_29 is the probability a subject in either group will eventually have an event (so that formula_30 is the expected number of events at the time of the analysis), and the proportion of subjects randomized to each group is 50%, then the logrank statistic is approximately normal with mean formula_31 and variance 1. For a one-sided level formula_24 test with power formula_32, the sample size required is
formula_33
where formula_26 and formula_34 are the quantiles of the standard normal distribution.
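The sample-size formula can be evaluated directly, as in the Python sketch below; the design values (hazard ratio, event probability, level and power) are arbitrary assumptions used only for illustration.
import math
from statistics import NormalDist

def logrank_sample_size(hazard_ratio, d, alpha, power):
    # n = 4 (z_alpha + z_beta)^2 / (d * (log hazard_ratio)^2), under the stated
    # assumptions (one-sided level-alpha test, 1:1 allocation, d = P(event)).
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(power)
    return 4 * (z_a + z_b) ** 2 / (d * math.log(hazard_ratio) ** 2)

# Illustrative (assumed) design values:
n = logrank_sample_size(hazard_ratio=0.7, d=0.6, alpha=0.025, power=0.9)
print("required total sample size:", math.ceil(n))   # about 551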
Joint distribution.
Suppose formula_35 and formula_36 are the logrank statistics at two different time points in the same study (formula_35 earlier). Again, assume the hazard functions in the two groups are proportional with hazard ratio formula_27 and formula_37 and formula_38 are the probabilities that a subject will have an event at the two time points where formula_39. formula_35 and formula_36 are approximately bivariate normal with means formula_40 and formula_41 and correlation formula_42. Calculations involving the joint distribution are needed to correctly maintain the error rate when the data are examined multiple times within a study by a Data Monitoring Committee.
Test assumptions.
The logrank test is based on the same assumptions as the Kaplan-Meier survival curve—namely, that censoring is unrelated to prognosis, the survival probabilities are the same for subjects recruited early and late in the study, and the events happened at the times specified. Deviations from these assumptions matter most if they are satisfied differently in the groups being compared, for example if censoring is more likely in one group than another.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "1, \\ldots, J"
},
{
"math_id": 1,
"text": "N_{1,j}"
},
{
"math_id": 2,
"text": "N_{2,j}"
},
{
"math_id": 3,
"text": "j"
},
{
"math_id": 4,
"text": "O_{1,j}"
},
{
"math_id": 5,
"text": "O_{2,j}"
},
{
"math_id": 6,
"text": "N_j = N_{1,j} + N_{2,j}"
},
{
"math_id": 7,
"text": "O_j = O_{1,j} + O_{2,j}"
},
{
"math_id": 8,
"text": " H_0 : h_1(t) = h_2(t)"
},
{
"math_id": 9,
"text": "H_0"
},
{
"math_id": 10,
"text": "i = 1, 2"
},
{
"math_id": 11,
"text": "O_{i,j}"
},
{
"math_id": 12,
"text": "N_j"
},
{
"math_id": 13,
"text": "N_{i,j}"
},
{
"math_id": 14,
"text": "O_j"
},
{
"math_id": 15,
"text": "E_{i,j} = O_j \\frac{N_{i,j}}{N_j}"
},
{
"math_id": 16,
"text": "V_{i,j} = E_{i,j} \\left( \\frac{N_j - O_j}{N_j} \\right) \\left( \\frac{N_j - N_{i,j}}{N_j - 1} \\right)"
},
{
"math_id": 17,
"text": "j = 1, \\ldots, J"
},
{
"math_id": 18,
"text": "E_{i,j}"
},
{
"math_id": 19,
"text": "Z_i = \\frac {\\sum_{j=1}^J (O_{i,j} - E_{i,j})} {\\sqrt {\\sum_{j=1}^J V_{i,j}}}\\ \\xrightarrow{d}\\ \\mathcal N(0,1)"
},
{
"math_id": 20,
"text": "i=1"
},
{
"math_id": 21,
"text": "2"
},
{
"math_id": 22,
"text": "Z_i"
},
{
"math_id": 23,
"text": "J"
},
{
"math_id": 24,
"text": "\\alpha"
},
{
"math_id": 25,
"text": "Z>z_\\alpha"
},
{
"math_id": 26,
"text": "z_\\alpha"
},
{
"math_id": 27,
"text": "\\lambda"
},
{
"math_id": 28,
"text": "n"
},
{
"math_id": 29,
"text": "d"
},
{
"math_id": 30,
"text": "nd"
},
{
"math_id": 31,
"text": " (\\log{\\lambda}) \\, \\sqrt {\\frac {n \\, d} {4}} "
},
{
"math_id": 32,
"text": "1-\\beta"
},
{
"math_id": 33,
"text": " n = \\frac {4 \\, (z_\\alpha + z_\\beta)^2 } {d\\log^2{\\lambda}}"
},
{
"math_id": 34,
"text": "z_\\beta"
},
{
"math_id": 35,
"text": " Z_1 "
},
{
"math_id": 36,
"text": " Z_2 "
},
{
"math_id": 37,
"text": " d_1 "
},
{
"math_id": 38,
"text": " d_2 "
},
{
"math_id": 39,
"text": " d_1 \\leq d_2 "
},
{
"math_id": 40,
"text": " \\log{\\lambda} \\, \\sqrt {\\frac {n \\, d_1} {4}} "
},
{
"math_id": 41,
"text": " \\log{\\lambda} \\, \\sqrt {\\frac {n \\, d_2} {4}} "
},
{
"math_id": 42,
"text": "\\sqrt {\\frac {d_1} {d_2}} "
},
{
"math_id": 43,
"text": " Z "
},
{
"math_id": 44,
"text": " D "
},
{
"math_id": 45,
"text": "\\hat {\\lambda} "
},
{
"math_id": 46,
"text": " \\log{\\hat {\\lambda}} \\approx Z \\, \\sqrt{4/D} "
}
] | https://en.wikipedia.org/wiki?curid=11871026 |
1187311 | Belief revision | Process of changing beliefs to take into account a new piece of information
Belief revision (also called belief change) is the process of changing beliefs to take into account a new piece of information. The logical formalization of belief revision is researched in philosophy, in databases, and in artificial intelligence for the design of rational agents.
What makes belief revision non-trivial is that several different ways for performing this operation may be possible. For example, if the current knowledge includes the three facts "formula_0 is true", "formula_1 is true" and "if formula_0 and formula_1 are true then formula_2 is true", the introduction of the new information "formula_2 is false" can be done preserving consistency only by removing at least one of the three facts. In this case, there are at least three different ways for performing revision. In general, there may be several different ways for changing knowledge.
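A brute-force Python sketch of this example is given below; it enumerates the maximal subsets of the three facts that remain consistent with the new information, confirming that there are exactly three ways to restore consistency. The encoding of the facts as truth functions is only for illustration.
from itertools import product, combinations

# The three facts of the example, as truth functions over assignments to (A, B, C),
# plus the new information "C is false".
beliefs = {
    "A":            lambda a, b, c: a,
    "B":            lambda a, b, c: b,
    "A and B -> C": lambda a, b, c: (not (a and b)) or c,
}
new_info = lambda a, b, c: not c

def consistent(formulas):
    return any(all(f(*v) for f in formulas)
               for v in product([True, False], repeat=3))

maximal = []
names = list(beliefs)
for k in range(len(names), -1, -1):
    for subset in combinations(names, k):
        if consistent([beliefs[n] for n in subset] + [new_info]) \
                and not any(set(subset) < m for m in maximal):
            maximal.append(set(subset))

print(maximal)   # three maximal options, each dropping exactly one original belief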
Revision and update.
Two kinds of changes are usually distinguished: update, in which the new information describes a change of the situation the beliefs refer to, and the beliefs are changed to reflect the new situation; and revision, in which the new information concerns the same, static situation, and is used to correct or complete beliefs that may have been wrong or incomplete.
The main assumption of belief revision is that of minimal change: the knowledge before and after the change should be as similar as possible. In the case of update, this principle formalizes the assumption of inertia. In the case of revision, this principle enforces as much information as possible to be preserved by the change.
Example.
The following classical example shows that the operations to perform in the two settings of update and revision are not the same. The example is based on two different interpretations of the set of beliefs formula_3 and the new piece of information formula_4: the new information can be read either as recording a change in the world (update) or as correcting the previous beliefs about a static world (revision).
This example shows that changing the belief formula_8 with the new information formula_4 produces two different results depending on the setting: formula_9 in the case of update and formula_7 in the case of revision.
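The difference can be reproduced with a small brute-force Python sketch that applies a Dalal-style (minimum Hamming distance) revision and a Winslett/Forbus-style pointwise update, two of the model-based operators discussed later in this article; the choice of these particular operators here is only illustrative.
from itertools import product

def models(f, n=2):
    return {v for v in product([False, True], repeat=n) if f(*v)}

def dist(m1, m2):
    return sum(x != y for x, y in zip(m1, m2))   # Hamming distance between models

def revise(K, mu):
    # Dalal-style revision: models of mu at minimum distance from the set of K-models.
    MK, MM = models(K), models(mu)
    d = {m: min(dist(m, k) for k in MK) for m in MM}
    return {m for m in MM if d[m] == min(d.values())}

def update(K, mu):
    # Winslett/Forbus-style update: for each model of K separately, keep the
    # closest models of mu, and take the union over the models of K.
    MK, MM = models(K), models(mu)
    out = set()
    for k in MK:
        dmin = min(dist(m, k) for m in MM)
        out |= {m for m in MM if dist(m, k) == dmin}
    return out

K  = lambda a, b: a or b     # the old belief:       a OR b
mu = lambda a, b: not a      # the new information:  NOT a

print("revision:", revise(K, mu))  # {(False, True)}                 i.e. (NOT a) AND b
print("update:  ", update(K, mu))  # {(False, False), (False, True)} i.e. NOT a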
Contraction, expansion, revision, consolidation, and merging.
In the setting in which all beliefs refer to the same situation, a distinction between various operations that can be performed is made: expansion (the addition of a belief without any consistency check), revision (the addition of a belief while maintaining the consistency of the resulting set), contraction (the removal of a belief), consolidation (the restoration of consistency of a set of beliefs), and merging (the fusion of two or more sets of beliefs while maintaining consistency).
Revision and merging differ in that the first operation is done when the new belief to incorporate is considered more reliable than the old ones; therefore, consistency is maintained by removing some of the old beliefs. Merging is a more general operation, in that the priority among the belief sets may or may not be the same.
Revision can be performed by first incorporating the new fact and then restoring consistency via consolidation. This is actually a form of merging rather than revision, as the new information is not always treated as more reliable than the old knowledge.
The AGM postulates.
The AGM postulates (named after their proponents Alchourrón, Gärdenfors, and Makinson) are properties that an operator that performs revision should satisfy in order for that operator to be considered rational. The considered setting is that of revision, that is, different pieces of information referring to the same situation. Three operations are considered: expansion (addition of a belief without a consistency check), revision (addition of a belief while maintaining consistency), and contraction (removal of a belief).
The first six postulates are called "the basic AGM postulates". In the settings considered by Alchourrón, Gärdenfors, and Makinson, the current set of beliefs is represented by a deductively closed set of logical formulae formula_10 called a belief set, the new piece of information is a logical formula formula_11, and revision is performed by a binary operator formula_12 that takes as its operands the current beliefs and the new information and produces as a result a belief set representing the result of the revision. The formula_13 operator denotes expansion: formula_14 is the deductive closure of formula_15. The AGM postulates for revision are:
1. Closure: formula_16 is a belief set (that is, a deductively closed set of formulae);
2. Success: formula_17
3. Inclusion: formula_18
4. Vacuity: formula_19
5. Consistency: formula_16 is consistent if formula_11 is consistent;
6. Extensionality: formula_20
7. Superexpansion: formula_21
8. Subexpansion: formula_22
A revision operator that satisfies all eight postulates is the full meet revision, in which formula_16 is equal to formula_14 if consistent, and to the deductive closure of formula_11 otherwise. While satisfying all AGM postulates, this revision operator has been considered to be too conservative, in that no information from the old knowledge base is maintained if the revising formula is inconsistent with it.
Conditions equivalent to the AGM postulates.
The AGM postulates are equivalent to several different conditions on the revision operator; in particular, they are equivalent to the revision operator being definable in terms of structures known as selection functions, epistemic entrenchments, systems of spheres, and preference relations. The latter are reflexive, transitive, and total relations over the set of models.
Each revision operator formula_12 satisfying the AGM postulates is associated to a set of preference relations formula_23, one for each possible belief set formula_10, such that the models of formula_10 are exactly the models that are minimal according to formula_23. The revision operator and its associated family of orderings are related by the fact that formula_16 is the set of formulae whose set of models contains all the minimal models of formula_11 according to formula_23. This condition is equivalent to the set of models of formula_16 being exactly the set of the minimal models of formula_11 according to the ordering formula_23.
A preference ordering formula_23 represents an order of implausibility among all situations, including those that are conceivable yet currently considered false. The minimal models according to such an ordering are exactly the models of the knowledge base, which are the models that are currently considered the most likely. All other models are greater than these and are indeed considered less plausible. In general, formula_24 indicates that the situation represented by the model formula_25 is believed to be more plausible than the situation represented by formula_26. As a result, revising by a formula having formula_25 and formula_26 as models should select only formula_25 to be a model of the revised knowledge base, as this model represents the most likely scenario among those supported by formula_11.
Contraction.
Contraction is the operation of removing a belief formula_11 from a knowledge base formula_10; the result of this operation is denoted by formula_27. The operators of revision and contractions are related by the Levi and Harper identities:
formula_28
formula_29
Eight postulates have been defined for contraction. Whenever a revision operator satisfies the eight postulates for revision, its corresponding contraction operator satisfies the eight postulates for contraction and vice versa. If a contraction operator satisfies at least the first six postulates for contraction, translating it into a revision operator and then back into a contraction operator using the two identities above leads to the original contraction operator. The same holds starting from a revision operator.
One of the postulates for contraction has long been discussed: the recovery postulate:
formula_30
According to this postulate, the removal of a belief formula_11 followed by the reintroduction of the same belief in the belief set should lead to the original belief set. There are some examples showing that such behavior is not always reasonable: in particular, the contraction by a general condition such as formula_8 leads to the removal of more specific conditions such as formula_5 from the belief set; it is then unclear why the reintroduction of formula_8 should also lead to the reintroduction of the more specific condition formula_5. For example, if George was previously believed to have German citizenship, he was also believed to be European. Contracting this latter belief amounts to ceasing to believe that George is European; therefore, that George has German citizenship is also retracted from the belief set. If George is later discovered to have Austrian citizenship, then the fact that he is European is also reintroduced. According to the recovery postulate, however, the belief that he also has German citizenship should also be reintroduced.
The correspondence between revision and contraction induced by the Levi and Harper identities is such that a contraction not satisfying the recovery postulate is translated into a revision satisfying all eight postulates, and that a revision satisfying all eight postulates is translated into a contraction satisfying all eight postulates, including recovery. As a result, if recovery is excluded from consideration, a number of contraction operators are translated into a single revision operator, which can be then translated back into exactly one contraction operator. This operator is the only one of the initial group of contraction operators that satisfies recovery; among this group, it is the operator that preserves as much information as possible.
The Ramsey test.
The evaluation of a counterfactual conditional formula_31 can be done, according to the Ramsey test (named for Frank P. Ramsey), to the hypothetical addition of formula_5 to the set of current beliefs followed by a check for the truth of formula_6. If formula_10 is the set of beliefs currently held, the Ramsey test is formalized by the following correspondence:
formula_32 if and only if formula_33
If the considered language of the formulae representing beliefs is propositional, the Ramsey test gives a consistent definition for counterfactual conditionals in terms of a belief revision operator. However, if the language of formulae representing beliefs itself includes the counterfactual conditional connective formula_34, the Ramsey test leads to the Gärdenfors triviality result: there is no non-trivial revision operator that satisfies both the AGM postulates for revision and the condition of the Ramsey test. This result holds in the assumption that counterfactual formulae like formula_35 can be present in belief sets and revising formulae. Several solutions to this problem have been proposed.
Non-monotonic inference relation.
Given a fixed knowledge base formula_10 and a revision operator formula_12, one can define a non-monotonic inference relation using the following definition: formula_36 if and only if formula_37. In other words, a formula formula_11 entails another formula formula_38 if the addition of the first formula to the current knowledge base leads to the derivation of formula_38. This inference relation is non-monotonic.
The AGM postulates can be translated into a set of postulates for this inference relation. Each of these postulates is entailed by some previously considered set of postulates for non-monotonic inference relations. Vice versa, conditions that have been considered for non-monotonic inference relations can be translated into postulates for a revision operator. All these postulates are entailed by the AGM postulates.
Foundational revision.
In the AGM framework, a belief set is represented by a deductively closed set of propositional formulae. While such sets are infinite, they can always be finitely representable. However, working with deductively closed sets of formulae leads to the implicit assumption that equivalent belief sets should be considered equal when revising. This is called the "principle of irrelevance of syntax".
This principle has been and is still debated: although formula_39 and formula_40 are two equivalent sets, revising them by formula_4 should arguably produce different results. In the first case, formula_5 and formula_6 are two separate beliefs; therefore, revising by formula_4 should not produce any effect on formula_6, and the result of revision is formula_41. In the second case, formula_42 is taken as a single belief. The fact that formula_5 is false contradicts this belief, which should therefore be removed from the belief set. The result of revision is therefore formula_43 in this case.
The problem of using deductively closed knowledge bases is that no distinction is made between pieces of knowledge that are known by themselves and pieces of knowledge that are merely consequences of them. This distinction is instead done by the "foundational" approach to belief revision, which is related to foundationalism in philosophy. According to this approach, retracting a non-derived piece of knowledge should lead to retracting all its consequences that are not otherwise supported (by other non-derived pieces of knowledge). This approach can be realized by using knowledge bases that are not deductively closed and assuming that all formulae in the knowledge base represent self-standing beliefs, that is, they are not derived beliefs. In order to distinguish the foundational approach to belief revision to that based on deductively closed knowledge bases, the latter is called the "coherentist" approach. This name has been chosen because the coherentist approach aims at restoring the coherence
(consistency) among "all" beliefs, both self-standing and derived ones. This approach is related to coherentism in philosophy.
Foundationalist revision operators working on non-deductively closed belief sets typically select some subsets of formula_10 that are consistent with formula_11, combine them in some way, and then conjoin the result with formula_11. The following are two non-deductively closed base revision operators.
A different realization of the foundational approach to belief revision is based on explicitly declaring the dependences among beliefs. In the truth maintenance systems, dependence links among beliefs can be specified. In other words, one can explicitly declare that a given fact is believed because of one or more other facts; such a dependency is called a "justification". Beliefs not having any justifications play the role of non-derived beliefs in the non-deductively closed knowledge base approach.
Model-based revision and update.
A number of proposals for revision and update based on the set of models of the involved formulae were developed independently of the AGM framework. The principle behind this approach is that a knowledge base is equivalent to a set of "possible worlds", that is, to a set of scenarios that are considered possible according to that knowledge base. Revision can therefore be performed on the sets of possible worlds rather than on the corresponding knowledge bases.
The revision and update operators based on models are usually identified by the name of their authors: Winslett, Forbus, Satoh, Dalal, Hegner, and Weber. According to the first four of these proposal, the result of revising/updating a formula formula_10 by another formula formula_11 is characterized by the set of models of formula_11 that are the closest to the models of formula_10. Different notions of closeness can be defined, leading to the difference among these proposals.
The revision operator defined by Hegner makes formula_10 not affect the values of the variables that are mentioned in formula_11. What results from this operation is a formula formula_45 that is consistent with formula_11 and can therefore be conjoined with it. The revision operator by Weber is similar, but the literals that are removed from formula_10 are not all literals of formula_11, but only the literals that are evaluated differently by a pair of closest models of formula_10 and formula_11 according to the Satoh measure of closeness.
Iterated revision.
The AGM postulates are equivalent to a preference ordering (an ordering over models) to be associated to every knowledge base formula_10. However, they do not relate the orderings corresponding to two non-equivalent knowledge bases. In particular, the orderings associated to a knowledge base formula_10 and its revised version formula_16 can be completely different. This is a problem for performing a second revision, as the ordering associated with formula_16 is necessary to calculate formula_46.
However, establishing a relation between the orderings associated with formula_10 and formula_16 has been recognized not to be the right solution to this problem. Indeed, the preference relation should depend on the previous history of revisions, rather than on the resulting knowledge base only. More generally, a preference relation gives more information about the state of mind of an agent than a simple knowledge base. Indeed, two states of mind might represent the same piece of knowledge formula_10 while at the same time being different in the way a new piece of knowledge would be incorporated. For example, two people might have the same idea as to where to go on holiday, but they differ on how they would change this idea if they win a million-dollar lottery. Since the basic condition on a preference ordering is that its minimal models are exactly the models of its associated knowledge base, a knowledge base can be considered implicitly represented by a preference ordering (but not vice versa).
Given that a preference ordering allows deriving its associated knowledge base but also allows performing a single step of revision, studies on iterated revision have been concentrated on how a preference ordering should be changed in response of a revision. While single-step revision is about how a knowledge base formula_10 has to be changed into a new knowledge base formula_16, iterated revision is about how a preference ordering (representing both the current knowledge and how much situations believed to be false are considered possible) should be turned into a new preference relation when formula_11 is learned. A single step of iterated revision produces a new ordering that allows for further revisions.
Two kinds of preference ordering are usually considered: numerical and non-numerical. In the first case, the level of plausibility of a model is represented by a non-negative integer; the lower the rank, the more plausible the situation corresponding to the model. Non-numerical preference orderings correspond to the preference relations used in the AGM framework: a possibly total ordering over models. Non-numerical preference relations were initially considered unsuitable for iterated revision because of the impossibility of reverting a revision by a number of other revisions, which is instead possible in the numerical case.
Darwiche and Pearl formulated the following postulates for iterated revision.
Specific iterated revision operators have been proposed by Spohn, Boutilier, Williams, Lehmann, and others. Williams also provided a general iterated revision operator.
Merging.
The assumption implicit in the revision operator is that the new piece of information formula_11 is always to be considered more reliable than the old knowledge base formula_10. This is formalized by the second of the AGM postulates: formula_11 is always believed after revising formula_10 with formula_11. More generally, one can consider the process of merging several pieces of information (rather than just two) that might or might not have the same reliability. Revision becomes the particular instance of this process when a less reliable piece of information formula_10 is merged with a more reliable formula_11.
While the input to the revision process is a pair of formulae formula_10 and formula_11, the input to merging is a multiset of formulae formula_10, formula_54, etc. The use of multisets is necessary as two sources to the merging process might be identical.
When merging a number of knowledge bases with the same degree of plausibility, a distinction is made between arbitration and majority. This distinction depends on the assumption that is made about the information and how it has to be put together.
The above is the original definition of arbitration. According to a newer definition, an arbitration operator is a merging operator that is insensitive to the number of equivalent knowledge bases to merge. This definition makes arbitration the exact opposite of majority.
Postulates for both arbitration and merging have been proposed. An example of an arbitration operator satisfying all postulates is the classical disjunction. An example of a majority operator satisfying all postulates is that selecting all models that have a minimal total Hamming distance to models of the knowledge bases to merge.
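The majority operator just described can be sketched in a few lines of Python (brute force over interpretations; the example knowledge bases are made up): the models of the result are the interpretations minimising the total Hamming distance to the bases, where repeated bases in the multiset count once per occurrence.
from itertools import product

VARS = 3   # propositional variables (a, b, c)

def models(f):
    return [v for v in product([False, True], repeat=VARS) if f(*v)]

def hamming(m1, m2):
    return sum(x != y for x, y in zip(m1, m2))

def majority_merge(bases):
    # Total distance of an interpretation to the multiset of bases, where the
    # distance to a single base is the distance to its closest model.
    base_models = [models(b) for b in bases]
    def total(m):
        return sum(min(hamming(m, k) for k in bm) for bm in base_models)
    all_models = list(product([False, True], repeat=VARS))
    best = min(total(m) for m in all_models)
    return [m for m in all_models if total(m) == best]

K1 = lambda a, b, c: a and b and c
K2 = lambda a, b, c: (not a) and b and c

print(majority_merge([K1, K2, K2]))   # [(False, True, True)]: the repeated opinion wins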
A merging operator can be expressed as a family of orderings over models, one for each possible multiset of knowledge bases to merge: the models of the result of merging a multiset of knowledge bases are the minimal models of the ordering associated to the multiset. A merging operator defined in this way satisfies the postulates for merging if and only if the family of orderings meets a given set of conditions. For the old definition of arbitration, the orderings are not on models but on pairs (or, in general, tuples) of models.
Social choice theory.
Many revision proposals involve orderings over models representing the relative plausibility of the possible alternatives. The problem of merging amounts to combine a set of orderings into a single one expressing the combined plausibility of the alternatives. This is similar to what is done in social choice theory, which is the study of how the preferences of a group of agents can be combined in a rational way. Belief revision and social choice theory are similar in that they combine a set of orderings into one. They differ on how these orderings are interpreted: preferences in social choice theory; plausibility in belief revision. Another difference is that the alternatives are explicitly enumerated in social choice theory, while they are the propositional models over a given alphabet in belief revision.
Complexity.
From the point of view of computational complexity, the most studied problem about belief revision is that of query answering in the propositional case. This is the problem of establishing whether a formula follows from the result of a revision, that is, formula_37, where formula_10, formula_11, and formula_38 are propositional formulae. More generally, query answering is the problem of telling whether a formula is entailed by the result of a belief revision, which could be update, merging, revision, iterated revision, etc. Another problem that has received some attention is that of model checking, that is, checking whether a model satisfies the result of a belief revision. A related question is whether such a result can be represented in space polynomial in that of its arguments.
Since a deductively closed knowledge base is infinite, complexity studies on belief revision operators working on deductively closed knowledge bases are done under the assumption that such a deductively closed knowledge base is given in the form of an equivalent finite knowledge base.
A distinction is made between belief revision operators and belief revision schemes. While the former are simple mathematical operators mapping a pair of formulae into another formula, the latter depend on further information such as a preference relation. For example, the Dalal revision is an operator because, once two formulae formula_10 and formula_11 are given, no other information is needed to compute formula_16. On the other hand, revision based on a preference relation is a revision scheme, because formula_10 and formula_11 do not allow determining the result of revision if the family of preference orderings between models is not given. The complexity for revision schemes is determined under the assumption that the extra information needed to compute revision is given in some compact form. For example, a preference relation can be represented by a sequence of formulae whose models are increasingly preferred. Explicitly storing the relation as a set of pairs of models is instead not a compact representation of preference because the space required is exponential in the number of propositional letters.
The complexity of query answering and model checking in the propositional case is in the second level of the polynomial hierarchy for most belief revision operators and schemas. Most revision operators suffer from the problem of representational blow up: the result of revising two formulae is not necessarily representable in space polynomial in that of the two original formulae. In other words, revision may exponentially increase the size of the knowledge base.
Relevance.
Results demonstrating how relevance can be employed in belief revision were reported by Williams, Peppas, Foo and Chopra in the "Artificial Intelligence" journal.
Belief revision has also been used to demonstrate the acknowledgement of intrinsic social capital in closed networks.
Implementations.
Systems specifically implementing belief revision are:
Two systems including a belief revision feature are SNePS and Cyc.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "C"
},
{
"math_id": 3,
"text": "\\{a \\vee b\\}"
},
{
"math_id": 4,
"text": "\\neg a"
},
{
"math_id": 5,
"text": "a"
},
{
"math_id": 6,
"text": "b"
},
{
"math_id": 7,
"text": "\\neg a \\wedge b"
},
{
"math_id": 8,
"text": "a \\vee b"
},
{
"math_id": 9,
"text": "\\neg a "
},
{
"math_id": 10,
"text": "K"
},
{
"math_id": 11,
"text": "P"
},
{
"math_id": 12,
"text": "*"
},
{
"math_id": 13,
"text": "+"
},
{
"math_id": 14,
"text": "K+P"
},
{
"math_id": 15,
"text": "K \\cup \\{P\\}"
},
{
"math_id": 16,
"text": "K*P"
},
{
"math_id": 17,
"text": "P \\in K*P"
},
{
"math_id": 18,
"text": "K*P \\subseteq K+P"
},
{
"math_id": 19,
"text": "\\text{If }(\\neg P) \\not \\in K,\\text{ then }K*P=K+P"
},
{
"math_id": 20,
"text": "\\text{If }P\\text{ and }Q\\text{ are logically equivalent, then }K*P=K*Q"
},
{
"math_id": 21,
"text": "K*(P \\wedge Q) \\subseteq (K*P)+Q"
},
{
"math_id": 22,
"text": "\\text{If }(\\neg Q) \\not\\in K*P\\text{ then }(K*P)+Q \\subseteq K*(P \\wedge Q)"
},
{
"math_id": 23,
"text": "\\leq_K"
},
{
"math_id": 24,
"text": "I <_K J"
},
{
"math_id": 25,
"text": "I"
},
{
"math_id": 26,
"text": "J"
},
{
"math_id": 27,
"text": "K-P"
},
{
"math_id": 28,
"text": "K*P=(K-\\neg P)+P"
},
{
"math_id": 29,
"text": "K-P=K \\cap (K*\\neg P)"
},
{
"math_id": 30,
"text": "K=(K-P)+P"
},
{
"math_id": 31,
"text": "a > b"
},
{
"math_id": 32,
"text": "a > b \\in K"
},
{
"math_id": 33,
"text": "b \\in K * a"
},
{
"math_id": 34,
"text": ">"
},
{
"math_id": 35,
"text": "a>b"
},
{
"math_id": 36,
"text": "P \\vdash Q"
},
{
"math_id": 37,
"text": "K*P \\models Q"
},
{
"math_id": 38,
"text": "Q"
},
{
"math_id": 39,
"text": "\\{a, b\\}"
},
{
"math_id": 40,
"text": "\\{a \\wedge b\\}"
},
{
"math_id": 41,
"text": "\\{\\neg a, b\\}"
},
{
"math_id": 42,
"text": "a \\wedge b"
},
{
"math_id": 43,
"text": "\\{\\neg a\\}"
},
{
"math_id": 44,
"text": "K \\wedge P"
},
{
"math_id": 45,
"text": "K'"
},
{
"math_id": 46,
"text": "K*P*Q"
},
{
"math_id": 47,
"text": "\\alpha \\models \\mu"
},
{
"math_id": 48,
"text": "(\\psi * \\mu) * \\alpha \\equiv \\psi * \\alpha"
},
{
"math_id": 49,
"text": "\\alpha \\models \\neg \\mu"
},
{
"math_id": 50,
"text": "\\psi * \\alpha \\models \\mu"
},
{
"math_id": 51,
"text": "(\\psi * \\mu) * \\alpha \\models \\mu"
},
{
"math_id": 52,
"text": "\\psi * \\alpha \\not\\models \\neg \\mu"
},
{
"math_id": 53,
"text": "(\\psi * \\mu) * \\alpha \\not\\models \\neg \\mu"
},
{
"math_id": 54,
"text": "T"
},
{
"math_id": 55,
"text": "K \\vee T"
}
] | https://en.wikipedia.org/wiki?curid=1187311 |
1187337 | Pressure-gradient force | Force resulting from a difference in pressure across a surface
In fluid mechanics, the pressure-gradient force is the force that results when there is a difference in pressure across a surface. In general, a pressure is a force per unit area across a surface. A difference in pressure across a surface then implies a difference in force, which can result in an acceleration according to Newton's second law of motion, if there is no additional force to balance it. The resulting force is always directed from the region of higher pressure to the region of lower pressure. When a fluid is in an equilibrium state (i.e. there are no net forces, and no acceleration), the system is referred to as being in hydrostatic equilibrium. In the case of atmospheres, the pressure-gradient force is balanced by the gravitational force, maintaining hydrostatic equilibrium. In Earth's atmosphere, for example, air pressure decreases with altitude above Earth's surface, thus providing a pressure-gradient force which counteracts the force of gravity on the atmosphere.
Magnus effect.
The Magnus effect is an observable phenomenon that is commonly associated with a spinning object moving through a fluid. The path of the spinning object is deflected in a manner that is not present when the object is not spinning. The deflection can be explained by the difference in pressure of the fluid on opposite sides of the spinning object. The Magnus effect is dependent on the speed of rotation.
Formalism.
Consider a cubic parcel of fluid with a density formula_0, a height formula_1, and a surface area formula_2. The mass of the parcel can be expressed as, formula_3. Using Newton's second law, formula_4, we can then examine a pressure difference formula_5 (assumed to be only in the formula_6-direction) to find the resulting force, formula_7.
The acceleration resulting from the pressure gradient is then,
formula_8
The effects of the pressure gradient are usually expressed in this way, in terms of an acceleration, instead of in terms of a force. We can express the acceleration more precisely, for a general pressure formula_9 as,
formula_10
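As a rough numerical illustration of the one-dimensional relation above (not part of the original derivation), the following Python sketch estimates the acceleration from a sampled pressure profile by finite differences; the density value and the exponential profile are illustrative assumptions.

```python
import numpy as np

rho = 1.2                                # air density, kg/m^3 (assumed constant)
z = np.linspace(0.0, 1000.0, 101)        # heights in metres
P = 101325.0 * np.exp(-z / 8400.0)       # illustrative exponential pressure profile, Pa

a = -(1.0 / rho) * np.gradient(P, z)     # acceleration a = -(1/rho) dP/dz, m/s^2
print(a[0])                              # about +10 m/s^2, directed upward toward lower pressure
```

The positive sign of the result indicates an upward acceleration, toward lower pressure, of roughly the magnitude needed to balance gravity in hydrostatic equilibrium.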
The direction of the resulting force (acceleration) is thus in the opposite direction of the most rapid increase of pressure. | [
{
"math_id": 0,
"text": "\\rho"
},
{
"math_id": 1,
"text": "dz"
},
{
"math_id": 2,
"text": "dA"
},
{
"math_id": 3,
"text": "m = \\rho \\, dA \\, dz"
},
{
"math_id": 4,
"text": "F = m a"
},
{
"math_id": 5,
"text": "dP"
},
{
"math_id": 6,
"text": "z"
},
{
"math_id": 7,
"text": " F = - dP \\, dA = \\rho a \\, dA \\, dz"
},
{
"math_id": 8,
"text": "a = -\\frac{1}{\\rho} \\frac{dP}{dz} ."
},
{
"math_id": 9,
"text": "P"
},
{
"math_id": 10,
"text": " \\vec{a} = -\\frac{1}{\\rho} \\vec\\nabla P."
}
] | https://en.wikipedia.org/wiki?curid=1187337 |
1187410 | Semidefinite embedding | Maximum Variance Unfolding (MVU), also known as Semidefinite Embedding (SDE), is an algorithm in computer science that uses semidefinite programming to perform non-linear dimensionality reduction of high-dimensional vectorial input data.
It is motivated by the observation that kernel Principal Component Analysis (kPCA) does not reduce the data dimensionality, as it leverages the Kernel trick to non-linearly map the original data into an inner-product space.
Algorithm.
MVU creates a mapping from the high dimensional input vectors to some low dimensional Euclidean vector space in the following steps:
The steps of applying semidefinite programming followed by a linear dimensionality reduction step to recover a low-dimensional embedding into a Euclidean space were first proposed by Linial, London, and Rabinovich.
Optimization formulation.
Let formula_0 be the original input and formula_1 be the embedding. If formula_2 are two neighbors, then the local isometry constraint that needs to be satisfied is:
formula_3
Let formula_4 be the Gram matrices of formula_5 and formula_6 (i.e. formula_7). We can express the above constraint for all neighboring points formula_2 in terms of formula_4:
formula_8
In addition, we also want to constrain the embedding formula_6 to center at the origin:
formula_9
As described above, while the distances between neighboring points are preserved, the algorithm aims to maximize the pairwise distance of every pair of points. The objective function to be maximized is:
formula_10
Intuitively, maximizing the function above is equivalent to pulling the points as far away from each other as possible, and therefore "unfolds" the manifold. The local isometry constraint prevents the objective function from diverging (going to infinity). Let formula_11, where
formula_12
Since the graph has N points, the distance between any two points satisfies formula_13. We can then bound the objective function as follows:
formula_14
The objective function can be rewritten purely in the form of the Gram matrix:
formula_15
Finally, the optimization can be formulated as:
formula_16
After the Gram matrix formula_17 is learned by semidefinite programming, the output formula_18 can be obtained via Cholesky decomposition.
In particular, the Gram matrix can be written as formula_19 where formula_20 is the i-th element of eigenvector formula_21 of the eigenvalue formula_22.
It follows that the formula_23-th element of the output formula_24 is formula_25.
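The following Python sketch outlines the whole procedure, assuming NumPy and the cvxpy package with an SDP-capable solver (such as SCS) are available; the function name mvu_embedding, the neighbourhood size k and the nearest-neighbour construction are illustrative choices rather than part of the original formulation.

```python
import numpy as np
import cvxpy as cp

def mvu_embedding(X, k=4, dim=2):
    """Maximum Variance Unfolding of the rows of X (a sketch, not optimized)."""
    n = X.shape[0]
    G = X @ X.T                                        # Gram matrix of the input

    # k-nearest-neighbour graph (symmetrized), built directly with NumPy.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    nbrs = np.argsort(D, axis=1)[:, 1:k + 1]
    N = np.zeros((n, n), dtype=bool)
    for i in range(n):
        N[i, nbrs[i]] = True
    N = N | N.T

    # Semidefinite program: maximize Tr(K) subject to centering and
    # preservation of the squared distances between neighbouring points.
    K = cp.Variable((n, n), PSD=True)
    constraints = [cp.sum(K) == 0]
    for i in range(n):
        for j in range(i + 1, n):
            if N[i, j]:
                constraints.append(K[i, i] + K[j, j] - 2 * K[i, j]
                                   == G[i, i] + G[j, j] - 2 * G[i, j])
    cp.Problem(cp.Maximize(cp.trace(K)), constraints).solve()

    # Embedding from the top eigenvectors of the learned Gram matrix.
    w, V = np.linalg.eigh(K.value)
    order = np.argsort(w)[::-1][:dim]
    return V[:, order] * np.sqrt(np.maximum(w[order], 0.0))
```

For small data sets the semidefinite program can be solved directly; for larger ones the number of variables and constraints grows quickly with the number of points, and more specialised approaches are typically needed.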
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X \\,\\!"
},
{
"math_id": 1,
"text": "Y\\,\\!"
},
{
"math_id": 2,
"text": "i,j\\,\\!"
},
{
"math_id": 3,
"text": "|X_{i}-X_{j}|^{2}=|Y_{i}-Y_{j}|^{2}\\,\\!"
},
{
"math_id": 4,
"text": "G, K\\,\\!"
},
{
"math_id": 5,
"text": " X \\,\\!"
},
{
"math_id": 6,
"text": " Y \\,\\!"
},
{
"math_id": 7,
"text": "G_{ij}=X_i \\cdot X_j,K_{ij}=Y_i \\cdot Y_j \\,\\!"
},
{
"math_id": 8,
"text": "G_{ii}+G_{jj}-G_{ij}-G_{ji}=K_{ii}+K_{jj}-K_{ij}-K_{ji}\\,\\!"
},
{
"math_id": 9,
"text": "0 = |\\sum_{i}Y_{i}|^2\\Leftrightarrow(\\sum_{i}Y_{i}) \\cdot (\\sum_{i}Y_{i})\\Leftrightarrow\\sum_{i,j}Y_{i} \\cdot Y_{j}\\Leftrightarrow\\sum_{i,j}K_{ij}"
},
{
"math_id": 10,
"text": "T(Y)=\\dfrac{1}{2N}\\sum_{i,j}|Y_{i}-Y_{j}|^{2}"
},
{
"math_id": 11,
"text": "\\tau = max \\{\\eta_{ij}|Y_{i}-Y_{j}|^2\\} \\,\\!"
},
{
"math_id": 12,
"text": "\\eta_{ij} := \\begin{cases}\n 1 & \\mbox{if}\\ i \\mbox{ is a neighbour of } j \\\\\n 0 & \\mbox{otherwise} .\n\\end{cases}"
},
{
"math_id": 13,
"text": "|Y_{i}-Y_{j}|^2 \\leq N \\tau \\,\\!"
},
{
"math_id": 14,
"text": "T(Y)=\\dfrac{1}{2N}\\sum_{i,j}|Y_{i}-Y_{j}|^{2} \\leq \\dfrac{1}{2N}\\sum_{i,j}(N\\tau)^2 = \\dfrac{N^3\\tau^2}{2} \\,\\!"
},
{
"math_id": 15,
"text": " \n\\begin{align}\n T(Y) &{}= \\dfrac{1}{2N}\\sum_{i,j}|Y_{i}-Y_{j}|^{2} \\\\\n &{}= \\dfrac{1}{2N}\\sum_{i,j}(Y_{i}^2+Y_{j}^2-Y_{i} \\cdot Y_{j} - Y_{j} \\cdot Y_{i})\\\\\n &{}= \\dfrac{1}{2N}(\\sum_{i,j}Y_{i}^2+\\sum_{i,j}Y_{j}^2-\\sum_{i,j}Y_{i} \\cdot Y_{j} -\\sum_{i,j}Y_{j} \\cdot Y_{i})\\\\ \n &{}= \\dfrac{1}{2N}(\\sum_{i,j}Y_{i}^2+\\sum_{i,j}Y_{j}^2-0 -0)\\\\\n &{}= \\dfrac{1}{N}(\\sum_{i}Y_{i}^2)=\\dfrac{1}{N}(Tr(K))\\\\\n\\end{align}\n\\,\\!"
},
{
"math_id": 16,
"text": "\n \\begin{align}\n& \\text{Maximize} && Tr(\\mathbf{K})\\\\\n& \\text{subject to} && \\mathbf{K} \\succeq 0, \\sum_{ij}\\mathbf{K}_{ij} = 0 \\\\\n& \\text{and} && G_{ii}+G_{jj}-G_{ij}-G_{ji}=K_{ii}+K_{jj}-K_{ij}-K_{ji}, \\forall i, j \\mbox{ where } \\eta_{ij} = 1,\n\\end{align} \n"
},
{
"math_id": 17,
"text": "K \\,\\!"
},
{
"math_id": 18,
"text": "Y \\,\\!"
},
{
"math_id": 19,
"text": " K_{ij}=\\sum_{\\alpha = 1}^{N}(\\lambda_{\\alpha } V_{\\alpha i} V_{\\alpha j}) \\,\\!"
},
{
"math_id": 20,
"text": " V_{\\alpha i} \\,\\!"
},
{
"math_id": 21,
"text": " V_{\\alpha} \\,\\!"
},
{
"math_id": 22,
"text": " \\lambda_{\\alpha } \\,\\!"
},
{
"math_id": 23,
"text": " \\alpha \\,\\!"
},
{
"math_id": 24,
"text": " Y_i \\,\\!"
},
{
"math_id": 25,
"text": " \\sqrt{\\lambda_{\\alpha }} V_{\\alpha i} \\,\\!"
}
] | https://en.wikipedia.org/wiki?curid=1187410 |
11876741 | Simon's problem | Problem in computer science
In computational complexity theory and quantum computing, Simon's problem is a computational problem that is proven to be solved exponentially faster on a quantum computer than on a classical (that is, traditional) computer. The quantum algorithm solving Simon's problem, usually called Simon's algorithm, served as the inspiration for Shor's algorithm. Both problems are special cases of the abelian hidden subgroup problem, which is now known to have efficient quantum algorithms.
The problem is set in the model of decision tree complexity or query complexity and was conceived by Daniel R. Simon in 1994. Simon exhibited a quantum algorithm that solves Simon's problem exponentially faster with exponentially fewer queries than the best probabilistic (or deterministic) classical algorithm. In particular, Simon's algorithm uses a linear number of queries and any classical probabilistic algorithm must use an exponential number of queries.
This problem yields an oracle separation between the complexity classes BPP (bounded-error classical query complexity) and BQP (bounded-error quantum query complexity). This is the same separation that the Bernstein–Vazirani algorithm achieves, and different from the separation provided by the Deutsch–Jozsa algorithm, which separates P and EQP. Unlike the Bernstein–Vazirani algorithm, Simon's algorithm's separation is "exponential".
Because this problem assumes the existence of a highly-structured "black box" oracle to achieve its speedup, this problem has little practical value. However, without such an oracle, exponential speedups cannot easily be proven, since this would prove that P is different from PSPACE.
Problem description.
Given a function (implemented by a black box or oracle) formula_0 with the promise that, for some unknown formula_1, for all formula_2,
formula_3 if and only if formula_4,
where formula_5 denotes bitwise XOR. The goal is to identify formula_6 by making as few queries to formula_7 as possible. Note that
formula_8 if and only if formula_9
Furthermore, for some formula_10 and formula_6 in formula_11, formula_12 is unique (not equal to formula_10) if and only if formula_13. This means that formula_14 is two-to-one when formula_13, and one-to-one when formula_15. It is also the case that formula_11 implies formula_16, meaning that
formula_17
which shows how formula_14 is periodic.
The associated decision problem formulation of Simon's problem is to distinguish when formula_15 (formula_14 is one-to-one), and when formula_13 (formula_14 is two-to-one).
Example.
The following function is an example of a function that satisfies the required property for formula_18:
In this case, formula_19 (i.e. the solution). Every output of formula_14 occurs twice, and the two input strings corresponding to any one given output have bitwise XOR equal to formula_19.
For example, the input strings formula_20 and formula_21 are both mapped (by formula_14) to the same output string formula_22. That is, formula_23 and formula_24. Applying XOR to 010 and 100 obtains 110, that is formula_25
formula_26 can also be verified using input strings 001 and 111 that are both mapped (by f) to the same output string 010. Applying XOR to 001 and 111 obtains 110, that is formula_27. This gives the same solution formula_26 as before.
In this example the function "f" is indeed a two-to-one function where formula_28.
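The promise can also be checked by brute force. The short Python sketch below uses an illustrative construction, f(x) = min(x, x XOR s) with s = 110, which satisfies the same promise as the example above (its specific output values differ from the table in this article):

```python
n, s = 3, 0b110
f = {x: min(x, x ^ s) for x in range(2 ** n)}   # a two-to-one function with hidden s

for x in range(2 ** n):
    for y in range(x + 1, 2 ** n):
        if f[x] == f[y]:
            assert x ^ y == s                   # colliding inputs always differ by s
print(sorted(f.values()))                       # each attained output occurs exactly twice
```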
Problem hardness.
Intuitively, this is a hard problem to solve in a "classical" way, even if one uses randomness and accepts a small probability of error. The intuition behind the hardness is reasonably simple: if you want to solve the problem classically, you need to find two different inputs formula_10 and formula_12 for which formula_3. There is not necessarily any structure in the function formula_14 that would help us to find two such inputs: more specifically, we can discover something about formula_14 (or what it does) only when, for two different inputs, we obtain the same output. In any case, we would need to guess formula_29 different inputs before being likely to find a pair on which formula_30 takes the same output, as per the birthday problem. Since classically finding "s" with 100% certainty would require checking formula_31 inputs, Simon's problem seeks to find "s" using fewer queries than this classical method.
Simon's algorithm.
The algorithm as a whole uses a subroutine to execute the following two steps:
Quantum subroutine.
The quantum circuit (see the picture) is the implementation of the quantum part of Simon's algorithm. The quantum subroutine of the algorithm makes use of the Hadamard transform
formula_36
where formula_37, and formula_5 denotes XOR.
First, the algorithm starts with two registers, initialized to formula_38. Then, we apply the Hadamard transform to the first register, which gives the state
formula_39
Query the oracle formula_40 to get the state
formula_41.
Apply another Hadamard transform to the first register. This will produce the state
formula_42
Finally, we measure the first register (the algorithm also works if the second register is measured before the first, but this is unnecessary). The probability of measuring a state formula_43 is
formula_44
This is due to the fact that taking the magnitude of this vector and squaring it sums up all the probabilities of all the possible measurements of the second register that must have the first register as formula_43. There are two cases for our measurement: either formula_15, or formula_13.
For the first case,
formula_45
since in this case, formula_14 is one-to-one, implying that the range of formula_14 is formula_46, meaning that the summation is over every basis vector. For the second case, note that there exist two strings, formula_47 and formula_48, such that formula_49, where formula_50. Thus,
formula_51
Furthermore, since formula_52, formula_53, and so
formula_54
This expression is now easy to evaluate. Recall that we are measuring formula_55. When formula_56, then this expression will evaluate to formula_57, and when formula_58, then this expression will be formula_59.
Thus, both when formula_15 and when formula_13, our measured formula_55 satisfies formula_58.
Classical post-processing.
We run the quantum part of the algorithm until we have a linearly independent list of bitstrings formula_60, and each formula_34 satisfies formula_35. Thus, we can efficiently solve this system of equations classically to find formula_6.
The probability that formula_61 are linearly independent is at least
formula_62
Once we solve the system of equations and produce a solution formula_63, we can test if formula_64. If this is true, then we know formula_65, since formula_66. If it is the case that formula_67, then that means that formula_15, and formula_68 since formula_14 is one-to-one.
We can repeat Simon's algorithm a constant number of times to increase the probability of success arbitrarily, while still having the same time complexity.
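The classical post-processing amounts to finding a nonzero vector in the null space, over GF(2), of the matrix formed by the measured bitstrings. A minimal Python sketch of this step is shown below; the function name and the input encoding are illustrative.

```python
import numpy as np

def recover_secret(ys):
    """Given independent bitstrings y with y . s = 0 (mod 2), return a nonzero
    s spanning the null space, assuming that null space is one-dimensional."""
    A = np.array(ys, dtype=np.uint8) % 2
    rows, cols = A.shape
    pivot_cols, r = [], 0
    for c in range(cols):                      # Gauss-Jordan elimination over GF(2)
        pivot = next((i for i in range(r, rows) if A[i, c]), None)
        if pivot is None:
            continue
        A[[r, pivot]] = A[[pivot, r]]
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] ^= A[r]
        pivot_cols.append(c)
        r += 1
    free = [c for c in range(cols) if c not in pivot_cols]
    if not free:
        return [0] * cols                      # only the trivial solution s = 0^n
    s = [0] * cols
    f0 = free[0]
    s[f0] = 1                                  # set one free variable to 1 ...
    for row, c in enumerate(pivot_cols):
        s[c] = int(A[row, f0])                 # ... and back-substitute the pivots
    return s

# Example: two independent outcomes consistent with s = 110 for n = 3.
print(recover_secret([[0, 0, 1], [1, 1, 1]]))  # -> [1, 1, 0]
```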
Explicit examples of Simon's algorithm for few qubits.
One qubit.
Consider the simplest instance of the algorithm, with formula_69. In this case evolving the input state through an Hadamard gate and the oracle results in the state (up to renormalization):
formula_70
If formula_71, that is, formula_72, then measuring the second register always gives the outcome formula_73, and always results in the first register collapsing to the state (up to renormalization):
formula_74
Thus applying an Hadamard and measuring the first register always gives the outcome formula_75. On the other hand, if formula_14 is one-to-one, that is, formula_76, then measuring the first register after the second Hadamard can result in both formula_75 and formula_77, with equal probability.
We recover formula_6 from the measurement outcomes by looking at whether we always measured formula_75, in which case formula_71, or we measured both formula_75 and formula_77 with equal probability, in which case we infer that formula_76. This scheme will fail if formula_76 but we nonetheless always found the outcome formula_75; however, the probability of this event is formula_78, with formula_79 the number of performed measurements, and it can thus be made exponentially small by increasing the statistics.
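This one-qubit case is small enough to simulate directly. The following NumPy sketch builds the two-qubit state after the oracle query, applies the Hadamard gate to the first register, and prints the measurement probabilities of that register; the dictionary used for the oracle is an illustrative stand-in for the black box.

```python
import numpy as np

# Illustrative single-bit oracles; swap the dictionary to explore the other case.
f = {0: 0, 1: 0}        # f(0) = f(1): the hidden string is s = 1
# f = {0: 0, 1: 1}      # one-to-one: the hidden string is s = 0

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

# Basis ordering |first register, second register>, index = 2*q1 + q2.
state = np.zeros(4)
for x in (0, 1):                       # Hadamard on |0>, then one oracle query
    state[2 * x + f[x]] += 1.0 / np.sqrt(2.0)

state = np.kron(H, np.eye(2)) @ state  # second Hadamard on the first register only

probs = [np.sum(state[2 * j: 2 * j + 2] ** 2) for j in (0, 1)]
print(probs)    # [1.0, 0.0] when f(0) = f(1); [0.5, 0.5] when f is one-to-one
```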
Two qubits.
Consider now the case with formula_80. The initial part of the algorithm results in the state (up to renormalization):
formula_81
If formula_82, meaning formula_14 is injective, then finding formula_83 on the second register always collapses the first register to formula_84, for all formula_85. In other words, applying Hadamard gates and measuring the first register, the four outcomes formula_86 are found with equal probability.
Suppose on the other hand formula_87, for example, formula_88. Then measuring formula_89 on the second register collapses the first register to the state formula_90. And more generally, measuring formula_91 gives formula_92 on the first register. Applying Hadamard gates and measuring on the first register can thus result in the outcomes formula_93 and formula_94 with equal probabilities.
Similar reasoning applies to the other cases: if formula_95 then the possible outcomes are formula_93 and formula_96, while if formula_97 the possible outcomes are formula_93 and formula_98, compatibly with the formula_99 rule discussed in the general case.
To recover formula_6 we thus only need to distinguish between these four cases, collecting enough statistics to ensure that the probability of mistaking one outcome probability distribution for another is sufficiently small.
Complexity.
Simon's algorithm requires formula_32 queries to the black box, whereas a classical algorithm would need at least formula_100 queries. It is also known that Simon's algorithm is optimal in the sense that "any" quantum algorithm to solve this problem requires formula_101 queries.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f:\\{0,1\\}^n\\rightarrow \\{0,1\\}^n"
},
{
"math_id": 1,
"text": "s\\in \\{0,1\\}^n"
},
{
"math_id": 2,
"text": "x, y\\in\\{0,1\\}^n "
},
{
"math_id": 3,
"text": "f(x)=f(y)"
},
{
"math_id": 4,
"text": "x \\oplus y \\in \\{0^n, s\\}"
},
{
"math_id": 5,
"text": "\\oplus"
},
{
"math_id": 6,
"text": "s"
},
{
"math_id": 7,
"text": "f(x)"
},
{
"math_id": 8,
"text": "a \\oplus b = 0^n"
},
{
"math_id": 9,
"text": "a = b"
},
{
"math_id": 10,
"text": "x"
},
{
"math_id": 11,
"text": "x \\oplus y = s"
},
{
"math_id": 12,
"text": "y"
},
{
"math_id": 13,
"text": "s \\neq 0^n"
},
{
"math_id": 14,
"text": "f"
},
{
"math_id": 15,
"text": "s = 0^n"
},
{
"math_id": 16,
"text": "y = s \\oplus x"
},
{
"math_id": 17,
"text": "f(x) = f(y) = f(x \\oplus {s})"
},
{
"math_id": 18,
"text": "n = 3"
},
{
"math_id": 19,
"text": "s = 110"
},
{
"math_id": 20,
"text": "010"
},
{
"math_id": 21,
"text": "100"
},
{
"math_id": 22,
"text": "000"
},
{
"math_id": 23,
"text": "{\\displaystyle f(010)=000}\n"
},
{
"math_id": 24,
"text": "\n{\\displaystyle f(100)=000}"
},
{
"math_id": 25,
"text": "{\\displaystyle 010\\oplus 100=110=s}."
},
{
"math_id": 26,
"text": "s=110"
},
{
"math_id": 27,
"text": "001\\oplus111=110=s"
},
{
"math_id": 28,
"text": "{\\displaystyle s\\neq 0^{n}}"
},
{
"math_id": 29,
"text": " {\\displaystyle \\Omega ({\\sqrt {2^{n}}})}"
},
{
"math_id": 30,
"text": " f"
},
{
"math_id": 31,
"text": "{\\displaystyle \\Theta ({\\sqrt {2^{n}}})}"
},
{
"math_id": 32,
"text": "O(n)"
},
{
"math_id": 33,
"text": "y_1, ..., y_{n - 1}"
},
{
"math_id": 34,
"text": "y_k"
},
{
"math_id": 35,
"text": "y_k \\cdot s = 0"
},
{
"math_id": 36,
"text": "H^{\\otimes n}|k\\rangle = \\frac{1}{\\sqrt{2^n}} \\sum_{j = 0}^{2^n - 1} (-1)^{k \\cdot j} |j\\rangle"
},
{
"math_id": 37,
"text": "k \\cdot j = k_1 j_1 \\oplus \\ldots \\oplus k_n j_n"
},
{
"math_id": 38,
"text": "|0\\rangle^{\\otimes n}|0\\rangle^{\\otimes n}"
},
{
"math_id": 39,
"text": "\\frac{1}{\\sqrt{2^n}} \\sum_{k = 0}^{2^n - 1} |k\\rangle |0\\rangle^{\\otimes n}."
},
{
"math_id": 40,
"text": "U_f"
},
{
"math_id": 41,
"text": "\\frac{1}{\\sqrt{2^n}} \\sum_{k = 0}^{2^n - 1} |k\\rangle |f(k)\\rangle"
},
{
"math_id": 42,
"text": "\\frac{1}{\\sqrt{2^n}} \\sum_{k = 0}^{2^n - 1} \\left[\\frac{1}{\\sqrt{2^n}} \\sum_{j = 0}^{2^n - 1} (-1)^{j \\cdot k} |j\\rangle \\right] |f(k)\\rangle\n= \\sum_{j = 0}^{2^n - 1} |j\\rangle \\left[\\frac{1}{2^n} \\sum_{k = 0}^{2^n - 1} (-1)^{j \\cdot k} |f(k)\\rangle \\right]."
},
{
"math_id": 43,
"text": "|j\\rangle"
},
{
"math_id": 44,
"text": "\\left|\\left| \\frac{1}{2^n} \\sum_{k = 0}^{2^n - 1} (-1)^{j \\cdot k} |f(k)\\rangle \\right|\\right|^2"
},
{
"math_id": 45,
"text": "\\left|\\left| \\frac{1}{2^n} \\sum_{k = 0}^{2^n - 1} (-1)^{j \\cdot k} |f(k)\\rangle \\right|\\right|^2 = \\frac{1}{2^n}"
},
{
"math_id": 46,
"text": "\\{0, 1\\}^n"
},
{
"math_id": 47,
"text": "x_1"
},
{
"math_id": 48,
"text": "x_2"
},
{
"math_id": 49,
"text": "f(x_1) = f(x_2) = z"
},
{
"math_id": 50,
"text": "z \\in \\mathrm{range}(f)"
},
{
"math_id": 51,
"text": "\\left|\\left| \\frac{1}{2^n} \\sum_{k = 0}^{2^n - 1} (-1)^{j \\cdot k} |f(k)\\rangle \\right|\\right|^2\n= \\left|\\left| \\frac{1}{2^n} \\sum_{z\\, \\in \\,\\mathrm{range}(f)} ((-1)^{j \\cdot x_1} + (-1)^{j \\cdot x_2}) |z\\rangle \\right|\\right|^2"
},
{
"math_id": 52,
"text": "x_1 \\oplus x_2 = s"
},
{
"math_id": 53,
"text": "x_2 = x_1 \\oplus s"
},
{
"math_id": 54,
"text": "\\begin{align}\n\\left|\\left| \\frac{1}{2^n} \\sum_{z\\, \\in \\,\\mathrm{range}(f)} ((-1)^{j \\cdot x_1} + (-1)^{j \\cdot x_2}) |z\\rangle \\right|\\right|^2\n&= \\left|\\left| \\frac{1}{2^n} \\sum_{z\\, \\in \\,\\mathrm{range}(f)} ((-1)^{j \\cdot x_1} + (-1)^{j \\cdot (x_1 \\oplus s)}) |z\\rangle \\right|\\right|^2 \\\\\n&= \\left|\\left| \\frac{1}{2^n} \\sum_{z\\, \\in \\,\\mathrm{range}(f)} ((-1)^{j \\cdot x_1} + (-1)^{j \\cdot x_1 \\oplus j \\cdot s}) |z\\rangle \\right|\\right|^2 \\\\\n&= \\left|\\left| \\frac{1}{2^n} \\sum_{z\\, \\in \\,\\mathrm{range}(f)} (-1)^{j \\cdot x_1}(1 + (-1)^{j \\cdot s}) |z\\rangle \\right|\\right|^2\n\\end{align}"
},
{
"math_id": 55,
"text": "j"
},
{
"math_id": 56,
"text": "j \\cdot s = 1"
},
{
"math_id": 57,
"text": "0"
},
{
"math_id": 58,
"text": "j \\cdot s = 0"
},
{
"math_id": 59,
"text": "2^{-n + 1}"
},
{
"math_id": 60,
"text": "y_1, \\ldots, y_{n - 1}"
},
{
"math_id": 61,
"text": "y_1, y_2, \\dots, y_{n-1}"
},
{
"math_id": 62,
"text": "\n\\prod_{k=1}^\\infty \\left( 1 - \\frac{1}{2^k} \\right) = 0.288788\\dots\n"
},
{
"math_id": 63,
"text": "s'"
},
{
"math_id": 64,
"text": "f(0^n) = f(s')"
},
{
"math_id": 65,
"text": "s' = s"
},
{
"math_id": 66,
"text": "f(0^n) = f(0^n \\oplus s) = f(s)"
},
{
"math_id": 67,
"text": "f(0^n) \\neq f(s')"
},
{
"math_id": 68,
"text": "\nf(0^n) \\neq f(s')\n"
},
{
"math_id": 69,
"text": "n=1"
},
{
"math_id": 70,
"text": "|0\\rangle |f(0)\\rangle + |1\\rangle|f(1)\\rangle."
},
{
"math_id": 71,
"text": "s=1"
},
{
"math_id": 72,
"text": "f(0)=f(1)"
},
{
"math_id": 73,
"text": "|f(0)\\rangle"
},
{
"math_id": 74,
"text": "|0\\rangle+ |1\\rangle."
},
{
"math_id": 75,
"text": "|0\\rangle"
},
{
"math_id": 76,
"text": "s=0"
},
{
"math_id": 77,
"text": "|1\\rangle"
},
{
"math_id": 78,
"text": "2^{-N}"
},
{
"math_id": 79,
"text": "N"
},
{
"math_id": 80,
"text": "n=2"
},
{
"math_id": 81,
"text": "|00\\rangle|f(00)\\rangle +\n|01\\rangle|f(01)\\rangle +\n|10\\rangle|f(10)\\rangle +\n|11\\rangle|f(11)\\rangle."
},
{
"math_id": 82,
"text": "s=(00)"
},
{
"math_id": 83,
"text": "|f(x)\\rangle"
},
{
"math_id": 84,
"text": "|x\\rangle"
},
{
"math_id": 85,
"text": "x\\in\\{0,1\\}^2"
},
{
"math_id": 86,
"text": "00,01,10,11"
},
{
"math_id": 87,
"text": "s\\neq(00)"
},
{
"math_id": 88,
"text": "s=(01)"
},
{
"math_id": 89,
"text": "|f(00)\\rangle"
},
{
"math_id": 90,
"text": "|00\\rangle+|10\\rangle"
},
{
"math_id": 91,
"text": "|f(xy)\\rangle"
},
{
"math_id": 92,
"text": "|x,y\\rangle+|x,y\\oplus1\\rangle=|x\\rangle(|0\\rangle+|1\\rangle)"
},
{
"math_id": 93,
"text": "00"
},
{
"math_id": 94,
"text": "10"
},
{
"math_id": 95,
"text": "s=(10)"
},
{
"math_id": 96,
"text": "01"
},
{
"math_id": 97,
"text": "s=(11)"
},
{
"math_id": 98,
"text": "11"
},
{
"math_id": 99,
"text": "j\\cdot s=0"
},
{
"math_id": 100,
"text": "\\Omega(2^{n/2})"
},
{
"math_id": 101,
"text": "\\Omega(n)"
}
] | https://en.wikipedia.org/wiki?curid=11876741 |
11877586 | Number sentence | A simple equation or inequality using numbers and basic operations (e.g +, -, ×, ÷)
In mathematics education, a number sentence is an equation or inequality expressed using numbers and mathematical symbols. The term is used in primary level mathematics teaching in the US, Canada, UK, Australia, New Zealand and South Africa.
Usage.
The term is used as a means of asking students to write down equations using simple mathematical symbols (numerals, the four basic mathematical operators, and the equality symbol). Sometimes boxes or shapes are used to indicate unknown values. As such, number sentences are used to introduce students to notions of structure and elementary algebra prior to a more formal treatment of these concepts.
A number sentence without unknowns is equivalent to a logical proposition expressed using the notation of arithmetic.
Examples.
Some students will use a direct computational approach. They will carry out the addition 26 + 39 = 65, put 65 = 26 + formula_0, and then find that formula_0 = 39.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Box"
}
] | https://en.wikipedia.org/wiki?curid=11877586 |
1187923 | Eqn (software) | Preprocessor that formats equations for drawing
Part of the troff suite of Unix document layout tools, eqn is a preprocessor that formats equations for printing. A similar program, neqn, accepted the same input as eqn, but produced output tuned to look better in nroff. The eqn program was created in 1974 by Brian Kernighan and Lorinda Cherry.
It was implemented using yacc compiler-compiler.
The input language used by eqn allows the user to write mathematical expressions in much the same way as they would be spoken aloud. The language is defined by a context-free grammar, together with operator precedence and operator associativity rules. The eqn language is similar to the mathematical component of TeX, which appeared several years later, but is simpler and less complete.
An independent compatible implementation of the eqn preprocessor has been developed by GNU as part of groff, the GNU version of troff. The GNU implementation extends the original language by adding a number of new keywords such as "smallover" and "accent". mandoc, a specialised compiler for UNIX man pages, also contains a standalone eqn parser/formatter.
History.
Eqn was written using the yacc parser generator.
Syntax examples.
Here is how some examples would be written in eqn (with equivalents in TeX for comparison):
Spaces are important in eqn; tokens are delimited only by whitespace characters, tildes ~, braces {} and double-quotes "". Thus codice_0 results in formula_0, whereas codice_1 is needed to give the intended formula_1.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(pi r^{2)}"
},
{
"math_id": 1,
"text": "f(\\pi r^2)"
}
] | https://en.wikipedia.org/wiki?curid=1187923 |
1187928 | Jensen's alpha | Financial calculation
In finance, Jensen's alpha (or Jensen's Performance Index, ex-post alpha) is used to determine the abnormal return of a security or portfolio of securities over the theoretical expected return. It is a version of the standard alpha based on a theoretical performance instead of a market index.
The security could be any asset, such as stocks, bonds, or derivatives. The theoretical return is predicted by a market model, most commonly the capital asset pricing model (CAPM). The market model uses statistical methods to predict the appropriate risk-adjusted return of an asset. The CAPM for instance uses beta as a multiplier.
History.
Jensen's alpha was first used as a measure in the evaluation of mutual fund managers by Michael Jensen in 1968. The CAPM return is supposed to be 'risk adjusted', which means it takes account of the relative riskiness of the asset.
This is based on the concept that riskier assets should have higher expected returns than less risky assets. If an asset's return is even higher than the risk adjusted return, that asset is said to have "positive alpha" or "abnormal returns". Investors are constantly seeking investments that have higher alpha.
Since the work of Eugene Fama, many academics believe financial markets are too efficient to allow for repeatedly earning positive Alpha, unless by chance. Nevertheless, Alpha is still widely used to evaluate mutual fund and portfolio manager performance, often in conjunction with the Sharpe ratio and the Treynor ratio.
formula_0
Calculation.
In the context of CAPM, calculating alpha requires the following inputs: the realized return of the portfolio formula_1, the market return formula_2, the risk-free rate of return formula_3, and the beta of the portfolio with respect to the market formula_4.
An additional way of understanding the definition can be obtained by rewriting it as:
formula_5
If we define the excess return of the fund (market) over the risk free return as formula_6 and formula_7 then Jensen's alpha can be expressed as:
formula_8
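A minimal numeric sketch of this calculation in Python (the function name and the return figures are invented for illustration):

```python
def jensens_alpha(r_portfolio, r_market, r_free, beta):
    return (r_portfolio - r_free) - beta * (r_market - r_free)

# Example: 12% portfolio return, 10% market return, 2% risk-free rate, beta 1.1
print(jensens_alpha(0.12, 0.10, 0.02, 1.1))   # 0.012, i.e. 1.2% abnormal return
```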
Use in quantitative finance.
Jensen's alpha is a statistic that is commonly used in empirical finance to assess the marginal return associated with unit exposure to a given strategy. Generalizing the above definition to the multifactor setting, Jensen's alpha is a measure of the marginal return associated with an additional strategy that is not explained by existing factors.
We obtain the CAPM alpha if we consider excess market returns as the only factor. If we add in the Fama-French factors (of size and value), we obtain the 3-factor alpha. If additional factors were to be added (such as momentum) one could ascertain a 4-factor alpha, and so on. If Jensen's alpha is significant and positive, then the strategy being considered has a history of generating returns on top of what would be expected based on other factors alone. For example, in the 3-factor case, we may regress momentum factor returns on 3-factor returns to find that momentum generates a significant premium on top of size, value, and market returns.
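In practice, a multifactor alpha of this kind is commonly estimated as the intercept of an ordinary-least-squares regression of the strategy's excess returns on the factor returns. The following Python sketch uses synthetic data purely for illustration; it is not a prescribed estimation procedure from the sources cited here.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 250
factors = rng.normal(0.0, 0.01, size=(T, 3))            # e.g. market, size, value excess returns
strategy = 0.0004 + factors @ np.array([1.0, 0.2, -0.3]) + rng.normal(0, 0.005, T)

X = np.column_stack([np.ones(T), factors])               # intercept column + factor returns
coef, *_ = np.linalg.lstsq(X, strategy, rcond=None)
alpha = coef[0]                                          # estimated alpha per period
print(alpha)
```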
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\overset\\text{Jensen's alpha}{\\alpha_J} = \\overset\\text{portfolio return}{R_i} - [\\overset\\text{risk free rate}{R_f} + \\overset\\text{portfolio beta}{\\beta_{iM}} \\cdot (\\overset\\text{market return}{R_M} - \\overset\\text{risk free rate}{R_f})]"
},
{
"math_id": 1,
"text": "R_i"
},
{
"math_id": 2,
"text": "R_M"
},
{
"math_id": 3,
"text": "R_f"
},
{
"math_id": 4,
"text": "\\beta_{iM}"
},
{
"math_id": 5,
"text": "\\alpha_J = (R_i - R_f) - \\beta_{iM} \\cdot (R_M - R_f)"
},
{
"math_id": 6,
"text": "\\Delta_R \\equiv (R_i - R_f) "
},
{
"math_id": 7,
"text": " \\Delta_M \\equiv (R_M - R_f)"
},
{
"math_id": 8,
"text": "\\alpha_J = \\Delta_R - \\beta_{iM} \\Delta_M "
}
] | https://en.wikipedia.org/wiki?curid=1187928 |
1188090 | List of undecidable problems | Computational problems no algorithm can solve
In computability theory, an undecidable problem is a decision problem for which an effective method (algorithm) to derive the correct answer does not exist. More formally, an undecidable problem is a problem whose language is not a recursive set; see the article Decidable language. There are uncountably many undecidable problems, so the list below is necessarily incomplete. Though undecidable languages are not recursive languages, they may be subsets of Turing recognizable languages: i.e., such undecidable languages may be recursively enumerable.
Many, if not most, undecidable problems in mathematics can be posed as word problems: determining when two distinct strings of symbols (encoding some mathematical concept or object) represent the same object or not.
For undecidability in axiomatic mathematics, see List of statements undecidable in ZFC.
formula_0
where "x" is a vector in Rn, "p"("t", "x") is a vector of polynomials in "t" and "x", and "(t0, x0)" belongs to Rn+1.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{dx}{dt} = p(t, x),~x(t_0)=x_0,"
}
] | https://en.wikipedia.org/wiki?curid=1188090 |
11880982 | Material nonimplication | Material nonimplication or abjunction (Latin "ab" = "away", "junctio"= "to join") is a term referring to a logic operation used in generic circuits and Boolean algebra. It is the negation of material implication. That is to say that for any two propositions formula_1 and formula_2, the material nonimplication from formula_1 to formula_2 is true if and only if the negation of the material implication from formula_1 to formula_2 is true. This is more naturally stated as that the material nonimplication from formula_1 to formula_2 is true only if formula_1 is true and formula_2 is false.
It may be written using logical notation as formula_0, formula_3, or "L"pq"" (in Bocheński notation), and is logically equivalent to formula_4, and formula_5.
Definition.
Logical Equivalences.
Material nonimplication may be defined as the negation of material implication.
In classical logic, it is also equivalent to the negation of the disjunction of formula_6 and formula_2, and also the conjunction of formula_1 and formula_7
Properties.
falsehood-preserving: The interpretation under which all variables are assigned a truth value of "false" produces a truth value of "false" as a result of material nonimplication.
Symbol.
The symbol for material nonimplication is simply a crossed-out material implication symbol. Its Unicode code point is U+219B (8603 in decimal): ↛.
Natural language.
Grammatical.
"p minus q."
"p without q."
Rhetorical.
"p but not q."
"q is false, in spite of p."
Computer science.
Bitwise operation: A&(~B)
Logical operation: A&&(!B)
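A short Python check (not part of the original article) confirming that this bitwise/logical form agrees with the definition as the negation of material implication on all four truth assignments:

```python
for P in (False, True):
    for Q in (False, True):
        nonimplication = P and not Q                   # P ↛ Q, i.e. P AND (NOT Q)
        assert nonimplication == (not (not P or Q))    # negation of P → Q
        print(P, Q, nonimplication)                    # True only for P = True, Q = False
```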
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P \\nrightarrow Q"
},
{
"math_id": 1,
"text": "P"
},
{
"math_id": 2,
"text": "Q"
},
{
"math_id": 3,
"text": "P \\not \\supset Q"
},
{
"math_id": 4,
"text": "\\neg (P \\rightarrow Q)"
},
{
"math_id": 5,
"text": "P \\land \\neg Q"
},
{
"math_id": 6,
"text": "\\neg P"
},
{
"math_id": 7,
"text": "\\neg Q"
}
] | https://en.wikipedia.org/wiki?curid=11880982 |
11880985 | Converse nonimplication | Logical connective
In logic, converse nonimplication is a logical connective which is the negation of converse implication (equivalently, the negation of the converse of implication).
Definition.
Converse nonimplication is notated formula_0, or formula_1, and is logically equivalent to formula_2 and formula_3.
Truth table.
The truth table of formula_4.
Notation.
Converse nonimplication is notated formula_5, which is the left arrow from converse implication (formula_6), negated with a stroke (/).
Alternatives include
Properties.
falsehood-preserving: The interpretation under which all variables are assigned a truth value of 'false' produces a truth value of 'false' as a result of converse nonimplication
Natural language.
Grammatical.
Example:
If it rains (P) then I get wet (Q); however, the mere fact that I am wet (Q) does not mean it is raining. In reality, I went to a pool party with the co-ed staff in my clothes (~P), and that is why I am facilitating this lecture in this state (Q).
Rhetorical.
Q does not imply P.
Boolean algebra.
Converse Nonimplication in a general Boolean algebra is defined as formula_12.
Example of a 2-element Boolean algebra: the 2 elements {0,1} with 0 as zero and 1 as unity element, operators formula_11 as complement operator, formula_13 as join operator and formula_14 as meet operator, build the Boolean algebra of propositional logic.
Example of a 4-element Boolean algebra: the 4 divisors {1,2,3,6} of 6 with 1 as zero and 6 as unity element, operators formula_15 (co-divisor of 6) as complement operator, formula_16 (least common multiple) as join operator and formula_17 (greatest common divisor) as meet operator, build a Boolean algebra.
Properties.
Non-associative.
formula_18 if and only if formula_19 (in a two-element Boolean algebra the latter condition is reduced to formula_20 or formula_21). Hence in a nontrivial Boolean algebra Converse Nonimplication is nonassociative.
formula_22
Clearly, it is associative if and only if formula_23.
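The associativity condition can be verified by brute force in the two-element Boolean algebra. The following Python sketch checks, for every assignment, that the two groupings agree exactly when formula_23 holds:

```python
def cni(q, p):                      # converse nonimplication q ↚ p = q'p
    return (not q) and p

for r in (False, True):
    for q in (False, True):
        for p in (False, True):
            lhs = cni(r, cni(q, p))
            rhs = cni(cni(r, q), p)
            # equality holds exactly when r and p are not both true, i.e. rp = 0
            assert (lhs == rhs) == (not (r and p))
```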
Computer science.
An example of converse nonimplication in computer science can be found when performing a right outer join on a set of tables from a database, where records from the "left" table that do not match the join condition are excluded.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P \\nleftarrow Q"
},
{
"math_id": 1,
"text": "P \\not \\subset Q"
},
{
"math_id": 2,
"text": "\\neg (P \\leftarrow Q)"
},
{
"math_id": 3,
"text": "\\neg P \\wedge Q"
},
{
"math_id": 4,
"text": " A \\nleftarrow B "
},
{
"math_id": 5,
"text": "p \\nleftarrow q"
},
{
"math_id": 6,
"text": " \\leftarrow"
},
{
"math_id": 7,
"text": "p \\not\\subset q"
},
{
"math_id": 8,
"text": "\\subset"
},
{
"math_id": 9,
"text": "p \\tilde{\\leftarrow} q"
},
{
"math_id": 10,
"text": "\\leftarrow"
},
{
"math_id": 11,
"text": "\\sim"
},
{
"math_id": 12,
"text": "q \\nleftarrow p=q'p"
},
{
"math_id": 13,
"text": "\\vee"
},
{
"math_id": 14,
"text": "\\wedge"
},
{
"math_id": 15,
"text": "\\scriptstyle{ ^{c}}\\!"
},
{
"math_id": 16,
"text": "\\scriptstyle{_\\vee}\\!"
},
{
"math_id": 17,
"text": "\\scriptstyle{_\\wedge}\\!"
},
{
"math_id": 18,
"text": "r \\nleftarrow (q \\nleftarrow p) = (r \\nleftarrow q) \\nleftarrow p"
},
{
"math_id": 19,
"text": "rp = 0"
},
{
"math_id": 20,
"text": "r = 0"
},
{
"math_id": 21,
"text": "p=0"
},
{
"math_id": 22,
"text": "\\begin{align}\n(r \\nleftarrow q) \\nleftarrow p\n&= r'q \\nleftarrow p & \\text{(by definition)} \\\\\n&= (r'q)'p & \\text{(by definition)} \\\\\n&= (r + q')p & \\text{(De Morgan's laws)} \\\\\n&= (r + r'q')p & \\text{(Absorption law)} \\\\\n&= rp + r'q'p \\\\\n&= rp + r'(q \\nleftarrow p) & \\text{(by definition)} \\\\\n&= rp + r \\nleftarrow (q \\nleftarrow p) & \\text{(by definition)} \\\\\n\\end{align}"
},
{
"math_id": 23,
"text": "rp=0"
},
{
"math_id": 24,
"text": "q \\nleftarrow p=p \\nleftarrow q"
},
{
"math_id": 25,
"text": "q = p"
},
{
"math_id": 26,
"text": "0 \\nleftarrow p=p"
},
{
"math_id": 27,
"text": "{p \\nleftarrow 0=0}"
},
{
"math_id": 28,
"text": "1 \\nleftarrow p=0"
},
{
"math_id": 29,
"text": "p \\nleftarrow 1=p'"
},
{
"math_id": 30,
"text": "p \\nleftarrow p=0"
},
{
"math_id": 31,
"text": "q \\rightarrow p"
},
{
"math_id": 32,
"text": "q \\nleftarrow p"
}
] | https://en.wikipedia.org/wiki?curid=11880985 |
1188104 | Old quantum theory | Predecessor to modern quantum mechanics (1900–1925)
The old quantum theory is a collection of results from the years 1900–1925 which predate modern quantum mechanics. The theory was never complete or self-consistent, but was instead a set of heuristic corrections to classical mechanics. The theory has come to be understood as the semi-classical approximation to modern quantum mechanics. The main and final accomplishments of the old quantum theory were the determination of the modern form of the periodic table by Edmund Stoner and the Pauli exclusion principle, both of which were premised on Arnold Sommerfeld's enhancements to the Bohr model of the atom.
The main tool of the old quantum theory was the Bohr–Sommerfeld quantization condition, a procedure for selection of certain allowed states of a classical system: the system can then only exist in one of the allowed states and not in any other state.
History.
The old quantum theory was instigated by the 1900 work of Max Planck on the emission and absorption of light in a black body with his discovery of Planck's law introducing his quantum of action, and began in earnest after the work of Albert Einstein on the specific heats of solids in 1907 brought him to the attention of Walther Nernst. Einstein, followed by Debye, applied quantum principles to the motion of atoms, explaining the specific heat anomaly.
In 1910, Arthur Erich Haas developed J. J. Thomson's atomic model in a paper that outlined a treatment of the hydrogen atom involving quantization of electronic orbitals, thus anticipating the Bohr model (1913) by three years.
John William Nicholson is noted as the first to create an atomic model that quantized angular momentum as h/2π. Niels Bohr quoted him in his 1913 paper on the Bohr model of the atom.
In 1913, Niels Bohr displayed rudiments of the later defined correspondence principle and used it to formulate a model of the hydrogen atom which explained the line spectrum. In the next few years Arnold Sommerfeld extended the quantum rule to arbitrary integrable systems making use of the principle of adiabatic invariance of the quantum numbers introduced by Lorentz and Einstein. Sommerfeld made a crucial contribution by quantizing the z-component of the angular momentum, which in the old quantum era was called "space quantization" (German: "Richtungsquantelung"). This model, which became known as the Bohr–Sommerfeld model, allowed the orbits of the electron to be ellipses instead of circles, and introduced the concept of quantum degeneracy. The theory would have correctly explained the Zeeman effect, except for the issue of electron spin. Sommerfeld's model was much closer to the modern quantum mechanical picture than Bohr's.
Throughout the 1910s and well into the 1920s, many problems were attacked using the old quantum theory with mixed results. Molecular rotation and vibration spectra were understood and the electron's spin was discovered, leading to the confusion of half-integer quantum numbers. Max Planck introduced the zero point energy and Arnold Sommerfeld semiclassically quantized the relativistic hydrogen atom. Hendrik Kramers explained the Stark effect. Bose and Einstein gave the correct quantum statistics for photons.
Kramers gave a prescription for calculating transition probabilities between quantum states in terms of Fourier components of the motion, ideas which were extended in collaboration with Werner Heisenberg to a semiclassical matrix-like description of atomic transition probabilities. Heisenberg went on to reformulate all of quantum theory in terms of a version of these transition matrices, creating matrix mechanics.
In 1924, Louis de Broglie introduced the wave theory of matter, which was extended to a semiclassical equation for matter waves by Albert Einstein a short time later. In 1926 Erwin Schrödinger found a completely quantum mechanical wave-equation, which reproduced all the successes of the old quantum theory without ambiguities and inconsistencies. Schrödinger's wave mechanics developed separately from matrix mechanics until Schrödinger and others proved that the two methods predicted the same experimental consequences. Paul Dirac later proved in 1926 that both methods can be obtained from a more general method called transformation theory.
In the 1950s Joseph Keller updated Bohr–Sommerfeld quantization using Einstein's interpretation of 1917, now known as Einstein–Brillouin–Keller method. In 1971, Martin Gutzwiller took into account that this method only works for integrable systems and derived a semiclassical way of quantizing chaotic systems from path integrals.
Basic principles.
The basic idea of the old quantum theory is that the motion in an atomic system is quantized, or discrete. The system obeys classical mechanics except that not every motion is allowed, only those motions which obey the "quantization condition":
formula_0
where the formula_1 are the momenta of the system and the formula_2 are the corresponding coordinates. The quantum numbers formula_3 are "integers" and the integral is taken over one period of the motion at constant energy (as described by the Hamiltonian). The integral is an area in phase space, which is a quantity called the action and is quantized in units of the (unreduced) Planck constant. For this reason, the Planck constant was often called the "quantum of action".
In order for the old quantum condition to make sense, the classical motion must be separable, meaning that there are separate coordinates formula_2 in terms of which the motion is periodic. The periods of the different motions do not have to be the same; they can even be incommensurate, but there must be a set of coordinates where the motion decomposes in a multi-periodic way.
The motivation for the old quantum condition was the correspondence principle, complemented by the physical observation that the quantities which are quantized must be adiabatic invariants. Given Planck's quantization rule for the harmonic oscillator, either condition determines the correct classical quantity to quantize in a general system up to an additive constant.
This quantization condition is often known as the "Wilson–Sommerfeld rule", proposed independently by William Wilson and Arnold Sommerfeld.
Examples.
Thermal properties of the harmonic oscillator.
The simplest system in the old quantum theory is the harmonic oscillator, whose Hamiltonian is:
formula_4
The old quantum theory yields a recipe for the quantization of the energy levels of the harmonic oscillator, which, when combined with the Boltzmann probability distribution of thermodynamics, yields the correct expression for the stored energy and specific heat of a quantum oscillator both at low and at ordinary temperatures. Applied as a model for the specific heat of solids, this resolved a discrepancy in pre-quantum thermodynamics that had troubled 19th-century scientists. Let us now describe this.
The level sets of "H" are the orbits, and the quantum condition is that the area enclosed by an orbit in phase space is an integer. It follows that the energy is quantized according to the Planck rule:
formula_5
a result which was known well before, and used to formulate the old quantum condition. This result differs by formula_6 from the results found with the help of quantum mechanics. This constant is neglected in the derivation of the "old quantum theory", and its value cannot be determined using it.
The thermal properties of a quantized oscillator may be found by averaging the energy in each of the discrete states assuming that they are occupied with a Boltzmann weight:
formula_7
"kT" is Boltzmann constant times the absolute temperature, which is the temperature as measured in more natural units of energy. The quantity formula_8 is more fundamental in thermodynamics than the temperature, because it is the thermodynamic potential associated to the energy.
From this expression, it is easy to see that for large values of formula_8, for very low temperatures, the average energy "U" in the Harmonic oscillator approaches zero very quickly, exponentially fast. The reason is that "kT" is the typical energy of random motion at temperature "T", and when this is smaller than formula_9, there is not enough energy to give the oscillator even one quantum of energy. So the oscillator stays in its ground state, storing next to no energy at all.
This means that at very cold temperatures, the change in energy with respect to beta, or equivalently the change in energy with respect to temperature, is also exponentially small. The change in energy with respect to temperature is the specific heat, so the specific heat is exponentially small at low temperatures, going to zero like
formula_10
At small values of formula_8, at high temperatures, the average energy "U" is equal to formula_11. This reproduces the equipartition theorem of classical thermodynamics: every harmonic oscillator at temperature "T" has energy "kT" on average. This means that the specific heat of an oscillator is constant in classical mechanics and equal to "k". For a collection of atoms connected by springs, a reasonable model of a solid, the total specific heat is equal to the total number of oscillators times "k". There are overall three oscillators for each atom, corresponding to the three possible directions of independent oscillations in three dimensions. So the specific heat of a classical solid is always 3"k" per atom, or in chemistry units, 3"R" per mole of atoms.
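A quick numerical check of these two limits, assuming the Planck form U = ħω/(exp(ħω/kT) − 1) for the average energy of a quantized oscillator (written in Python, in units where ħω = 1 and k = 1):

```python
import numpy as np

hbar_omega = 1.0                       # units where ħω = 1 and k = 1

def U(kT):
    return hbar_omega / np.expm1(hbar_omega / kT)

def specific_heat(kT, dT=1e-4):
    return (U(kT + dT) - U(kT - dT)) / (2 * dT)   # numerical dU/dT

print(U(10.0), specific_heat(10.0))    # high temperature: U grows like kT, C ≈ k
print(U(0.05), specific_heat(0.05))    # low temperature: both exponentially small
```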
Monatomic solids at room temperatures have approximately the same specific heat of 3"k" per atom, but at low temperatures they don't. The specific heat is smaller at colder temperatures, and it goes to zero at absolute zero. This is true for all material systems, and this observation is called the third law of thermodynamics. Classical mechanics cannot explain the third law, because in classical mechanics the specific heat is independent of the temperature.
This contradiction between classical mechanics and the specific heat of cold materials was noted by James Clerk Maxwell in the 19th century, and remained a deep puzzle for those who advocated an atomic theory of matter. Einstein resolved this problem in 1906 by proposing that atomic motion is quantized. This was the first application of quantum theory to mechanical systems. A short while later, Peter Debye gave a quantitative theory of solid specific heats in terms of quantized oscillators with various frequencies (see Einstein solid and Debye model).
One-dimensional potential: "U" = 0.
One-dimensional problems are easy to solve. At any energy "E", the value of the momentum "p" is found from the conservation equation:
formula_12
which is integrated over all values of "q" between the classical "turning points", the places where the momentum vanishes. The integral is easiest for a "particle in a box" of length "L", where the quantum condition is:
formula_13
which gives the allowed momenta:
formula_14
and the energy levels
formula_15
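A short numerical sketch of these levels for an electron in a one-nanometre box, using the closed form E_n = n²h²/(8mL²) that follows from the allowed momenta above (the physical constants below are approximate):

```python
h = 6.626e-34            # Planck constant, J s (approximate)
m_e = 9.109e-31          # electron mass, kg (approximate)
L = 1e-9                 # box length: 1 nm, chosen for illustration
eV = 1.602e-19           # joules per electronvolt (approximate)

for n in range(1, 4):
    E_n = n**2 * h**2 / (8 * m_e * L**2)
    print(n, E_n / eV, "eV")     # roughly 0.38, 1.5, 3.4 eV
```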
One-dimensional potential: "U" = "Fx".
Another easy case to solve with the old quantum theory is a linear potential on the positive halfline, the constant confining force "F" binding a particle to an impenetrable wall. This case is much more difficult in the full quantum mechanical treatment, and unlike the other examples, the semiclassical answer here is not exact but approximate, becoming more accurate at large quantum numbers.
formula_16
so that the quantum condition is
formula_17
which determines the energy levels,
formula_18
In the specific case F=mg, the particle is confined by the gravitational potential of the earth and the "wall" here is the surface of the earth.
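The same quantization condition can also be applied numerically without a closed form: compute the phase-space area as a function of energy and solve for the energies at which it equals "nh". The following Python sketch does this for the linear potential, assuming SciPy is available and working in rescaled units where h, m and F are set to 1 (an illustrative convention, not the article's).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def action(E):
    """Phase-space area 2 ∫_0^E sqrt(2 (E - x)) dx for U = x with a wall at x = 0."""
    integrand = lambda x: np.sqrt(np.maximum(2.0 * (E - x), 0.0))
    return 2.0 * quad(integrand, 0.0, E)[0]

for n in range(1, 4):
    E_n = brentq(lambda E: action(E) - n, 1e-6, 100.0)
    print(n, E_n)     # matches the closed form (3 n / (4 sqrt(2)))**(2/3)
```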
One-dimensional potential: "U" = <templatestyles src="Fraction/styles.css" />1⁄2"kx"2.
This case is also easy to solve, and the semiclassical answer here agrees with the quantum one to within the ground-state energy. Its quantization-condition integral is
formula_19
with solution
formula_20
for oscillation angular frequency formula_21, as before.
Rotator.
Another simple system is the rotator. A rotator consists of a mass "M" at the end of a massless rigid rod of length "R" and in two dimensions has the Lagrangian:
formula_22
which determines that the angular momentum "J" conjugate to the polar angle formula_23 is formula_24. The old quantum condition requires that "J" integrated over one period of formula_23 is an integer multiple of the Planck constant:
formula_25
which requires the angular momentum to be an integer multiple of formula_26. In the Bohr model, this restriction imposed on circular orbits was enough to determine the energy levels.
In three dimensions, a rigid rotator can be described by two angles — formula_23 and formula_27, where formula_23 is the inclination relative to an arbitrarily chosen "z"-axis while formula_27 is the rotator angle in the projection to the "x"–"y" plane. The kinetic energy is again the only contribution to the Lagrangian:
formula_28
And the conjugate momenta are formula_29 and formula_30. The equation of motion for formula_27 is trivial: formula_31 is a constant:
formula_32
which is the "z"-component of the angular momentum. The quantum condition demands that the integral of the constant formula_33 as formula_27 varies from 0 to formula_34 is an integer multiple of "h":
formula_35
And "m" is called the magnetic quantum number, because the "z" component of the angular momentum is the magnetic moment of the rotator along the "z" direction in the case where the particle at the end of the rotator is charged.
Since the three-dimensional rotator is rotating about an axis, the total angular momentum should be restricted in the same way as the two-dimensional rotator. The two quantum conditions restrict the total angular momentum and the "z"-component of the angular momentum to be the integers "l","m". This condition is reproduced in modern quantum mechanics, but in the era of the old quantum theory it led to a paradox: how can the orientation of the angular momentum relative to the arbitrarily chosen "z"-axis be quantized? This seems to pick out a direction in space.
This phenomenon, the quantization of angular momentum about an axis, was given the name "space quantization", because it seemed incompatible with rotational invariance. In modern quantum mechanics, the angular momentum is quantized the same way, but the discrete states of definite angular momentum in any one orientation are quantum superpositions of the states in other orientations, so that the process of quantization does not pick out a preferred axis. For this reason, the name "space quantization" fell out of favor, and the same phenomenon is now called the quantization of angular momentum.
Hydrogen atom.
The angular part of the hydrogen atom is just the rotator, and gives the quantum numbers "l" and "m". The only remaining variable is the radial coordinate, which executes a periodic one-dimensional potential motion, which can be solved.
For a fixed value of the total angular momentum "L", the Hamiltonian for a classical Kepler problem is (the unit of mass and unit of energy redefined to absorb two constants):
formula_36
Fixing the energy to be (a negative) constant and solving for the radial momentum formula_37, the quantum condition integral is:
formula_38
which can be solved with the method of residues, and gives a new quantum number formula_39 which determines the energy in combination with formula_40. The energy is:
formula_41
and it only depends on the sum of "k" and "l", which is the "principal quantum number" "n". Since "k" is positive, the allowed values of "l" for any given "n" are no bigger than "n". The energies reproduce those in the Bohr model, except with the correct quantum mechanical multiplicities, with some ambiguity at the extreme values.
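The residue evaluation can also be verified by direct numerical quadrature. In the sketch below (assumed units as in the text, with hbar = 1 so that h = 2*pi), the radial integral over a full cycle is computed for l = 1 and E = -1/(2(k + l)^2) with k = 2, and comes out equal to k·h as the quantum condition requires.

import numpy as np
from scipy.integrate import quad

l, k_r = 1, 2                                               # angular and radial quantum numbers
E = -1.0 / (2 * (k_r + l) ** 2)
r1, r2 = np.sort(np.roots([2 * E, 2.0, -(l * l)]).real)     # classical turning points
p_r = lambda r: np.sqrt(np.maximum(2 * E - l ** 2 / r ** 2 + 2.0 / r, 0.0))
half_cycle, _ = quad(p_r, r1, r2, limit=200)                # integral from r1 to r2
print(2 * half_cycle / (2 * np.pi))                         # approximately 2.0 = k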
De Broglie waves.
In 1905, Einstein noted that the entropy of the quantized electromagnetic field oscillators in a box is, for short wavelength, equal to the entropy of a gas of point particles in the same box. The number of point particles is equal to the number of quanta. Einstein concluded that the quanta could be treated as if they were localizable objects.
Einstein's theoretical argument was based on thermodynamics, on counting the number of states, and so was not completely convincing. Nevertheless, he concluded that light had attributes of both waves and particles, more precisely that an electromagnetic standing wave with frequency formula_21 with the quantized energy:
formula_42
should be thought of as consisting of n photons each with an energy formula_9. Einstein could not describe how the photons were related to the wave.
The photons have momentum as well as energy, and the momentum had to be formula_43 where formula_39 is the wavenumber of the electromagnetic wave. This is required by relativity, because the momentum and energy form a four-vector, as do the frequency and wave-number.
In 1924, as a PhD candidate, Louis de Broglie proposed a new interpretation of the quantum condition. He suggested that all matter, electrons as well as photons, is described by waves obeying the relation
formula_44
or, expressed in terms of wavelength formula_45 instead,
formula_46
He then noted that the quantum condition:
formula_47
counts the change in phase for the wave as it travels along the classical orbit, and requires that it be an integer multiple of formula_34. Expressed in wavelengths, the number of wavelengths along a classical orbit must be an integer. This is the condition for constructive interference, and it explained the reason for quantized orbits—the matter waves make standing waves only at discrete frequencies, at discrete energies.
For example, for a particle confined in a box, an integer number of wavelengths of the standing wave must fit into twice the distance between the walls (one round trip). The condition becomes:
formula_48
so that the quantized momenta are:
formula_49
reproducing the old quantum energy levels.
This development was given a more mathematical form by Einstein, who noted that the phase function for the waves, formula_50, in a mechanical system should be identified with the solution to the Hamilton–Jacobi equation, an equation which William Rowan Hamilton believed to be a short-wavelength limit of a sort of wave mechanics in the 19th century. Schrödinger then found the proper wave equation which matched the Hamilton–Jacobi equation for the phase; this is now known as the Schrödinger equation.
Kramers transition matrix.
The old quantum theory was formulated only for special mechanical systems which could be separated into action angle variables which were periodic. It did not deal with the emission and absorption of radiation. Nevertheless, Hendrik Kramers was able to find heuristics for describing how emission and absorption should be calculated.
Kramers suggested that the orbits of a quantum system should be Fourier analyzed, decomposed into harmonics at multiples of the orbit frequency:
formula_51
The index "n" describes the quantum numbers of the orbit, it would be "n"–"l"–"m" in the Sommerfeld model. The frequency formula_21 is the angular frequency of the orbit formula_52 while "k" is an index for the Fourier mode. Bohr had suggested that the "k"-th harmonic of the classical motion correspond to the transition from level "n" to level "n"−"k".
Kramers proposed that the transition between states were analogous to classical emission of radiation, which happens at frequencies at multiples of the orbit frequencies. The rate of emission of radiation is proportional to formula_53, as it would be in classical mechanics. The description was approximate, since the Fourier components did not have frequencies that exactly match the energy spacings between levels.
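As a small illustration of this decomposition (an assumed example, not one of Kramers' own), the periodic orbit of the bouncing particle in the linear potential discussed earlier (m = F = 1, launch speed v0 = 1) can be Fourier analyzed numerically; the harmonic amplitudes sit at integer multiples of the orbit frequency and, for this orbit, fall off like 1/k^2.

import numpy as np

v0 = 1.0
T = 2 * v0                                   # orbit period for m = F = 1
t = np.linspace(0.0, T, 4096, endpoint=False)
x = v0 * t - 0.5 * t ** 2                    # one parabolic arch per bounce
X = np.fft.rfft(x) / len(t)                  # Fourier coefficients of the orbit
print(np.abs(X[:5]))                         # roughly [0.333, 0.101, 0.025, 0.011, 0.006]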
This idea led to the development of matrix mechanics.
Limitations.
The old quantum theory had some limitations: it provides no means to calculate the intensities of spectral lines; it fails to explain the anomalous Zeeman effect (that is, the case in which the spin of the electron cannot be neglected); and it cannot quantize chaotic, non-separable systems, a difficulty already posed by the two-electron helium atom, whose classical dynamics is analogous to the gravitational three-body problem.
However it can be used to describe atoms with more than one electron (e.g. helium) and the Zeeman effect.
It was later proposed that the old quantum theory is in fact the semi-classical approximation to canonical quantum mechanics, but its limitations are still under investigation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\oint_{H(p,q)=E} p_i \\, dq_i = n_i h\n"
},
{
"math_id": 1,
"text": "p_i"
},
{
"math_id": 2,
"text": "q_i"
},
{
"math_id": 3,
"text": "n_i"
},
{
"math_id": 4,
"text": "\nH= {p^2 \\over 2m} + {m\\omega^2 q^2\\over 2}.\n"
},
{
"math_id": 5,
"text": "\nE= n\\hbar \\omega,\n\\,"
},
{
"math_id": 6,
"text": "\\tfrac{1}{2}\\hbar \\omega"
},
{
"math_id": 7,
"text": "\nU = {\\sum_n \\hbar\\omega n e^{-\\beta n\\hbar\\omega} \\over \\sum_n e^{-\\beta n \\hbar\\omega}} = {\\hbar \\omega e^{-\\beta\\hbar\\omega} \\over 1 - e^{-\\beta\\hbar\\omega}},\\;\\;\\;{\\rm where}\\;\\;\\beta = \\frac{1}{kT},\n"
},
{
"math_id": 8,
"text": "\\beta"
},
{
"math_id": 9,
"text": "\\hbar\\omega"
},
{
"math_id": 10,
"text": " \\exp(-\\hbar\\omega/kT) "
},
{
"math_id": 11,
"text": "1/\\beta = kT"
},
{
"math_id": 12,
"text": "\n\\sqrt{2m(E - U(q))}=\\sqrt{2mE} = p = \\text{const.}\n"
},
{
"math_id": 13,
"text": "\n2\\int_0^L p \\, dq = nh\n"
},
{
"math_id": 14,
"text": "\np= {nh \\over 2L}\n"
},
{
"math_id": 15,
"text": "\nE_n= {p^2 \\over 2m} = {n^2 h^2 \\over 8mL^2}\n"
},
{
"math_id": 16,
"text": "\n2 \\int_0^{\\frac{E}{F}} \\sqrt{2m(E - Fx)}\\ dx= n h\n"
},
{
"math_id": 17,
"text": "\n{4\\over 3} \\sqrt{2m}{ E^{3/2}\\over F } = n h \n"
},
{
"math_id": 18,
"text": "\nE_n = \\left({3nhF\\over 4\\sqrt{2m}} \\right)^{2/3}\n"
},
{
"math_id": 19,
"text": "\n2 \\int_{-\\sqrt{\\frac{2E}{k}}}^{\\sqrt{\\frac{2E}{k}}} \\sqrt{2m\\left(E - \\frac12 k x^2\\right)}\\ dx = n h\n"
},
{
"math_id": 20,
"text": "\nE = n \\frac{h}{2\\pi} \\sqrt{\\frac{k}{m}} = n\\hbar\\omega\n"
},
{
"math_id": 21,
"text": "\\omega"
},
{
"math_id": 22,
"text": "\nL = {MR^2 \\over 2} \\dot\\theta^2 \n"
},
{
"math_id": 23,
"text": "\\theta"
},
{
"math_id": 24,
"text": "J = MR^2 \\dot\\theta"
},
{
"math_id": 25,
"text": "2\\pi J = n h"
},
{
"math_id": 26,
"text": "\\hbar"
},
{
"math_id": 27,
"text": "\\phi"
},
{
"math_id": 28,
"text": "L = {MR^2\\over 2} \\dot\\theta^2 + {MR^2\\over 2} (\\sin(\\theta)\\dot\\phi)^2"
},
{
"math_id": 29,
"text": "p_\\theta = \\dot\\theta"
},
{
"math_id": 30,
"text": "p_\\phi=\\sin(\\theta)^2 \\dot\\phi"
},
{
"math_id": 31,
"text": "p_\\phi"
},
{
"math_id": 32,
"text": "p_\\phi = l_\\phi "
},
{
"math_id": 33,
"text": "l_\\phi"
},
{
"math_id": 34,
"text": "2\\pi"
},
{
"math_id": 35,
"text": "\nl_\\phi = m \\hbar\n"
},
{
"math_id": 36,
"text": "\nH= { p_r^2 \\over 2 } + {l^2 \\over 2 r^2 } - {1\\over r}.\n"
},
{
"math_id": 37,
"text": "p_r"
},
{
"math_id": 38,
"text": "\n\\oint \\sqrt{2E - {l^2\\over r^2} + { 2\\over r}}\\ dr= k h\n"
},
{
"math_id": 39,
"text": "k"
},
{
"math_id": 40,
"text": "l"
},
{
"math_id": 41,
"text": "\n E= -{1 \\over 2 (k + l)^2}\n"
},
{
"math_id": 42,
"text": "\nE = n\\hbar\\omega\n\\,"
},
{
"math_id": 43,
"text": "\\hbar k"
},
{
"math_id": 44,
"text": "\np = \\hbar k\n"
},
{
"math_id": 45,
"text": "\\lambda"
},
{
"math_id": 46,
"text": "\np = {h \\over \\lambda}\n"
},
{
"math_id": 47,
"text": "\n\\int p \\, dx = \\hbar \\int k \\, dx = 2\\pi\\hbar n\n"
},
{
"math_id": 48,
"text": "n\\lambda = 2L"
},
{
"math_id": 49,
"text": "p = \\frac{nh}{2L}"
},
{
"math_id": 50,
"text": "\\theta(J,x)"
},
{
"math_id": 51,
"text": "\nX_n(t) = \\sum_{k=-\\infty}^{\\infty} e^{ik\\omega t} X_{n;k}\n"
},
{
"math_id": 52,
"text": "2\\pi/T_n"
},
{
"math_id": 53,
"text": "|X_k|^2"
}
] | https://en.wikipedia.org/wiki?curid=1188104 |
11881071 | Kilpatrick limit | In particle accelerators, a common mechanism for accelerating a charged particle beam is via copper resonant cavities in which electric and magnetic fields form a standing wave, the mode of which is designed so that the E field points along the axis of the accelerator, producing forward acceleration of the particles when in the correct phase.
The maximum electric field formula_0 achievable is limited by a process known as RF breakdown. The reliable limits for various RF frequencies formula_1 were tested experimentally in the 1950s by W. D. Kilpatrick.
An approximate relation obtained by a least-squares fit to the data is
formula_2 with formula_3 (megavolts per metre).
This relation is known as the Kilpatrick Limit.
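Since the right-hand side increases monotonically with the field, the limiting field for a given frequency can be obtained numerically. A minimal sketch (the 2856 MHz value is only an illustrative choice, not from the text):

from math import exp
from scipy.optimize import brentq

def kilpatrick_frequency(E):                 # E in MV/m, result in Hz
    return 1.64e6 * E**2 * exp(-8.5 / E)

f_target = 2856e6                            # an S-band accelerating frequency, Hz
E_limit = brentq(lambda E: kilpatrick_frequency(E) - f_target, 1.0, 500.0)
print(E_limit)                               # roughly 46 MV/m for this relation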
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "f = 1.64\\,\\mathrm{MHz} \\cdot \\left(\\frac{E}{E_0}\\right)^2 \\cdot \\exp\\left( -8.5 \\frac{E_0}{E} \\right), \\quad "
},
{
"math_id": 3,
"text": "E_0 = 1 \\mathrm{\\frac{MV}{m}}"
}
] | https://en.wikipedia.org/wiki?curid=11881071 |
1188115 | Pseudo-Hadamard transform | The pseudo-Hadamard transform is a reversible transformation of a bit string that provides cryptographic diffusion. See Hadamard transform.
The bit string must be of even length so that it can be split into two bit strings "a" and "b" of equal lengths, each of "n" bits. To compute the transform used in the Twofish algorithm, producing "a"' and "b"', we use the equations:
formula_0
formula_1
To reverse this, clearly:
formula_2
formula_3
On the other hand, the transformation for SAFER+ encryption is as follows:
formula_4
formula_5
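A minimal Python sketch of both variants on "n"-bit words (here n = 8); the function names are our own, not taken from either cipher's reference code, and the inverse is checked by a round trip.

def pht(a, b, n=8):
    """Twofish-style pseudo-Hadamard transform of two n-bit values."""
    mask = (1 << n) - 1
    return (a + b) & mask, (a + 2 * b) & mask

def pht_inv(a2, b2, n=8):
    """Inverse of pht: recover (a, b) from (a', b')."""
    mask = (1 << n) - 1
    return (2 * a2 - b2) & mask, (b2 - a2) & mask

def pht_safer(a, b, n=8):
    """SAFER+-style variant: a' = 2a + b, b' = a + b (mod 2**n)."""
    mask = (1 << n) - 1
    return (2 * a + b) & mask, (a + b) & mask

a, b = 0xC3, 0x5A
assert pht_inv(*pht(a, b)) == (a, b)         # the round trip restores the input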
Generalization.
The above equations can be expressed in matrix algebra, by considering "a" and "b" as two elements of a vector, and the transform itself as multiplication by a matrix of the form:
formula_6
The inverse can then be derived by inverting the matrix.
However, the matrix can be generalised to higher dimensions, allowing vectors of any power-of-two size to be transformed, using the following recursive rule:
formula_7
For example:
formula_8
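A short numpy sketch of this recursion (the helper name is ours); it also checks that each step equals the Kronecker product of formula_6 with the previous matrix, so the 4 × 4 case reproduces the example above.

import numpy as np

H1 = np.array([[2, 1], [1, 1]])

def pht_matrix(n):
    # Apply the recursive rule n-1 times starting from H_1.
    M = H1
    for _ in range(n - 1):
        M = np.block([[2 * M, M], [M, M]])
    return M

print(pht_matrix(2))                                    # the 4x4 matrix shown above
assert np.array_equal(pht_matrix(2), np.kron(H1, H1))   # same as the Kronecker product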
See also.
This is the Kronecker product of an Arnold Cat Map matrix with a Hadamard matrix. | [
{
"math_id": 0,
"text": "a' = a + b \\, \\pmod{2^n}"
},
{
"math_id": 1,
"text": "b' = a + 2b\\, \\pmod{2^n}"
},
{
"math_id": 2,
"text": "b = b' - a' \\, \\pmod{2^n}"
},
{
"math_id": 3,
"text": "a = 2a' - b' \\, \\pmod{2^n}"
},
{
"math_id": 4,
"text": "a' = 2a + b \\, \\pmod{2^n}"
},
{
"math_id": 5,
"text": "b' = a + b\\, \\pmod{2^n}"
},
{
"math_id": 6,
"text": "H_1 = \\begin{bmatrix} 2 & 1 \\\\ 1 & 1 \\end{bmatrix}"
},
{
"math_id": 7,
"text": "H_n = \\begin{bmatrix} 2 \\times H_{n-1} & H_{n-1} \\\\ H_{n-1} & H_{n-1} \\end{bmatrix}"
},
{
"math_id": 8,
"text": "H_2 = \\begin{bmatrix} 4 & 2 & 2 & 1 \\\\ 2 & 2 & 1 & 1 \\\\ 2 & 1 & 2 & 1 \\\\ 1 & 1 & 1 & 1 \\end{bmatrix}"
}
] | https://en.wikipedia.org/wiki?curid=1188115 |
1188184 | Block design | In combinatorial mathematics, a block design is an incidence structure consisting of a set together with a family of subsets known as "blocks", chosen such that frequency of the elements satisfies certain conditions making the collection of blocks exhibit symmetry (balance). Block designs have applications in many areas, including experimental design, finite geometry, physical chemistry, software testing, cryptography, and algebraic geometry.
Without further specifications the term "block design" usually refers to a balanced incomplete block design (BIBD), specifically (and also synonymously) a 2-design, which has been the most intensely studied type historically due to its application in the design of experiments. Its generalization is known as a t-design.
Overview.
A design is said to be "balanced" (up to "t") if all "t"-subsets of the original set occur in equally many (i.e., "λ") blocks. When "t" is unspecified, it can usually be assumed to be 2, which means that each "pair" of elements is found in the same number of blocks and the design is "pairwise balanced". For "t"=1, each element occurs in the same number of blocks (the "replication number", denoted "r") and the design is said to be "regular". Any design balanced up to "t" is also balanced in all lower values of "t" (though with different "λ"-values), so for example a pairwise balanced ("t"=2) design is also regular ("t"=1). When the balancing requirement fails, a design may still be "partially balanced" if the "t"-subsets can be divided into "n" classes, each with its own (different) "λ"-value. For "t"=2 these are known as PBIBD("n") designs, whose classes form an association scheme.
Designs are usually said (or assumed) to be "incomplete", meaning that the collection of blocks is not all possible "k"-subsets, thus ruling out a trivial design.
A block design in which all the blocks have the same size (usually denoted "k") is called "uniform" or "proper". The designs discussed in this article are all uniform. Block designs that are not necessarily uniform have also been studied; for "t"=2 they are known in the literature under the general name pairwise balanced designs (PBDs).
Block designs may or may not have repeated blocks. Designs without repeated blocks are called "simple", in which case the "family" of blocks is a set rather than a multiset.
In statistics, the concept of a block design may be extended to "non-binary" block designs, in which blocks may contain multiple copies of an element (see blocking (statistics)). There, a design in which each element occurs the same total number of times is called "equireplicate," which implies a "regular" design only when the design is also binary. The incidence matrix of a non-binary design lists the number of times each element is repeated in each block.
Regular uniform designs (configurations).
The simplest type of "balanced" design ("t"=1) is known as a tactical configuration or 1-design. The corresponding incidence structure in geometry is known simply as a configuration, see Configuration (geometry). Such a design is uniform and regular: each block contains "k" elements and each element is contained in "r" blocks. The number of set elements "v" and the number of blocks "b" are related by formula_0, which is the total number of element occurrences.
Every binary matrix with constant row and column sums is the incidence matrix of a regular uniform block design. Also, each configuration has a corresponding biregular bipartite graph known as its incidence or Levi graph.
Pairwise balanced uniform designs (2-designs or BIBDs).
Given a finite set "X" (of elements called "points") and integers "k", "r", "λ" ≥ 1, we define a "2-design" (or "BIBD", standing for balanced incomplete block design) "B" to be a family of "k"-element subsets of "X", called "blocks", such that any "x" in "X" is contained in "r" blocks, and any pair of distinct points "x" and "y" in "X" is contained in "λ" blocks. Here, the condition that any "x" in "X" is contained in "r" blocks is redundant, as shown below.
Here "v" (the number of elements of "X", called points), "b" (the number of blocks), "k", "r", and λ are the "parameters" of the design. (To avoid degenerate examples, it is also assumed that "v" > "k", so that no block contains all the elements of the set. This is the meaning of "incomplete" in the name of these designs.) In a table:
The design is called a ("v", "k", "λ")-design or a ("v", "b", "r", "k", "λ")-design. The parameters are not all independent; "v", "k", and λ determine "b" and "r", and not all combinations of "v", "k", and "λ" are possible. The two basic equations connecting these parameters are
formula_1
obtained by counting the number of pairs ("B", "p") where "B" is a block and "p" is a point in that block, and
formula_2
obtained from counting for a fixed "x" the triples ("x", "y", "B") where "x" and "y" are distinct points and "B" is a block that contains them both. This equation for every "x" also proves that "r" is constant (independent of "x") even without assuming it explicitly, thus proving that the condition that any "x" in "X" is contained in "r" blocks is redundant and "r" can be computed from the other parameters.
The resulting "b" and "r" must be integers, which imposes conditions on "v", "k", and "λ". These conditions are not sufficient as, for example, a (43,7,1)-design does not exist.
The "order" of a 2-design is defined to be "n" = "r" − "λ". The complement of a 2-design is obtained by replacing each block with its complement in the point set "X". It is also a 2-design and has parameters "v"′ = "v", "b"′ = "b", "r"′ = "b" − "r", "k"′ = "v" − "k", "λ"′ = "λ" + "b" − 2"r". A 2-design and its complement have the same order.
A fundamental theorem, Fisher's inequality, named after the statistician Ronald Fisher, is that "b" ≥ "v" in any 2-design.
A rather surprising and not very obvious (but very general) combinatorial result for these designs is that if the points are denoted by any arbitrarily chosen set of equally or unequally spaced numbers, there is no choice of such a set that can make all block-sums (that is, the sum of all points in a given block) constant. For other designs, such as partially balanced incomplete block designs, this may however be possible, and many such cases are discussed in the literature. It can also be observed trivially for magic squares or magic rectangles, which can be viewed as partially balanced incomplete block designs.
Examples.
The unique (6,3,2)-design ("v" = 6, "k" = 3, "λ" = 2) has 10 blocks ("b" = 10) and each element is repeated 5 times ("r" = 5). Using the symbols 0 − 5, the blocks are the following triples:
012 013 024 035 045 125 134 145 234 235.
and the corresponding incidence matrix (a "v"×"b" binary matrix with constant row sum "r" and constant column sum "k") is:
formula_3
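The stated parameters can be verified mechanically from the block list; a small sketch using only the standard library:

from itertools import combinations

blocks = [set(map(int, w)) for w in
          "012 013 024 035 045 125 134 145 234 235".split()]
r_values = {sum(x in blk for blk in blocks) for x in range(6)}
lam_values = {sum({x, y} <= blk for blk in blocks)
              for x, y in combinations(range(6), 2)}
print(r_values, lam_values)          # {5} {2}: every point lies in 5 blocks, every pair in 2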
One of four nonisomorphic (8,4,3)-designs has 14 blocks with each element repeated 7 times. Using the symbols 0 − 7 the blocks are the following 4-tuples:
0123 0124 0156 0257 0345 0367 0467 1267 1346 1357 1457 2347 2356 2456.
The unique (7,3,1)-design is symmetric and has 7 blocks with each element repeated 3 times. Using the symbols 0 − 6, the blocks are the following triples:
013 026 045 124 156 235 346.
This design is associated with the Fano plane, with the elements and blocks of the design corresponding to the points and lines of the plane. Its corresponding incidence matrix can also be symmetric, if the labels or blocks are sorted the right way:
formula_4
Symmetric 2-designs (SBIBDs).
The case of equality in Fisher's inequality, that is, a 2-design with an equal number of points and blocks, is called a symmetric design. Symmetric designs have the smallest number of blocks among all the 2-designs with the same number of points.
In a symmetric design "r" = "k" holds as well as "b" = "v", and, while it is generally not true in arbitrary 2-designs, in a symmetric design every two distinct blocks meet in "λ" points. A theorem of Ryser provides the converse. If "X" is a "v"-element set, and "B" is a "v"-element set of "k"-element subsets (the "blocks"), such that any two distinct blocks have exactly λ points in common, then ("X, B") is a symmetric block design.
The parameters of a symmetric design satisfy
formula_5
This imposes strong restrictions on "v", so the number of points is far from arbitrary. The Bruck–Ryser–Chowla theorem gives necessary, but not sufficient, conditions for the existence of a symmetric design in terms of these parameters.
The following are important examples of symmetric 2-designs:
Projective planes.
Finite projective planes are symmetric 2-designs with "λ" = 1 and order "n" > 1. For these designs the symmetric design equation becomes:
formula_6
Since "k" = "r" we can write the "order of a projective plane" as "n" = "k" − 1 and, from the displayed equation above, we obtain "v" = ("n" + 1)"n" + 1 = "n"2 + "n" + 1 points in a projective plane of order "n".
As a projective plane is a symmetric design, we have "b" = "v", meaning that "b" = "n"2 + "n" + 1 also. The number "b" is the number of "lines" of the projective plane. There can be no repeated lines since λ = 1, so a projective plane is a simple 2-design in which the number of lines and the number of points are always the same. For a projective plane, "k" is the number of points on each line and it is equal to "n" + 1. Similarly, "r" = "n" + 1 is the number of lines with which a given point is incident.
For "n" = 2 we get a projective plane of order 2, also called the Fano plane, with "v" = 4 + 2 + 1 = 7 points and 7 lines. In the Fano plane, each line has "n" + 1 = 3 points and each point belongs to "n" + 1 = 3 lines.
Projective planes are known to exist for all orders which are prime numbers or powers of primes. They form the only known infinite family (with respect to having a constant λ value) of symmetric block designs.
Biplanes.
A biplane or biplane geometry is a symmetric 2-design with "λ" = 2; that is, every set of two points is contained in two blocks ("lines"), while any two lines intersect in two points. They are similar to finite projective planes, except that rather than two points determining one line (and two lines determining one point), two points determine two lines (respectively, points). A biplane of order "n" is one whose blocks have "k" = "n" + 2 points; it has "v" = 1 + ("n" + 2)("n" + 1)/2 points (since "r" = "k").
The 18 known examples are listed below.
Algebraically this corresponds to the exceptional embedding of the projective special linear group "PSL"(2,5) in "PSL"(2,11) – see projective linear group: action on "p" points for details.
Biplanes of orders 5, 6, 8 and 10 do not exist, as shown by the Bruck-Ryser-Chowla theorem.
Hadamard 2-designs.
An Hadamard matrix of size "m" is an "m" × "m" matrix H whose entries are ±1 such that HH⊤ = mIm, where H⊤ is the transpose of H and I"m" is the "m" × "m" identity matrix. An Hadamard matrix can be put into "standardized form" (that is, converted to an equivalent Hadamard matrix) where the first row and first column entries are all +1. If the size "m" > 2 then "m" must be a multiple of 4.
Given an Hadamard matrix of size 4"a" in standardized form, remove the first row and first column and convert every −1 to a 0. The resulting 0–1 matrix M is the incidence matrix of a symmetric 2-(4"a" − 1, 2"a" − 1, "a" − 1) design called an Hadamard 2-design.
It contains formula_7 blocks/points; each contains/is contained in formula_8 points/blocks. Each pair of points is contained in exactly formula_9 blocks.
This construction is reversible, and the incidence matrix of a symmetric 2-design with these parameters can be used to form an Hadamard matrix of size 4"a".
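For "a" = 2 the construction can be carried out in a few lines (a sketch with numpy, using the Sylvester construction to obtain a standardized Hadamard matrix of size 8); the result is the incidence matrix of the 2-(7,3,1) design, i.e. the Fano plane.

import numpy as np

H2 = np.array([[1, 1], [1, -1]])
H8 = np.kron(H2, np.kron(H2, H2))    # standardized Hadamard matrix of size 8

M = (H8[1:, 1:] + 1) // 2            # drop the first row and column, map -1 to 0
print(M.sum(axis=1))                 # every row sums to k = 2a - 1 = 3
print(M @ M.T)                       # diagonal r = 3, off-diagonal lambda = a - 1 = 1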
Resolvable 2-designs.
A resolvable 2-design is a BIBD whose blocks can be partitioned into sets (called "parallel classes"), each of which forms a partition of the point set of the BIBD. The set of parallel classes is called a "resolution" of the design.
If a 2-("v","k",λ) resolvable design has "c" parallel classes, then "b" ≥ "v" + "c" − 1.
Consequently, a symmetric design can not have a non-trivial (more than one parallel class) resolution.
Archetypical resolvable 2-designs are the finite affine planes. A solution of the famous 15 schoolgirl problem is a resolution of a 2-(15,3,1) design.
General balanced designs ("t"-designs).
Given any positive integer "t", a "t"-design "B" is a class of "k"-element subsets of "X", called "blocks", such that every point "x" in "X" appears in exactly "r" blocks, and every "t"-element subset "T" appears in exactly λ blocks. The numbers "v" (the number of elements of "X"), "b" (the number of blocks), "k", "r", λ, and "t" are the "parameters" of the design. The design may be called a "t"-("v","k",λ)-design. Again, these four numbers determine "b" and "r" and the four numbers themselves cannot be chosen arbitrarily. The equations are
formula_10
where "λi" is the number of blocks that contain any "i"-element set of points and "λt" = λ.
Note that formula_11 and formula_12.
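A one-line helper makes these relations concrete (the helper and the chosen example are ours); for an actual design every λi must come out an integer.

from math import comb

def lambda_i(v, k, t, lam, i):
    # Number of blocks containing any set of i points in a t-(v, k, lam) design, 0 <= i <= t.
    return lam * comb(v - i, t - i) // comb(k - i, t - i)

# The 3-(8, 4, 1) design, an extension of the Fano plane:
print([lambda_i(8, 4, 3, 1, i) for i in range(4)])   # [14, 7, 3, 1] = [b, r, lambda_2, lambda_3]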
Theorem: Any "t"-("v","k",λ)-design is also an "s"-("v","k",λs)-design for any "s" with 1 ≤ "s" ≤ "t". (Note that the "lambda value" changes as above and depends on "s".)
A consequence of this theorem is that every "t"-design with "t" ≥ 2 is also a 2-design.
A "t"-("v","k",1)-design is called a Steiner system.
The term "block design" by itself usually means a 2-design.
Derived and extendable t-designs.
Let D = ("X", "B") be a t-("v","k","λ") design and "p" a point of "X". The "derived design" "D""p" has point set "X" − {"p"} and as block set all the blocks of D which contain p with p removed. It is a ("t" − 1)-("v" − 1, "k" − 1, "λ") design. Note that derived designs with respect to different points may not be isomorphic. A design E is called an "extension" of D if E has a point p such that Ep is isomorphic to D; we call D "extendable" if it has an extension.
Theorem: If a "t"-("v","k","λ") design has an extension, then "k" + 1 divides "b"("v" + 1).
The only extendable projective planes (symmetric 2-("n"2 + "n" + 1, "n" + 1, 1) designs) are those of orders 2 and 4.
Every Hadamard 2-design is extendable (to an Hadamard 3-design).
Theorem:.
If D, a symmetric 2-("v","k",λ) design, is extendable, then one of the following holds: (1) D is a Hadamard 2-design; (2) "v" = (λ + 2)(λ2 + 4λ + 2) and "k" = λ2 + 3λ + 1; (3) "v" = 495, "k" = 39 and λ = 3.
Note that the projective plane of order two is an Hadamard 2-design; the projective plane of order four has parameters which fall in case 2; the only other known symmetric 2-designs with parameters in case 2 are the order 9 biplanes, but none of them are extendable; and there is no known symmetric 2-design with the parameters of case 3.
Inversive planes.
A design with the parameters of the extension of an affine plane, i.e., a 3-("n"2 + 1, "n" + 1, 1) design, is called a finite inversive plane, or Möbius plane, of order "n".
It is possible to give a geometric description of some inversive planes, indeed, of all known inversive planes. An "ovoid" in PG(3,"q") is a set of "q"2 + 1 points, no three collinear. It can be shown that every plane (which is a hyperplane since the geometric dimension is 3) of PG(3,"q") meets an ovoid "O" in either 1 or "q" + 1 points. The plane sections of size "q" + 1 of "O" are the blocks of an inversive plane of order "q". Any inversive plane arising this way is called "egglike". All known inversive planes are egglike.
An example of an ovoid is the elliptic quadric, the set of zeros of the quadratic form
"x"1"x"2 + "f"("x"3, "x"4),
where f is an irreducible quadratic form in two variables over GF("q"). ["f"("x","y") = "x"2 + "xy" + "y"2 for example].
If "q" is an odd power of 2, another type of ovoid is known – the Suzuki–Tits ovoid.
Theorem. Let "q" be a positive integer, at least 2. (a) If "q" is odd, then any ovoid is projectively equivalent to the elliptic quadric in a projective geometry PG(3,"q"); so "q" is a prime power and there is a unique egglike inversive plane of order "q". (But it is unknown if non-egglike ones exist.) (b) if "q" is even, then "q" is a power of 2 and any inversive plane of order "q" is egglike (but there may be some unknown ovoids).
Partially balanced designs (PBIBDs).
An "n"-class association scheme consists of a set "X" of size "v" together with a partition "S" of "X" × "X" into "n" + 1 binary relations, R0, R1, ..., Rn. A pair of elements in relation Ri are said to be "i"th–"associates". Each element of "X" has "n"i "i"th associates. Furthermore:
An association scheme is "commutative" if formula_20 for all "i", "j" and "k". Most authors assume this property.
A partially balanced incomplete block design with "n" associate classes (PBIBD("n")) is a block design based on a "v"-set X with "b" blocks each of size "k" and with each element appearing in "r" blocks, such that there is an association scheme with "n" classes defined on "X" where, if elements "x" and "y" are "i"th associates, 1 ≤ "i" ≤ "n", then they are together in precisely λi blocks.
A PBIBD("n") determines an association scheme but the converse is false.
Example.
Let "A"(3) be the following association scheme with three associate classes on the set "X" = {1,2,3,4,5,6}. The ("i","j") entry is "s" if elements "i" and "j" are in relation Rs.
The blocks of a PBIBD(3) based on "A"(3), as can be read off the columns of the incidence matrix below, are: {1,2,4}, {1,2,5}, {1,3,4}, {1,3,6}, {2,3,5}, {2,3,6}, {4,5,6} and {4,5,6} (the last block occurs twice).
The parameters of this PBIBD(3) are: "v" = 6, "b" = 8, "k" = 3, "r" = 4 and λ1 = λ2 = 2 and λ3 = 1. Also, for the association scheme we have "n"0 = "n"2 = 1 and "n"1 = "n"3 = 2. The incidence matrix M is
formula_21
and the concurrence matrix MMT is
formula_22
from which we can recover the "λ" and "r" values.
Properties.
The parameters of a PBIBD("m") satisfy: formula_23; formula_24; formula_25; formula_26; and formula_27.
A PBIBD(1) is a BIBD and a PBIBD(2) in which λ1 = λ2 is a BIBD.
Two associate class PBIBDs.
PBIBD(2)s have been studied the most since they are the simplest and most useful of the PBIBDs. They fall into six types based on a classification of the "then known" PBIBD(2)s by :
Applications.
The mathematical subject of block designs originated in the statistical framework of design of experiments. These designs were especially useful in applications of the technique of analysis of variance (ANOVA). This remains a significant area for the use of block designs.
While the origins of the subject are grounded in biological applications (as is some of the existing terminology), the designs are used in many applications where systematic comparisons are being made, such as in software testing.
The incidence matrix of block designs provide a natural source of interesting block codes that are used as error correcting codes. The rows of their incidence matrices are also used as the symbols in a form of pulse-position modulation.
Statistical application.
Suppose that skin cancer researchers want to test three different sunscreens. They coat two different sunscreens on the upper sides of the hands of a test person. After UV irradiation they record the skin irritation in terms of sunburn. The number of treatments is 3 (sunscreens) and the block size is 2 (hands per person).
A corresponding BIBD can be generated by the R-function "design.bib" of the R-package agricolae and is specified in the following table:
The investigator chooses the parameters "v" = 3, "k" = 2 and λ = 1 for the block design which are then inserted into the R-function. Subsequently, the remaining parameters b and r are determined automatically.
Using the basic relations we calculate that we need "b" = 3 blocks, that is, 3 test people, in order to obtain a balanced incomplete block design. Labeling the blocks "A", "B" and "C" to avoid confusion, we have the block design,
"A" = {2, 3}, "B" = {1, 3} and "C" = {1, 2}.
A corresponding incidence matrix is specified in the following table:
Each treatment occurs in 2 blocks, so "r" = 2.
Just one block (C) contains the treatments 1 and 2 simultaneously and the same applies to the pairs of treatments (1,3) and (2,3). Therefore, λ = 1.
It is impossible to use a complete design (all treatments in each block) in this example because there are 3 sunscreens to test, but only 2 hands on each person.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " bk = vr "
},
{
"math_id": 1,
"text": " bk = vr, "
},
{
"math_id": 2,
"text": " \\lambda(v-1) = r(k-1), "
},
{
"math_id": 3,
"text": "\\begin{pmatrix}\n 1 & 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 & 0 \\\\\n 1 & 1 & 0 & 0 & 0 & 1 & 1 & 1 & 0 & 0 \\\\\n 1 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 1 \\\\\n 0 & 1 & 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 \\\\\n 0 & 0 & 1 & 0 & 1 & 0 & 1 & 1 & 1 & 0 \\\\\n 0 & 0 & 0 & 1 & 1 & 1 & 0 & 1 & 0 & 1 \\\\\n \\end{pmatrix}"
},
{
"math_id": 4,
"text": "\\left ( \\begin{matrix} \n1 & 1 & 1 & 0 & 0 & 0 & 0\\\\\n1 & 0 & 0 & 1 & 1 & 0 & 0 \\\\\n1 & 0 & 0 & 0 & 0 & 1 & 1 \\\\\n0 & 1 & 0 & 1 & 0 & 1 & 0 \\\\\n0 & 1 & 0 & 0 & 1 & 0 & 1 \\\\\n0 & 0 & 1 & 1 & 0 & 0 & 1 \\\\\n0 & 0 & 1 & 0 & 1 & 1 & 0\n\\end{matrix} \\right ) "
},
{
"math_id": 5,
"text": " \\lambda (v-1) = k(k-1). "
},
{
"math_id": 6,
"text": "v-1 = k(k-1)."
},
{
"math_id": 7,
"text": "4a-1"
},
{
"math_id": 8,
"text": "2a-1"
},
{
"math_id": 9,
"text": "a-1"
},
{
"math_id": 10,
"text": " \\lambda_i = \\lambda \\left.\\binom{v-i}{t-i} \\right/ \\binom{k-i}{t-i} \\text{ for } i = 0,1,\\ldots,t, "
},
{
"math_id": 11,
"text": "b=\\lambda_0 = \\lambda {v\\choose t} / {k\\choose t}"
},
{
"math_id": 12,
"text": "r = \\lambda_1 = \\lambda {v-1 \\choose t-1} / {k-1 \\choose t-1} "
},
{
"math_id": 13,
"text": "R_{0}=\\{(x,x):x\\in X\\}"
},
{
"math_id": 14,
"text": " R^* :=\\{(x,y) | (y,x)\\in R\\}"
},
{
"math_id": 15,
"text": "(x,y)\\in R_{k}"
},
{
"math_id": 16,
"text": "z\\in X"
},
{
"math_id": 17,
"text": "(x,z)\\in R_{i}"
},
{
"math_id": 18,
"text": "(z,y)\\in R_{j}"
},
{
"math_id": 19,
"text": "p^k_{ij}"
},
{
"math_id": 20,
"text": "p_{ij}^k=p_{ji}^k"
},
{
"math_id": 21,
"text": "\\begin{pmatrix}\n 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\\\\n 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 \\\\\n 0 & 0 & 1 & 1 & 1 & 1 & 0 & 0 \\\\\n 1 & 0 & 1 & 0 & 0 & 0 & 1 & 1 \\\\\n 0 & 1 & 0 & 0 & 1 & 0 & 1 & 1 \\\\\n 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1 \\\\\n \\end{pmatrix}"
},
{
"math_id": 22,
"text": "\\begin{pmatrix}\n 4 & 2 & 2 & 2 & 1 & 1 \\\\\n 2 & 4 & 2 & 1 & 2 & 1 \\\\\n 2 & 2 & 4 & 1 & 1 & 2 \\\\\n 2 & 1 & 1 & 4 & 2 & 2 \\\\\n 1 & 2 & 1 & 2 & 4 & 2 \\\\\n 1 & 1 & 2 & 2 & 2 & 4 \\\\\n \\end{pmatrix}"
},
{
"math_id": 23,
"text": " vr = bk "
},
{
"math_id": 24,
"text": " \\sum_{i=1}^m n_i = v-1 "
},
{
"math_id": 25,
"text": " \\sum_{i=1}^m n_i \\lambda_i = r(k-1) "
},
{
"math_id": 26,
"text": " \\sum_{u=0}^m p_{ju}^h = n_j "
},
{
"math_id": 27,
"text": " n_i p_{jh}^i = n_j p_{ih}^j "
}
] | https://en.wikipedia.org/wiki?curid=1188184 |
11882170 | Total electron content | Descriptive quantity of the ionosphere
Total electron content (TEC) is an important descriptive quantity for the ionosphere of the Earth. TEC is the total number of electrons integrated between two points, along a tube of one meter squared cross section, i.e., the electron columnar number density. It is often reported in multiples of the so-called "TEC unit", defined as TECU = 1016 el/m2.
TEC is significant in determining the scintillation and group and phase delays of a radio wave through a medium. Ionospheric TEC is characterized by observing carrier phase delays of received radio signals transmitted from satellites located above the ionosphere, often using Global Positioning System satellites. TEC is strongly affected by solar activity.
Formulation.
The TEC is path-dependent. By definition, it can be calculated by integrating along the path "ds" through the ionosphere with the location-dependent electron density "ne(s)":
TEC = formula_0
The "vertical" TEC ("VTEC") is determined by integration of the electron density on a perpendicular to the ground standing route, the "slant" TEC ("STEC") is obtained by integrating over any straight path.
Propagation delay.
To first order, the ionospheric radio propagation effect is proportional to TEC and inversely proportional to the radio frequency "f". The ionospheric phase delay compared to propagation in vacuum reads:
formula_1
while the ionospheric group delay has the same magnitude but opposite sign:
formula_2
The ionospheric delay is normally expressed in units of length (meters), assuming a delay duration (in seconds) multiplied by the vacuum speed of light (in m/s).
The proportionality constant "κ" reads:
formula_3
where "q", "m"e, re are the electron charge, mass, and radius, respectively; "c" is the vacuum speed of light and "ϵ"0 is the vacuum permittivity. The value of the constant is approximately "κ" ≈ 40.308193 m3·s−2; the units can be expressed equivalently as m·m2·Hz2 to highlight the cancellation involved in yielding delays τ in meters, given "f" in Hz and TEC in m−2.
Typical daytime values of TEC are expressed on a scale from 0 to 100 TEC units. However, very small variations of 0.1-0.5 TEC units can also be extracted under the assumption of relatively constant observational biases. These small TEC variations are related to medium-scale traveling ionospheric disturbances (MSTIDs). These ionospheric disturbances are primarily generated by gravity waves propagating upward from the lower atmosphere.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\int n_e(s)\\,ds"
},
{
"math_id": 1,
"text": "\\tau_p^\\mathrm{iono} = -\\kappa \\frac{\\mathrm{TEC}}{f^2} "
},
{
"math_id": 2,
"text": "\\tau_g^\\mathrm{iono} = -\\tau_p^\\mathrm{iono}"
},
{
"math_id": 3,
"text": "\\kappa = q^2 / (8 \\pi^2 m_e \\epsilon_0) = c^2 r_e / (2\\pi)\n"
}
] | https://en.wikipedia.org/wiki?curid=11882170 |
1188375 | Turing reduction | Concept in computability theory
In computability theory, a Turing reduction from a decision problem formula_0 to a decision problem formula_1 is an oracle machine that decides problem formula_0 given an oracle for formula_1 (Rogers 1967, Soare 1987). It can be understood as an algorithm that could be used to solve formula_0 if it had available to it a subroutine for solving formula_1. The concept can be analogously applied to function problems.
If a Turing reduction from formula_0 to formula_1 exists, then every algorithm for formula_1 can be used to produce an algorithm for formula_0, by inserting the algorithm for formula_1 at each place where the oracle machine computing formula_0 queries the oracle for formula_1. However, because the oracle machine may query the oracle a large number of times, the resulting algorithm may require more time asymptotically than either the algorithm for formula_1 or the oracle machine computing formula_0. A Turing reduction in which the oracle machine runs in polynomial time is known as a Cook reduction.
The first formal definition of relative computability, then called relative reducibility, was given by Alan Turing in 1939 in terms of oracle machines. Later in 1943 and 1952 Stephen Kleene defined an equivalent concept in terms of recursive functions. In 1944 Emil Post used the term "Turing reducibility" to refer to the concept.
Definition.
Given two sets formula_2 of natural numbers, we say formula_0 is Turing reducible to formula_1 and write
formula_3
if and only if there is an oracle machine that computes the characteristic function of "A" when run with oracle "B". In this case, we also say "A" is "B"-recursive and "B"-computable.
If there is an oracle machine that, when run with oracle "B", computes a partial function with domain "A", then "A" is said to be "B"-recursively enumerable and "B"-computably enumerable.
We say formula_0 is Turing equivalent to formula_1 and write formula_4 if both formula_3 and formula_5 The equivalence classes of Turing equivalent sets are called Turing degrees. The Turing degree of a set formula_6 is written formula_7.
Given a set formula_8, a set formula_9 is called Turing hard for formula_10 if formula_11 for all formula_12. If additionally formula_13 then formula_0 is called Turing complete for formula_10.
Relation of Turing completeness to computational universality.
Turing completeness, as just defined above, corresponds only partially to Turing completeness in the sense of computational universality. Specifically, a Turing machine is a universal Turing machine if its halting problem (i.e., the set of inputs for which it eventually halts) is many-one complete for the set formula_10 of recursively enumerable sets. Thus, a necessary but insufficient condition for a machine to be computationally universal is that the machine's halting problem be Turing-complete for formula_10. The condition is insufficient because it may still be the case that the language accepted by the machine is not itself recursively enumerable.
Example.
Let formula_14 denote the set of input values for which the Turing machine with index "e" halts. Then the sets formula_15 and formula_16 are Turing equivalent (here formula_17 denotes an effective pairing function). A reduction showing formula_3 can be constructed using the fact that formula_18. Given a pair formula_19, a new index formula_20 can be constructed using the smn theorem such that the program coded by formula_20 ignores its input and merely simulates the computation of the machine with index "e" on input "n". In particular, the machine with index formula_20 either halts on every input or halts on no input. Thus formula_21 holds for all "e" and "n". Because the function "i" is computable, this shows formula_22. The reductions presented here are not only Turing reductions but "many-one reductions", discussed below.
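The first reduction above is easy to phrase as code: to decide whether "e" is in "A", the oracle machine makes a single query about the pair ("e","e"). The sketch below is only schematic, since a genuine oracle for "B" is not computable and is represented here by an opaque function supplied from outside.

def decide_A(e, oracle_for_B):
    # e is in A if and only if (e, e) is in B, so a single oracle query suffices.
    return oracle_for_B((e, e))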
The use of a reduction.
Since every reduction from a set formula_0 to a set formula_1 has to determine whether a single element is in formula_0 in only finitely many steps, it can only make finitely many queries of membership in the set formula_1. When the amount of information about the set formula_1 used to compute a single bit of formula_0 is discussed, this is made precise by the "use" function. Formally, the "use" of a reduction is the function that sends each natural number formula_30 to the largest natural number formula_31 whose membership in the set formula_1 was queried by the reduction while determining the membership of formula_30 in formula_0.
Stronger reductions.
There are two common ways of producing reductions stronger than Turing reducibility. The first way is to limit the number and manner of oracle queries. For example, a set formula_0 is many-one reducible to formula_1 if there is a total computable function formula_32 such that an element formula_30 is in formula_0 if and only if formula_33 is in formula_1; such a function yields a Turing reduction that makes a single oracle query and returns its answer unchanged. Truth-table reductions, which must present all of their oracle queries at the same time, are another restriction of this kind.
The second way to produce a stronger reducibility notion is to limit the computational resources that the program implementing the Turing reduction may use. These limits on the computational complexity of the reduction are important when studying subrecursive classes such as P. A set "A" is polynomial-time reducible to a set formula_1 if there is a Turing reduction of formula_0 to formula_1 that runs in polynomial time. The concept of log-space reduction is similar.
These reductions are stronger in the sense that they provide a finer distinction into equivalence classes, and satisfy more restrictive requirements than Turing reductions. Consequently, such reductions are harder to find. There may be no way to build a many-one reduction from one set to another even when a Turing reduction for the same sets exists.
Weaker reductions.
According to the Church–Turing thesis, a Turing reduction is the most general form of an effectively calculable reduction. Nevertheless, weaker reductions are also considered. Set formula_0 is said to be arithmetical in formula_1 if formula_0 is definable by a formula of Peano arithmetic with formula_1 as a parameter. The set formula_0 is hyperarithmetical in formula_1 if there is a recursive ordinal formula_34 such that formula_0 is computable from formula_35, the "α"-iterated Turing jump of formula_1. The notion of relative constructibility is an important reducibility notion in set theory.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "A,B \\subseteq \\mathbb{N}"
},
{
"math_id": 3,
"text": "A \\leq_T B"
},
{
"math_id": 4,
"text": "A \\equiv_T B\\,"
},
{
"math_id": 5,
"text": "B \\leq_T A."
},
{
"math_id": 6,
"text": "X"
},
{
"math_id": 7,
"text": "\\textbf{deg}(X)"
},
{
"math_id": 8,
"text": "\\mathcal{X} \\subseteq \\mathcal{P}(\\mathbb{N})"
},
{
"math_id": 9,
"text": "A \\subseteq \\mathbb{N}"
},
{
"math_id": 10,
"text": "\\mathcal{X}"
},
{
"math_id": 11,
"text": "X \\leq_T A"
},
{
"math_id": 12,
"text": "X \\in \\mathcal{X}"
},
{
"math_id": 13,
"text": "A \\in \\mathcal{X}"
},
{
"math_id": 14,
"text": "W_e"
},
{
"math_id": 15,
"text": "A = \\{e \\mid e \\in W_e\\}"
},
{
"math_id": 16,
"text": "B = \\{(e,n) \\mid n \\in W_e \\}"
},
{
"math_id": 17,
"text": "(-,-)"
},
{
"math_id": 18,
"text": "e \\in A \\Leftrightarrow (e,e) \\in B"
},
{
"math_id": 19,
"text": "(e,n)"
},
{
"math_id": 20,
"text": "i(e,n)"
},
{
"math_id": 21,
"text": "i(e,n) \\in A \\Leftrightarrow (e,n) \\in B"
},
{
"math_id": 22,
"text": "B \\leq_T A"
},
{
"math_id": 23,
"text": "\\leq_T"
},
{
"math_id": 24,
"text": "B \\leq_T C"
},
{
"math_id": 25,
"text": "A \\leq_T C"
},
{
"math_id": 26,
"text": "A \\leq_T A"
},
{
"math_id": 27,
"text": "B \\leq_T A "
},
{
"math_id": 28,
"text": "A = B"
},
{
"math_id": 29,
"text": "(A,B)"
},
{
"math_id": 30,
"text": "n"
},
{
"math_id": 31,
"text": "m"
},
{
"math_id": 32,
"text": "f"
},
{
"math_id": 33,
"text": "f(n)"
},
{
"math_id": 34,
"text": "\\alpha"
},
{
"math_id": 35,
"text": "B^{(\\alpha)}"
}
] | https://en.wikipedia.org/wiki?curid=1188375 |
11885926 | Flow velocity | Vector field which is used to mathematically describe the motion of a continuum
In continuum mechanics the flow velocity in fluid dynamics, also macroscopic velocity in statistical mechanics, or drift velocity in electromagnetism, is a vector field used to mathematically describe the motion of a continuum. The length of the flow velocity vector is the flow speed, a scalar.
It is also called velocity field; when evaluated along a line, it is called a velocity profile (as in, e.g., law of the wall).
Definition.
The flow velocity u of a fluid is a vector field
formula_0
which gives the velocity of an "element of fluid" at a position formula_1 and time formula_2
The flow speed "q" is the length of the flow velocity vector
formula_3
and is a scalar field.
Uses.
The flow velocity of a fluid effectively describes everything about the motion of a fluid. Many physical properties of a fluid can be expressed mathematically in terms of the flow velocity. Some common examples follow:
Steady flow.
The flow of a fluid is said to be "steady" if formula_4 does not vary with time. That is if
formula_5
Incompressible flow.
If a fluid is incompressible the divergence of formula_6 is zero:
formula_7
That is, if formula_6 is a solenoidal vector field.
Irrotational flow.
A flow is "irrotational" if the curl of formula_6 is zero:
formula_8
That is, if formula_6 is an irrotational vector field.
A flow in a simply-connected domain which is irrotational can be described as a potential flow, through the use of a velocity potential formula_9 with formula_10 If the flow is both irrotational and incompressible, the Laplacian of the velocity potential must be zero: formula_11
Vorticity.
The "vorticity", formula_12, of a flow can be defined in terms of its flow velocity by
formula_13
If the vorticity is zero, the flow is irrotational.
The velocity potential.
If an irrotational flow occupies a simply-connected fluid region then there exists a scalar field formula_14 such that
formula_15
The scalar field formula_16 is called the velocity potential for the flow. (See Irrotational vector field.)
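The statements about irrotational flow, incompressible flow and the Laplacian of the potential can be checked together on a concrete example. A small sketch with SymPy (assumed to be available), using the planar potential phi = x**2 - y**2:

import sympy as sp

x, y = sp.symbols('x y')
phi = x**2 - y**2
u = [sp.diff(phi, x), sp.diff(phi, y)]              # u = grad(phi)

divergence = sp.diff(u[0], x) + sp.diff(u[1], y)    # equals the Laplacian of phi
vorticity = sp.diff(u[1], x) - sp.diff(u[0], y)     # z-component of the curl in 2D
print(divergence, vorticity)                        # 0 0: incompressible and irrotational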
Bulk velocity.
In many engineering applications the local flow velocity formula_4 vector field is not known in every point and the only accessible velocity is the bulk velocity or average flow velocity formula_17 (with the usual dimension of length per time), defined as the quotient between the volume flow rate formula_18 (with dimension of cubed length per time) and the cross sectional area formula_19 (with dimension of square length):
formula_20.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathbf{u}=\\mathbf{u}(\\mathbf{x},t),"
},
{
"math_id": 1,
"text": "\\mathbf{x}\\,"
},
{
"math_id": 2,
"text": " t.\\,"
},
{
"math_id": 3,
"text": "q = \\| \\mathbf{u} \\|"
},
{
"math_id": 4,
"text": " \\mathbf{u}"
},
{
"math_id": 5,
"text": " \\frac{\\partial \\mathbf{u}}{\\partial t}=0."
},
{
"math_id": 6,
"text": "\\mathbf{u}"
},
{
"math_id": 7,
"text": " \\nabla\\cdot\\mathbf{u}=0."
},
{
"math_id": 8,
"text": " \\nabla\\times\\mathbf{u}=0. "
},
{
"math_id": 9,
"text": "\\Phi,"
},
{
"math_id": 10,
"text": "\\mathbf{u}=\\nabla\\Phi."
},
{
"math_id": 11,
"text": "\\Delta\\Phi=0."
},
{
"math_id": 12,
"text": "\\omega"
},
{
"math_id": 13,
"text": " \\omega=\\nabla\\times\\mathbf{u}."
},
{
"math_id": 14,
"text": " \\phi "
},
{
"math_id": 15,
"text": " \\mathbf{u}=\\nabla\\mathbf{\\phi}. "
},
{
"math_id": 16,
"text": "\\phi"
},
{
"math_id": 17,
"text": "\\bar{u}"
},
{
"math_id": 18,
"text": "\\dot{V}"
},
{
"math_id": 19,
"text": "A"
},
{
"math_id": 20,
"text": "\\bar{u}=\\frac{\\dot{V}}{A}"
}
] | https://en.wikipedia.org/wiki?curid=11885926 |
1189425 | Van Emde Boas tree | A van Emde Boas tree (), also known as a vEB tree or van Emde Boas priority queue, is a tree data structure which implements an associative array with "m"-bit integer keys. It was invented by a team led by Dutch computer scientist Peter van Emde Boas in 1975. It performs all operations in "O"(log "m") time (assuming that an formula_1 bit operation can be performed in constant time), or equivalently in formula_0 time, where formula_2 is the largest element that can be stored in the tree. The parameter formula_3 is not to be confused with the actual number of elements stored in the tree, by which the performance of other tree data-structures is often measured.
The standard vEB tree has inadequate space efficiency. For example, for storing 32-bit integers (i.e., when formula_4), it requires formula_5 bits of storage. However, similar data structures with equally good time efficiency and with space efficiency of formula_6 exist, where formula_7 is the number of stored elements, and vEB trees can be modified to require only formula_8 space.
Supported operations.
A vEB tree supports the operations of an "ordered associative array", which includes the usual associative array operations along with two more "order" operations, "FindNext" and "FindPrevious": "Insert", "Delete" and "Lookup" behave as in any associative array with "m"-bit integer keys, while "FindNext" returns the smallest key in the tree that is greater than a given key and "FindPrevious" returns the largest key that is smaller than a given key.
A vEB tree also supports the operations "Minimum" and "Maximum", which return the minimum and maximum element stored in the tree respectively. These both run in formula_9 time, since the minimum and maximum element are stored as attributes in each tree.
Function.
Let formula_10 for some integer formula_11. Define formula_2. A vEB tree T over the universe formula_12 has a root node that stores an array T.children of length formula_13. T.children[i] is a pointer to a vEB tree that is responsible for the values formula_14. Additionally, "T" stores two values T.min and T.max as well as an auxiliary vEB tree T.aux.
Data is stored in a vEB tree as follows: The smallest value currently in the tree is stored in T.min and largest value is stored in T.max. Note that T.min is not stored anywhere else in the vEB tree, while T.max is. If "T" is empty then we use the convention that T.max=−1 and T.min=M. Any other value formula_15 is stored in the subtree T.children[i] where formula_16. The auxiliary tree T.aux keeps track of which children are non-empty, so T.aux contains the value formula_17 if and only if T.children[j] is non-empty.
FindNext.
The operation FindNext(T, x) that searches for the successor of an element "x" in a vEB tree proceeds as follows: If "x"<T.min then the search is complete, and the answer is T.min. If x≥T.max then the next element does not exist, return M. Otherwise, let formula_16. If x<T.children[i].max then the value being searched for is contained in T.children[i] so the search proceeds recursively in T.children[i]. Otherwise, we search for the successor of the value "i" in T.aux. This gives us the index "j" of the first subtree that contains an element larger than "x". The algorithm then returns T.children[j].min. The element found on the children level needs to be composed with the high bits to form a complete next element.
function FindNext(T, x)
if x < T.min then
return T.min
if x ≥ T.max then "// no next element"
return M
i = floor(x/formula_13)
lo = x mod formula_13
if lo < T.children[i].max then
return (formula_13 i) + FindNext(T.children[i], lo)
j = FindNext(T.aux, i)
return (formula_13 j) + T.children[j].min
end
Note that, in any case, the algorithm performs formula_9 work and then possibly recurses on a subtree over a universe of size formula_18 (an formula_19 bit universe). This gives a recurrence for the running time of formula_20, which resolves to formula_21.
Insert.
The call insert(T, x) that inserts a value x into a vEB tree T operates as follows:
In code:
function Insert(T, x)
if T.min == x || T.max == x then "// x is already inserted"
return
if T.min > T.max then "// T is empty"
T.min = T.max = x;
return
if x < T.min then
swap(x, T.min)
if x > T.max then
T.max = x
i = floor(x / formula_13)
lo = x mod formula_13
Insert(T.children[i], lo)
if T.children[i].min == T.children[i].max then
Insert(T.aux, i)
end
The key to the efficiency of this procedure is that inserting an element into an empty vEB tree takes "O"(1) time. So, even though the algorithm sometimes makes two recursive calls, this only occurs when the first recursive call was into an empty subtree. This gives the same running time recurrence of formula_20 as before.
Delete.
Deletion from vEB trees is the trickiest of the operations. The call Delete(T, x) that deletes a value "x" from a vEB tree T operates as follows:
In code:
function Delete(T, x)
if T.min == T.max == x then
T.min = M
T.max = −1
return
if x == T.min then
hi = T.aux.min * formula_13
j = T.aux.min
T.min = x = hi + T.children[j].min
i = floor(x / formula_13)
lo = x mod formula_13
Delete(T.children[i], lo)
if T.children[i] is empty then
Delete(T.aux, i)
if x == T.max then
if T.aux is empty then
T.max = T.min
else
hi = T.aux.max * formula_13
j = T.aux.max
T.max = hi + T.children[j].max
end
Again, the efficiency of this procedure hinges on the fact that deleting from a vEB tree that contains only one element takes only constant time. In particular, the second Delete call only executes if "x" was the only element in T.children[i] prior to the deletion.
In practice.
The assumption that log "m" is an integer is unnecessary. The operations formula_22 and formula_23 can be replaced by taking only higher-order ⌈"m"/2⌉ and the lower-order ⌊"m"/2⌋ bits of x, respectively. On any existing machine, this is more efficient than division or remainder computations.
In practical implementations, especially on machines with "shift-by-k" and "find first zero" instructions, performance can further be improved by switching to a bit array once "m" reaches the word size (or a small multiple thereof). Since all operations on a single word are constant time, this does not affect the asymptotic performance, but it does avoid the majority of the pointer storage and several pointer dereferences, achieving a significant practical saving in time and space with this trick.
An optimization of vEB trees is to discard empty subtrees. This makes vEB trees quite compact when they contain many elements, because no subtrees are created until something needs to be added to them. Initially, each element added creates about log("m") new trees containing about "m"/2 pointers all together. As the tree grows, more and more subtrees are reused, especially the larger ones. In a full tree of "M" elements, only O("M") space is used. Moreover, unlike a binary search tree, most of this space is being used to store data: even for billions of elements, the pointers in a full vEB tree number in the thousands.
The implementation described above uses pointers and occupies a total space of "O"("M") = "O"(2"m"), proportional to the size of the key universe. This can be seen as follows. The recurrence is formula_24. Resolving that would lead to formula_25. One can, fortunately, also show that "S"("M") = "M"−2 by induction.
Similar structures.
The "O"("M") space usage of vEB trees is an enormous overhead unless a large fraction of the universe of keys is being stored. This is one reason why vEB trees are not popular in practice. This limitation can be addressed by changing the array used to store children to another data structure. One possibility is to use only a fixed number of bits per level, which results in a trie. Alternatively, each array may be replaced by a hash table, reducing the space to "O"("n" log log "M") (where n is the number of elements stored in the data structure) at the expense of making the data structure randomized.
x-fast tries and the more complicated y-fast tries have comparable update and query times to vEB trees and use randomized hash tables to reduce the space used. x-fast tries use "O"("n" log "M") space while y-fast tries use "O"("n") space.
Fusion trees are another type of tree data structure that implements an associative array on w-bit integers on a finite universe. They use word-level parallelism and bit manipulation techniques to achieve "O"(log"w" "n") time for predecessor/successor queries and updates, where "w" is the word size. Fusion trees use "O"("n") space and can be made dynamic with hashing or exponential trees.
Implementations.
There is a verified implementation in Isabelle (proof assistant). Both functional correctness and time bounds are proved.
Efficient imperative Standard ML code can be generated.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "O(\\log \\log M)"
},
{
"math_id": 1,
"text": "m"
},
{
"math_id": 2,
"text": "M = 2^m"
},
{
"math_id": 3,
"text": "M"
},
{
"math_id": 4,
"text": "m = 32"
},
{
"math_id": 5,
"text": "M = 2^{32}"
},
{
"math_id": 6,
"text": "O(n)"
},
{
"math_id": 7,
"text": "n"
},
{
"math_id": 8,
"text": "O(n \\log M)"
},
{
"math_id": 9,
"text": "O(1)"
},
{
"math_id": 10,
"text": "\\log_2 m = k"
},
{
"math_id": 11,
"text": "k"
},
{
"math_id": 12,
"text": "\\{0, \\ldots, M - 1\\}"
},
{
"math_id": 13,
"text": "\\sqrt{M}"
},
{
"math_id": 14,
"text": "\\{i \\sqrt{M}, \\ldots, (i + 1) \\sqrt{M} - 1\\}"
},
{
"math_id": 15,
"text": "x"
},
{
"math_id": 16,
"text": "i = \\lfloor x / \\sqrt{M} \\rfloor"
},
{
"math_id": 17,
"text": "j"
},
{
"math_id": 18,
"text": "M^{1/2}"
},
{
"math_id": 19,
"text": "m/2"
},
{
"math_id": 20,
"text": "T(m) = T(m/2) + O(1)"
},
{
"math_id": 21,
"text": "O(\\log m) = O(\\log \\log M)"
},
{
"math_id": 22,
"text": "x\\sqrt{M}"
},
{
"math_id": 23,
"text": "x\\bmod\\sqrt{M}"
},
{
"math_id": 24,
"text": " S(M) = O( \\sqrt{M}) + (\\sqrt{M}+1) \\cdot S(\\sqrt{M}) "
},
{
"math_id": 25,
"text": " S(M) \\in (1 + \\sqrt{M})^{\\log \\log M} + \\log \\log M \\cdot O( \\sqrt{M} )"
}
] | https://en.wikipedia.org/wiki?curid=1189425 |
1189485 | Abu al-Wafa' al-Buzjani | Persian mathematician and astronomer (940–998)
Abū al-Wafāʾ Muḥammad ibn Muḥammad ibn Yaḥyā ibn Ismāʿīl ibn al-ʿAbbās al-Būzjānī or Abū al-Wafā Būzhjānī (, ; 10 June 940 – 15 July 998) was a Persian mathematician and astronomer who worked in Baghdad. He made important innovations in spherical trigonometry, and his work on arithmetic for businessmen contains the first instance of using negative numbers in a medieval Islamic text.
He is also credited with compiling the tables of sines and tangents at 15' intervals. He also introduced the secant and cosecant functions and studied the interrelations between the six trigonometric lines associated with an arc. His "Almagest" was widely read by medieval Arabic astronomers in the centuries after his death. He is known to have written several other books that have not survived.
Life.
He was born in Buzhgan (now Torbat-e Jam) in Khorasan (in today's Iran). At age 19, in 959, he moved to Baghdad and remained there until his death in 998. He was a contemporary of the distinguished scientists Abū Sahl al-Qūhī and al-Sijzi who were in Baghdad at the time and others such as Abu Nasr Mansur, Abu-Mahmud Khojandi, Kushyar Gilani and al-Biruni. In Baghdad, he received patronage from members of the Buyid court.
Astronomy.
Abu al-Wafa' was the first to build a wall quadrant to observe the sky. It has been suggested that he was influenced by the works of al-Battani as the latter described a quadrant instrument in his "Kitāb az-Zīj". His use of the concept of the tangent helped solve problems involving right-angled spherical triangles. He developed a new technique to calculate sine tables, allowing him to construct more accurate tables than his predecessors.
In 997, he participated in an experiment to determine the difference in local time between his location, Baghdad, and that of al-Biruni (who was living in Kath, now a part of Uzbekistan). The result was very close to present-day calculations, showing a difference of approximately 1 hour between the two longitudes. Abu al-Wafa is also known to have worked with Abū Sahl al-Qūhī, who was a famous maker of astronomical instruments. While what is extant from his works lacks theoretical innovation, his observational data were used by many later astronomers, including al-Biruni.
"Almagest".
Among his works on astronomy, only the first seven treatises of his "Almagest" ("Kitāb al-Majisṭī") are now extant. The work covers numerous topics in the fields of plane and spherical trigonometry, planetary theory, and solutions to determine the direction of Qibla.
Mathematics.
He defined the tangent function, and he established several trigonometric identities in their modern form, where the ancient Greek mathematicians had expressed the equivalent identities in terms of chords. The trigonometric identities he introduced were:
formula_0
formula_1
formula_2
He may have developed the law of sines for spherical triangles, though others like Abu-Mahmud Khojandi have been credited with the same achievement:
formula_3
where formula_4 are the sides of the triangle (measured in radians on the unit sphere) and formula_5 are the opposing angles.
Some sources suggest that he introduced the tangent function, although other sources give the credit for this innovation to al-Marwazi.
Works.
He also wrote translations and commentaries on the algebraic works of Diophantus, al-Khwārizmī, and Euclid's "Elements".
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sin(a \\pm b) = \\sin(a) \\cos(b) \\pm \\cos(a) \\sin(b)"
},
{
"math_id": 1,
"text": "\\cos(2 a) = 1 - 2\\sin^2(a)"
},
{
"math_id": 2,
"text": "\\sin(2 a) = 2\\sin(a) \\cos(a)"
},
{
"math_id": 3,
"text": "\\frac{\\sin A}{\\sin a} = \\frac{\\sin B}{\\sin b}\n= \\frac{\\sin C}{\\sin c}"
},
{
"math_id": 4,
"text": "A, B, C"
},
{
"math_id": 5,
"text": "a, b, c"
}
] | https://en.wikipedia.org/wiki?curid=1189485 |
1189553 | Path (topology) | Continuous function whose domain is a closed unit interval
In mathematics, a path in a topological space formula_1 is a continuous function from a closed interval into formula_2
Paths play an important role in the fields of topology and mathematical analysis.
For example, a topological space for which there exists a path connecting any two points is said to be path-connected. Any space may be broken up into path-connected components. The set of path-connected components of a space formula_1 is often denoted formula_3
One can also define paths and loops in pointed spaces, which are important in homotopy theory. If formula_1 is a topological space with basepoint formula_4 then a path in formula_1 is one whose initial point is formula_5. Likewise, a loop in formula_1 is one that is based at formula_5.
Definition.
A "curve" in a topological space formula_1 is a continuous function formula_6 from a non-empty and non-degenerate interval formula_7
A path in formula_1 is a curve formula_8 whose domain formula_9 is a compact non-degenerate interval (meaning formula_10 are real numbers), where formula_11 is called the initial point of the path and formula_12 is called its terminal point.
A path from formula_13 to formula_14 is a path whose initial point is formula_13 and whose terminal point is formula_15
Every non-degenerate compact interval formula_9 is homeomorphic to formula_16 which is why a path is sometimes, especially in homotopy theory, defined to be a continuous function formula_17 from the closed unit interval formula_18 into formula_2
An arc or C0-arc in formula_1 is a path in formula_1 that is also a topological embedding.
Importantly, a path is not just a subset of formula_1 that "looks like" a curve, it also includes a parameterization. For example, the maps formula_19 and formula_20 represent two different paths from 0 to 1 on the real line.
A loop in a space formula_1 based at formula_21 is a path from formula_13 to formula_22 A loop may be equally well regarded as a map formula_17 with formula_23 or as a continuous map from the unit circle formula_24 to formula_1
formula_25
This is because formula_24 is the quotient space of formula_26 when formula_27 is identified with formula_28 The set of all loops in formula_1 forms a space called the loop space of formula_2
Homotopy of paths.
Paths and loops are central subjects of study in the branch of algebraic topology called homotopy theory. A homotopy of paths makes precise the notion of continuously deforming a path while keeping its endpoints fixed.
Specifically, a homotopy of paths, or path-homotopy, in formula_1 is a family of paths formula_29 indexed by formula_26 such that the endpoints formula_30 and formula_31 are fixed and the associated map formula_32 given by formula_33 is continuous.
The paths formula_34 and formula_35 connected by a homotopy are said to be homotopic (or more precisely path-homotopic, to distinguish between the relation defined on all continuous functions between fixed spaces). One can likewise define a homotopy of loops keeping the base point fixed.
The relation of being homotopic is an equivalence relation on paths in a topological space. The equivalence class of a path formula_36 under this relation is called the homotopy class of formula_37 often denoted formula_38
Path composition.
One can compose paths in a topological space in the following manner. Suppose formula_36 is a path from formula_13 to formula_14 and formula_39 is a path from formula_14 to formula_40. The path formula_41 is defined as the path obtained by first traversing formula_36 and then traversing formula_39:
formula_42
Clearly path composition is only defined when the terminal point of formula_36 coincides with the initial point of formula_43 If one considers all loops based at a point formula_4 then path composition is a binary operation.
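A minimal sketch of this composition rule in Python, with paths represented as functions from [0, 1] to the plane; the names and the example paths are purely illustrative.
def compose(f, g):
    # traverse f on [0, 1/2] and g on [1/2, 1], each at double speed
    def fg(s):
        return f(2 * s) if s <= 0.5 else g(2 * s - 1)
    return fg

f = lambda s: (s, 0.0)           # path from (0, 0) to (1, 0)
g = lambda s: (1.0, s)           # path from (1, 0) to (1, 1), so the composition is defined
h = compose(f, g)                # path from (0, 0) to (1, 1)
print(h(0.0), h(0.5), h(1.0))    # (0.0, 0.0) (1.0, 0.0) (1.0, 1.0)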
Path composition, whenever defined, is not associative due to the difference in parametrization. However it is associative up to path-homotopy. That is, formula_44 Path composition defines a group structure on the set of homotopy classes of loops based at a point formula_5 in formula_2 The resultant group is called the fundamental group of formula_1 based at formula_4 usually denoted formula_45
In situations calling for associativity of path composition "on the nose," a path in formula_1 may instead be defined as a continuous map from an interval formula_46 to formula_1 for any real formula_47 (Such a path is called a Moore path.) A path formula_36 of this kind has a length formula_48 defined as formula_49 Path composition is then defined as before with the following modification:
formula_50
Whereas with the previous definition, formula_37 formula_39, and formula_41 all have length formula_51 (the length of the domain of the map), this definition makes formula_52 What made associativity fail for the previous definition is that although formula_53 and formula_54 have the same length, namely formula_55 the midpoint of formula_53 occurred between formula_39 and formula_56 whereas the midpoint of formula_54 occurred between formula_36 and formula_39. With this modified definition formula_53 and formula_54 have the same length, namely formula_57 and the same midpoint, found at formula_58 in both formula_53 and formula_54; more generally they have the same parametrization throughout.
Fundamental groupoid.
There is a categorical picture of paths which is sometimes useful. Any topological space formula_1 gives rise to a category where the objects are the points of formula_1 and the morphisms are the homotopy classes of paths. Since any morphism in this category is an isomorphism this category is a groupoid, called the fundamental groupoid of formula_2 Loops in this category are the endomorphisms (all of which are actually automorphisms). The automorphism group of a point formula_5 in formula_1 is just the fundamental group based at formula_5. More generally, one can define the fundamental groupoid on any subset formula_0 of formula_59 using homotopy classes of paths joining points of formula_60 This is convenient for Van Kampen's Theorem. | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "X."
},
{
"math_id": 3,
"text": "\\pi_0(X)."
},
{
"math_id": 4,
"text": "x_0,"
},
{
"math_id": 5,
"text": "x_0"
},
{
"math_id": 6,
"text": "f : J \\to X"
},
{
"math_id": 7,
"text": "J \\subseteq \\R."
},
{
"math_id": 8,
"text": "f : [a, b] \\to X"
},
{
"math_id": 9,
"text": "[a, b]"
},
{
"math_id": 10,
"text": "a < b"
},
{
"math_id": 11,
"text": "f(a)"
},
{
"math_id": 12,
"text": "f(b)"
},
{
"math_id": 13,
"text": "x"
},
{
"math_id": 14,
"text": "y"
},
{
"math_id": 15,
"text": "y."
},
{
"math_id": 16,
"text": "[0, 1],"
},
{
"math_id": 17,
"text": "f : [0, 1] \\to X"
},
{
"math_id": 18,
"text": "I := [0, 1]"
},
{
"math_id": 19,
"text": "f(x) = x"
},
{
"math_id": 20,
"text": "g(x) = x^2"
},
{
"math_id": 21,
"text": "x \\in X"
},
{
"math_id": 22,
"text": "x."
},
{
"math_id": 23,
"text": "f(0) = f(1)"
},
{
"math_id": 24,
"text": "S^1"
},
{
"math_id": 25,
"text": "f : S^1 \\to X."
},
{
"math_id": 26,
"text": "I = [0, 1]"
},
{
"math_id": 27,
"text": "0"
},
{
"math_id": 28,
"text": "1."
},
{
"math_id": 29,
"text": "f_t : [0, 1] \\to X"
},
{
"math_id": 30,
"text": "f_t(0) = x_0"
},
{
"math_id": 31,
"text": "f_t(1) = x_1"
},
{
"math_id": 32,
"text": "F : [0, 1] \\times [0, 1] \\to X"
},
{
"math_id": 33,
"text": "F(s, t) = f_t(s)"
},
{
"math_id": 34,
"text": "f_0"
},
{
"math_id": 35,
"text": "f_1"
},
{
"math_id": 36,
"text": "f"
},
{
"math_id": 37,
"text": "f,"
},
{
"math_id": 38,
"text": "[f]."
},
{
"math_id": 39,
"text": "g"
},
{
"math_id": 40,
"text": "z"
},
{
"math_id": 41,
"text": "fg"
},
{
"math_id": 42,
"text": "fg(s) = \\begin{cases}f(2s) & 0 \\leq s \\leq \\frac{1}{2} \\\\ g(2s-1) & \\frac{1}{2} \\leq s \\leq 1.\\end{cases}"
},
{
"math_id": 43,
"text": "g."
},
{
"math_id": 44,
"text": "[(fg)h] = [f(gh)]."
},
{
"math_id": 45,
"text": "\\pi_1\\left(X, x_0\\right)."
},
{
"math_id": 46,
"text": "[0, a]"
},
{
"math_id": 47,
"text": "a \\geq 0."
},
{
"math_id": 48,
"text": "|f|"
},
{
"math_id": 49,
"text": "a."
},
{
"math_id": 50,
"text": "fg(s) = \\begin{cases}f(s) & 0 \\leq s \\leq |f| \\\\ g(s-|f|) & |f| \\leq s \\leq |f| + |g|\\end{cases}"
},
{
"math_id": 51,
"text": "1"
},
{
"math_id": 52,
"text": "|fg| = |f| + |g|."
},
{
"math_id": 53,
"text": "(fg)h"
},
{
"math_id": 54,
"text": "f(gh)"
},
{
"math_id": 55,
"text": "1,"
},
{
"math_id": 56,
"text": "h,"
},
{
"math_id": 57,
"text": "|f| + |g| + |h|,"
},
{
"math_id": 58,
"text": "\\left(|f| + |g| + |h|\\right)/2"
},
{
"math_id": 59,
"text": "X,"
},
{
"math_id": 60,
"text": "A."
}
] | https://en.wikipedia.org/wiki?curid=1189553 |
11896192 | Wallenius' noncentral hypergeometric distribution | In probability theory and statistics, Wallenius' noncentral hypergeometric distribution (named after Kenneth Ted Wallenius) is a generalization of the hypergeometric distribution where items are sampled with bias.
This distribution can be illustrated as an urn model with bias. Assume, for example, that an urn contains "m"1 red balls and "m"2 white balls, totalling "N" = "m"1 + "m"2 balls. Each red ball has the weight ω1 and each white ball has the weight ω2. We will say that the odds ratio is ω = ω1 / ω2. Now we are taking "n" balls, one by one, in such a way that the probability of taking a particular ball at a particular draw is equal to its proportion of the total weight of all balls that lie in the urn at that moment. The number of red balls "x"1 that we get in this experiment is a random variable with Wallenius' noncentral hypergeometric distribution.
The matter is complicated by the fact that there is more than one noncentral hypergeometric distribution. Wallenius' noncentral hypergeometric distribution is obtained if balls are sampled one by one in such a way that there is competition between the balls. Fisher's noncentral hypergeometric distribution is obtained if the balls are sampled simultaneously or independently of each other. Unfortunately, both distributions are known in the literature as "the" noncentral hypergeometric distribution. It is important to be specific about which distribution is meant when using this name.
The two distributions are both equal to the (central) hypergeometric distribution when the odds ratio is 1.
The difference between these two probability distributions is subtle. See the Wikipedia entry on noncentral hypergeometric distributions for a more detailed explanation.
Univariate distribution.
Wallenius' distribution is particularly complicated because each ball has a probability of being taken that depends not only on its weight, but also on the total weight of its competitors. And the weight of the competing balls depends on the outcomes of all preceding draws.
This recursive dependency gives rise to a difference equation with a solution that is given in open form by the integral in the expression of the probability mass function in the table above.
Closed form expressions for the probability mass function exist (Lyons, 1980), but they are not very useful for practical calculations because of extreme numerical instability, except in degenerate cases.
Several other calculation methods are used, including recursion, Taylor expansion and numerical integration (Fog, 2007, 2008).
The most reliable calculation method is recursive calculation of f("x","n") from f("x","n"-1) and f("x"-1,"n"-1) using the recursion formula given below under properties. The probabilities of all ("x","n") combinations on all possible trajectories leading to the desired point are calculated, starting with f(0,0) = 1 as shown on the figure to the right. The total number of probabilities to calculate is "n"("x"+1) − "x"². Other calculation methods must be used when "n" and "x" are so big that this method is too inefficient.
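A minimal sketch of this recursive scheme in Python, using the two-color recursion formula given under properties below; the function and parameter names are illustrative, and memoization stands in for the explicit trajectory table.
from functools import lru_cache

def wallenius_pmf(x, n, m1, m2, omega):
    # f(x, n): probability of having drawn x red balls after n draws,
    # built from f(x-1, n-1) and f(x, n-1) as in the recursion formula.
    @lru_cache(maxsize=None)
    def f(x, n):
        if x < 0 or x > n or x > m1 or (n - x) > m2:
            return 0.0          # impossible trajectories
        if n == 0:
            return 1.0          # f(0, 0) = 1
        take_red = ((m1 - x + 1) * omega) / ((m1 - x + 1) * omega + m2 + x - n)
        take_white = (m2 + x - n + 1) / ((m1 - x) * omega + m2 + x - n + 1)
        return f(x - 1, n - 1) * take_red + f(x, n - 1) * take_white
    return f(x, n)

# With omega = 1 the result agrees with the central hypergeometric distribution.
print(wallenius_pmf(2, 3, 5, 5, 1.0))   # about 0.4167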
The probability that all balls have the same color is easier to calculate. See the formula below under multivariate distribution.
No exact formula for the mean is known (short of complete enumeration of all probabilities). The equation given above is reasonably accurate. This equation can be solved for μ by Newton-Raphson iteration. The same equation can be used for estimating the odds from an experimentally obtained value of the mean.
Properties of the univariate distribution.
Wallenius' distribution has fewer symmetry relations than Fisher's noncentral hypergeometric distribution has. The only symmetry relates to the swapping of colors:
formula_0
Unlike Fisher's distribution, Wallenius' distribution has no symmetry relating to the number of balls "not" taken.
The following recursion formula is useful for calculating probabilities:
formula_1
formula_2
formula_3
Another recursion formula is also known:
formula_1
formula_4
formula_5
The probability is limited by
formula_6
formula_7
formula_8
formula_9
where the underlined superscript indicates the falling factorial formula_10.
Multivariate distribution.
The distribution can be expanded to any number of colors "c" of balls in the urn. The multivariate distribution is used when there are more than two colors.
The probability mass function can be calculated by various Taylor expansion methods or by numerical integration (Fog, 2008).
The probability that all balls have the same color, "j", can be calculated as:
formula_11
for "x"j = "n" ≤ "m"j, where the underlined superscript denotes the falling factorial.
A reasonably good approximation to the mean can be calculated using the equation given above. The equation can be solved by defining θ so that
formula_12
and solving
formula_13
for θ by Newton-Raphson iteration.
The equation for the mean is also useful for estimating the odds from experimentally obtained values for the mean.
No good way of calculating the variance is known. The best known method is to approximate the multivariate Wallenius distribution by a multivariate Fisher's noncentral hypergeometric distribution with the same mean, and insert the mean as calculated above in the approximate formula for the variance of the latter distribution.
Properties of the multivariate distribution.
The order of the colors is arbitrary so that any colors can be swapped.
The weights can be arbitrarily scaled:
formula_14 for all formula_15.
Colors with zero number ("m"i = 0) or zero weight (ωi = 0) can be omitted from the equations.
Colors with the same weight can be joined:
formula_16
formula_17
formula_18
where formula_19 is the (univariate, central) hypergeometric distribution probability.
Complementary Wallenius' noncentral hypergeometric distribution.
The balls that are "not" taken in the urn experiment have a distribution that is different from Wallenius' noncentral hypergeometric distribution, due to a lack of symmetry. The distribution of the balls not taken can be called the complementary Wallenius' noncentral hypergeometric distribution.
Probabilities in the complementary distribution are calculated from Wallenius' distribution by replacing "n" with "N"-"n", "x"i with "m"i - "x"i, and ωi with 1/ωi. | [
{
"math_id": 0,
"text": "\\operatorname{wnchypg}(x;n,m_1,m_2,\\omega) = \\operatorname{wnchypg}(n-x;n,m_2,m_1,1/\\omega)\\,."
},
{
"math_id": 1,
"text": "\\operatorname{wnchypg}(x;n,m_1,m_2,\\omega) = "
},
{
"math_id": 2,
"text": "\\operatorname{wnchypg}(x-1;n-1,m_1,m_2,\\omega) \\frac{(m_1-x+1)\\omega}{(m_1-x+1)\\omega+m_2+x-n} + "
},
{
"math_id": 3,
"text": "\\operatorname{wnchypg}(x;n-1,m_1,m_2,\\omega) \\frac{m_2+x-n+1}{(m_1-x)\\omega+m_2+x-n+1}"
},
{
"math_id": 4,
"text": "\\operatorname{wnchypg}(x-1;n-1,m_1-1,m_2,\\omega) \\frac{m_1\\omega}{m_1\\omega+m_2} + "
},
{
"math_id": 5,
"text": "\\operatorname{wnchypg}(x;n-1,m_1,m_2-1,\\omega) \\frac{m_2}{m_1\\omega+m_2}\\,."
},
{
"math_id": 6,
"text": "\\operatorname{f}_1(x) \\le \\operatorname{wnchypg}(x;n,m_1,m_2,\\omega) \\le \\operatorname{f}_2(x)\\,,\\,\\,\\text{for}\\,\\, \\omega < 1\\,,"
},
{
"math_id": 7,
"text": "\\operatorname{f}_1(x) \\ge \\operatorname{wnchypg}(x;n,m_1,m_2,\\omega) \\ge \\operatorname{f}_2(x)\\,,\\,\\,\\text{for}\\,\\, \\omega > 1\\,,\\text{where}"
},
{
"math_id": 8,
"text": "\\operatorname{f}_1(x)=\\binom{m_1}{x}\\binom{m_2}{n-x} \\frac{n!}{(m_1+m_2/\\omega)^{\\underline{x}}\\, (m_2+\\omega(m_1-x))^{\\underline{n-x}}}"
},
{
"math_id": 9,
"text": "\\operatorname{f}_2(x)=\\binom{m_1}{x}\\binom{m_2}{n-x} \\frac{n!}{(m_1+(m_2-x_2)/\\omega)^{\\underline{x}}\\, (m_2+\\omega m_1)^{\\underline{n-x}}}\\, ,"
},
{
"math_id": 10,
"text": "a^{\\underline{b}} = a(a-1)\\ldots(a-b+1)"
},
{
"math_id": 11,
"text": "\\operatorname{mwnchypg}((0,\\ldots,0,x_j,0,\\ldots);n,\\mathbf{m}, \\boldsymbol{\\omega}) = \\frac{m_j^{\\,\\,\\underline{n}}} {\\left( \\frac{1}{\\omega_j}\\sum_{i=1}^{c}m_i\\omega_i \\right) ^{\\underline{n}}}"
},
{
"math_id": 12,
"text": "\\mu_i = m_i(1-e^{\\omega_i\\theta})"
},
{
"math_id": 13,
"text": "\\sum_{i=1}^c \\mu_i = n"
},
{
"math_id": 14,
"text": "\\operatorname{mwnchypg}(\\mathbf{x};n,\\mathbf{m}, \\boldsymbol{\\omega}) = \\operatorname{mwnchypg}(\\mathbf{x};n,\\mathbf{m}, r\\boldsymbol{\\omega})\\,\\,"
},
{
"math_id": 15,
"text": "r \\in \\mathbb{R}_+"
},
{
"math_id": 16,
"text": "\\operatorname{mwnchypg}\\left(\\mathbf{x};n,\\mathbf{m}, (\\omega_1,\\ldots,\\omega_{c-1},\\omega_{c-1})\\right)\\, ="
},
{
"math_id": 17,
"text": "\\operatorname{mwnchypg}\\left((x_1,\\ldots,x_{c-1}+x_c); n,(m_1,\\ldots,m_{c-1}+m_c), (\\omega_1,\\ldots,\\omega_{c-1})\\right)\\, \\cdot"
},
{
"math_id": 18,
"text": "\\operatorname{hypg}(x_c; x_{c-1}+x_c, m_c, m_{c-1}+m_c)\\,,"
},
{
"math_id": 19,
"text": "\\operatorname{hypg}(x;n,m,N)"
}
] | https://en.wikipedia.org/wiki?curid=11896192 |
11896410 | Electoral quota | Number of votes a candidate needs to win
In proportional representation systems, an electoral quota is the number of votes a candidate needs to be guaranteed election.
Admissible quotas.
An admissible quota is a quota that is guaranteed to apportion only as many seats as are available in the legislature. Such a quota can be any number between:
formula_0
Common quotas.
There are two commonly-used quotas: the Hare and Droop quotas. The Hare quota is unbiased in the number of seats it hands out, and so is more proportional than the Droop quota (which tends to be biased towards larger parties); however, the Droop quota guarantees that a party that wins a majority of votes in a district will win a majority of the seats in the district.
Hare quota.
The Hare quota (also known as the "simple" quota or Hamilton's quota) is the most commonly-used quota for apportionments using the largest remainder method of party-list representation. It was used by Thomas Hare in his first proposals for STV. It is given by the expression:
formula_1
The Hare quota is unique in being unbiased between larger and smaller parties, giving no advantage to either. However, in small legislatures with no threshold, the Hare quota can be manipulated by running candidates on many small lists, allowing each list to pick up a single remainder seat.
Droop quota.
The Droop quota is used in most single transferable vote (STV) elections today and is occasionally used in elections held under the largest remainder method of party-list proportional representation (list PR). It is given by the expression:
formula_2
It was first proposed in 1868 by the English lawyer and mathematician Henry Richmond Droop (1831–1884), who identified it as the minimum amount of support needed to secure a seat in semiproportional voting systems such as SNTV, leading him to propose it as an alternative to the Hare quota.
However, the Droop quota has a substantial seat bias in favor of larger parties; in fact, the Droop quota is the most-biased possible quota that can still be considered to be proportional.
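A small illustrative computation of the two quotas in Python, with made-up vote totals, followed by a largest-remainder allocation using the Hare quota; the party names and numbers are purely hypothetical.
votes = {"A": 47000, "B": 33000, "C": 20000}
seats = 5
total = sum(votes.values())

hare = total / seats            # 100000 / 5 = 20000
droop = total / (seats + 1)     # 100000 / 6 ≈ 16667

# largest remainder method with the Hare quota
allocated = {p: int(v // hare) for p, v in votes.items()}
by_remainder = sorted(votes, key=lambda p: votes[p] - allocated[p] * hare, reverse=True)
for p in by_remainder[: seats - sum(allocated.values())]:
    allocated[p] += 1
print(hare, droop, allocated)   # Hare 20000, Droop ≈ 16667; seats A: 2, B: 2, C: 1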
Today the Droop quota is used in almost all STV elections, including those in India, the Republic of Ireland, Northern Ireland, Malta, and Australia. | [
{
"math_id": 0,
"text": "\\frac{\\text{votes}}{\\text{seats}+1} \\leq \\text{quota} \\leq \\frac{\\text{votes}}{\\text{seats}-1}"
},
{
"math_id": 1,
"text": "\\frac{\\text{total votes}}{\\text{total seats}}"
},
{
"math_id": 2,
"text": "\\frac{\\text{total votes}}{\\text{total seats}+1}"
}
] | https://en.wikipedia.org/wiki?curid=11896410 |
11898194 | Lense–Thirring precession | Precession of a gyroscope due to a nearby celestial body's rotation affecting spacetime
In general relativity, Lense–Thirring precession or the Lense–Thirring effect (; named after Josef Lense and Hans Thirring) is a relativistic correction to the precession of a gyroscope near a large rotating mass such as the Earth. It is a gravitomagnetic frame-dragging effect. It is a prediction of general relativity consisting of secular precessions of the longitude of the ascending node and the argument of pericenter of a test particle freely orbiting a central spinning mass endowed with angular momentum formula_0.
The difference between de Sitter precession and the Lense–Thirring effect is that the de Sitter effect is due simply to the presence of a central mass, whereas the Lense–Thirring effect is due to the rotation of the central mass. The total precession is calculated by combining the de Sitter precession with the Lense–Thirring precession.
According to a 2007 historical analysis by Herbert Pfister, the effect should be renamed the Einstein–Thirring–Lense effect.
The Lense–Thirring metric.
The gravitational field of a spinning spherical body of constant density was studied by Lense and Thirring in 1918, in the weak-field approximation. They obtained the metric
formula_1
where the symbols represent: formula_2, the line element; formula_3, the flat three-dimensional line element with radial coordinate formula_4; formula_5, the speed of light; formula_6, the gravitational constant; formula_7, the completely antisymmetric Levi-Civita symbol; formula_8, the mass of the rotating body; formula_9, the angular momentum of the rotating body; and formula_10, the energy–momentum tensor.
The above is the weak-field approximation of the full solution of the Einstein equations for a rotating body, known as the Kerr metric, which, due to the difficulty of its solution, was not obtained until 1965.
The Coriolis term.
The frame-dragging effect can be demonstrated in several ways. One way is to solve for geodesics; these will then exhibit a Coriolis force-like term, except that, in this case (unlike the standard Coriolis force), the force is not fictional, but is due to frame dragging induced by the rotating body. So, for example, an (instantaneously) radially infalling geodesic at the equator will satisfy the equation
formula_11
where formula_12 is the time, formula_13 is the azimuthal angle, and formula_14 is the magnitude of the angular momentum of the spinning massive body.
The above can be compared to the standard equation for motion subject to the Coriolis force:
formula_15
where formula_16 is the angular velocity of the rotating coordinate system. Note that, in either case, if the observer is not in radial motion, i.e. if formula_17, there is no effect on the observer.
Precession.
The frame-dragging effect will cause a gyroscope to precess. The rate of precession is given by
formula_18
where formula_19 is the angular velocity of the precession, with components formula_20; formula_21 are the components of the angular momentum of the spinning body, as before; and formula_22 is the inner product between the angular momentum and the position vector of the observer.
That is, if the gyroscope's angular momentum relative to the fixed stars is formula_23, then it precesses as
formula_24
The rate of precession is given by
formula_25
where formula_26 is the Christoffel symbol for the above metric. "Gravitation" by Misner, Thorne, and Wheeler provides hints on how to most easily calculate this.
Gravitomagnetic analysis.
It is popular in some circles to use the gravitomagnetic approach to the linearized field equations. The reason for this popularity should be immediately evident below, by contrasting it to the difficulties of working with the equations above. The linearized metric formula_27 can be read off from the Lense–Thirring metric given above, where formula_28, and formula_29. In this approach, one writes the linearized metric, given in terms of the gravitomagnetic potentials formula_30 and formula_31 is
formula_32
and
formula_33
where
formula_34
is the gravito-electric potential, and
formula_35
is the gravitomagnetic potential. Here formula_36 is the 3D spatial coordinate of the observer, and formula_37 is the angular momentum of the rotating body, exactly as defined above. The corresponding fields are
formula_38
for the gravito-electric field, and
formula_39
is the gravitomagnetic field. It is then a matter of substitution and rearranging to obtain
formula_40
as the gravitomagnetic field. Note that it is half the Lense–Thirring precession frequency. In this context, Lense–Thirring precession can essentially be viewed as a form of Larmor precession. The factor of 1/2 suggests that the correct gravitomagnetic analog of the gyromagnetic ratio is (curiously!) two. This factor of two can be explained completely analogous to the electron's g-factor by taking into account relativistic calculations.
The gravitomagnetic analog of the Lorentz force in the non-relativistic limit is given by
formula_41
where formula_42 is the mass of a test particle moving with velocity formula_43. This can be used in a straightforward way to compute the classical motion of bodies in the gravitomagnetic field. For example, a radially infalling body will have a velocity formula_44; direct substitution yields the Coriolis term given in a previous section.
Example: Foucault's pendulum.
To get a sense of the magnitude of the effect, the above can be used to compute the rate of precession of Foucault's pendulum, located at the surface of the Earth.
For a solid ball of uniform density, such as the Earth, of radius formula_45, the moment of inertia is given by formula_46 so that the absolute value of the angular momentum formula_0 is formula_47 with formula_16 the angular speed of the spinning ball.
The direction of the spin of the Earth may be taken as the "z" axis, whereas the axis of the pendulum is perpendicular to the Earth's surface, in the radial direction. Thus, we may take formula_48, where formula_49 is the latitude. Similarly, the location of the observer formula_50 is at the Earth's surface formula_45. This leaves the rate of precession as
formula_51
As an example the latitude of the city of Nijmegen in the Netherlands is used for reference. This latitude gives a value for the Lense–Thirring precession
formula_52
At this rate a Foucault pendulum would have to oscillate for more than 16000 years to precess 1 degree. Despite being quite small, it is still two orders of magnitude larger than Thomas precession for such a pendulum.
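A short numerical check of this estimate in Python, assuming standard textbook values for the Earth's parameters and a latitude of about 51.8° for Nijmegen; the constants are illustrative rather than authoritative.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of the Earth, kg
R = 6.371e6          # mean radius of the Earth, m
c = 2.998e8          # speed of light, m/s
omega = 7.292e-5     # angular speed of the Earth's rotation, rad/s
theta = math.radians(51.84)     # assumed latitude of Nijmegen

rate = (2.0 / 5.0) * G * M * omega * math.cos(theta) / (c**2 * R)   # rad/s
arcsec_per_day = rate * 86400 / (math.pi / (180 * 3600))
print(f"{arcsec_per_day:.1e} arcseconds/day")    # about 2.2e-4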
The above does not include the de Sitter precession; it would need to be added to get the total relativistic precessions on Earth.
Experimental verification.
The Lense–Thirring effect, and the effect of frame dragging in general, continues to be studied experimentally. There are two basic settings for experimental tests: direct observation via satellites and spacecraft orbiting Earth, Mars or Jupiter, and indirect observation by measuring astrophysical phenomena, such as accretion disks surrounding black holes and neutron stars, or astrophysical jets from the same.
The Juno spacecraft's suite of science instruments will primarily characterize and explore the three-dimensional structure of Jupiter's polar magnetosphere, auroras and mass composition.
As Juno is a polar-orbit mission, it will be possible to measure the orbital frame-dragging, also known as Lense–Thirring precession, caused by the angular momentum of Jupiter.
Results from astrophysical settings are presented after the following section.
Astrophysical setting.
A star orbiting a spinning supermassive black hole experiences Lense–Thirring precession, causing its orbital line of nodes to precess at a rate
formula_53
where "a" and "e" are the semi-major axis and eccentricity of the star's orbit, "M" is the mass of the black hole, and "χ" is its dimensionless spin parameter.
The precessing stars also exert a torque back on the black hole, causing its spin axis to precess, at a rate
formula_54
where "L"j is the orbital angular momentum of the "j"th star, and "a"j and "e"j are its semi-major axis and eccentricity.
A gaseous accretion disk that is tilted with respect to a spinning black hole will experience Lense–Thirring precession, at a rate given by the above equation, after setting "e" = 0 and identifying "a" with the disk radius. Because the precession rate varies with distance from the black hole, the disk will "wrap up", until viscosity forces the gas into a new plane, aligned with the black hole's spin axis (the "Bardeen–Petterson effect").
Astrophysical tests.
The orientation of an astrophysical jet can be used as evidence to deduce the orientation of an accretion disk; a rapidly changing jet orientation suggests a reorientation of the accretion disk, as described above. Exactly such a change was observed in 2019 with the black hole X-ray binary in V404 Cygni.
Pulsars emit rapidly repeating radio pulses with extremely high regularity, which can be measured with microsecond precision over time spans of years and even decades. A 2020 study reports the observation of a pulsar in a tight orbit with a white dwarf, to sub-millisecond precision over two decades. The precise determination allows the change of orbital parameters to be studied; these confirm the operation of the Lense–Thirring effect in this astrophysical setting.
It may be possible to detect the Lense–Thirring effect by long-term measurement of the orbit of the S2 star around the supermassive black hole in the center of the Milky Way, using the GRAVITY instrument of the Very Large Telescope. The star orbits with a period of 16 years, and it should be possible to constrain the angular momentum of the black hole by observing the star over two to three periods (32 to 48 years).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "\\mathrm ds^2\n=\\left(1-\\frac{2GM}{rc^2}\\right)c^2\\,\\mathrm dt^2\n-\\left(1+\\frac{2GM}{rc^2}\\right)\\,\\mathrm d\\sigma^2\n+4G\\epsilon_{ijk}S^k \\frac{x^i}{c^3r^3} c \\,\\mathrm dt\\,\\mathrm dx^j,"
},
{
"math_id": 2,
"text": "\\mathrm ds^2"
},
{
"math_id": 3,
"text": "\\mathrm d\\sigma^2\n= \\mathrm dx^2 + \\mathrm dy^2 + \\mathrm dz^2\n= \\mathrm dr^2\n + r^2\\mathrm d\\theta^2\n + r^2\\sin^2\\theta\\,\\mathrm d\\varphi^2"
},
{
"math_id": 4,
"text": "r = \\sqrt{x^2 + y^2 + z^2}"
},
{
"math_id": 5,
"text": "c"
},
{
"math_id": 6,
"text": "G"
},
{
"math_id": 7,
"text": "\\epsilon_{ijk}"
},
{
"math_id": 8,
"text": "M = \\int T^{00} \\,\\mathrm d^3x"
},
{
"math_id": 9,
"text": "S_k = \\int \\epsilon_{klm}x^l T^{m0} \\,\\mathrm d^3x"
},
{
"math_id": 10,
"text": "T^{\\mu\\nu}"
},
{
"math_id": 11,
"text": "r\\frac{\\mathrm{d}^2\\varphi}{\\mathrm{d}t^2}\n+2\\frac{GJ}{c^2r^3}\\frac{\\mathrm{d}r}{\\mathrm{d}t}\n=0,"
},
{
"math_id": 12,
"text": "t"
},
{
"math_id": 13,
"text": "\\varphi"
},
{
"math_id": 14,
"text": "J = \\Vert S \\Vert"
},
{
"math_id": 15,
"text": "r\\frac{\\mathrm{d}^2\\varphi}{\\mathrm{d}t^2}\n+2\\omega\\frac{\\mathrm{d}r}{\\mathrm{d}t} = 0,"
},
{
"math_id": 16,
"text": "\\omega"
},
{
"math_id": 17,
"text": "dr/dt = 0"
},
{
"math_id": 18,
"text": "\\Omega^k = \\frac{G}{c^2 r^3} \\left[S^k - 3 \\frac{(S \\cdot x) x^k}{r^2}\\right],"
},
{
"math_id": 19,
"text": "\\Omega"
},
{
"math_id": 20,
"text": "\\Omega_k"
},
{
"math_id": 21,
"text": "S_k"
},
{
"math_id": 22,
"text": "S \\cdot x"
},
{
"math_id": 23,
"text": "L^i"
},
{
"math_id": 24,
"text": "\\frac{\\mathrm{d}L^i}{\\mathrm{d}t} = \\epsilon_{ijk} \\Omega^j L^k."
},
{
"math_id": 25,
"text": "\\epsilon_{ijk} \\Omega^k = \\Gamma_{ij0},"
},
{
"math_id": 26,
"text": "\\Gamma_{ij0}"
},
{
"math_id": 27,
"text": "h_{\\mu\\nu} = g_{\\mu\\nu} - \\eta_{\\mu\\nu}"
},
{
"math_id": 28,
"text": "ds^2 = g_{\\mu\\nu} \\,dx^\\mu \\,dx^\\nu"
},
{
"math_id": 29,
"text": "\\eta_{\\mu\\nu} \\,dx^\\mu \\,dx^\\nu = c^2 \\,dt^2 - dx^2 - dy^2 - dz^2"
},
{
"math_id": 30,
"text": "\\phi"
},
{
"math_id": 31,
"text": "\\vec{A}"
},
{
"math_id": 32,
"text": "h_{00} = \\frac{-2\\phi}{c^2}"
},
{
"math_id": 33,
"text": "h_{0i} = \\frac{2A_i}{c^2},"
},
{
"math_id": 34,
"text": "\\phi = \\frac{-GM}{r}"
},
{
"math_id": 35,
"text": "\\vec{A} = \\frac {G}{2r^3c} \\vec{S} \\times \\vec{r}"
},
{
"math_id": 36,
"text": "\\vec{r}"
},
{
"math_id": 37,
"text": "\\vec{S}"
},
{
"math_id": 38,
"text": "\\vec{E} = -\\nabla\\phi - \\frac{\\partial \\vec{A}}{\\partial t}"
},
{
"math_id": 39,
"text": "\\vec{B} = \\vec{ \\nabla} \\times \\vec{A}"
},
{
"math_id": 40,
"text": "\\vec{B} = -\\frac{G}{2cr^3} \\left[ \\vec{S} - 3 \\frac{(\\vec{S}\\cdot\\vec{r}) \\vec{r}}{r^2}\\right]"
},
{
"math_id": 41,
"text": "\\vec{F} = m \\vec{ E} + m \\frac{\\vec{v}}{c} \\times \\vec{B},"
},
{
"math_id": 42,
"text": "m"
},
{
"math_id": 43,
"text": "\\vec{v}"
},
{
"math_id": 44,
"text": "\\vec{v} = -\\hat{r} \\,dr/dt"
},
{
"math_id": 45,
"text": "R"
},
{
"math_id": 46,
"text": "2MR^2/5,"
},
{
"math_id": 47,
"text": "\\Vert S\\Vert = 2MR^2\\omega/5,"
},
{
"math_id": 48,
"text": "\\hat{z} \\cdot \\hat{r} = \\cos\\theta"
},
{
"math_id": 49,
"text": "\\theta"
},
{
"math_id": 50,
"text": "r"
},
{
"math_id": 51,
"text": "\\Omega_\\text{LT} = \\frac{2}{5} \\frac{G M \\omega}{c^2 R} \\cos\\theta."
},
{
"math_id": 52,
"text": "\\Omega_\\text{LT} = 2.2 \\cdot 10^{-4} \\text{ arcseconds}/\\text{day}."
},
{
"math_id": 53,
"text": "\n\\frac{\\mathrm{d}\\Omega}{\\mathrm{d}t}\n= \\frac{2GS}{c^2 a^3 \\left(1 - e^2\\right)^{3/2}} =\n \\frac{2G^2 M^2\\chi}{c^3 a^3 \\left(1 - e^2\\right)^{3/2}},\n"
},
{
"math_id": 54,
"text": "\n\\frac{\\mathrm{d}\\mathbf{S}}{\\mathrm{d}t}\n = \\frac{2G}{c^2}\\sum_j \\frac{\\mathbf{L}_j \\times \\mathbf{S}}{a_j^3 \\left(1 - e_j^2\\right)^{3/2}},\n"
}
] | https://en.wikipedia.org/wiki?curid=11898194 |
1190077 | One-time password | Password that can only be used once
A one-time password (OTP), also known as a one-time PIN, one-time passcode, one-time authorization code (OTAC) or dynamic password, is a password that is valid for only one login session or transaction, on a computer system or other digital device. OTPs avoid several shortcomings that are associated with traditional (static) password-based authentication; a number of implementations also incorporate two-factor authentication by ensuring that the one-time password requires access to "something a person has" (such as a small keyring fob device with the OTP calculator built into it, or a smartcard or specific cellphone) as well as "something a person knows" (such as a PIN).
OTP generation algorithms typically make use of pseudorandomness or randomness to generate a shared key or seed, and cryptographic hash functions, which can be used to derive a value but are hard to reverse and therefore difficult for an attacker to obtain the data that was used for the hash. This is necessary because otherwise, it would be easy to predict future OTPs by observing previous ones.
OTPs have been discussed as a possible replacement for, as well as an enhancer to, traditional passwords. On the downside, OTPs can be intercepted or rerouted, and hard tokens can get lost, damaged, or stolen. Many systems that use OTPs do not securely implement them, and attackers can still learn the password through phishing attacks to impersonate the authorized user.
Characteristics.
The most important advantage addressed by OTPs is that, in contrast to static passwords, they are not vulnerable to replay attacks. This means that a potential intruder who manages to record an OTP that was already used to log into a service or to conduct a transaction will not be able to use it, since it will no longer be valid. A second major advantage is that a user who uses the same (or similar) password for multiple systems, is not made vulnerable on all of them, if the password for one of these is gained by an attacker. A number of OTP systems also aim to ensure that a session cannot easily be intercepted or impersonated without knowledge of unpredictable data created during the "previous" session, thus reducing the attack surface further.
There are also different ways to make the user aware of the next OTP to use. Some systems use special electronic security tokens that the user carries and that generate OTPs and show them using a small display. Other systems consist of software that runs on the user's mobile phone. Yet other systems generate OTPs on the server-side and send them to the user using an out-of-band channel such as SMS messaging. Finally, in some systems, OTPs are printed on paper that the user is required to carry.
In some mathematical algorithm schemes, it is possible for the user to provide the server with a static key for use as an encryption key, by only sending a one-time password.
Generation.
Concrete OTP algorithms vary greatly in their details. Various approaches for the generation of OTPs include time-synchronization between the authentication server and the client providing the password, mathematical algorithms that generate each new password from the previous password (a hash chain, used in a predefined order), and mathematical algorithms in which the new password is based on a challenge (e.g., a random number chosen by the authentication server or transaction details) and/or a counter. These approaches are described in the following subsections.
Time-synchronized.
A time-synchronized OTP is usually related to a piece of hardware called a security token (e.g., each user is given a personal token that generates a one-time password). It might look like a small calculator or a keychain charm, with an LCD that shows a number that changes occasionally. Inside the token is an accurate clock that has been synchronized with the clock on the authentication server. On these OTP systems, time is an important part of the password algorithm, since the generation of new passwords is based on the current time rather than, or in addition to, the previous password or a secret key. This token may be a proprietary device, or a mobile phone or similar mobile device which runs software that is proprietary, freeware, or open-source. An example of a time-synchronized OTP standard is time-based one-time password (TOTP). Some applications can be used to keep time-synchronized OTP, like Google Authenticator or a password manager.
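A minimal TOTP sketch in Python in the spirit of RFC 6238 (HMAC-SHA-1, 30-second time steps, six digits); the shared secret below is a placeholder used purely for illustration.
import hmac, hashlib, struct, time

def totp(secret, t=None, step=30, digits=6):
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)                   # 8-byte big-endian time counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

print(totp(b"12345678901234567890"))                   # changes every 30 seconds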
Hash chains.
Each new OTP may be created from the past OTPs used. An example of this type of algorithm, credited to Leslie Lamport, uses a one-way function (call it formula_0). This one-time password system works as follows: a seed (starting value) formula_1 is chosen, and formula_0 is applied to it repeatedly (for example, 1000 times), giving the value formula_3; this value, denoted formula_4, is stored on the target system. The user's first login uses a password formula_5 derived by applying formula_0 only 999 times to the seed, that is, formula_6. The system can validate this password because formula_7 equals formula_4, the value it has stored; the stored value is then replaced by formula_5 and the user is allowed to log in. The next login must be accompanied by formula_8, which is validated in the same way, and so on down the chain.
To get the next password in the series from the previous passwords, one needs to find a way of calculating the inverse function formula_10. Since formula_0 was chosen to be one-way, this is extremely difficult to do. If formula_0 is a cryptographic hash function, which is generally the case, it is assumed to be a computationally intractable task. An intruder who happens to see a one-time password may have access for one time period or login, but it becomes useless once that period expires. The S/KEY one-time password system and its derivative OTP are based on Lamport's scheme.
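A minimal sketch of such a hash chain in Python, using SHA-256 as the one-way function formula_0; the seed and chain length are illustrative, and a real deployment would follow S/KEY or RFC 2289 rather than this toy code.
import hashlib

def f(value):
    return hashlib.sha256(value).digest()     # the one-way function

def make_chain(seed, length=1000):
    chain = [seed]                            # [s, f(s), f^2(s), ..., f^length(s)]
    for _ in range(length):
        chain.append(f(chain[-1]))
    return chain

chain = make_chain(b"illustrative seed", 1000)
stored = chain[-1]                            # the server stores f^1000(s)

password = chain[-2]                          # the client sends f^999(s)
assert f(password) == stored                  # the server verifies it with one more hash
stored = password                             # and stores it for checking the next login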
Challenge–response.
The use of challenge–response one-time passwords requires a user to provide a response to a challenge. For example, this can be done by inputting the value that the token has generated into the token itself. To avoid duplicates, an additional counter is usually involved, so if one happens to get the same challenge twice, this still results in different one-time passwords. However, the computation does not usually involve the previous one-time password; that is, usually, this or another algorithm is used, rather than using both algorithms.
Implementations.
SMS.
A common technology used for the delivery of OTPs is text messaging. Because text messaging is a ubiquitous communication channel, being directly available in nearly all mobile handsets and, through text-to-speech conversion, to any mobile or landline telephone, text messaging has a great potential to reach all consumers with a low total cost to implement. OTP over text messaging may be encrypted using an A5/x standard, which several hacking groups report can be successfully decrypted within minutes or seconds. Additionally, security flaws in the SS7 routing protocol can and have been used to redirect the associated text messages to attackers; in 2017, several O2 customers in Germany were breached in this manner in order to gain access to their mobile banking accounts. In July 2016, the U.S. NIST issued a draft of a special publication with guidance on authentication practices, which discourages the use of SMS as a method of implementing out-of-band two-factor authentication, due to the ability for SMS to be intercepted at scale. Text messages are also vulnerable to SIM swap scams—in which an attacker fraudulently transfers a victim's phone number to their own SIM card, which can then be used to gain access to messages being sent to it.
Hardware tokens.
RSA Security's SecurID is one example of a time-synchronization type of token, along with HID Global's solutions. Like all tokens, these may be lost, damaged, or stolen; additionally, there is an inconvenience as batteries die, especially for tokens without a recharging facility or with a non-replaceable battery. A variant of the proprietary token was proposed by RSA in 2006 and was described as "ubiquitous authentication", in which RSA would partner with manufacturers to add physical SecurID chips to devices such as mobile phones.
Recently, it has become possible to take the electronic components associated with regular keyfob OTP tokens and embed them in a credit card form factor. However, the thinness of the cards, at 0.79mm to 0.84mm thick, prevents standard components or batteries from being used. Special polymer-based batteries must be used which have a much lower battery life than coin (button) cells. Semiconductor components must not only be very flat but must minimise the power used in standby and when operating.
Yubico offers a small USB token with an embedded chip that creates an OTP when a key is pressed and simulates a keyboard to facilitate easily entering a long password. Since it is a USB device it avoids the inconvenience of battery replacement.
A new version of this technology has been developed that embeds a keypad into a payment card of standard size and thickness. The card has an embedded keypad, display, microprocessor and proximity chip.
Soft tokens.
On smartphones, one-time passwords can also be delivered directly through mobile apps, including dedicated authentication apps such as Authy and Google Authenticator, or within a service's existing app, such as in the case of Steam. These systems do not share the same security vulnerabilities as SMS, and do not necessarily require a connection to a mobile network to use.
Hard copies.
In some countries' online banking, the bank sends to the user a numbered list of OTPs that is printed on paper. Other banks send plastic cards with actual OTPs obscured by a layer that the user has to scratch off to reveal a numbered OTP. For every online transaction, the user is required to enter a specific OTP from that list. Some systems ask for the numbered OTPs sequentially, others pseudorandomly choose an OTP to be entered.
Security.
When correctly implemented, OTPs are no longer useful to an attacker within a short time of their initial use. This differs from passwords, which may remain useful to attackers years after the fact.
As with passwords, OTPs are vulnerable to social engineering attacks in which phishers steal OTPs by tricking customers into providing them with their OTPs. Also like passwords, OTPs can be vulnerable to man-in-the-middle attacks, making it important to communicate them via a secure channel, for example Transport Layer Security.
The fact that both passwords and OTP are vulnerable to similar kinds of attacks was a key motivation for Universal 2nd Factor, which is designed to be more resistant to phishing attacks.
OTPs which don't involve a time-synchronization or challenge–response component will necessarily have a longer window of vulnerability if compromised before their use. In late 2005 customers of a Swedish bank were tricked into giving up their pre-supplied one-time passwords. In 2006 this type of attack was used on customers of a US bank.
Standardization.
Many OTP technologies are patented. This makes standardization in this area more difficult, as each company tries to push its own technology. Standards do, however, exist – for example, RFC 1760 (S/KEY), RFC 2289 (OTP), RFC 4226 (HOTP) and RFC 6238 (TOTP).
Use.
Mobile phone.
A mobile phone itself can be a hand-held authentication token. Mobile text messaging is one of the ways of receiving an OTAC through a mobile phone. In this way, a service provider sends a text message that includes an OTAC enciphered by a digital certificate to a user for authentication. According to a report, mobile text messaging provides high security when it uses public key infrastructure (PKI) to provide bidirectional authentication and non-repudiation, in accordance with theoretical analysis.
SMS as a method of receiving OTACs is broadly used in our daily lives for purposes such as banking, credit/debit cards, and security.
Telephone.
There are two methods of using a telephone to verify a user’s authentication.
With the first method, a service provider shows an OTAC on the computer or smartphone screen and then makes an automatic telephone call to a number that has already been authenticated. Then the user enters the OTAC that appears on their screen into the telephone keypad.
With the second method, which is used to authenticate and activate Microsoft Windows, the user calls a number that is provided by the service provider and enters the OTAC that the phone system gives the user.
Computer.
In the field of computer technology, one-time authorization codes (OTACs) are known to be used through email, in a broad sense, and through web applications, in a professional sense.
Post.
It is possible to send OTACs to a user via post or registered mail. When a user requests an OTAC, the service provider sends it via post or registered mail and then the user can use it for authentication. For example, in the UK, some banks send their OTAC for Internet banking authorization via post or registered mail.
Expansion.
Quantum cryptography, which is based on the uncertainty principle, is one of the ideal methods to produce an OTAC.
Moreover, it has been discussed and used not only with enciphered codes for authentication but also with graphical one-time PIN authentication, such as QR codes, which provide a decentralized access control technique with anonymous authentication.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "s"
},
{
"math_id": 2,
"text": "f(s)"
},
{
"math_id": 3,
"text": "f(f(f(\\ldots f(s) \\ldots )))"
},
{
"math_id": 4,
"text": "f^{1000}(s)"
},
{
"math_id": 5,
"text": "p"
},
{
"math_id": 6,
"text": "f^{999}(s)"
},
{
"math_id": 7,
"text": "f(p)"
},
{
"math_id": 8,
"text": "f^{998}(s)"
},
{
"math_id": 9,
"text": "x"
},
{
"math_id": 10,
"text": "f^{-1}"
}
] | https://en.wikipedia.org/wiki?curid=1190077 |
1190262 | Proper map | Map between topological spaces with the property that the preimage of every compact is compact
In mathematics, a function between topological spaces is called proper if inverse images of compact subsets are compact. In algebraic geometry, the analogous concept is called a proper morphism.
Definition.
There are several competing definitions of a "proper function".
Some authors call a function formula_0 between two topological spaces proper if the preimage of every compact set in formula_1 is compact in formula_2
Other authors call a map formula_3 proper if it is continuous and closed with compact fibers; that is if it is a continuous closed map and the preimage of every point in formula_1 is compact. The two definitions are equivalent if formula_1 is locally compact and Hausdorff.
If formula_4 is Hausdorff and formula_1 is locally compact Hausdorff then proper is equivalent to universally closed. A map is universally closed if for any topological space formula_6 the map formula_7 is closed. In the case that formula_1 is Hausdorff, this is equivalent to requiring that for any map formula_8 the pullback formula_9 be closed, as follows from the fact that formula_10 is a closed subspace of formula_11
An equivalent, possibly more intuitive definition when formula_4 and formula_1 are metric spaces is as follows: we say an infinite sequence of points formula_12 in a topological space formula_4 escapes to infinity if, for every compact set formula_13 only finitely many points formula_14 are in formula_15 Then a continuous map formula_0 is proper if and only if for every sequence of points formula_16 that escapes to infinity in formula_17 the sequence formula_18 escapes to infinity in formula_5
Generalization.
It is possible to generalize the notion of proper maps of topological spaces to locales and topoi.
Citations.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "f : X \\to Y"
},
{
"math_id": 1,
"text": "Y"
},
{
"math_id": 2,
"text": "X."
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "X"
},
{
"math_id": 5,
"text": "Y."
},
{
"math_id": 6,
"text": "Z"
},
{
"math_id": 7,
"text": "f \\times \\operatorname{id}_Z : X \\times Z \\to Y \\times Z"
},
{
"math_id": 8,
"text": "Z \\to Y"
},
{
"math_id": 9,
"text": "X \\times_Y Z \\to Z"
},
{
"math_id": 10,
"text": "X \\times_YZ"
},
{
"math_id": 11,
"text": "X \\times Z."
},
{
"math_id": 12,
"text": "\\{p_i\\}"
},
{
"math_id": 13,
"text": "S \\subseteq X"
},
{
"math_id": 14,
"text": "p_i"
},
{
"math_id": 15,
"text": "S."
},
{
"math_id": 16,
"text": "\\left\\{p_i\\right\\}"
},
{
"math_id": 17,
"text": "X,"
},
{
"math_id": 18,
"text": "\\left\\{f\\left(p_i\\right)\\right\\}"
},
{
"math_id": 19,
"text": "K \\subseteq Y"
},
{
"math_id": 20,
"text": "C \\subseteq X"
},
{
"math_id": 21,
"text": "f(C) = K."
}
] | https://en.wikipedia.org/wiki?curid=1190262 |
1190270 | Plücker embedding | In mathematics, the Plücker map embeds the Grassmannian formula_0, whose elements are "k"-dimensional subspaces of an "n"-dimensional vector space "V", either real or complex, in a projective space, thereby realizing it as a projective algebraic variety. More precisely, the Plücker map embeds formula_0 into the projectivization formula_1 of the formula_2-th exterior power of formula_3. The image is algebraic, consisting of the intersection of a number of quadrics defined by the Plücker relations (see below).
The Plücker embedding was first defined by Julius Plücker in the case formula_4 as a way of describing the lines in three-dimensional space (which, as projective lines in real projective space, correspond to two-dimensional subspaces of a four-dimensional vector space). The image of that embedding is the Klein quadric in RP5.
Hermann Grassmann generalized Plücker's embedding to arbitrary "k" and "n". The homogeneous coordinates of the image of the Grassmannian formula_5 under the Plücker embedding, relative to the basis in the exterior space formula_6 corresponding to the natural basis in formula_7 (where formula_8 is the base field) are called Plücker coordinates.
Definition.
Denoting by formula_9 the formula_10-dimensional vector space over the field formula_8, and by
formula_11 the Grassmannian of formula_2-dimensional subspaces of formula_3, the Plücker embedding is the map "ι" defined by
formula_12
where formula_13 is a basis for the element formula_14 and formula_15 is the projective equivalence class of the element formula_16 of the formula_2th exterior power of formula_3.
This is an embedding of the Grassmannian into the projectivization formula_1. The image can be completely characterized as the intersection of a number of quadrics, the Plücker quadrics (see below), which are expressed by homogeneous quadratic relations on the Plücker coordinates (see below) that derive from linear algebra.
The bracket ring appears as the ring of polynomial functions on formula_6.
Plücker relations.
The image under the Plücker embedding satisfies a simple set of homogeneous quadratic relations, usually called the Plücker relations, or Grassmann–Plücker relations, defining the intersection of a number of quadrics in formula_17. This shows that the Grassmannian embeds as an algebraic subvariety of formula_1 and gives another method of constructing the Grassmannian. To state the Grassmann–Plücker relations, let formula_18 be the formula_2-dimensional subspace spanned by the basis represented by column vectors formula_19.
Let formula_20 be the formula_21 matrix of homogeneous coordinates, whose columns are formula_19. Then the equivalence class formula_22 of all such homogeneous coordinates matrices
formula_23 related to each other by right multiplication by an invertible formula_24 matrix formula_25 may be identified with the element formula_26. For any ordered sequence formula_27
of formula_28 integers, let formula_29 be the determinant of the formula_30 matrix whose rows are the rows formula_31 of formula_20. Then, up to projectivization, formula_32 are the Plücker coordinates of the element formula_33 whose homogeneous coordinates are formula_34. They are the linear coordinates of the image formula_35 of formula_36 under the Plücker map, relative to the standard basis in the exterior space formula_37. Changing the basis defining the homogeneous coordinate matrix formula_20 just changes the Plücker coordinates by a nonzero scaling factor equal to the determinant of the change of basis matrix formula_39,
and hence just the representative of the projective equivalence class in formula_37.
For any two ordered sequences:
formula_40
of positive integers formula_41, the following homogeneous equations are valid, and determine the image of formula_26 under the Plücker map:
\sum_{l=1}^{k+1} (-1)^{l}\, W_{i_1, \dots, i_{k-1}, j_l}\, W_{j_1, \dots, \hat{j}_l, \dots, j_{k+1}} = 0 \qquad\qquad (1)
where formula_42 denotes the sequence formula_43 with the term formula_44 omitted. These are generally referred to as the Plücker relations.
When dim("V")
4 and "k"
2, we get formula_45, the simplest Grassmannian which is not a projective space, and the above reduces to a single equation. Denoting the coordinates of formula_46 by
formula_47
the image of formula_45 under the Plücker map is defined by the single equation
formula_48
In general, many more equations are needed to define the image of the Plücker embedding, as in (1),
but these are not, in general, algebraically independent. The maximal number of algebraically independent relations (on Zariski open sets)
is given by the difference of dimension between formula_1 and formula_49, which is formula_50
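For formula_45 the single quadratic relation above is easy to check numerically. The sketch below (Python with NumPy assumed; the random 4×2 coordinate matrix is an illustrative choice) computes the six Plücker coordinates of a 2-plane in a 4-dimensional space, verifies the relation, and confirms that a change of basis only rescales all coordinates by the determinant of the change-of-basis matrix, leaving the projective point unchanged.
<syntaxhighlight lang="python">
import itertools
import numpy as np

rng = np.random.default_rng(0)

def pluecker_coordinates(W):
    """All k-by-k minors of an n-by-k homogeneous coordinate matrix W,
    indexed by increasing row tuples (i_1 < ... < i_k)."""
    n, k = W.shape
    return {rows: np.linalg.det(W[list(rows), :])
            for rows in itertools.combinations(range(n), k)}

# A random 2-plane in a 4-dimensional space, given by a 4x2 coordinate matrix.
W = rng.standard_normal((4, 2))
P = pluecker_coordinates(W)

# The single Plücker relation for Gr(2, 4):  W12*W34 - W13*W24 + W14*W23 = 0
rel = (P[(0, 1)] * P[(2, 3)]
       - P[(0, 2)] * P[(1, 3)]
       + P[(0, 3)] * P[(1, 2)])
print("Plücker relation residual:", rel)          # ~ 1e-16, i.e. zero up to round-off

# Right multiplication by an invertible 2x2 matrix g rescales every minor by det(g),
# so the projective equivalence class is unchanged.
g = rng.standard_normal((2, 2))
Pg = pluecker_coordinates(W @ g)
print("common ratio of coordinates:", Pg[(0, 1)] / P[(0, 1)], " det(g):", np.linalg.det(g))
</syntaxhighlight>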
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{Gr}(k,V)"
},
{
"math_id": 1,
"text": "\\mathbf{P}({\\textstyle\\bigwedge}^k V)"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "V"
},
{
"math_id": 4,
"text": "k=2, n=4"
},
{
"math_id": 5,
"text": " \\mathbf{Gr}(k,V)"
},
{
"math_id": 6,
"text": "{\\textstyle\\bigwedge}^k V "
},
{
"math_id": 7,
"text": "V = K^n"
},
{
"math_id": 8,
"text": "K"
},
{
"math_id": 9,
"text": "V= K^n"
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": " \\mathbf{Gr}(k, V)"
},
{
"math_id": 12,
"text": "\\begin{align}\n\\iota \\colon \\mathbf{Gr}(k, V) &{}\\rightarrow \\mathbf{P}({\\textstyle\\bigwedge}^k V),\\\\\n\\iota \\colon \\mathcal{W}:=\\operatorname{span}( w_1, \\ldots, w_k ) &{}\\mapsto [ w_1 \\wedge \\cdots \\wedge w_k ], \\end{align}"
},
{
"math_id": 13,
"text": "(w_1, \\dots , w_k)"
},
{
"math_id": 14,
"text": " \\mathcal{W}\\in \\mathbf{Gr}(k, V)"
},
{
"math_id": 15,
"text": " [ w_1 \\wedge \\cdots \\wedge w_k ]"
},
{
"math_id": 16,
"text": " w_1 \\wedge \\cdots \\wedge w_k \\in {\\textstyle\\bigwedge}^k V "
},
{
"math_id": 17,
"text": " \\mathbf{P}({\\textstyle\\bigwedge}^k V) "
},
{
"math_id": 18,
"text": "\\mathcal{W}\\in \\mathbf{Gr}(k, V)"
},
{
"math_id": 19,
"text": "W_1, \\dots, W_k"
},
{
"math_id": 20,
"text": " W "
},
{
"math_id": 21,
"text": " n \\times k"
},
{
"math_id": 22,
"text": "[W]"
},
{
"math_id": 23,
"text": "Wg \\sim W"
},
{
"math_id": 24,
"text": "k \\times k "
},
{
"math_id": 25,
"text": "g \\in \\mathbf{GL}(k, K) "
},
{
"math_id": 26,
"text": "\\mathcal{W}"
},
{
"math_id": 27,
"text": "1\\le i_1 < \\cdots < i_k \\le n "
},
{
"math_id": 28,
"text": " k "
},
{
"math_id": 29,
"text": " W_{i_1, \\dots , i_k} "
},
{
"math_id": 30,
"text": "k \\times k"
},
{
"math_id": 31,
"text": "(i_1, \\dots i_k)"
},
{
"math_id": 32,
"text": "\\{ W_{i_1, \\dots , i_k}\\} "
},
{
"math_id": 33,
"text": "\\mathcal{W}\\sim [W] \\in \\mathbf{Gr}(k, V)"
},
{
"math_id": 34,
"text": "W"
},
{
"math_id": 35,
"text": "\\iota(\\mathcal{W})"
},
{
"math_id": 36,
"text": "\\mathcal{W} \\in \\mathbf{Gr}(k, V) "
},
{
"math_id": 37,
"text": " {\\textstyle\\bigwedge}^k V "
},
{
"math_id": 38,
"text": " M "
},
{
"math_id": 39,
"text": "g"
},
{
"math_id": 40,
"text": " i_1 < i_2 < \\cdots < i_{k-1}, \\quad j_1 < j_2 < \\cdots < j_{k+1}"
},
{
"math_id": 41,
"text": " 1 \\le i_l, j_m \\le n "
},
{
"math_id": 42,
"text": " j_1, \\dots , \\hat{j}_l \\dots j_{k+1} "
},
{
"math_id": 43,
"text": " j_1, \\dots , \\dots j_{k+1} "
},
{
"math_id": 44,
"text": " j_l "
},
{
"math_id": 45,
"text": "\\mathbf{Gr}(2, V)"
},
{
"math_id": 46,
"text": " {\\textstyle\\bigwedge}^2 V"
},
{
"math_id": 47,
"text": " W_{ij} = -W_{ji}, \\quad 1\\le i,j \\le 4,"
},
{
"math_id": 48,
"text": " W_{12}W_{34} - W_{13}W_{24} + W_{14}W_{23}=0. "
},
{
"math_id": 49,
"text": "\\mathbf{Gr}(k, V)"
},
{
"math_id": 50,
"text": " \\tbinom{n}{k} - k(n-k) -1. "
}
] | https://en.wikipedia.org/wiki?curid=1190270 |
11904406 | Mazur's lemma | On strongly convergent combinations of a weakly convergent sequence in a Banach space
In mathematics, Mazur's lemma is a result in the theory of normed vector spaces. It shows that any weakly convergent sequence in a normed space has a sequence of convex combinations of its members that converges strongly to the same limit, and is used in the proof of Tonelli's theorem.
Statement of the lemma.
<templatestyles src="Math_theorem/styles.css" />
Mazur's theorem — Let formula_0 be a normed vector space and let formula_1 be a sequence that converges weakly to some formula_2.
Then there exists a sequence formula_3 made up of finite convex combinations of the formula_4's, of the form
formula_5
such that formula_6 strongly, that is, formula_7.
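A standard illustration, sketched below in Python (NumPy assumed; the equal-weight averages are one admissible choice of convex coefficients), is the sequence of standard basis vectors of ℓ²: it converges weakly to 0 while every term has norm 1, yet averaging m consecutive basis vectors produces a vector of norm 1/sqrt(m), which does converge strongly to the weak limit.
<syntaxhighlight lang="python">
import numpy as np

N = 2000                       # truncate l^2 to a large finite dimension for the demo
def e(j):                      # j-th standard basis vector (truncated)
    v = np.zeros(N)
    v[j] = 1.0
    return v

# e_j -> 0 weakly (every fixed coordinate is eventually zero), but ||e_j|| = 1,
# so the sequence itself does not converge strongly.
print("norms of e_j:", [np.linalg.norm(e(j)) for j in (0, 10, 100)])

# Mazur: convex combinations of the tail converge strongly to the weak limit 0.
# Here y averages m consecutive basis vectors with equal weights 1/m.
for m in (10, 100, 1000):
    y = sum(e(j) for j in range(m)) / m
    print(f"norm of the average of {m:4d} basis vectors:", np.linalg.norm(y))   # = 1/sqrt(m)
</syntaxhighlight>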
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(X, \\lVert\\cdot\\rVert)"
},
{
"math_id": 1,
"text": "\\left\\{x_j\\right\\}_{j \\in \\N}\\subset X"
},
{
"math_id": 2,
"text": "x\\in X"
},
{
"math_id": 3,
"text": "\\left\\{y_k\\right\\}_{k \\in \\N}\\subset X"
},
{
"math_id": 4,
"text": "x_j"
},
{
"math_id": 5,
"text": "y_k=\\sum_{j\\ge k}\\lambda_j^{(k)}x_j"
},
{
"math_id": 6,
"text": "y_k\\to x"
},
{
"math_id": 7,
"text": "\\lVert y_k-x\\rVert\\to 0"
}
] | https://en.wikipedia.org/wiki?curid=11904406 |
11905171 | Tonelli's theorem (functional analysis) | In mathematics, Tonelli's theorem in functional analysis is a fundamental result on the weak lower semicontinuity of nonlinear functionals on "L""p" spaces. As such, it has major implications for functional analysis and the calculus of variations. Roughly, it shows that weak lower semicontinuity for integral functionals is equivalent to convexity of the integral kernel. The result is attributed to the Italian mathematician Leonida Tonelli.
Statement of the theorem.
Let formula_0 be a bounded domain in formula_1-dimensional Euclidean space formula_2 and let formula_3 be a continuous extended real-valued function. Define a nonlinear functional formula_4 on functions formula_5 by
formula_6
Then formula_4 is sequentially weakly lower semicontinuous on the formula_7 space formula_8 for formula_9 and weakly-∗ lower semicontinuous on formula_10 if and only if formula_11 is convex.
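The role of convexity can be seen numerically with a rapidly oscillating sequence. In the sketch below (Python with NumPy assumed; the square-wave sequence and the two sample integrands are illustrative choices), u_n(x) = sign(sin(2πnx)) on Ω = (0,1) converges weakly to u ≡ 0; for the convex integrand f(u) = u^2 the lower semicontinuity inequality F[u] ≤ liminf F[u_n] holds, while for the non-convex double-well integrand f(u) = (u^2 - 1)^2 it fails.
<syntaxhighlight lang="python">
import numpy as np

x = np.linspace(0.0, 1.0, 200_001)[:-1]     # uniform grid on Omega = (0, 1)
dx = x[1] - x[0]

def F(f, u):                                 # F[u] = integral over Omega of f(u(x)) dx
    return np.sum(f(u)) * dx

u_limit = np.zeros_like(x)                   # the weak limit u = 0
f_convex = lambda u: u**2                    # convex integrand
f_nonconvex = lambda u: (u**2 - 1.0)**2      # non-convex (double-well) integrand

for n in (10, 100, 1000):
    u_n = np.sign(np.sin(2 * np.pi * n * x)) # oscillating square wave, u_n -> 0 weakly
    print(f"n={n:5d}  convex: F[u_n]={F(f_convex, u_n):.3f} >= F[0]={F(f_convex, u_limit):.3f}   "
          f"non-convex: F[u_n]={F(f_nonconvex, u_n):.3f} < F[0]={F(f_nonconvex, u_limit):.3f}")
# Convex integrand: liminf F[u_n] = 1 >= F[0] = 0, lower semicontinuity holds.
# Double well: F[u_n] is near 0 while F[0] = 1, so weak lower semicontinuity fails.
</syntaxhighlight>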
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Omega"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "\\Reals^n"
},
{
"math_id": 3,
"text": "f : \\Reals^m \\to \\Reals \\cup \\{\\pm \\infty\\}"
},
{
"math_id": 4,
"text": "F"
},
{
"math_id": 5,
"text": "u : \\Omega \\to \\Reals^m"
},
{
"math_id": 6,
"text": "F[u] = \\int_{\\Omega} f(u(x)) \\, \\mathrm{d} x."
},
{
"math_id": 7,
"text": "L^p"
},
{
"math_id": 8,
"text": "L^p(\\Omega, \\Reals^m)"
},
{
"math_id": 9,
"text": "1 < p < +\\infty"
},
{
"math_id": 10,
"text": "L^\\infty(\\Omega, \\Reals^m)"
},
{
"math_id": 11,
"text": "f"
}
] | https://en.wikipedia.org/wiki?curid=11905171 |
1190521 | Domineering | Mathematical game
Domineering (also called Stop-Gate or Crosscram) is a mathematical game that can be played on any collection of squares on a sheet of graph paper. For example, it can be played on a 6×6 square, a rectangle, an entirely irregular polyomino, or a combination of any number of such components. Two players have a collection of dominoes which they place on the grid in turn, covering up squares. One player places tiles vertically, while the other places them horizontally. (Traditionally, these players are called "Left" and "Right", respectively, or "V" and "H". Both conventions are used in this article.)
As in most games in combinatorial game theory, the first player who cannot move loses.
Domineering is a partisan game, in that players use different pieces: the impartial version of the game is Cram.
Basic examples.
Single box.
Other than the empty game, where there is no grid, the simplest game is a single box.
In this game, clearly, neither player can move. Since it is a second-player win, it is therefore a zero game.
Horizontal rows.
This game is a 2-by-1 grid. There is a convention of assigning the game a positive number when Left is winning and a negative one when Right is winning. In this case, Left has no moves, while Right can play a domino to cover the entire board, leaving nothing, which is clearly a zero game. Thus in surreal number notation, this game is {|0} = −1. This makes sense, as this grid is a 1-move advantage for Right.
This game is also {|0} = −1, because a single box is unplayable.
This grid is the first case of a choice. Right "could" play the left two boxes, leaving −1. The rightmost boxes leave −1 as well. He could also play the middle two boxes, leaving two single boxes. This option leaves 0+0 = 0. Thus this game can be expressed as {|0,−1}. This is −2. If this game is played in conjunction with other games, this is two free moves for Right.
Vertical rows.
Vertical columns are evaluated in the same way. If there is a row of 2"n" or 2"n"+1 boxes, it counts as −"n". A column of such size counts as +"n".
More complex grids.
(The board considered here is the 2×2 square.)
This is a more complex game. If Left goes first, either move leaves a 1×2 grid, which is +1. Right, on the other hand, can move to −1. Thus the surreal number notation is {1|−1}. However, this is not a surreal number because 1 > −1. This is a Game but not a number. The notation for this is ±1, and it is a hot game, because each player wants to move here.
This is a 2×3 grid, which is even more complex, but, just like any Domineering game, it can be broken down by looking at what the various moves for Left and Right are. Left can take the left column (or, equivalently, the right column) and move to ±1, but it is clearly a better idea to split the middle, leaving two separate games, each worth +1. Thus Left's best move is to +2. Right has four "different" moves, but they all leave the following shape in some rotation:
(The remaining shape is an L-shaped region of four squares: one complete row of three together with the end square of the other row.)
This game is not a hot game but a cold game, because each move hurts the player making it, as we can see by examining the moves. Left can move to −1, Right can move to 0 or +1. Thus this game is {−1|0,1} = {−1|0} = −1⁄2.
Our 2×3 grid, then, is {2|−1⁄2}, which can also be represented by the mean value, 3⁄4, together with the bonus for moving (the "temperature"), 1+1⁄4, thus: formula_0
High-level play.
The Mathematical Sciences Research Institute held a Domineering tournament, with a $500 prize for the winner. This game was played on an 8×8 board. The winner was mathematician Dan Calistrate, who defeated David Wolfe in the final. The tournament was detailed in Richard J. Nowakowski's "Games of No Chance" (p. 85).
Winning strategy.
A problem about Domineering is to compute the winning strategy for large boards, and particularly square boards. In 2000, Dennis Breuker, Jos Uiterwijk and Jaap van den Herik computed and published the solution for the 8x8 board. The 9x9 board followed soon after some improvements of their program. Then, in 2002, Nathan Bullock solved the 10x10 board, as part of his thesis on Domineering. The 11x11 board has been solved by Jos Uiterwijk in 2016.
Domineering is a first-player win for the 2x2, 3x3, 4x4, 6x6, 7x7, 8x8, 9x9, 10x10, and 11x11 square boards, and a second-player win for the 1x1 and 5x5 boards. Some other known values for rectangular boards can be found on the site of Nathan Bullock.
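For very small boards these results can be confirmed by brute force. The following Python sketch (an illustrative, unoptimized search, not the techniques used for the larger boards cited above) represents a position by its set of free cells, declares the player to move the loser when no placement is available, and reproduces the quoted outcomes for the 1×1 through 4×4 squares.
<syntaxhighlight lang="python">
from functools import lru_cache

def winner(rows, cols, first):
    """True if the player 'first' ('V' or 'H') wins moving first on an empty
    rows x cols board, assuming perfect play by both sides."""
    full = frozenset((r, c) for r in range(rows) for c in range(cols))

    @lru_cache(maxsize=None)
    def to_move_wins(free, player):
        if player == 'V':   # vertical dominoes occupy (r, c) and (r + 1, c)
            moves = [((r, c), (r + 1, c)) for (r, c) in free if (r + 1, c) in free]
        else:               # horizontal dominoes occupy (r, c) and (r, c + 1)
            moves = [((r, c), (r, c + 1)) for (r, c) in free if (r, c + 1) in free]
        opponent = 'H' if player == 'V' else 'V'
        # The player to move wins iff some move leaves the opponent in a losing position;
        # with no moves at all, 'any' over an empty list is False: the mover loses.
        return any(not to_move_wins(free - set(m), opponent) for m in moves)

    return to_move_wins(full, first)

for n in (1, 2, 3, 4):
    v_first, h_first = winner(n, n, 'V'), winner(n, n, 'H')
    outcome = ("first-player win" if v_first and h_first
               else "second-player win" if not v_first and not h_first
               else "depends on who starts")
    print(f"{n}x{n}: V moving first wins: {v_first}, H moving first wins: {h_first} -> {outcome}")
</syntaxhighlight>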
Cram.
Cram is the impartial version of Domineering. The only difference in the rules is that each player may place their dominoes in either orientation. It seems only a small variation in the rules, but it results in a completely different game that can be analyzed with the Sprague–Grundy theorem. | [
{
"math_id": 0,
"text": "\\textstyle\\left\\{2 \\left| -\\frac{1}{2}\\right.\\right\\} = \\frac{3}{4} \\pm \\frac{5}{4}"
}
] | https://en.wikipedia.org/wiki?curid=1190521 |
11908069 | Polyconvex function | In the calculus of variations, the notion of polyconvexity is a generalization of the notion of convexity for functions defined on spaces of matrices. The notion of polyconvexity was introduced by John M. Ball as a sufficient conditions for proving the existence of energy minimizers in nonlinear elasticity theory. It is satisfied by a large class of hyperelastic stored energy densities, such as Mooney-Rivlin and Ogden materials. The notion of polyconvexity is related to the notions of convexity, quasiconvexity and rank-one convexity through the following diagram:
formula_0
Motivation.
Let formula_1 be an open bounded domain, formula_2 and formula_3 denote the Sobolev space of mappings from formula_4 to formula_5. A typical problem in the calculus of variations is to minimize a functional, formula_6 of the form
formula_7,
where the energy density function formula_8 satisfies formula_9-growth, i.e., formula_10 for some formula_11 and formula_12. It is well-known from a theorem of Morrey and Acerbi-Fusco that a necessary and sufficient condition for formula_13 to be weakly lower semicontinuous on formula_3 is that formula_14 is quasiconvex for almost every formula_15. With coercivity assumptions on formula_16 and boundary conditions on formula_17, this leads to the existence of minimizers for formula_13 on formula_3. However, in many applications, the assumption of formula_9-growth on the energy density is often too restrictive. In the context of elasticity, this is because the energy is required to grow unboundedly to formula_18 as local measures of volume approach zero. This led Ball to define the more restrictive notion of polyconvexity to prove the existence of energy minimizers in nonlinear elasticity.
Definition.
A function formula_19 is said to be polyconvex if there exists a "convex" function formula_20 such that
formula_21
where formula_22 is such that
formula_23
Here, formula_24 stands for the matrix of all formula_25 minors of the matrix formula_26, formula_27 and
formula_28
where formula_29.
When formula_30, formula_31 and when formula_32, formula_33, where formula_34 denotes the cofactor matrix of formula_35.
In the above definitions, the range of formula_16 can also be extended to formula_36.
If formula_16 is polyconvex and there exist formula_39 and formula_40 such that formula_41 for every formula_42, then formula_16 is convex.
When formula_43, the function formula_44 is polyconvex but not convex.
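Both statements can be probed numerically. The sketch below (Python with NumPy assumed; the test matrices are hand-picked) exhibits midpoint-convexity violations for det F and for the function formula_44, while polyconvexity holds by definition: det F is itself a component of T(F), and 1/d for d > 0 (extended by +∞ for d ≤ 0) is a convex function of that component.
<syntaxhighlight lang="python">
import numpy as np

def midpoint_gap(f, A, B):
    """f((A+B)/2) - (f(A)+f(B))/2; a positive value violates (midpoint) convexity."""
    return f((A + B) / 2) - 0.5 * (f(A) + f(B))

det = np.linalg.det

# det F is polyconvex (Phi(F, d) = d is linear in T(F) = (F, d)) but not convex in F:
A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 1.0]])
print("det:   midpoint gap =", midpoint_gap(det, A, B))       # 0.25 > 0

# The function 1/det F (for det F > 0) from the example above is also not convex:
f = lambda F: 1.0 / det(F)
R = lambda t: np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
A, B = R(1.0), R(-1.0)                                        # rotations: det = 1, f = 1
print("1/det: midpoint gap =", midpoint_gap(f, A, B))         # 1/cos(1)^2 - 1 > 0
</syntaxhighlight>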
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f\\text{ convex}\\implies f\\text{ polyconvex}\\implies f\\text{ quasiconvex}\\implies f\\text{ rank-one convex}"
},
{
"math_id": 1,
"text": "\\Omega\\subset\\mathbb{R}^n"
},
{
"math_id": 2,
"text": "u:\\Omega\\rightarrow\\mathbb{R}^m"
},
{
"math_id": 3,
"text": "W^{1,p}(\\Omega,\\mathbb{R}^m)"
},
{
"math_id": 4,
"text": "\\Omega"
},
{
"math_id": 5,
"text": "\\mathbb{R}^m"
},
{
"math_id": 6,
"text": "E:W^{1,p}(\\Omega,\\mathbb{R}^m)\\rightarrow\\mathbb{R}"
},
{
"math_id": 7,
"text": "E[u]=\\int_\\Omega f(x,\\nabla u(x))dx"
},
{
"math_id": 8,
"text": "f:\\Omega\\times\\mathbb{R}^{m\\times n}\\rightarrow[0,\\infty)"
},
{
"math_id": 9,
"text": "p"
},
{
"math_id": 10,
"text": "|f(x,A)|\\leq M(1+|A|^p)"
},
{
"math_id": 11,
"text": "M>0"
},
{
"math_id": 12,
"text": "p\\in(1,\\infty)"
},
{
"math_id": 13,
"text": "E"
},
{
"math_id": 14,
"text": "f(x,\\cdot)"
},
{
"math_id": 15,
"text": "x\\in\\Omega"
},
{
"math_id": 16,
"text": "f"
},
{
"math_id": 17,
"text": "u"
},
{
"math_id": 18,
"text": "+\\infty"
},
{
"math_id": 19,
"text": "f:\\mathbb{R}^{m\\times n}\\rightarrow\\mathbb{R}"
},
{
"math_id": 20,
"text": "\\Phi:\\mathbb{R}^{\\tau(m,n)}\\rightarrow\\mathbb{R}"
},
{
"math_id": 21,
"text": " f(F)=\\Phi(T(F))"
},
{
"math_id": 22,
"text": "T:\\mathbb{R}^{m\\times n}\\rightarrow\\mathbb{R}^{\\tau(m,n)}"
},
{
"math_id": 23,
"text": "T(F):=(F,\\text{adj}_2(F),...,\\text{adj}_{m\\wedge n}(F))."
},
{
"math_id": 24,
"text": "\\text{adj}_s"
},
{
"math_id": 25,
"text": "s\\times s"
},
{
"math_id": 26,
"text": "F\\in\\mathbb{R}^{m\\times n}"
},
{
"math_id": 27,
"text": "2\\leq s\\leq m\\wedge n:=\\min(m,n)"
},
{
"math_id": 28,
"text": "\\tau(m,n):=\\sum_{s=1}^{m\\wedge n}\\sigma(s),"
},
{
"math_id": 29,
"text": "\\sigma(s):=\\binom{m}{s}\\binom{n}{s}"
},
{
"math_id": 30,
"text": "m=n=2"
},
{
"math_id": 31,
"text": "T(F)=(F,\\det F)"
},
{
"math_id": 32,
"text": "m=n=3"
},
{
"math_id": 33,
"text": "T(F)=(F,\\text{cof}\\,F,\\det F)"
},
{
"math_id": 34,
"text": "\\text{cof}\\,F"
},
{
"math_id": 35,
"text": "F"
},
{
"math_id": 36,
"text": "\\mathbb{R}\\cup\\{+\\infty\\}"
},
{
"math_id": 37,
"text": "m=1"
},
{
"math_id": 38,
"text": "n=1"
},
{
"math_id": 39,
"text": "\\alpha\\geq 0"
},
{
"math_id": 40,
"text": "0\\leq p<2"
},
{
"math_id": 41,
"text": " f(F)\\leq\\alpha (1+|F|^p)"
},
{
"math_id": 42,
"text": " F\\in \\mathbb{R}^{m\\times n}"
},
{
"math_id": 43,
"text": "m=n"
},
{
"math_id": 44,
"text": "f(A) = \\begin{cases} \\frac1{\\det (A)}, & \\det (A) > 0; \\\\ + \\infty, & \\det (A) \\leq 0; \\end{cases}"
}
] | https://en.wikipedia.org/wiki?curid=11908069 |
1190816 | Splitting of prime ideals in Galois extensions | In mathematics, the interplay between the Galois group "G" of a Galois extension "L" of a number field "K", and the way the prime ideals "P" of the ring of integers "O""K" factorise as products of prime ideals of "O""L", provides one of the richest parts of algebraic number theory. The splitting of prime ideals in Galois extensions is sometimes attributed to David Hilbert by calling it Hilbert theory. There is a geometric analogue, for ramified coverings of Riemann surfaces, which is simpler in that only one kind of subgroup of "G" need be considered, rather than two. This was certainly familiar before Hilbert.
Definitions.
Let "L"/"K" be a finite extension of number fields, and let "OK" and "OL" be the corresponding ring of integers of "K" and "L", respectively, which are defined to be the integral closure of the integers Z in the field in question.
formula_0
Finally, let "p" be a non-zero prime ideal in "OK", or equivalently, a maximal ideal, so that the residue "OK"/"p" is a field.
From the basic theory of one-dimensional rings follows the existence of a unique decomposition
formula_1
of the ideal "pOL" generated in "OL" by "p" into a product of distinct maximal ideals "P""j", with multiplicities "e""j".
The field "F" = "OK"/"p" naturally embeds into "F""j" = "OL"/"P""j" for every "j", the degree "f""j" = ["OL"/"P""j" : "OK"/"p"] of this residue field extension is called inertia degree of "P""j" over "p".
The multiplicity "e""j" is called ramification index of "P""j" over "p". If it is bigger than 1 for some "j", the field extension "L"/"K" is called ramified at "p" (or we say that "p" ramifies in "L", or that it is ramified in "L"). Otherwise, "L"/"K" is called unramified at "p". If this is the case then by the Chinese remainder theorem the quotient "OL"/"pOL" is a product of fields "F""j". The extension "L"/"K" is ramified in exactly those primes that divide the relative discriminant, hence the extension is unramified in all but finitely many prime ideals.
Multiplicativity of ideal norm implies
formula_2
If "f""j" = "e""j" = 1 for every "j" (and thus "g" = ["L" : "K"]), we say that "p" splits completely in "L". If "g" = 1 and "f""1" = 1 (and so "e""1" = ["L" : "K"]), we say that "p" ramifies completely in "L". Finally, if "g" = 1 and "e""1" = 1 (and so "f""1" = ["L" : "K"]), we say that "p" is inert in "L".
The Galois situation.
In the following, the extension "L"/"K" is assumed to be a Galois extension. Then the prime avoidance lemma can be used to show the Galois group formula_3 acts transitively on the "P""j". That is, the prime ideal factors of "p" in "L" form a single orbit under the automorphisms of "L" over "K". From this and the unique factorisation theorem, it follows that "f" = "f""j" and "e" = "e""j" are independent of "j"; something that certainly need not be the case for extensions that are not Galois. The basic relations then read
formula_4.
and
formula_5
The relation above shows that ["L" : "K"]/"ef" equals the number "g" of prime factors of "p" in "OL". By the orbit-stabilizer formula this number is also equal to |"G"|/|"D""P""j"| for every "j", where "D""P""j", the decomposition group of "P""j", is the subgroup of elements of "G" sending a given "P""j" to itself. Since the degree of "L"/"K" and the order of "G" are equal by basic Galois theory, it follows that the order of the decomposition group "D""P""j" is "ef" for every "j".
This decomposition group contains a subgroup "I""P""j", called inertia group of "P""j", consisting of automorphisms of "L"/"K" that induce the identity automorphism on "F""j". In other words, "I""P""j" is the kernel of reduction map formula_6. It can be shown that this map is surjective, and it follows that formula_7 is isomorphic to "D""P""j"/"I""P""j" and the order of the inertia group "I""P""j" is "e".
The theory of the Frobenius element goes further, to identify an element of "D""P""j"/"I""P""j" for given "j" which corresponds to the Frobenius automorphism in the Galois group of the finite field extension "F""j" /"F". In the unramified case the order of "D""P""j" is "f" and "I""P""j" is trivial, so the Frobenius element is in this case an element of "D""P""j", and thus also an element of "G". For varying "j", the groups "D""P""j" are conjugate subgroups inside "G": recalling that "G" acts transitively on the "P""j", one checks that if formula_8 maps "P""j" to "P""j′", then formula_9. Therefore, if "G" is an abelian group, the Frobenius element of an unramified prime "P" does not depend on which "P""j" we take. Furthermore, in the abelian case, associating an unramified prime of "K" to its Frobenius and extending multiplicatively defines a homomorphism from the group of unramified ideals of "K" into "G". This map, known as the "Artin map", is a crucial ingredient of class field theory, which studies the finite abelian extensions of a given number field "K".
In the geometric analogue, for complex manifolds or algebraic geometry over an algebraically closed field, the concepts of "decomposition group" and "inertia group" coincide. There, given a Galois ramified cover, all but finitely many points have the same number of preimages.
The splitting of primes in extensions that are not Galois may be studied by using a splitting field initially, i.e. a Galois extension that is somewhat larger. For example, cubic fields usually are 'regulated' by a degree 6 field containing them.
Example — the Gaussian integers.
This section describes the splitting of prime ideals in the field extension Q(i)/Q. That is, we take "K" = Q and "L" = Q(i), so "O""K" is simply Z, and "O""L" = Z[i] is the ring of Gaussian integers. Although this case is far from representative — after all, Z[i] has unique factorisation, and there aren't many quadratic fields with unique factorization — it exhibits many of the features of the theory.
Writing "G" for the Galois group of Q(i)/Q, and σ for the complex conjugation automorphism in "G", there are three cases to consider.
The prime "p" = 2.
The prime 2 of Z ramifies in Z[i]:
formula_10
The ramification index here is therefore "e" = 2. The residue field is
formula_11
which is the finite field with two elements. The decomposition group must be equal to all of "G", since there is only one prime of Z[i] above 2. The inertia group is also all of "G", since
formula_12
for any integers "a" and "b", as formula_13 .
In fact, 2 is the "only" prime that ramifies in Z[i], since every prime that ramifies must divide the discriminant of Z[i], which is −4.
Primes "p" ≡ 1 mod 4.
Any prime "p" ≡ 1 mod 4 "splits" into two distinct prime ideals in Z[i]; this is a manifestation of Fermat's theorem on sums of two squares. For example:
formula_14
The decomposition groups in this case are both the trivial group {1}; indeed the automorphism σ "switches" the two primes (2 + 3i) and (2 − 3i), so it cannot be in the decomposition group of either prime. The inertia group, being a subgroup of the decomposition group, is also the trivial group. There are two residue fields, one for each prime,
formula_15
which are both isomorphic to the finite field with 13 elements. The Frobenius element is the trivial automorphism; this means that
formula_16
for any integers "a" and "b".
Primes "p" ≡ 3 mod 4.
Any prime "p" ≡ 3 mod 4 remains "inert" in Z[formula_17]; that is, it does "not" split. For example, (7) remains prime in Z[formula_17]. In this situation, the decomposition group is all of "G", again because there is only one prime factor. However, this situation differs from the "p" = 2 case, because now σ does "not" act trivially on the residue field
formula_18
which is the finite field with 72 = 49 elements. For example, the difference between formula_19 and formula_20 is formula_21, which is certainly not divisible by 7. Therefore, the inertia group is the trivial group {1}. The Galois group of this residue field over the subfield Z/7Z has order 2, and is generated by the image of the Frobenius element. The Frobenius element is none other than σ; this means that
formula_22
for any integers "a" and "b".
Computing the factorisation.
Suppose that we wish to determine the factorisation of a prime ideal "P" of "O""K" into primes of "O""L". The following procedure (Neukirch, p. 47) solves this problem in many cases. The strategy is to select an integer θ in "O""L" so that "L" is generated over "K" by θ (such a θ is guaranteed to exist by the primitive element theorem), and then to examine the minimal polynomial "H"("X") of θ over "K"; it is a monic polynomial with coefficients in "O""K". Reducing the coefficients of "H"("X") modulo "P", we obtain a monic polynomial "h"("X") with coefficients in "F", the (finite) residue field "O""K"/"P". Suppose that "h"("X") factorises in the polynomial ring "F"["X"] as
formula_23
where the "h""j" are distinct monic irreducible polynomials in "F"["X"]. Then, as long as "P" is not one of finitely many exceptional primes (the precise condition is described below), the factorisation of "P" has the following form:
formula_24
where the "Q""j" are distinct prime ideals of "O""L". Furthermore, the inertia degree of each "Q""j" is equal to the degree of the corresponding polynomial "h""j", and there is an explicit formula for the "Q""j":
formula_25
where "h""j" denotes here a lifting of the polynomial "h""j" to "K"["X"].
In the Galois case, the inertia degrees are all equal, and the ramification indices "e"1 = ... = "e""n" are all equal.
The exceptional primes, for which the above result does not necessarily hold, are the ones not relatively prime to the conductor of the ring "O""K"[θ]. The conductor is defined to be the ideal
formula_26
it measures how far the order "O""K"[θ] is from being the whole ring of integers (maximal order) "O""L".
A significant caveat is that there exist examples of "L"/"K" and "P" such that there is "no" available θ that satisfies the above hypotheses (see for example ). Therefore, the algorithm given above cannot be used to factor such "P", and more sophisticated approaches must be used, such as that described in.
An example.
Consider again the case of the Gaussian integers. We take θ to be the imaginary unit formula_17, with minimal polynomial "H"("X") = "X"2 + 1. Since Z[formula_17] is the whole ring of integers of Q(formula_17), the conductor is the unit ideal, so there are no exceptional primes.
For "P" = (2), we need to work in the field Z/(2)Z, which amounts to factorising the polynomial "X"2 + 1 modulo 2:
formula_27
Therefore, there is only one prime factor, with inertia degree 1 and ramification index 2, and it is given by
formula_28
The next case is for "P" = ("p") for a prime "p" ≡ 3 mod 4. For concreteness we will take "P" = (7). The polynomial "X"2 + 1 is irreducible modulo 7. Therefore, there is only one prime factor, with inertia degree 2 and ramification index 1, and it is given by
formula_29
The last case is "P" = ("p") for a prime "p" ≡ 1 mod 4; we will again take "P" = (13). This time we have the factorisation
formula_30
Therefore, there are "two" prime factors, both with inertia degree and ramification index 1. They are given by
formula_31
and
formula_32
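The same computation can be reproduced with a computer algebra system by factoring the minimal polynomial modulo each prime. The sketch below (SymPy is assumed to be available; the output format is incidental) factors X² + 1 over the field with p elements for p = 2, 7, 13 and reads off the ramified, inert and split behaviour found above.
<syntaxhighlight lang="python">
from sympy import Poly, Symbol

X = Symbol('X')

def splitting_type(p):
    """Factor X^2 + 1 over F_p and report (e, f) for each prime of Z[i] above p."""
    factors = Poly(X**2 + 1, X, modulus=p).factor_list()[1]   # [(irreducible factor, multiplicity)]
    data = [(int(mult), int(fac.degree())) for fac, mult in factors]
    if len(data) == 1 and data[0][0] == 2:
        kind = "ramified"
    elif len(data) == 1:
        kind = "inert"
    else:
        kind = "split"
    return kind, data

for p in (2, 7, 13):
    kind, data = splitting_type(p)
    print(f"p = {p:2d}: {kind:8s} (e, f) for each prime above p: {data}")
# Expected: 2 -> ramified, (2, 1);  7 -> inert, (1, 2);  13 -> split, two primes each (1, 1).
</syntaxhighlight>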
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\begin{array}{ccc} O_K & \\hookrightarrow & O_L \\\\ \\downarrow & & \\downarrow \\\\ K & \\hookrightarrow & L \\end{array} "
},
{
"math_id": 1,
"text": " pO_L = \\prod_{j=1}^{g} P_j^{e_j} "
},
{
"math_id": 2,
"text": "[L:K]=\\sum_{j=1}^{g} e_j f_j."
},
{
"math_id": 3,
"text": "G=\\operatorname{Gal}(L/K)"
},
{
"math_id": 4,
"text": "pO_L = \\left(\\prod_{j=1}^{g} P_j\\right)^e"
},
{
"math_id": 5,
"text": "[L:K]=efg."
},
{
"math_id": 6,
"text": "D_{P_j}\\to\\operatorname{Gal}(F_j/F)"
},
{
"math_id": 7,
"text": "\\operatorname{Gal}(F_j/F)"
},
{
"math_id": 8,
"text": "\\sigma"
},
{
"math_id": 9,
"text": "\\sigma D_{P_j} \\sigma^{-1} = D_{P_{j'}}"
},
{
"math_id": 10,
"text": "(2)=(1+i)^2"
},
{
"math_id": 11,
"text": "O_L / (1+i)O_L"
},
{
"math_id": 12,
"text": "a+bi\\equiv a-bi\\bmod1+i"
},
{
"math_id": 13,
"text": "a+bi = 2bi + a-bi =(1+i) \\cdot (1-i)bi + a-bi \\equiv a-bi \\bmod 1+i"
},
{
"math_id": 14,
"text": "13=(2+3i)(2-3i)"
},
{
"math_id": 15,
"text": "O_L / (2 \\pm 3i)O_L\\ ,"
},
{
"math_id": 16,
"text": "(a+bi)^{13}\\equiv a + bi\\bmod2\\pm3i"
},
{
"math_id": 17,
"text": "i"
},
{
"math_id": 18,
"text": "O_L / (7)O_L\\ ,"
},
{
"math_id": 19,
"text": "1+i"
},
{
"math_id": 20,
"text": "\\sigma(1 + i) = 1 - i"
},
{
"math_id": 21,
"text": "2i"
},
{
"math_id": 22,
"text": "(a+bi)^7\\equiv a-bi\\bmod7"
},
{
"math_id": 23,
"text": " h(X) = h_1(X)^{e_1} \\cdots h_n(X)^{e_n}, "
},
{
"math_id": 24,
"text": " P O_L = Q_1^{e_1} \\cdots Q_n^{e_n}, "
},
{
"math_id": 25,
"text": " Q_j = P O_L + h_j(\\theta) O_L, "
},
{
"math_id": 26,
"text": " \\{ y \\in O_L : yO_L \\subseteq O_K[\\theta]\\}; "
},
{
"math_id": 27,
"text": "X^2 + 1 = (X+1)^2 \\pmod 2."
},
{
"math_id": 28,
"text": "Q = (2)\\mathbf Z[i] + (i+1)\\mathbf Z[i] = (1+i)\\mathbf Z[i]."
},
{
"math_id": 29,
"text": "Q = (7)\\mathbf Z[i] + (i^2 + 1)\\mathbf Z[i] = 7\\mathbf Z[i]."
},
{
"math_id": 30,
"text": "X^2 + 1 = (X + 5)(X - 5) \\pmod{13}."
},
{
"math_id": 31,
"text": "Q_1 = (13)\\mathbf Z[i] + (i + 5)\\mathbf Z[i] = \\cdots = (2+3i)\\mathbf Z[i]"
},
{
"math_id": 32,
"text": "Q_2 = (13)\\mathbf Z[i] + (i - 5)\\mathbf Z[i] = \\cdots = (2-3i)\\mathbf Z[i]."
}
] | https://en.wikipedia.org/wiki?curid=1190816 |
11908435 | Pseudo-monotone operator | In mathematics, a pseudo-monotone operator from a reflexive Banach space into its continuous dual space is one that is, in some sense, almost as well-behaved as a monotone operator. Many problems in the calculus of variations can be expressed using operators that are pseudo-monotone, and pseudo-monotonicity in turn implies the existence of solutions to these problems.
Definition.
Let ("X", || ||) be a reflexive Banach space. A map "T" : "X" → "X"∗ from "X" into its continuous dual space "X"∗ is said to be pseudo-monotone if "T" is a bounded operator (not necessarily continuous) and if whenever
formula_0
(i.e. "u""j" converges weakly to "u") and
formula_1
it follows that, for all "v" ∈ "X",
formula_2
Properties of pseudo-monotone operators.
Using a very similar proof to that of the Browder–Minty theorem, one can show the following:
Let ("X", || ||) be a real, reflexive Banach space and suppose that "T" : "X" → "X"∗ is bounded, coercive and pseudo-monotone. Then, for each continuous linear functional "g" ∈ "X"∗, there exists a solution "u" ∈ "X" of the equation "T"("u") = "g". | [
{
"math_id": 0,
"text": "u_{j} \\rightharpoonup u \\mbox{ in } X \\mbox{ as } j \\to \\infty"
},
{
"math_id": 1,
"text": "\\limsup_{j \\to \\infty} \\langle T(u_{j}), u_{j} - u \\rangle \\leq 0,"
},
{
"math_id": 2,
"text": "\\liminf_{j \\to \\infty} \\langle T(u_{j}), u_{j} - v \\rangle \\geq \\langle T(u), u - v \\rangle."
}
] | https://en.wikipedia.org/wiki?curid=11908435 |
1190887 | Pilot wave theory | One interpretation of quantum mechanics
In theoretical physics, the pilot wave theory, also known as Bohmian mechanics, was the first known example of a hidden-variable theory, presented by Louis de Broglie in 1927. Its more modern version, the de Broglie–Bohm theory, interprets quantum mechanics as a deterministic theory, and avoids issues such as wave–particle duality, instantaneous wave function collapse, and the paradox of Schrödinger's cat by being inherently nonlocal.
The de Broglie–Bohm pilot wave theory is one of several interpretations of (non-relativistic) quantum mechanics.
History.
Louis de Broglie's early results on the pilot wave theory were presented in his thesis (1924) in the context of atomic orbitals where the waves are stationary. Early attempts to develop a general formulation for the dynamics of these guiding waves in terms of a relativistic wave equation were unsuccessful until in 1926 Schrödinger developed his non-relativistic wave equation. He further suggested that since the equation described waves in configuration space, the particle model should be abandoned. Shortly thereafter, Max Born suggested that the wave function of Schrödinger's wave equation represents the probability density of finding a particle. Following these results, de Broglie developed the dynamical equations for his pilot wave theory. Initially, de Broglie proposed a "double solution" approach, in which the quantum object consists of a physical wave ("u"-wave) in real space which has a spherical singular region that gives rise to particle-like behaviour; in this initial form of his theory he did not have to postulate the existence of a quantum particle. He later formulated it as a theory in which a particle is accompanied by a pilot wave.
De Broglie presented the pilot wave theory at the 1927 Solvay Conference. However, Wolfgang Pauli raised an objection to it at the conference, saying that it did not deal properly with the case of inelastic scattering. De Broglie was not able to find a response to this objection, and he abandoned the pilot-wave approach. Unlike David Bohm years later, de Broglie did not complete his theory to encompass the many-particle case. The many-particle case shows mathematically that the energy dissipation in inelastic scattering could be distributed to the surrounding field structure by a yet-unknown mechanism of the theory of hidden variables.
In 1932, John von Neumann published a book, part of which claimed to prove that all hidden variable theories were impossible. This result was found to be flawed by Grete Hermann three years later, though for a variety of reasons this went unnoticed by the physics community for over fifty years.
In 1952, David Bohm, dissatisfied with the prevailing orthodoxy, rediscovered de Broglie's pilot wave theory. Bohm developed pilot wave theory into what is now called the de Broglie–Bohm theory. The de Broglie–Bohm theory itself might have gone unnoticed by most physicists, if it had not been championed by John Bell, who also countered the objections to it. In 1987, John Bell rediscovered Grete Hermann's work, and thus showed the physics community that Pauli's and von Neumann's objections only showed that the pilot wave theory did not have locality.
The pilot wave theory.
Principles.
The pilot wave theory is a hidden-variable theory. Consequently:
The positions of the particles are considered to be the hidden variables. The observer doesn't know the precise values of these variables; they cannot know them precisely because any measurement disturbs them. On the other hand, the observer is defined not by the wave function of their own atoms but by the atoms' positions. So what one sees around oneself are also the positions of nearby things, not their wave functions.
A collection of particles has an associated matter wave which evolves according to the Schrödinger equation. Each particle follows a deterministic trajectory, which is guided by the wave function; collectively, the density of the particles conforms to the magnitude of the wave function. The wave function is not influenced by the particle and can exist also as an empty wave function.
The theory brings to light nonlocality that is implicit in the non-relativistic formulation of quantum mechanics and uses it to satisfy Bell's theorem. These nonlocal effects can be shown to be compatible with the no-communication theorem, which prevents use of them for faster-than-light communication, and so is empirically compatible with relativity.
Macroscopic analog.
Couder, Fort, "et al." claimed that macroscopic oil droplets on a vibrating fluid bath can be used as an analogue model of pilot waves; a localized droplet creates a periodical wave field around itself. They proposed that resonant interaction between the droplet and its own wave field exhibits behaviour analogous to quantum particles: interference in double-slit experiment, unpredictable tunneling (depending in a complicated way on a practically hidden state of field), orbit quantization (that a particle has to 'find a resonance' with field perturbations it creates—after one orbit, its internal phase has to return to the initial state) and Zeeman effect. Attempts to reproduce these experiments have shown that wall-droplet interactions rather than diffraction or interference of the pilot wave may be responsible for the observed hydrodynamic patterns, which are different from slit-induced interference patterns exhibited by quantum particles.
Mathematical foundations.
To derive the de Broglie–Bohm pilot-wave for an electron, the quantum Lagrangian
formula_0
where formula_1 is the potential energy, formula_2 is the velocity and formula_3 is the potential associated with the quantum force (the particle being pushed by the wave function), is integrated along precisely one path (the one the electron actually follows). This leads to the following formula for the Bohm propagator:
formula_4
This propagator allows one to precisely track the electron over time under the influence of the quantum potential formula_3.
Derivation of the Schrödinger equation.
Pilot wave theory is based on Hamilton–Jacobi dynamics, rather than Lagrangian or Hamiltonian dynamics. Using the Hamilton–Jacobi equation
formula_5
it is possible to derive the Schrödinger equation:
Consider a classical particle – the position of which is not known with certainty. We must deal with it statistically, so only the probability density formula_6 is known. Probability must be conserved, i.e. formula_7 for each formula_8. Therefore, it must satisfy the continuity equation
formula_9
where formula_10 is the velocity of the particle.
In the Hamilton–Jacobi formulation of classical mechanics, velocity is given by formula_11 where formula_12 is a solution of the Hamilton-Jacobi equation
formula_13
formula_14 and formula_15 can be combined into a single complex equation by introducing the complex function formula_16 then the two equations are equivalent to
formula_17
with
formula_18
The time-dependent Schrödinger equation is obtained if we start with formula_19 the usual potential with an extra quantum potential formula_3. The quantum potential is the potential of the quantum force, which is proportional (in approximation) to the curvature of the amplitude of the wave function.
Note this potential is the same one that appears in the Madelung equations, a classical analog of the Schrödinger equation.
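The quantum potential is easy to evaluate numerically from a given amplitude. The sketch below (Python with NumPy assumed; natural units ħ = m = 1 and a Gaussian density are illustrative choices) computes Q by finite differences and compares it with the expression obtained by differentiating the Gaussian by hand, Q = ħ²/(4mσ²) - ħ²x²/(8mσ⁴).
<syntaxhighlight lang="python">
import numpy as np

hbar = m = 1.0                      # natural units, an illustrative choice
sigma = 1.3                         # width of the Gaussian density

x = np.linspace(-5.0, 5.0, 4001)
dx = x[1] - x[0]

rho = np.exp(-x**2 / (2 * sigma**2))          # un-normalised density; Q is scale-invariant
R = np.sqrt(rho)                              # amplitude sqrt(rho)

# Second derivative of R by central finite differences (interior points only).
R_xx = (R[2:] - 2 * R[1:-1] + R[:-2]) / dx**2
Q_numeric = -(hbar**2 / (2 * m)) * R_xx / R[1:-1]

# The same quantity from differentiating the Gaussian analytically.
xi = x[1:-1]
Q_exact = hbar**2 / (4 * m * sigma**2) - hbar**2 * xi**2 / (8 * m * sigma**4)

print("max |numeric - analytic| :", np.max(np.abs(Q_numeric - Q_exact)))   # small
print("Q at the centre          :", Q_numeric[len(xi) // 2])               # ~ 1/(4 sigma^2)
</syntaxhighlight>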
Mathematical formulation for a single particle.
The matter wave of de Broglie is described by the time-dependent Schrödinger equation:
formula_20
The complex wave function can be represented as:
formula_21
By plugging this into the Schrödinger equation, one can derive two new equations for the real variables. The first is the continuity equation for the probability density formula_22
formula_23
where the velocity field is determined by the “guidance equation”
formula_24
According to pilot wave theory, the point particle and the matter wave are both real and distinct physical entities (unlike standard quantum mechanics, which postulates no physical particle or wave entities, only observed wave-particle duality).
The pilot wave guides the motion of the point particles as described by the guidance equation.
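Once the wave function is known, the guidance equation can be integrated numerically. The sketch below (Python with NumPy assumed; the free 1D Gaussian packet, natural units and simple Euler stepping are all illustrative choices) evolves ψ exactly in Fourier space for a free particle and advances a single Bohmian particle with the velocity v = (ħ/m) Im(∂ₓψ/ψ), which equals ∇S/m for ψ = √ρ e^{iS/ħ}.
<syntaxhighlight lang="python">
import numpy as np

hbar = m = 1.0                                   # natural units (illustrative)
N, L = 2048, 80.0                                # grid size and box length
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)          # wavenumbers of the FFT grid

sigma, k0 = 2.0, 1.0                             # packet width and mean wavenumber
psi0 = np.exp(-x**2 / (4 * sigma**2) + 1j * k0 * x)

def psi_free(t):
    """Exact free evolution: each Fourier mode picks up exp(-i hbar k^2 t / 2m)."""
    return np.fft.ifft(np.fft.fft(psi0) * np.exp(-1j * hbar * k**2 * t / (2 * m)))

def velocity_field(psi):
    """Guidance velocity v = (hbar/m) Im(psi' psi*) / |psi|^2, i.e. (grad S)/m."""
    dpsi = np.gradient(psi, dx)
    return (hbar / m) * np.imag(dpsi * np.conj(psi)) / (np.abs(psi)**2 + 1e-300)

x_p, dt, steps = -1.5, 0.01, 2000                # particle start, time step, step count
for n in range(steps):
    v = np.interp(x_p, x, velocity_field(psi_free(n * dt)))
    x_p += v * dt                                # simple Euler update of the trajectory

t_final = steps * dt
print("Bohmian particle position at t =", t_final, ":", round(x_p, 3))
print("centre of the packet (k0 * t)   :", hbar * k0 * t_final / m)
</syntaxhighlight>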
Ordinary quantum mechanics and pilot wave theory are based on the same partial differential equation. The main difference is that in ordinary quantum mechanics, the Schrödinger equation is connected to reality by the Born postulate, which states that the probability density of the particle's position is given by formula_25 Pilot wave theory considers the guidance equation to be the fundamental law, and sees the Born rule as a derived concept.
The second equation is a modified Hamilton–Jacobi equation for the action S:
formula_26
where Q is the quantum potential defined by
formula_27
If we choose to neglect Q, our equation is reduced to the Hamilton–Jacobi equation of a classical point particle. So, the quantum potential is responsible for all the mysterious effects of quantum mechanics.
One can also combine the modified Hamilton–Jacobi equation with the guidance equation to derive a quasi-Newtonian equation of motion
formula_28
where the hydrodynamic time derivative is defined as
formula_29
Mathematical formulation for multiple particles.
The Schrödinger equation for the many-body wave function formula_30 is given by
formula_31
The complex wave function can be represented as:
formula_32
The pilot wave guides the motion of the particles. The guidance equation for the jth particle is:
formula_33
The velocity of the jth particle explicitly depends on the positions of the other particles.
This means that the theory is nonlocal.
Relativity.
An extension to the relativistic case with spin has been developed since the 1990s.
Empty wave function.
Lucien Hardy and John Stewart Bell have emphasized that in the de Broglie–Bohm picture of quantum mechanics there can exist empty waves, represented by wave functions propagating in space and time but not carrying energy or momentum, and not associated with a particle. The same concept was called "ghost waves" (or "Gespensterfelder", "ghost fields") by Albert Einstein. The empty wave function notion has been discussed controversially. In contrast, the many-worlds interpretation of quantum mechanics does not call for empty wave functions.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "L(t)={\\frac{1}{2}}mv^2-(V+Q),"
},
{
"math_id": 1,
"text": "V"
},
{
"math_id": 2,
"text": "v"
},
{
"math_id": 3,
"text": "Q"
},
{
"math_id": 4,
"text": "K^Q(X_1, t_1; X_0, t_0) = \\frac{1}{J(t)^ {\\frac{1}{2}} } \\exp\\left[\\frac{i}{\\hbar}\\int_{t_0}^{t_1}L(t)\\,dt\\right]."
},
{
"math_id": 5,
"text": " H\\left(\\,\\vec{x}\\,, \\;\\vec{\\nabla}_{\\!x}\\, S\\,, \\;t \\,\\right) + {\\partial S \\over \\partial t}\\left(\\,\\vec{x},\\, t\\,\\right) = 0"
},
{
"math_id": 6,
"text": "\\rho (\\vec{x},t)"
},
{
"math_id": 7,
"text": "\\int\\rho\\,\\mathrm{d}^3\\vec{x} = 1"
},
{
"math_id": 8,
"text": "t"
},
{
"math_id": 9,
"text": "\\frac{\\, \\partial \\rho \\,}{ \\partial t } = - \\vec{\\nabla} \\cdot (\\rho \\,\\vec{v} ) \\qquad\\qquad (1)"
},
{
"math_id": 10,
"text": "\\,\\vec{v}(\\vec{x},t)\\,"
},
{
"math_id": 11,
"text": "\\; \\vec{v}(\\vec{x},t) = \\frac{1}{\\,m\\,} \\, \\vec{\\nabla}_{\\!x} S(\\vec{x},\\,t) \\;"
},
{
"math_id": 12,
"text": "\\, S(\\vec{x},t) \\,"
},
{
"math_id": 13,
"text": "- \\frac{\\partial S}{\\partial t} = \\frac{\\;\\left|\\,\\nabla S\\,\\right|^2\\,}{2m} + \\tilde{V} \\qquad\\qquad (2)"
},
{
"math_id": 14,
"text": "\\,(1)\\,"
},
{
"math_id": 15,
"text": "\\,(2)\\,"
},
{
"math_id": 16,
"text": "\\; \\psi = \\sqrt{\\rho\\,} \\, e^\\frac{\\,i\\,S\\,}{\\hbar} \\;,"
},
{
"math_id": 17,
"text": "i\\, \\hbar\\, \\frac{\\,\\partial \\psi\\,}{\\partial t} = \\left( - \\frac{\\hbar^2}{2m} \\nabla^2 +\\tilde{V} - Q \\right)\\psi \\quad"
},
{
"math_id": 18,
"text": " \\; Q = - \\frac{\\;\\hbar^2\\,}{\\,2m\\,} \\frac{\\nabla^2 \\sqrt{\\rho\\,}}{\\sqrt{\\rho\\,}}~."
},
{
"math_id": 19,
"text": "\\;\\tilde{V} = V + Q \\;,"
},
{
"math_id": 20,
"text": " i\\, \\hbar\\, \\frac{\\,\\partial \\psi\\,}{\\partial t} = \\left( - \\frac{\\hbar^2}{\\,2m\\,} \\nabla^2 + V \\right)\\psi \\quad"
},
{
"math_id": 21,
"text": "\\psi = \\sqrt{\\rho\\,} \\; \\exp \\left( \\frac{i \\, S}{\\hbar} \\right) ~"
},
{
"math_id": 22,
"text": "\\,\\rho\\,:"
},
{
"math_id": 23,
"text": "\\frac{\\, \\partial \\rho \\,}{\\, \\partial t \\,} + \\vec{\\nabla} \\cdot \\left( \\rho\\, \\vec{v} \\right) = 0 ~ ,"
},
{
"math_id": 24,
"text": "\\vec{v}\\left(\\,\\vec{r},\\,t\\,\\right) = \\frac{1}{\\,m\\,} \\, \\vec{\\nabla} S\\left(\\, \\vec{r},\\, t \\,\\right) ~ ."
},
{
"math_id": 25,
"text": "\\; \\rho = |\\psi|^2 ~ ."
},
{
"math_id": 26,
"text": "- \\frac{\\partial S}{\\partial t} = \\frac{\\;\\left|\\, \\vec{\\nabla} S \\,\\right|^2\\,}{\\,2m\\,} + V + Q ~ ,"
},
{
"math_id": 27,
"text": " Q = - \\frac{\\hbar^2}{\\,2m\\,} \\frac{\\nabla^2 \\sqrt{\\rho \\,} }{\\sqrt{ \\rho \\,} } ~."
},
{
"math_id": 28,
"text": "m \\, \\frac{d}{dt} \\, \\vec{v} = - \\vec{\\nabla}( V + Q ) ~ ,"
},
{
"math_id": 29,
"text": "\\frac{d}{dt} = \\frac{ \\partial }{\\, \\partial t \\,} + \\vec{v} \\cdot \\vec{\\nabla} ~ ."
},
{
"math_id": 30,
"text": " \\psi(\\vec{r}_1, \\vec{r}_2, \\cdots, t) "
},
{
"math_id": 31,
"text": " i \\hbar \\frac{\\partial \\psi}{\\partial t} =\\left( -\\frac{\\hbar^2}{2} \\sum_{i=1}^{N} \\frac{\\nabla_i^2}{m_i} + V(\\mathbf{r}_1,\\mathbf{r}_2,\\cdots\\mathbf{r}_N) \\right) \\psi "
},
{
"math_id": 32,
"text": "\\psi = \\sqrt{\\rho\\,} \\; \\exp \\left( \\frac{i \\, S}{\\hbar} \\right) "
},
{
"math_id": 33,
"text": " \\vec{v}_j = \\frac{\\nabla_j S}{m_j}\\; ."
}
] | https://en.wikipedia.org/wiki?curid=1190887 |
1190959 | Petr Hořava (physicist) | Czech physicist
Petr Hořava (born 1963 in Prostějov) is a Czech string theorist. He is a professor of physics in the Berkeley Center for Theoretical Physics at the University of California, Berkeley, where he teaches courses on quantum field theory and string theory. Hořava is a member of the theory group at Lawrence Berkeley National Laboratory.
Work.
Hořava is known for his articles written with Edward Witten about the Hořava-Witten domain walls in M-theory. These articles demonstrated that the ten-dimensional heterotic formula_0 string theory could be produced from 11-dimensional M-theory by making one of the dimensions have edges (the domain walls). This discovery provided crucial support for the conjecture that all string theories could arise as limits of a single higher-dimensional theory.
Hořava is less well known for his discovery of D-branes, usually attributed to Dai, Leigh and Polchinski, who discovered them independently, also in 1989.
In 2009, Hořava proposed a theory of gravity that separates space from time at high energy while matching some predictions of general relativity at lower energies.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E_8 \\times E_8"
}
] | https://en.wikipedia.org/wiki?curid=1190959 |
1190979 | Schnorr group | A Schnorr group, proposed by Claus P. Schnorr, is a large prime-order subgroup of formula_0, the multiplicative group of integers modulo formula_1 for some prime formula_1. To generate such a group, generate formula_1, formula_2, formula_3 such that
formula_4
with formula_1, formula_2 prime. Then choose any formula_5 in the range formula_6 until you find one such that
formula_7.
This value
formula_8
is a generator of a subgroup of formula_0 of order formula_2.
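The construction can be carried out directly for small, non-cryptographic parameters. The sketch below (Python, with SymPy's primality routines assumed to be available; the toy bit sizes are far too small for real use) searches for p = qr + 1, derives g = h^r mod p, and also performs the membership test x^q ≡ 1 (mod p) discussed in the paragraph that follows.
<syntaxhighlight lang="python">
from sympy import isprime, randprime

def schnorr_group(q_bits=16, p_bits=32):
    """Generate toy Schnorr group parameters (p, q, g) with p = q*r + 1."""
    q = randprime(2**(q_bits - 1), 2**q_bits)        # subgroup order q (prime)
    r = (2**(p_bits - 1)) // q
    while not isprime(q * r + 1):                    # search for an r making p prime
        r += 1
    p = q * r + 1
    h = 2
    while pow(h, r, p) == 1:                         # discard h whose r-th power is trivial
        h += 1
    g = pow(h, r, p)                                 # generator of the order-q subgroup
    return p, q, g

p, q, g = schnorr_group()
print(f"p = {p}, q = {q}, g = {g}")

# Sanity checks: g generates a subgroup of prime order q ...
assert g != 1 and pow(g, q, p) == 1
# ... and membership of an integer x is tested by 0 < x < p and x^q ≡ 1 (mod p).
x = pow(g, 12345, p)                                 # an element of the subgroup
assert 0 < x < p and pow(x, q, p) == 1
print("membership test passed for x =", x)
</syntaxhighlight>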
Schnorr groups are useful in discrete log based cryptosystems including Schnorr signatures and DSA. In such applications, typically formula_1 is chosen to be large enough to resist index calculus and related methods of solving the discrete-log problem (perhaps 1024 to 3072 bits), while formula_2 is large enough to resist the birthday attack on discrete log problems, which works in any group (perhaps 160 to 256 bits). Because the Schnorr group is of prime order, it has no non-trivial proper subgroups, thwarting confinement attacks due to small subgroups. Implementations of protocols that use Schnorr groups must verify where appropriate that integers supplied by other parties are in fact members of the Schnorr group; formula_9 is a member of the group if formula_10 and formula_11. Any member of the group except the element formula_12 is also a generator of the group. | [
{
"math_id": 0,
"text": "\\mathbb{Z}_p^\\times"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "q"
},
{
"math_id": 3,
"text": "r"
},
{
"math_id": 4,
"text": "p = qr + 1"
},
{
"math_id": 5,
"text": "h"
},
{
"math_id": 6,
"text": "1 < h < p"
},
{
"math_id": 7,
"text": "h^r \\not\\equiv 1\\;(\\text{mod}\\;p)"
},
{
"math_id": 8,
"text": "g = h^r\\text{ mod }p"
},
{
"math_id": 9,
"text": "x"
},
{
"math_id": 10,
"text": "0 < x < p"
},
{
"math_id": 11,
"text": "x^q \\equiv 1\\;(\\text{mod }p)"
},
{
"math_id": 12,
"text": "1"
}
] | https://en.wikipedia.org/wiki?curid=1190979 |
1191031 | Graviphoton | Hypothetical particle in gravity theories
In theoretical physics and quantum physics, a graviphoton or gravivector is a hypothetical particle which emerges as an excitation of the metric tensor (i.e. gravitational field) in spacetime dimensions higher than four, as described in Kaluza–Klein theory.
However, its crucial physical properties are analogous to a (massive) photon: it induces a "vector force", sometimes dubbed a "fifth force". The electromagnetic potential formula_0 emerges from an extra component of the metric tensor formula_1, where the figure 5 labels an additional, fifth dimension.
In gravity theories with extended supersymmetry (extended supergravities), a graviphoton is normally a superpartner of the graviton that behaves like a photon, and is prone to couple with gravitational strength, as was appreciated in the late 1970s. Unlike the graviton, it may provide a "repulsive" (as well as an attractive) force, and thus, in some technical sense, a type of anti-gravity. Under special circumstances, in several natural models, often descending from five-dimensional theories mentioned, it may actually cancel the gravitational attraction in the static limit. Joël Scherk investigated semirealistic aspects of this phenomenon, stimulating searches for physical manifestations of this mechanism.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A_\\mu"
},
{
"math_id": 1,
"text": "g_{\\mu 5}"
}
] | https://en.wikipedia.org/wiki?curid=1191031 |
1191041 | Extended supersymmetry | In theoretical physics, extended supersymmetry is supersymmetry whose infinitesimal generators formula_0 carry not only a spinor index formula_1, but also an additional index formula_2 where formula_3 is integer (such as 2 or 4).
Extended supersymmetry is also called formula_4, formula_5 supersymmetry, for example. Extended supersymmetry is very important for analysis of mathematical properties of quantum field theory and superstring theory. The more extended supersymmetry is, the more it constrains physical observables and parameters. | [
{
"math_id": 0,
"text": "Q_i^\\alpha"
},
{
"math_id": 1,
"text": "\\alpha"
},
{
"math_id": 2,
"text": "i=1,2 \\dots \\mathcal{N}"
},
{
"math_id": 3,
"text": "\\mathcal{N}"
},
{
"math_id": 4,
"text": "\\mathcal{N}=2"
},
{
"math_id": 5,
"text": "\\mathcal{N}=4"
}
] | https://en.wikipedia.org/wiki?curid=1191041 |
1191067 | Magnetic vector potential | Integral of the magnetic field
In classical electromagnetism, magnetic vector potential (often called A) is the vector quantity defined so that its curl is equal to the magnetic field: formula_0. Together with the electric potential "φ", the magnetic vector potential can be used to specify the electric field E as well. Therefore, many equations of electromagnetism can be written either in terms of the fields E and B, or equivalently in terms of the potentials "φ" and A. In more advanced theories such as quantum mechanics, most equations use potentials rather than fields.
Magnetic vector potential was first introduced by Franz Ernst Neumann and Wilhelm Eduard Weber in 1845 and in 1846, respectively. William Thomson also introduced vector potential in 1847, along with the formula relating it to the magnetic field.
Unit conventions.
This article uses the SI system.
In the SI system, the units of A are V·s·m−1 and are the same as that of momentum per unit charge, or force per unit current.
Magnetic vector potential.
The magnetic vector potential, formula_1, is a vector field, and the electric potential, formula_2, is a scalar field such that:
formula_3,
where formula_4 is the magnetic field and formula_5 is the electric field. In magnetostatics where there is no time-varying current or charge distribution, only the first equation is needed. (In the context of electrodynamics, the terms "vector potential" and "scalar potential" are used for "magnetic vector potential" and "electric potential", respectively. In mathematics, vector potential and scalar potential can be generalized to higher dimensions.)
If electric and magnetic fields are defined as above from potentials, they automatically satisfy two of Maxwell's equations: Gauss's law for magnetism and Faraday's law. For example, if formula_1 is continuous and well-defined everywhere, then it is guaranteed not to result in magnetic monopoles. (In the mathematical theory of magnetic monopoles, formula_1 is allowed to be either undefined or multiple-valued in some places; see magnetic monopole for details).
Starting with the above definitions and remembering that the divergence of the curl is zero and the curl of the gradient is the zero vector:
formula_6
Alternatively, the existence of formula_1 and formula_2 is guaranteed from these two laws using Helmholtz's theorem. For example, since the magnetic field is divergence-free (Gauss's law for magnetism; i.e., formula_7), formula_8 always exists that satisfies the above definition.
The vector potential formula_9 is used when studying the Lagrangian in classical mechanics and in quantum mechanics (see Schrödinger equation for charged particles, Dirac equation, Aharonov–Bohm effect).
In minimal coupling, formula_10 is called the potential momentum, and is part of the canonical momentum.
The line integral of formula_1 over a closed loop, formula_11, is equal to the magnetic flux, formula_12, through a surface, formula_13, that it encloses:
formula_14
Therefore, the units of formula_1 are also equivalent to Weber per metre. The above equation is useful in the flux quantization of superconducting loops.
Although the magnetic field, formula_4, is a pseudovector (also called axial vector), the vector potential, formula_1, is a polar vector. This means that if the right-hand rule for cross products were replaced with a left-hand rule, but without changing any other equations or definitions, then formula_4 would switch signs, but A would not change. This is an example of a general theorem: The curl of a polar vector is a pseudovector, and vice versa.
Gauge choices.
The above definition does not define the magnetic vector potential uniquely because, by definition, we can arbitrarily add curl-free components to the magnetic potential without changing the observed magnetic field. Thus, there is a degree of freedom available when choosing formula_1. This condition is known as gauge invariance.
Two common gauge choices are the Lorenz gauge, in which formula_15, and the Coulomb gauge, in which formula_16.
Lorenz gauge.
In other gauges, the formulas for formula_1 and formula_2 are different; for example, see Coulomb gauge for another possibility.
Time domain.
Using the above definition of the potentials and applying it to the other two Maxwell's equations (the ones that are not automatically satisfied) results in a complicated differential equation that can be simplified using the Lorenz gauge where formula_9 is chosen to satisfy:
formula_15
Using the Lorenz gauge, the electromagnetic wave equations can be written compactly in terms of the potentials,
formula_17
formula_18
The solutions of Maxwell's equations in the Lorenz gauge (see Feynman and Jackson) with the boundary condition that both potentials go to zero sufficiently fast as they approach infinity are called the retarded potentials, which are the magnetic vector potential formula_19 and the electric scalar potential formula_20 due to a current distribution of current density formula_21, charge density formula_22, and volume formula_23 (within which formula_24 and formula_25 are non-zero at least sometimes and in some places):
formula_26
where the fields at position vector formula_27 and time formula_28 are calculated from sources at distant position formula_29 at an earlier time formula_30 The location formula_31 is a source point in the charge or current distribution (also the integration variable, within volume formula_32). The earlier time formula_33 is called the "retarded time", and calculated as
formula_34
formula_35
formula_36
formula_41
time domain notes.
In this form it is apparent that the component of formula_1 in a given direction depends only on the components of formula_42 that are in the same direction. If the current is carried in a straight wire, formula_1 points in the same direction as the wire.
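For illustration, the retarded vector potential above can be approximated numerically by summing the integrand over sample points in the source volume. This is only a sketch of a brute-force quadrature; the sampling scheme, the current-density callback J_func, and the uniform volume element are assumptions for the example, not part of the original text.
```python
import numpy as np

MU0 = 4e-7 * np.pi          # vacuum permeability (SI)
C = 299_792_458.0           # speed of light (m/s)

def retarded_vector_potential(r_obs, t, source_points, J_func, dV):
    """Approximate A(r, t) = (mu0 / 4 pi) * integral of J(r', t - R/c) / R dV
    by a simple sum over discrete source samples.

    r_obs         : (3,) observation point
    t             : observation time
    source_points : (N, 3) sample points r' inside the source volume
    J_func        : callable J_func(r_prime, t_retarded) -> (3,) current density
    dV            : volume element assigned to each sample point
    """
    A = np.zeros(3)
    for r_src in source_points:
        R = np.linalg.norm(r_obs - r_src)
        if R == 0.0:
            continue                    # skip the singular self-point in this crude quadrature
        t_ret = t - R / C               # retarded time t' = t - R/c
        A += np.asarray(J_func(r_src, t_ret)) / R * dV
    return MU0 / (4.0 * np.pi) * A
```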
Frequency domain.
The preceding time domain equations can be expressed in the frequency domain.
formula_43 or formula_44
formula_45
formula_46
formula_47
where
formula_48 and formula_49 are scalar phasors.
formula_50 and formula_51 are vector phasors.
formula_52
frequency domain notes.
There are a few notable things about formula_53 and formula_54 calculated in this way:
formula_61
In this form it is apparent that the component of formula_53 in a given direction depends only on the components of formula_62 that are in the same direction. If the current is carried in a straight wire, formula_53 points in the same direction as the wire.
Depiction of the A-field.
See Feynman for the depiction of the formula_53 field around a long thin solenoid.
Since
formula_64
assuming quasi-static conditions, i.e.
formula_65 and formula_66,
the lines and contours of formula_53 relate to formula_63 like the lines and contours of formula_4 relate to formula_67 Thus, a depiction of the formula_53 field around a loop of formula_63 flux (as would be produced in a toroidal inductor) is qualitatively the same as the formula_63 field around a loop of current.
The figure to the right is an artist's depiction of the formula_68 field. The thicker lines indicate paths of higher average intensity (shorter paths have higher intensity so that the path integral is the same). The lines are drawn to (aesthetically) impart the general look of the formula_8 field.
The drawing tacitly assumes formula_69, true under any one of the following assumptions:
Electromagnetic four-potential.
In the context of special relativity, it is natural to join the magnetic vector potential together with the (scalar) electric potential into the electromagnetic potential, also called "four-potential".
One motivation for doing so is that the four-potential is a mathematical four-vector. Thus, using standard four-vector transformation rules, if the electric and magnetic potentials are known in one inertial reference frame, they can be simply calculated in any other inertial reference frame.
Another, related motivation is that the content of classical electromagnetism can be written in a concise and convenient form using the electromagnetic four potential, especially when the Lorenz gauge is used. In particular, in abstract index notation, the set of Maxwell's equations (in the Lorenz gauge) may be written (in Gaussian units) as follows:
formula_72
where formula_73 is the d'Alembertian and formula_74 is the four-current. The first equation is the Lorenz gauge condition while the second contains Maxwell's equations. The four-potential also plays a very important role in quantum electrodynamics.
Charged particle in a field.
In a field with electric potential formula_54 and magnetic potential formula_75, the Lagrangian (formula_76) and the Hamiltonian (formula_77) of a particle with mass formula_78 and charge formula_79 areformula_80
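The Euler–Lagrange equations of this Lagrangian give the Lorentz force law. The snippet below is a small sketch of that motion, assuming NumPy and SciPy are available; the static, uniform magnetic field written in the symmetric gauge (with zero scalar potential) is chosen purely as an example and is not part of the original text.
```python
import numpy as np
from scipy.integrate import solve_ivp

q, m, B0 = 1.0, 1.0, 1.0        # charge, mass, field strength in illustrative units

def A_field(r):
    """Symmetric-gauge vector potential of a uniform field B = B0 in the z direction."""
    x, y, _ = r
    return np.array([-0.5 * B0 * y, 0.5 * B0 * x, 0.0])

def curl_A(r, h=1e-6):
    """Numerical curl of A at r (central differences), giving B = curl A."""
    def d(i, j):                 # partial derivative of A_i with respect to x_j
        e = np.zeros(3); e[j] = h
        return (A_field(r + e)[i] - A_field(r - e)[i]) / (2 * h)
    return np.array([d(2, 1) - d(1, 2), d(0, 2) - d(2, 0), d(1, 0) - d(0, 1)])

def rhs(t, state):
    r, v = state[:3], state[3:]
    B = curl_A(r)                          # phi = 0 and A static, so E = 0 here
    a = (q / m) * np.cross(v, B)           # Lorentz force from the Euler-Lagrange equations
    return np.concatenate([v, a])

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0, 0.0, 0.0, 1.0, 0.0], max_step=0.01)
# The orbit is the expected cyclotron circle of radius m*|v| / (q*B0) = 1.
```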
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\nabla \\times \\mathbf{A} = \\mathbf{B}"
},
{
"math_id": 1,
"text": "\\mathbf{A}"
},
{
"math_id": 2,
"text": "\\phi"
},
{
"math_id": 3,
"text": "\\mathbf{B} = \\nabla \\times \\mathbf{A}\\ , \\quad \\mathbf{E} = -\\nabla \\phi - \\frac{ \\partial \\mathbf{A} }{ \\partial t }"
},
{
"math_id": 4,
"text": "\\mathbf{B}"
},
{
"math_id": 5,
"text": "\\mathbf{E} "
},
{
"math_id": 6,
"text": "\\begin{align}\n \\nabla \\cdot \\mathbf{B} &= \\nabla \\cdot \\left(\\nabla \\times \\mathbf{A}\\right) = 0\\ ,\\\\\n \\nabla \\times \\mathbf{E} &= \\nabla \\times \\left( -\\nabla\\phi - \\frac{ \\partial\\mathbf{A} }{ \\partial t } \\right) = -\\frac{ \\partial }{ \\partial t } \\left(\\nabla \\times \\mathbf{A}\\right) = -\\frac{ \\partial \\mathbf{B} }{ \\partial t } ~.\n\\end{align}"
},
{
"math_id": 7,
"text": "\\nabla \\cdot \\mathbf{B} = 0"
},
{
"math_id": 8,
"text": " \\mathbf{A}"
},
{
"math_id": 9,
"text": "\\mathbf{A} "
},
{
"math_id": 10,
"text": "q \\mathbf{A} "
},
{
"math_id": 11,
"text": "\\Gamma"
},
{
"math_id": 12,
"text": "\\Phi_{\\mathbf{B}}"
},
{
"math_id": 13,
"text": "S"
},
{
"math_id": 14,
"text": "\\oint_\\Gamma \\mathbf{A}\\, \\cdot\\ d{\\mathbf{\\Gamma}} = \\iint_S \\nabla\\times\\mathbf{A}\\ \\cdot\\ d \\mathbf{S} = \\Phi_\\mathbf{B} ~."
},
{
"math_id": 15,
"text": "\\ \\nabla \\cdot \\mathbf{A} + \\frac{1}{\\ c^2} \\frac{\\partial \\phi}{\\partial t} = 0 "
},
{
"math_id": 16,
"text": "\\ \\nabla \\cdot \\mathbf{A} = 0 "
},
{
"math_id": 17,
"text": "\\begin{align}\n \\nabla^2\\phi - \\frac{1}{c^2} \\frac{\\partial^2 \\phi}{\\ \\partial t^2} &= - \\frac{\\rho}{\\epsilon_0} \\\\[2.734ex]\n\n\\end{align}"
},
{
"math_id": 18,
"text": "\\begin{align}\n \\nabla^2\\mathbf{A} - \\frac{1}{\\ c^2} \\frac{\\partial^2 \\mathbf{A}}{\\ \\partial t^2} &= - \\mu_0\\ \\mathbf{J}\n\\end{align}"
},
{
"math_id": 19,
"text": "\\mathbf{A}(\\mathbf{r}, t) "
},
{
"math_id": 20,
"text": "\\phi(\\mathbf{r}, t)"
},
{
"math_id": 21,
"text": "\\mathbf{J}(\\mathbf{r}, t)"
},
{
"math_id": 22,
"text": "\\rho(\\mathbf{r}, t) "
},
{
"math_id": 23,
"text": "\\Omega "
},
{
"math_id": 24,
"text": "\\rho"
},
{
"math_id": 25,
"text": "\\mathbf{J}"
},
{
"math_id": 26,
"text": "\\begin{align}\n \\mathbf{A}\\!\\left(\\mathbf{r}, t\\right) &= \\frac{\\mu_0}{\\ 4\\pi\\ } \\int_\\Omega \\frac{ \\mathbf{J}\\left(\\mathbf{r}' , t'\\right) } R\\ d^3\\mathbf{r}' \\\\\n \\phi\\!\\left(\\mathbf{r}, t\\right) &= \\frac{1}{4\\pi\\epsilon_0} \\int_\\Omega \\frac{ \\rho \\left(\\mathbf{r}', t'\\right) } R\\ d^3\\mathbf{r}'\n\\end{align}"
},
{
"math_id": 27,
"text": "\\mathbf{r}"
},
{
"math_id": 28,
"text": "t"
},
{
"math_id": 29,
"text": "\\mathbf{r}' "
},
{
"math_id": 30,
"text": "t' ."
},
{
"math_id": 31,
"text": "\\mathbf{r}'"
},
{
"math_id": 32,
"text": "\\Omega"
},
{
"math_id": 33,
"text": "t'"
},
{
"math_id": 34,
"text": " R = \\bigl\\|\\mathbf{r} - \\mathbf{r}' \\bigr\\| ~."
},
{
"math_id": 35,
"text": " t' = t - \\frac{\\ R \\ }{c} ~."
},
{
"math_id": 36,
"text": "\\ \\nabla \\cdot \\mathbf{A} + \\frac{1}{\\ c^2}\\frac{\\partial\\phi}{\\partial t} = 0 ~."
},
{
"math_id": 37,
"text": "\\mathbf{r} ."
},
{
"math_id": 38,
"text": "\\mathbf{r} "
},
{
"math_id": 39,
"text": " \\mathbf{r}' "
},
{
"math_id": 40,
"text": "t'."
},
{
"math_id": 41,
"text": "\\begin{align}\n A_x\\left(\\mathbf{r}, t\\right) &= \\frac{\\mu_0}{\\ 4\\pi\\ } \\int_\\Omega\\frac{J_x\\left(\\mathbf{r}', t'\\right)}R\\ d^3\\mathbf{r}'\\ , \\qquad\n A_y\\left(\\mathbf{r}, t\\right) &= \\frac{\\mu_0}{\\ 4\\pi\\ } \\int_\\Omega\\frac{J_y\\left(\\mathbf{r}', t'\\right)}R\\ d^3\\mathbf{r}'\\ ,\\qquad\n A_z\\left(\\mathbf{r}, t\\right) &= \\frac{\\mu_0}{\\ 4\\pi\\ } \\int_\\Omega\\frac{J_z\\left(\\mathbf{r}', t'\\right)}R\\ d^3\\mathbf{r}' ~.\n\\end{align}"
},
{
"math_id": 42,
"text": "\\mathbf{J} "
},
{
"math_id": 43,
"text": "\\nabla \\cdot \\mathbf{A} + \\frac{j \\omega}{c^2} \\phi= 0 \\qquad "
},
{
"math_id": 44,
"text": "\\qquad \\phi = \\frac {j \\omega}{k^2} \\nabla \\cdot \\mathbf{A} "
},
{
"math_id": 45,
"text": " \\mathbf{A}\\!\\left(\\mathbf{r}, \\omega \\right) = \\frac{\\mu_0}{\\ 4\\pi\\ } \\int_\\Omega \\frac{ \\mathbf{J}\\left(\\mathbf{r}' , \\omega \\right) }R\\ e^{-jkR} d^3 \\mathbf{r}' \\qquad\n\n\\phi\\!\\left(\\mathbf{r}, \\omega \\right) = \\frac{1}{4\\pi\\epsilon_0} \\int_\\Omega \\frac{ \\rho \\left(\\mathbf{r}', \\omega \\right) } R \\ e^{-jkR} d^3 \\mathbf{r}'"
},
{
"math_id": 46,
"text": " \\nabla^2 \\phi + k^2 \\phi = - \\frac{\\rho}{\\epsilon_0} \\qquad \\nabla^2 \\mathbf{A} + k^2 \\mathbf{A} = - \\mu_0\\ \\mathbf{J} . "
},
{
"math_id": 47,
"text": "\\mathbf{B} = \\nabla \\times \\mathbf{A}\\ \\qquad \\mathbf{E} = -\\nabla \\phi - j \\omega \\mathbf{A} = - j \\omega \\mathbf{A} -j \\frac {\\omega}{k^2} \\nabla ( \\nabla \\cdot \\mathbf{A} )"
},
{
"math_id": 48,
"text": " \\phi "
},
{
"math_id": 49,
"text": " \\rho "
},
{
"math_id": 50,
"text": " \\mathbf{A}, \\mathbf{B}, \\mathbf{E}, "
},
{
"math_id": 51,
"text": " \\mathbf{J} "
},
{
"math_id": 52,
"text": " k = \\frac \\omega c "
},
{
"math_id": 53,
"text": "\\ \\mathbf{A}\\ "
},
{
"math_id": 54,
"text": "\\ \\phi\\ "
},
{
"math_id": 55,
"text": " \\phi = -\\frac {c^2}{j \\omega}\\nabla \\cdot \\mathbf{A}. "
},
{
"math_id": 56,
"text": "\\ \\mathbf{r}\\ ,"
},
{
"math_id": 57,
"text": "\\ \\mathbf{r}'\\ "
},
{
"math_id": 58,
"text": "\\ \\mathbf{r} ~."
},
{
"math_id": 59,
"text": "\\ \\mathbf{r}\\ "
},
{
"math_id": 60,
"text": " e^{-jkR} "
},
{
"math_id": 61,
"text": "\\begin{align}\n \\mathbf{A}_x\\!\\left(\\mathbf{r}, \\omega \\right) &= \\frac{\\mu_0}{4\\pi} \\int_\\Omega \\frac{ \\mathbf{J}_x \\left(\\mathbf{r}' , \\omega \\right) }R\\ e^{-jkR} \\ d^3\\mathbf{r}', \n\\qquad \\mathbf{A}_y\\!\\left(\\mathbf{r}, \\omega \\right) &= \\frac{\\mu_0}{4\\pi} \\int_\\Omega \\frac{ \\mathbf{J}_y \\left(\\mathbf{r}' , \\omega \\right) }R\\ e^{-jkR} \\ d^3\\mathbf{r}', \n\\qquad \\mathbf{A}_z\\!\\left(\\mathbf{r}, \\omega \\right) &= \\frac{\\mu_0}{4\\pi} \\int_\\Omega \\frac{ \\mathbf{J}_z \\left(\\mathbf{r}' , \\omega \\right) }R\\ e^{-jkR} \\ d^3\\mathbf{r}' \n\\end{align}"
},
{
"math_id": 62,
"text": "\\ \\mathbf{J}\\ "
},
{
"math_id": 63,
"text": "\\ \\mathbf{B}\\ "
},
{
"math_id": 64,
"text": "\\nabla \\times \\mathbf{B} = \\mu_0\\ \\mathbf{J}"
},
{
"math_id": 65,
"text": "\\frac{\\ \\partial\\mathbf{E}\\ }{\\partial t} \\to 0\\ "
},
{
"math_id": 66,
"text": "\\ \\nabla \\times \\mathbf{A} = \\mathbf{B}"
},
{
"math_id": 67,
"text": "\\ \\mathbf{J} ~."
},
{
"math_id": 68,
"text": " \\mathbf{A} "
},
{
"math_id": 69,
"text": " \\nabla \\cdot \\mathbf{A} = 0"
},
{
"math_id": 70,
"text": "\\rho = 0"
},
{
"math_id": 71,
"text": "\\frac{1}{c} \\frac{\\partial\\phi}{\\partial t} "
},
{
"math_id": 72,
"text": "\\begin{align}\n \\partial^\\nu A_\\nu &= 0 \\\\\n \\Box^2 A_\\nu &= \\frac{ 4\\pi }{\\ c\\ }\\ J_\\nu\n\\end{align}"
},
{
"math_id": 73,
"text": "\\ \\Box^2\\ "
},
{
"math_id": 74,
"text": "\\ J\\ "
},
{
"math_id": 75,
"text": "\\ \\mathbf{A}"
},
{
"math_id": 76,
"text": "\\ \\mathcal{L}\\ "
},
{
"math_id": 77,
"text": "\\ \\mathcal{H}\\ "
},
{
"math_id": 78,
"text": "\\ m\\ "
},
{
"math_id": 79,
"text": "\\ q\\ "
},
{
"math_id": 80,
"text": "\\begin{aligned}\n\\mathcal{L} &= \\frac{1}{2} m\\ \\mathbf v^2 + q\\ \\mathbf v \\cdot \\mathbf A - q\\ \\phi\\ ,\\\\\n\\mathcal{H} &= \\frac{1}{2m}\\left(q\\ \\mathbf A- \\mathbf{p}\\right)^2 + q\\ \\phi ~.\n\\end{aligned}"
}
] | https://en.wikipedia.org/wiki?curid=1191067 |
1191076 | Graviscalar | Hypothetical particle
In theoretical physics, the hypothetical particle called the graviscalar or radion emerges as an excitation of general relativity's metric tensor, i.e. gravitational field, but is indistinguishable from a scalar in four dimensions, as shown in Kaluza–Klein theory. The scalar field formula_0 comes from a component of the metric tensor formula_1 where the figure 5 labels an additional fifth dimension. The only variations in the scalar field represent variations in the size of the extra dimension. Also, in models with multiple extra dimensions, there exist several such particles. Moreover, in theories with extended supersymmetry, a graviscalar is usually a superpartner of the graviton that behaves as a particle with spin 0. This concept closely relates to the gauged Higgs models. | [
{
"math_id": 0,
"text": "\\phi"
},
{
"math_id": 1,
"text": "g_{55}"
}
] | https://en.wikipedia.org/wiki?curid=1191076 |
1191101 | Electric displacement field | Vector field related to displacement current and flux density
In physics, the electric displacement field (denoted by D) or electric induction is a vector field that appears in Maxwell's equations. It accounts for the electromagnetic effects of polarization and of an electric field, combining the two in an auxiliary field. It plays a major role in topics such as the capacitance of a material, the response of dielectrics to an electric field, how shapes can change due to electric fields in piezoelectricity or flexoelectricity, and the creation of voltages and charge transfer due to elastic strains.
In any material, if there is an inversion center then the charges at, for instance, formula_0 and formula_1 are the same. This means that there is no dipole. If an electric field is applied to an insulator, then (for instance) the negative charges can move slightly towards the positive side of the field, and the positive charges in the other direction. This leads to an induced dipole, which is described as a polarization. There can be slightly different movements of the negative electrons and positive nuclei in molecules, or different displacements of the atoms in an ionic compound. Materials which do not have an inversion center display piezoelectricity and always have a polarization; in others, spatially varying strains can break the inversion symmetry and lead to polarization, the flexoelectric effect. Other stimuli, such as magnetic fields, can lead to polarization in some materials, this being called the magnetoelectric effect.
Definition.
The electric displacement field "D" is defined asformula_2where formula_3 is the vacuum permittivity (also called permittivity of free space), and P is the (macroscopic) density of the permanent and induced electric dipole moments in the material, called the polarization density.
The displacement field satisfies Gauss's law in a dielectric:
formula_4
In this equation, formula_5 is the number of free charges per unit volume. These charges are the ones that have made the volume non-neutral, and they are sometimes referred to as the space charge. This equation says, in effect, that the flux lines of D must begin and end on the free charges. In contrast, formula_6 is the density of all those charges that are part of a dipole, each of which is neutral. In the example of an insulating dielectric between metal capacitor plates, the only free charges are on the metal plates and the dielectric contains only dipoles. If the dielectric is replaced by a doped semiconductor or an ionised gas, etc., then electrons move relative to the ions, and if the system is finite they both contribute to formula_5 at the edges.
D is not determined exclusively by the free charge. As E has a curl of zero in electrostatic situations, it follows that
formula_7
The effect of this equation can be seen in the case of an object with a "frozen in" polarization like a bar electret, the electric analogue to a bar magnet. There is no free charge in such a material, but the inherent polarization gives rise to an electric field, demonstrating that the D field is not determined entirely by the free charge. The electric field is determined by using the above relation along with other boundary conditions on the polarization density to yield the bound charges, which will, in turn, yield the electric field.
In a linear, homogeneous, isotropic dielectric with instantaneous response to changes in the electric field, P depends linearly on the electric field,
formula_8
where the constant of proportionality formula_9 is called the electric susceptibility of the material. Thus
formula_10
where "ε" = "ε"0 "ε"r is the permittivity, and "ε"r = 1 + "χ" the relative permittivity of the material.
In linear, homogeneous, isotropic media, "ε" is a constant. However, in linear anisotropic media it is a tensor, and in nonhomogeneous media it is a function of position inside the medium. It may also depend upon the electric field (nonlinear materials) and have a time dependent response. Explicit time dependence can arise if the materials are physically moving or changing in time (e.g. reflections off a moving interface give rise to Doppler shifts). A different form of time dependence can arise in a time-invariant medium, as there can be a time delay between the imposition of the electric field and the resulting polarization of the material. In this case, P is a convolution of the impulse response susceptibility "χ" and the electric field E. Such a convolution takes on a simpler form in the frequency domain: by Fourier transforming the relationship and applying the convolution theorem, one obtains the following relation for a linear time-invariant medium:
formula_11
where formula_12 is the frequency of the applied field. The constraint of causality leads to the Kramers–Kronig relations, which place limitations upon the form of the frequency dependence. The phenomenon of a frequency-dependent permittivity is an example of material dispersion. In fact, all physical materials have some material dispersion because they cannot respond instantaneously to applied fields, but for many problems (those concerned with a narrow enough bandwidth) the frequency-dependence of "ε" can be neglected.
At a boundary, formula_13, where "σ"f is the free charge density and the unit normal formula_14 points in the direction from medium 2 to medium 1.
History.
The earliest known use of the term is from the year 1864, in James Clerk Maxwell's paper "A Dynamical Theory of the Electromagnetic Field". Maxwell introduced the term D, specific capacity of electric induction, in a form different from the modern and familiar notations.
It was Oliver Heaviside who reformulated the complicated Maxwell's equations to the modern form. It wasn't until 1884 that Heaviside, concurrently with Willard Gibbs and Heinrich Hertz, grouped the equations together into a distinct set. This group of four equations was known variously as the Hertz–Heaviside equations and the Maxwell–Hertz equations, and is sometimes still known as the Maxwell–Heaviside equations; hence, it was probably Heaviside who lent D its present significance.
Example: Displacement field in a capacitor.
Consider an infinite parallel plate capacitor where the space between the plates is empty or contains a neutral, insulating medium. In both cases, the free charges are only on the metal capacitor plates. Since the flux lines D end on free charges, and there are the same number of uniformly distributed charges of opposite sign on both plates, then the flux lines must all simply traverse the capacitor from one side to the other. In SI units, the charge density on the plates is proportional to the value of the D field between the plates. This follows directly from Gauss's law, by integrating over a small rectangular box straddling one plate of the capacitor:
formula_15 formula_16
On the sides of the box, dA is perpendicular to the field, so the integral over this section is zero, as is the integral on the face that is outside the capacitor where D is zero. The only surface that contributes to the integral is therefore the surface of the box inside the capacitor, and hence
formula_17
where "A" is the surface area of the top face of the box and formula_18 is the free surface charge density on the positive plate. If the space between the capacitor plates is filled with a linear homogeneous isotropic dielectric with permittivity formula_19, then there is a polarization induced in the medium, formula_20 and so the voltage difference between the plates is
formula_21
where "d" is their separation.
Introducing the dielectric increases "ε" by a factor formula_22 and either the voltage difference between the plates will be smaller by this factor, or the charge must be higher. The partial cancellation of fields in the dielectric allows a larger amount of free charge to dwell on the two plates of the capacitor per unit of potential drop than would be possible if the plates were separated by vacuum.
If the distance "d" between the plates of a "finite" parallel plate capacitor is much smaller than its lateral dimensions
we can approximate it using the infinite case and obtain its capacitance as
formula_23
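The relations worked out in this example can be collected into a short numerical sketch (SI units; the function name and the idealized, fringe-field-free geometry are assumptions made for illustration).
```python
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m

def parallel_plate(Q_free, area, d, eps_r=1.0):
    """Fields, voltage and capacitance of an idealized parallel-plate capacitor.
    Q_free : free charge on the positive plate (C)
    area   : plate area (m^2);  d : plate separation (m);  eps_r : relative permittivity
    """
    sigma_f = Q_free / area              # free surface charge density
    D = sigma_f                          # Gauss's law for D: |D| * A = |Q_free|
    E = D / (EPS0 * eps_r)               # the dielectric reduces E by a factor eps_r
    P = D - EPS0 * E                     # induced polarization (zero in vacuum)
    V = E * d                            # voltage between the plates
    C = Q_free / V                       # equals eps_r * EPS0 * area / d
    return D, E, P, V, C
```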
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "+x"
},
{
"math_id": 1,
"text": "-x"
},
{
"math_id": 2,
"text": "\\mathbf{D} \\equiv \\varepsilon_{0} \\mathbf{E} + \\mathbf{P},"
},
{
"math_id": 3,
"text": "\\varepsilon_{0}"
},
{
"math_id": 4,
"text": " \\nabla\\cdot\\mathbf{D} = \\rho -\\rho_\\text{b} = \\rho_\\text{f} "
},
{
"math_id": 5,
"text": "\\rho_\\text{f}"
},
{
"math_id": 6,
"text": "\\rho_\\text{b}"
},
{
"math_id": 7,
"text": "\\nabla \\times \\mathbf{D} = \\nabla \\times \\mathbf{P}"
},
{
"math_id": 8,
"text": "\\mathbf{P} = \\varepsilon_{0} \\chi \\mathbf{E},"
},
{
"math_id": 9,
"text": "\\chi"
},
{
"math_id": 10,
"text": "\\mathbf{D} = \\varepsilon_{0} (1+\\chi) \\mathbf{E} = \\varepsilon \\mathbf{E}"
},
{
"math_id": 11,
"text": " \\mathbf{D}(\\omega) = \\varepsilon (\\omega) \\mathbf{E}(\\omega) , "
},
{
"math_id": 12,
"text": "\\omega"
},
{
"math_id": 13,
"text": "(\\mathbf{D_1} - \\mathbf{D_2})\\cdot \\hat{\\mathbf{n}} = D_{1,\\perp} - D_{2,\\perp} = \\sigma_\\text{f} "
},
{
"math_id": 14,
"text": "\\mathbf{\\hat{n}}"
},
{
"math_id": 15,
"text": "\\scriptstyle _A"
},
{
"math_id": 16,
"text": "\\mathbf{D} \\cdot \\mathrm{d}\\mathbf{A}=Q_\\text{free}"
},
{
"math_id": 17,
"text": "|\\mathbf{D}| A = |Q_\\text{free}|,"
},
{
"math_id": 18,
"text": "Q_\\text{free}/A=\\rho_\\text{f}"
},
{
"math_id": 19,
"text": "\\varepsilon =\\varepsilon_0\\varepsilon_r"
},
{
"math_id": 20,
"text": "\\mathbf{D}=\\varepsilon_0\\mathbf{E}+\\mathbf{P}=\\varepsilon\\mathbf{E}"
},
{
"math_id": 21,
"text": " V =|\\mathbf{E}| d =\\frac{|\\mathbf{D}|d}{\\varepsilon}= \\frac{|Q_\\text{free}|d}{\\varepsilon A}"
},
{
"math_id": 22,
"text": "\\varepsilon_r"
},
{
"math_id": 23,
"text": "C = \\frac{Q_\\text{free}}{V} \\approx \\frac{Q_\\text{free}}{|\\mathbf{E}| d} = \\frac{A}{d} \\varepsilon,"
}
] | https://en.wikipedia.org/wiki?curid=1191101 |
1191117 | Peierls bracket | Theoretical physics
In theoretical physics, the Peierls bracket is an equivalent description of the Poisson bracket. It can be defined directly from the action and does not require the canonical coordinates and their canonical momenta to be defined in advance.
The bracket
formula_0
is defined as
formula_1,
that is, as some kind of action of one quantity on the other, minus the same action with the roles of the two quantities interchanged.
In quantum mechanics, the Peierls bracket becomes a commutator, i.e., a Lie bracket.
References.
"This article incorporates material from the Citizendium article "", which is licensed under the but not under the ."
Peierls, R. "The Commutation Laws of Relativistic Field Theory,"
Proc. R. Soc. Lond. August 21, 1952 214 1117 143-157. | [
{
"math_id": 0,
"text": "[A,B]"
},
{
"math_id": 1,
"text": "D_A(B)-D_B(A)"
}
] | https://en.wikipedia.org/wiki?curid=1191117 |
1191490 | Pairing | Bilinear map in mathematics
In mathematics, a pairing is an "R"-bilinear map from the Cartesian product of two "R"-modules to a third "R"-module, where the underlying ring "R" is commutative.
Definition.
Let "R" be a commutative ring with unit, and let "M", "N" and "L" be "R"-modules.
A pairing is any "R"-bilinear map formula_0. That is, it satisfies
formula_1,
formula_2 and formula_3
for any formula_4 and any formula_5 and any formula_6. Equivalently, a pairing is an "R"-linear map
formula_7
where formula_8 denotes the tensor product of "M" and "N".
A pairing can also be considered as an "R"-linear map
formula_9, which matches the first definition by setting
formula_10.
A pairing is called perfect if the above map formula_11 is an isomorphism of "R"-modules.
A pairing is called non-degenerate on the right if for the above map we have that formula_12 for all formula_13 implies formula_14; similarly, formula_15 is called non-degenerate on the left if formula_12 for all formula_16 implies formula_17.
A pairing is called alternating if formula_18 and formula_19 for all "m". In particular, this implies formula_20, while bilinearity shows formula_21. Thus, for an alternating pairing, formula_22.
Examples.
Any scalar product on a real vector space "V" is a pairing (set "M" = "N" = "V", "R" = R in the above definitions).
The determinant map (2 × 2 matrices over "k") → "k" can be seen as a pairing formula_23.
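A short sketch checking, over a few integer vectors, that the determinant map is bilinear and alternating; the helper name is ours.
```python
def det_pairing(u, v):
    """Determinant pairing k^2 x k^2 -> k: e(u, v) = u0*v1 - u1*v0."""
    return u[0] * v[1] - u[1] * v[0]

u, v, w, r = (1, 2), (3, 5), (4, -1), 7
assert det_pairing((r * u[0], r * u[1]), v) == r * det_pairing(u, v)                        # R-linearity
assert det_pairing((u[0] + w[0], u[1] + w[1]), v) == det_pairing(u, v) + det_pairing(w, v)  # additivity
assert det_pairing(u, u) == 0                                                               # alternating
```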
The Hopf map formula_24 written as formula_25 is an example of a pairing. For instance, Hardie et al. present an explicit construction of the map using poset models.
Pairings in cryptography.
In cryptography, often the following specialized definition is used:
Let formula_26 be additive groups and formula_27 a multiplicative group, all of prime order formula_28. Let formula_29 be generators of formula_30 and formula_31 respectively.
A pairing is a map: formula_32
for which the following holds: bilinearity, formula_33; non-degeneracy, formula_34; and computability, i.e., formula_35 can be computed efficiently.
Note that it is also common in cryptographic literature for all groups to be written in multiplicative notation.
In cases when formula_36, the pairing is called symmetric. As formula_37 is cyclic, the map formula_38 will be commutative; that is, for any formula_39, we have formula_40. This is because for a generator formula_41, there exist integers formula_42, formula_43 such that formula_44 and formula_45. Therefore formula_46.
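The bilinearity and non-degeneracy axioms can be illustrated with a deliberately toy pairing on groups of prime order 11. This is only a didactic sketch: it is not a cryptographically useful pairing (discrete logarithms in these groups are trivial), and real constructions use, for example, the Weil or Tate pairing on elliptic curves.
```python
# Toy symmetric pairing e: Z_p x Z_p -> <g> in Z_q^*, with e(a, b) = g^(a*b mod p).
p, q, g = 11, 23, 2          # g = 2 has multiplicative order 11 modulo 23 (2**11 % 23 == 1)

def e(a, b):
    return pow(g, (a * b) % p, q)

P, Q = 1, 1                  # generators of G1 = G2 = (Z_p, +)
a, b = 4, 9
assert e(a * P % p, b * Q % p) == pow(e(P, Q), a * b, q)   # bilinearity: e(aP, bQ) = e(P, Q)^(ab)
assert e(P, Q) != 1                                        # non-degeneracy
```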
The Weil pairing is an important concept in elliptic curve cryptography; e.g., it may be used to attack certain elliptic curves (see MOV attack). It and other pairings have been used to develop identity-based encryption schemes.
Slightly different usages of the notion of pairing.
Scalar products on complex vector spaces are sometimes called pairings, although they are not bilinear.
For example, in representation theory, one has a scalar product on the characters of complex representations of a finite group which is frequently called character pairing. | [
{
"math_id": 0,
"text": "e:M \\times N \\to L"
},
{
"math_id": 1,
"text": "e(r\\cdot m,n)=e(m,r \\cdot n)=r\\cdot e(m,n)"
},
{
"math_id": 2,
"text": "e(m_1+m_2,n)=e(m_1,n)+e(m_2,n)"
},
{
"math_id": 3,
"text": "e(m,n_1+n_2)=e(m,n_1)+e(m,n_2)"
},
{
"math_id": 4,
"text": "r \\in R"
},
{
"math_id": 5,
"text": "m,m_1,m_2 \\in M"
},
{
"math_id": 6,
"text": "n,n_1,n_2 \\in N "
},
{
"math_id": 7,
"text": "M \\otimes_R N \\to L"
},
{
"math_id": 8,
"text": "M \\otimes_R N"
},
{
"math_id": 9,
"text": "\\Phi : M \\to \\operatorname{Hom}_{R} (N, L) "
},
{
"math_id": 10,
"text": "\\Phi (m) (n) := e(m,n) "
},
{
"math_id": 11,
"text": " \\Phi "
},
{
"math_id": 12,
"text": " e(m,n) = 0 "
},
{
"math_id": 13,
"text": "m"
},
{
"math_id": 14,
"text": " n=0 "
},
{
"math_id": 15,
"text": "e"
},
{
"math_id": 16,
"text": "n"
},
{
"math_id": 17,
"text": " m=0 "
},
{
"math_id": 18,
"text": " N=M "
},
{
"math_id": 19,
"text": " e(m,m) = 0 "
},
{
"math_id": 20,
"text": "e(m+n,m+n)=0"
},
{
"math_id": 21,
"text": "e(m+n,m+n)=e(m,m)+e(m,n)+e(n,m)+e(n,n)=e(m,n)+e(n,m)"
},
{
"math_id": 22,
"text": "e(m,n)=-e(n,m)"
},
{
"math_id": 23,
"text": "k^2 \\times k^2 \\to k"
},
{
"math_id": 24,
"text": "S^3 \\to S^2"
},
{
"math_id": 25,
"text": "h:S^2 \\times S^2 \\to S^2 "
},
{
"math_id": 26,
"text": "\\textstyle G_1, G_2"
},
{
"math_id": 27,
"text": "\\textstyle G_T"
},
{
"math_id": 28,
"text": "\\textstyle p"
},
{
"math_id": 29,
"text": "\\textstyle P \\in G_1, Q \\in G_2"
},
{
"math_id": 30,
"text": "\\textstyle G_1"
},
{
"math_id": 31,
"text": "\\textstyle G_2"
},
{
"math_id": 32,
"text": " e: G_1 \\times G_2 \\rightarrow G_T "
},
{
"math_id": 33,
"text": "\\textstyle \\forall a,b \\in \\mathbb{Z}:\\ e\\left(aP, bQ\\right) = e\\left(P, Q\\right)^{ab}"
},
{
"math_id": 34,
"text": "\\textstyle e\\left(P, Q\\right) \\neq 1"
},
{
"math_id": 35,
"text": "\\textstyle e"
},
{
"math_id": 36,
"text": "\\textstyle G_1 = G_2 = G"
},
{
"math_id": 37,
"text": "\\textstyle G"
},
{
"math_id": 38,
"text": " e "
},
{
"math_id": 39,
"text": " P,Q \\in G "
},
{
"math_id": 40,
"text": " e(P,Q) = e(Q,P) "
},
{
"math_id": 41,
"text": " g \\in G "
},
{
"math_id": 42,
"text": " p "
},
{
"math_id": 43,
"text": " q "
},
{
"math_id": 44,
"text": " P = g^p "
},
{
"math_id": 45,
"text": " Q=g^q "
},
{
"math_id": 46,
"text": " e(P,Q) = e(g^p,g^q) = e(g,g)^{pq} = e(g^q, g^p) = e(Q,P) "
}
] | https://en.wikipedia.org/wiki?curid=1191490 |
11915675 | Earth mover's distance | Distance between probability distributions
In computer science, the earth mover's distance (EMD) is a measure of dissimilarity between two frequency distributions, densities, or measures, over a metric space "D".
Informally, if the distributions are interpreted as two different ways of piling up earth (dirt) over "D", the EMD captures the minimum cost of building the smaller pile using dirt taken from the larger, where cost is defined as the amount of dirt moved multiplied by the distance over which it is moved.
Over probability distributions, the earth mover's distance is also known as the Wasserstein metric formula_0, Kantorovich–Rubinstein metric, or Mallows's distance. It is the solution of the optimal transport problem, which in turn is also known as the Monge-Kantorovich problem, or sometimes the Hitchcock–Koopmans transportation problem; when the measures are uniform over a set of discrete elements, the same optimization problem is known as minimum weight bipartite matching.
Formal definitions.
The EMD between probability distributions formula_1 and formula_2 can be defined as an infimum over joint probabilities:
formula_3
where formula_4 is the set of all joint distributions whose marginals are formula_5 and formula_6.
By Kantorovich-Rubinstein duality, this can also be expressed as:
formula_7
where the supremum is taken over all 1-Lipschitz continuous functions, i.e. formula_8.
EMD between signatures.
In some applications, it is convenient to represent a distribution formula_1 as a "signature", or a collection of "clusters", where the formula_9-th cluster represents a feature of mass formula_10 centered at formula_11.
In this formulation, consider signatures formula_12 and formula_13. Let formula_14 be the matrix of ground distances between clusters formula_11 and formula_15. Then the EMD between formula_1 and formula_2 is given by the optimal flow formula_16, with formula_17 the flow between formula_11 and formula_15, that minimizes the overall cost.
formula_18
subject to the constraints:
formula_19
formula_20
formula_21
formula_22
The optimal flow formula_23 is found by solving this linear optimization problem. The earth mover's distance is defined as the work normalized by the total flow:
formula_24
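For small signatures, the transportation problem above can be fed directly to a generic linear-programming solver. The sketch below assumes SciPy is available; it is not the network-simplex or other specialized solvers normally used in practice, and the variable layout and helper names are our own.
```python
import numpy as np
from scipy.optimize import linprog

def emd_signatures(w_p, w_q, D):
    """EMD between two signatures via the transportation linear program.
    w_p : (m,) cluster weights of P;  w_q : (n,) cluster weights of Q
    D   : (m, n) matrix of ground distances d_ij
    """
    m, n = D.shape
    c = D.reshape(-1)                                # objective: sum_ij f_ij * d_ij
    A_ub = np.zeros((m + n, m * n))                  # row sums <= w_p, column sums <= w_q
    for i in range(m):
        A_ub[i, i * n:(i + 1) * n] = 1.0
    for j in range(n):
        A_ub[m + j, j::n] = 1.0
    b_ub = np.concatenate([w_p, w_q])
    A_eq = np.ones((1, m * n))                       # total flow = min(sum w_p, sum w_q)
    total_flow = min(np.sum(w_p), np.sum(w_q))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[total_flow], bounds=(0, None))
    return res.fun / total_flow                      # work normalized by the total flow

# two unit-weight clusters each, sitting at distance 0 from their counterparts -> EMD = 0.0
print(emd_signatures(np.array([1.0, 1.0]), np.array([1.0, 1.0]),
                     np.array([[0.0, 1.0], [1.0, 0.0]])))
```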
Variants and extensions.
Unequal probability mass.
Some applications may require the comparison of distributions with different total masses. One approach is to allow for "partial matching", where dirt from the more massive distribution is rearranged to make the less massive, and any leftover "dirt" is discarded at no cost.
Formally, let formula_25 be the total weight of formula_1, and formula_26 be the total weight of formula_2. We have:
formula_27
where formula_28 is the set of all measures whose projections are formula_29 and formula_30.
Note that this generalization of EMD is not a true distance between distributions, as it does not satisfy the triangle inequality.
An alternative approach is to allow for mass to be created or destroyed, on a global or local level, as an alternative to transportation, but with a cost penalty. In that case one must specify a real parameter formula_31, the ratio between the cost of creating or destroying one unit of "dirt", and the cost of transporting it by a unit distance. This is equivalent to minimizing the sum of the earth moving cost plus formula_31 times the "L"1 distance between the rearranged pile and the second distribution. The resulting measure formula_32 is a true distance function.
More than two distributions.
The EMD can be extended naturally to the case where more than two distributions are compared. In this case, the "distance" between the many distributions is defined as the optimal value of a linear program. This generalized EMD may be computed exactly using a greedy algorithm, and the resulting functional has been shown to be Minkowski additive and convex monotone.
Computing the EMD.
The EMD can be computed by solving an instance of the transportation problem, using any algorithm for the minimum-cost flow problem, e.g. the network simplex algorithm.
The Hungarian algorithm can be used to get the solution if the domain "D" is the set {0, 1}. If the domain is integral, it can be translated for the same algorithm by representing integral bins as multiple binary bins.
As a special case, if "D" is a one-dimensional array of "bins" of length "n", the EMD can be efficiently computed by scanning the array and keeping track of how much dirt needs to be transported between consecutive bins. Here the bins are zero-indexed:
formula_33
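A direct transcription of this one-dimensional scan, assuming equal total mass and unit spacing between bins (function name is ours):
```python
def emd_1d(P, Q):
    """EMD between two 1-D histograms with equal total mass and unit bin spacing."""
    total, carry = 0.0, 0.0
    for p_i, q_i in zip(P, Q):
        carry = p_i + carry - q_i     # EMD_{i+1} = P_i + EMD_i - Q_i
        total += abs(carry)           # total distance = sum_i |EMD_i|
    return total

print(emd_1d([0.0, 1.0, 0.0], [0.0, 0.0, 1.0]))   # one unit of dirt moved by one bin -> 1.0
```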
EMD-based similarity analysis.
EMD-based similarity analysis (EMDSA) is an important and effective tool in many multimedia information retrieval and pattern recognition applications. However, the computational cost of EMD is super-cubic in the number of "bins" for an arbitrary "D". Efficient and scalable EMD computation techniques for large scale data have been investigated using MapReduce, as well as bulk synchronous parallel and resilient distributed dataset.
Applications.
An early application of the EMD in computer science was to compare two grayscale images that may differ due to dithering, blurring, or local deformations. In this case, the region is the image's domain, and the total amount of light (or ink) is the "dirt" to be rearranged.
The EMD is widely used in content-based image retrieval to compute distances between the color histograms of two digital images. In this case, the region is the RGB color cube, and each image pixel is a parcel of "dirt". The same technique can be used for any other quantitative pixel attribute, such as luminance, gradient, apparent motion in a video frame, etc..
More generally, the EMD is used in pattern recognition to compare generic summaries or surrogates of data records called signatures. A typical signature consists of list of pairs (("x"1,"m"1), ... ("x""n","m""n")), where each "x""i" is a certain "feature" (e.g., color in an image, letter in a text, etc.), and "m""i" is "mass" (how many times that feature occurs in the record). Alternatively, "x""i" may be the centroid of a data cluster, and "m""i" the number of entities in that cluster. To compare two such signatures with the EMD, one must define a distance between features, which is interpreted as the cost of turning a unit mass of one feature into a unit mass of the other. The EMD between two signatures is then the minimum cost of turning one of them into the other.
EMD analysis has been used for quantitating multivariate changes in biomarkers measured by flow cytometry, with potential applications to other technologies that report distributions of measurements.
History.
The concept was first introduced by Gaspard Monge in 1781, in the context of transportation theory. The use of the EMD as a distance measure for monochromatic images was described in 1989 by S. Peleg, M. Werman and H. Rom. The name "earth mover's distance" was proposed by J. Stolfi in 1994, and was used in print in 1998 by Y. Rubner, C. Tomasi and L. G. Guibas.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "W_1"
},
{
"math_id": 1,
"text": "P"
},
{
"math_id": 2,
"text": "Q"
},
{
"math_id": 3,
"text": "\\text{EMD}(P,Q) = \\inf\\limits_{\\gamma \\in \\Pi(P, Q)} \\mathbb{E}_{(x, y) \\sim \\gamma}\\left[d(x, y)\\right]\\,"
},
{
"math_id": 4,
"text": "\\Pi(P, Q)"
},
{
"math_id": 5,
"text": "P"
},
{
"math_id": 6,
"text": "Q"
},
{
"math_id": 7,
"text": "\\text{EMD}(P,Q) = \\sup\\limits_{\\| f \\|_L \\leq 1} \\, \\mathbb{E}_{x \\sim P}[f(x)] - \\mathbb{E}_{y \\sim Q}[f(y)]\\,"
},
{
"math_id": 8,
"text": "\\| \\nabla f(x)\\| \\leq 1 \\quad \\forall x"
},
{
"math_id": 9,
"text": "i"
},
{
"math_id": 10,
"text": "w_{i}"
},
{
"math_id": 11,
"text": "p_i "
},
{
"math_id": 12,
"text": "P=\\{(p_1,w_{p1}),(p_2,w_{p2}),...,(p_m,w_{pm})\\} "
},
{
"math_id": 13,
"text": "Q=\\{(q_1,w_{q1}),(q_2,w_{q2}),...,(q_n,w_{qn})\\}"
},
{
"math_id": 14,
"text": "D=[d_{i,j}] "
},
{
"math_id": 15,
"text": "q_j "
},
{
"math_id": 16,
"text": "F=[f_{i,j}] "
},
{
"math_id": 17,
"text": "f_{i,j} "
},
{
"math_id": 18,
"text": "\\min\\limits_F {\\sum_{i=1}^m\\sum_{j=1}^n f_{i,j}d_{i,j}}"
},
{
"math_id": 19,
"text": "f_{i,j}\\ge0, 1\\le i \\le m, 1\\le j \\le n\n"
},
{
"math_id": 20,
"text": "\\sum_{j=1}^n {f_{i,j}} \\le w_{pi}, 1 \\le i \\le m"
},
{
"math_id": 21,
"text": "\\sum_{i=1}^m {f_{i,j}} \\le w_{qj}, 1 \\le j \\le n"
},
{
"math_id": 22,
"text": "\\sum_{i=1}^m\\sum_{j=1}^n f_{i,j} = \\min \\left\\{ \\ \\sum_{i=1}^m w_{pi}, \\quad \\sum_{j=1}^n w_{q j} \\ \\right\\}"
},
{
"math_id": 23,
"text": "F "
},
{
"math_id": 24,
"text": "\\text{EMD}(P,Q) = \\frac{\\sum_{i=1}^m \\sum_{j=1}^n f_{i,j}d_{i,j}}{\\sum_{i=1}^m \\sum_{j=1}^n f_{i,j}}"
},
{
"math_id": 25,
"text": "w_P"
},
{
"math_id": 26,
"text": "w_Q"
},
{
"math_id": 27,
"text": "\\text{EMD}(P,Q) = \\tfrac{1}{\\min(w_P, w_Q)} \\inf\\limits_{\\gamma \\in \\Pi_\\geq(P, Q)} \\int d(x,y) \\, \\mathrm{d} \\gamma(x,y)"
},
{
"math_id": 28,
"text": "\\Pi_\\geq(P, Q)"
},
{
"math_id": 29,
"text": "\\geq P"
},
{
"math_id": 30,
"text": "\\geq Q"
},
{
"math_id": 31,
"text": "\\alpha"
},
{
"math_id": 32,
"text": "\\widehat{EMD}_\\alpha"
},
{
"math_id": 33,
"text": "\n\\begin{align}\n\\text{EMD}_0 &= 0 \\\\\n\\text{EMD}_{i+1} &= P_i + \\text{EMD}_i - Q_i \\\\\n\\text{Total Distance} &= \\sum_{i=0}^{n}|\\text{EMD}_i|\n\\end{align}\n"
}
] | https://en.wikipedia.org/wiki?curid=11915675 |
1191951 | Glycated hemoglobin | Form of hemoglobin chemically linked to a sugar
Glycated hemoglobin (also called glycohemoglobin or glycosylated hemoglobin) is a form of hemoglobin (Hb) that is chemically linked to a sugar. Several types of glycated hemoglobin measures exist, of which HbA1c, or simply A1c, is a standard single test. Most monosaccharides, including glucose, galactose, and fructose, spontaneously (i.e. non-enzymatically) bond with hemoglobin when present in the bloodstream. However, glucose is only 21% as likely to do so as galactose and 13% as likely to do so as fructose, which may explain why glucose is used as the primary metabolic fuel in humans.
The excess formation of the sugar-hemoglobin linkage indicates the presence of excessive sugar in the bloodstream, and is an indicator of diabetes or other hormone diseases in high concentration (HbA1c >6.4%). A1c is of particular interest because it is easy to detect. The process by which sugars attach to hemoglobin is called glycation and the reference system is based on HbA1c, defined as beta-N-1-deoxy fructosyl hemoglobin as component.
HbA1c is measured primarily to determine the three-month average blood sugar level and is used as a standard diagnostic test for evaluating the risk of complications of diabetes and as an assessment of glycemic control. The test is considered a three-month average because the average lifespan of a red blood cell is three to four months. Normal levels of glucose produce a normal amount of glycated hemoglobin. As the average amount of plasma glucose increases, the fraction of glycated hemoglobin increases in a predictable way. In diabetes, higher amounts of glycated hemoglobin, indicating higher blood glucose levels, have been associated with cardiovascular disease, nephropathy, neuropathy, and retinopathy.
Terminology.
Glycated hemoglobin is preferred over glycosylated hemoglobin to reflect the correct (non-enzymatic) process. Early literature often used "glycosylated" as it was unclear which process was involved until further research was performed. The terms are still sometimes used interchangeably in English-language literature.
The naming of HbA1c derives from hemoglobin type A being separated on cation exchange chromatography. The first fraction to separate, probably considered to be pure hemoglobin A, was designated HbA0, and the following fractions were designated HbA1a, HbA1b, and HbA1c, in their order of elution. Improved separation techniques have subsequently led to the isolation of more subfractions.
History.
Hemoglobin A1c was first separated from other forms of hemoglobin by Huisman and Meyering in 1958 using a chromatographic column. It was first characterized as a glycoprotein by Bookchin and Gallop in 1968. Its increase in diabetes was first described in 1969 by Samuel Rahbar "et al." The reactions leading to its formation were characterized by Bunn and his coworkers in 1975.
The use of hemoglobin A1c for monitoring the degree of control of glucose metabolism in diabetic patients was proposed in 1976 by Anthony Cerami, Ronald Koenig and coworkers.
Damage mechanisms.
Glycated hemoglobin causes an increase of highly reactive free radicals inside blood cells, altering the properties of their cell membranes. This leads to blood cell aggregation and increased blood viscosity, which results in impaired blood flow.
Another way glycated hemoglobin causes damage is via inflammation, which results in atherosclerotic plaque (atheroma) formation. Free-radical build-up promotes the excitation of Fe2+-hemoglobin through Fe3+-Hb into abnormal ferryl hemoglobin (Fe4+-Hb). Fe4+ is unstable and reacts with specific amino acids in hemoglobin to regain its Fe3+ oxidation state. Hemoglobin molecules clump together via cross-linking reactions, and these hemoglobin clumps (multimers) promote cell damage and the release of Fe4+-hemoglobin into the matrix of innermost layers (subendothelium) of arteries and veins. This results in increased permeability of interior surface (endothelium) of blood vessels and production of pro-inflammatory monocyte adhesion proteins, which promote macrophage accumulation in blood vessel surfaces, ultimately leading to harmful plaques in these vessels.
Highly glycated Hb-AGEs go through vascular smooth muscle layer and inactivate acetylcholine-induced endothelium-dependent relaxation, possibly through binding to nitric oxide (NO), preventing its normal function. NO is a potent vasodilator and also inhibits formation of plaque-promoting LDLs (sometimes called "bad cholesterol") oxidized form.
This overall degradation of blood cells also releases heme from them. Loose heme can cause oxidation of endothelial and LDL proteins, which results in plaques.
Principle in medical diagnostics.
Glycation of proteins is a frequent occurrence, but in the case of hemoglobin, a nonenzymatic condensation reaction occurs between glucose and the N-end of the beta chain. This reaction produces a Schiff base (, R=beta chain, CHR'=glucose-derived), which is itself converted to 1-deoxyfructose. This second conversion is an example of an Amadori rearrangement.
When blood glucose levels are high, glucose molecules attach to the hemoglobin in red blood cells. The longer hyperglycemia occurs in blood, the more glucose binds to hemoglobin in the red blood cells and the higher the glycated hemoglobin.
Once a hemoglobin molecule is glycated, it remains that way. A buildup of glycated hemoglobin within the red cell, therefore, reflects the average level of glucose to which the cell has been exposed during its life-cycle. Measuring glycated hemoglobin assesses the effectiveness of therapy by monitoring long-term serum glucose regulation.
A1c is a weighted average of blood glucose levels during the life of the red blood cells (117 days in men and 106 days in women). Therefore, glucose levels on days nearer to the test contribute substantially more to the level of A1c than the levels in days further from the test.
This is also supported by data from clinical practice showing that HbA1c levels improved significantly after 20 days from start or intensification of glucose-lowering treatment.
Measurement.
Several techniques are used to measure hemoglobin A1c. Laboratories may use high-performance liquid chromatography, immunoassay, enzymatic assay, capillary electrophoresis, or boronate affinity chromatography. Point of care (e.g., doctor's office) devices use immunoassay or boronate affinity chromatography.
In the United States, HbA1c testing laboratories are certified by the National Glycohemoglobin Standardization Program to standardize them against the results of the 1993 Diabetes Control and Complications Trial (DCCT). An additional percentage scale, Mono S has previously been in use by Sweden and KO500 is in use in Japan.
Switch to IFCC units.
The American Diabetes Association, European Association for the Study of Diabetes, and International Diabetes Federation have agreed that, in the future, HbA1c is to be reported in the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) units. IFCC reporting was introduced in Europe except for the UK in 2003; the UK carried out dual reporting from 1 June 2009 until 1 October 2011.
Conversion between DCCT and IFCC is by the following equation:
formula_0
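A small sketch of this conversion in code (function names are ours):
```python
def dcct_to_ifcc(dcct_percent):
    """Convert HbA1c from DCCT (%) to IFCC (mmol/mol)."""
    return (dcct_percent - 2.14) * 10.929

def ifcc_to_dcct(ifcc_mmol_per_mol):
    """Convert HbA1c from IFCC (mmol/mol) to DCCT (%)."""
    return ifcc_mmol_per_mol / 10.929 + 2.14

print(round(dcct_to_ifcc(6.5), 1))   # about 47.7 mmol/mol, commonly quoted as 48
```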
Interpretation of results.
Laboratory results may differ depending on the analytical technique, the age of the subject, and biological variation among individuals. Higher levels of HbA1c are found in people with persistently elevated blood sugar, as in diabetes mellitus. While diabetic patient treatment goals vary, many include a target range of HbA1c values. A diabetic person with good glucose control has an HbA1c level that is close to or within the reference range.
The International Diabetes Federation and the American College of Endocrinology recommend HbA1c values below 48 mmol/mol (6.5 DCCT %), while the American Diabetes Association recommends HbA1c be below 53 mmol/mol (7.0 DCCT %) for most patients. Results from large trials in 2008–09 suggested that a target below 53 mmol/mol (7.0 DCCT %) for older adults with type 2 diabetes may be excessive: Below 53 mmol/mol, the health benefits of reduced A1c become smaller, and the intensive glycemic control required to reach this level leads to an increased rate of dangerous hypoglycemic episodes.
A retrospective study of 47,970 type 2 diabetes patients, aged 50 years and older, found that patients with an HbA1c more than 48 mmol/mol (6.5 DCCT %) had an increased mortality rate, but a later international study contradicted these findings.
A review of the UKPDS, Action to Control Cardiovascular Risk in Diabetes (ACCORD), Advance and Veterans Affairs Diabetes Trials (VADT) estimated that the risks of the main complications of diabetes (diabetic retinopathy, diabetic nephropathy, diabetic neuropathy, and macrovascular disease) decreased by about 3% for every 1 mmol/mol decrease in HbA1c.
However, a trial by ACCORD designed specifically to determine whether reducing HbA1c below 42 mmol/mol (6.0 DCCT %) using increased amounts of medication would reduce the rate of cardiovascular events found higher mortality with this intensive therapy, so much so that the trial was terminated 17 months early.
Practitioners must consider patients' health, their risk of hypoglycemia, and their specific health risks when setting a target HbA1c level. Because patients are responsible for averting or responding to their own hypoglycemic episodes, their input and the doctors' assessments of the patients' self-care skills are also important.
Persistent elevations in blood sugar (and, therefore, HbA1c) increase the risk of long-term vascular complications of diabetes, such as coronary disease, heart attack, stroke, heart failure, kidney failure, blindness, erectile dysfunction, neuropathy (loss of sensation, especially in the feet), gangrene, and gastroparesis (slowed emptying of the stomach). Poor blood glucose control also increases the risk of short-term complications of surgery such as poor wound healing.
All-cause mortality is higher above 64 mmol/mol (8.0 DCCT%) HbA1c as well as below 42 mmol/mol (6.0 DCCT %) in diabetic patients, and above 42 mmol/mol (6.0 DCCT %) as well as below 31 mmol/mol (5.0 DCCT %) in non-diabetic persons, indicating the risks of hyperglycemia and hypoglycemia, respectively. Similar risk results are seen for cardiovascular disease.
The 2022 ADA guidelines reaffirmed the recommendation that HbA1c should be maintained below 7.0% for most patients. Higher target values are appropriate for children and adolescents, patients with extensive co-morbid illness and those with a history of severe hypoglycemia. More stringent targets (<6.0%) are preferred for pregnant patients if this can be achieved without significant hypoglycemia.
Factors other than glucose that affect A1c.
Lower-than-expected levels of HbA1c can be seen in people with shortened red blood cell lifespans, such as with glucose-6-phosphate dehydrogenase deficiency, sickle-cell disease, or any other condition causing premature red blood cell death. For these patients, alternate assessment with fructosamine or glycated albumin is recommended; these methods reflect glycemic control over the preceding 2-3 weeks. Blood donation will result in rapid replacement of lost RBCs with newly formed red blood cells. Since these new RBCs will have only existed for a short period of time, their presence will lead HbA1c to underestimate the actual average levels. There may also be distortions resulting from blood donation during the preceding two months, due to an abnormal synchronization of the age of the RBCs, resulting in an older than normal average blood cell life (resulting in an overestimate of actual average blood glucose levels). Conversely, higher-than-expected levels can be seen in people with a longer red blood cell lifespan, such as with iron deficiency.
Results can be unreliable in many circumstances, for example after blood loss, after surgery, blood transfusions, anemia, or high erythrocyte turnover; in the presence of chronic renal or liver disease; after administration of high-dose vitamin C; or erythropoetin treatment. Hypothyroidism can artificially raise the A1c. In general, the reference range (that found in healthy young persons), is about 30–33 mmol/mol (4.9–5.2 DCCT %). The mean HbA1c for diabetics type 1 in Sweden in 2014 was 63 mmol/mol (7.9 DCCT%) and for type 2, 61 mmol/mol (7.7 DCCT%). HbA1c levels show a small, but statistically significant, progressive uptick with age; the clinical importance of this increase is unclear.
Mapping from A1c to estimated average glucose.
The approximate mapping between HbA1c values given in DCCT percentage (%) and eAG (estimated average glucose) measurements is given by the following equation:
eAG(mg/dL) = 28.7 × A1C − 46.7
eAG(mmol/L) = 1.59 × A1C − 2.59
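A minimal sketch of this mapping in code (function names are ours):
```python
def eag_mg_dl(a1c_percent):
    """Estimated average glucose in mg/dL from HbA1c in DCCT %."""
    return 28.7 * a1c_percent - 46.7

def eag_mmol_l(a1c_percent):
    """Estimated average glucose in mmol/L from HbA1c in DCCT %."""
    return 1.59 * a1c_percent - 2.59

print(eag_mg_dl(7.0), eag_mmol_l(7.0))   # about 154 mg/dL and 8.5 mmol/L for an A1c of 7%
```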
Normal, prediabetic, and diabetic ranges.
The 2010 American Diabetes Association Standards of Medical Care in Diabetes added the HbA1c ≥ 48 mmol/mol (≥6.5 DCCT %) as another criterion for the diagnosis of diabetes.
Indications and uses.
Glycated hemoglobin testing is recommended for both checking the blood sugar control in people who might be prediabetic and monitoring blood sugar control in patients with more elevated levels, termed diabetes mellitus. For a single blood sample, it provides far more revealing information on glycemic behavior than a fasting blood sugar value. However, fasting blood sugar tests are crucial in making treatment decisions. The American Diabetes Association guidelines are similar to others in advising that the glycated hemoglobin test be performed at least twice a year in patients with diabetes who are meeting treatment goals (and who have stable glycemic control) and quarterly in patients with diabetes whose therapy has changed or who are not meeting glycemic goals.
Glycated hemoglobin measurement is not appropriate where a change in diet or treatment has been made within six weeks. Likewise, the test assumes a normal red blood cell aging process and mix of hemoglobin subtypes (predominantly HbA in normal adults). Hence, people with recent blood loss, hemolytic anemia, or genetic differences in the hemoglobin molecule (hemoglobinopathy) such as sickle-cell disease and other conditions, as well as those who have donated blood recently, are not suitable for this test.
Due to glycated hemoglobin's variability (as shown in the table above), additional measures should be checked in patients at or near recommended goals. People with HbA1c values at 64 mmol/mol or less should be provided additional testing to determine whether the HbA1c values are due to averaging out high blood glucose (hyperglycemia) with low blood glucose (hypoglycemia) or the HbA1c is more reflective of an elevated blood glucose that does not vary much throughout the day. Devices such as continuous blood glucose monitoring allow people with diabetes to determine their blood glucose levels on a continuous basis, testing every few minutes. Continuous use of blood glucose monitors is becoming more common, and the devices are covered by many health insurance plans, including Medicare in the United States. The supplies tend to be expensive, since the sensors must be changed at least every 2 weeks. Another useful test in determining if HbA1c values are due to wide variations of blood glucose throughout the day is 1,5-anhydroglucitol, also known as GlycoMark. GlycoMark reflects only the times that the person experiences hyperglycemia above 180 mg/dL over a two-week period.
Concentrations of hemoglobin A1 (HbA1) are increased, both in diabetic patients and in patients with kidney failure, when measured by ion-exchange chromatography. The thiobarbituric acid method (a chemical method specific for the detection of glycation) shows that patients with kidney failure have values for glycated hemoglobin similar to those observed in normal subjects, suggesting that the high values in these patients are a result of binding of something other than glucose to hemoglobin.
In autoimmune hemolytic anemia, concentrations of HbA1 is undetectable. Administration of prednisolone will allow the HbA1 to be detected. The alternative fructosamine test may be used in these circumstances and it also reflects an average of blood glucose levels over the preceding 2 to 3 weeks.
All the major institutions such as the International Expert Committee Report, drawn from the International Diabetes Federation, the European Association for the Study of Diabetes, and the American Diabetes Association, suggest the HbA1c level of 48 mmol/mol (6.5 DCCT %) as a diagnostic level. The Committee Report further states that, when HbA1c testing cannot be done, the fasting and glucose-tolerance tests be done. Screening for diabetes during pregnancy continues to require fasting and glucose-tolerance measurements for gestational diabetes at 24 to 28 weeks gestation, although glycated hemoglobin may be used for screening at the first prenatal visit.
Modification by diet.
Meta-analysis has shown probiotics to cause a statistically significant reduction in glycated hemoglobin in type-2 diabetics. Trials with multiple strains of probiotics had statistically significant reductions in glycated hemoglobin, whereas trials with single strains did not.
Standardization and traceability.
Most clinical studies recommend the use of HbA1c assays that are traceable to the DCCT assay. The National Glycohemoglobin Standardization Program (NGSP) and IFCC have improved assay standardization. For initial diagnosis of diabetes, only HbA1c methods that are NGSP-certified should be used, not point-of-care testing devices. Analytical performance has been a problem with earlier point-of-care devices for HbA1c testing, specifically large standard deviations and negative bias.
Veterinary medicine.
HbA1c testing has not been found useful in the monitoring during the treatment of cats and dogs with diabetes, and is not generally used; monitoring of fructosamine levels is favoured instead.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\mathrm{IFCC\\ HBA1c}\\, \\Big(\\frac{\\text{mmol}}{\\text{mol}}\\Big)=[\\mathrm{DCCT\\ HBA1c}\\,(\\%) - 2.14] \\times 10.929 "
}
] | https://en.wikipedia.org/wiki?curid=1191951 |
11919629 | Rod calculus | Rod calculus or rod calculation was the mechanical method of algorithmic computation with counting rods in China from the Warring States period to the Ming dynasty, before the counting rods were increasingly replaced by the more convenient and faster abacus. Rod calculus played a key role in the development of Chinese mathematics to its height in the Song and Yuan dynasties, culminating in the development of methods for solving polynomial equations in up to four unknowns in the work of Zhu Shijie.
Hardware.
The basic equipment for carrying out rod calculus is a bundle of counting rods and a counting board. The counting rods are usually made of bamboo sticks, about 12 cm to 15 cm in length and 2 mm to 4 mm in diameter, and sometimes of animal bone, ivory or jade (for well-heeled merchants). A counting board could be a table top, a wooden board with or without a grid, the floor, or sand.
In 1971 Chinese archaeologists unearthed a bundle of well-preserved animal bone counting rods stored in a silk pouch from a tomb in Qianyang County in Shaanxi province, dating back to the first half of the Han dynasty (206 BC – 8 AD). In 1975 a bundle of bamboo counting rods was unearthed.
The use of counting rods for rod calculus flourished in the Warring States period, although no counting-rod artefacts have been found from earlier than the Western Han dynasty (the first half of the Han dynasty). However, archaeologists have unearthed texts describing rod calculus ("software") dating back to the Warring States period; since such methods must have gone together with the physical rods ("hardware"), rod calculus was clearly already flourishing during the Warring States period, more than 2,200 years ago.
Software.
The key software required for rod calculus was a simple 45-phrase positional decimal multiplication table used in China since antiquity, called the nine-nine table, which was learned by heart by pupils, merchants, government officials and mathematicians alike.
Rod numerals.
Displaying numbers.
Rod numerals are the only numeral system that uses different placement combinations of a single symbol to convey any number or fraction in the decimal system. For numbers in the units place, each vertical rod represents 1. Two vertical rods represent 2, and so on, until 5 vertical rods, which represent 5. For the numbers 6 to 9, a biquinary system is used, in which a horizontal bar on top of the vertical bars represents 5. The first row shows the numbers 1 to 9 in rod numerals, and the second row shows the same numbers in horizontal form.
For numbers larger than 9, a decimal system is used. Rods placed one place to the left of the units place represent 10 times that number. For the hundreds place, another set of rods is placed to the left, representing 100 times that number, and so on. As shown in the adjacent image, the number 231 is represented in rod numerals in the top row, with one rod in the units place representing 1, three rods in the tens place representing 30, and two rods in the hundreds place representing 200, with a sum of 231.
When doing calculations, there was usually no grid on the surface. If the rod numerals two, three, and one are placed consecutively in the vertical form, there is a possibility of the combination being mistaken for 51 or 24, as shown in the second and third row of the adjacent image. To avoid confusion, numbers in consecutive places are placed in alternating vertical and horizontal form, with the units place in vertical form, as shown in the bottom row on the right.
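Unicode now encodes the vertical (unit) and horizontal (tens) rod digits, so the alternating convention can be rendered directly. The following Python sketch only illustrates the placement rule described above and is not a historical device; a zero is left as a blank space, as discussed in the next section:
<syntaxhighlight lang="python">
def rod_numeral(n):
    """Render a positive integer with Unicode counting rod digits,
    using vertical forms for the units, hundreds, ... places and
    horizontal forms for the tens, thousands, ... places.
    A zero is rendered as a blank space."""
    out = []
    for place, ch in enumerate(reversed(str(n))):
        d = int(ch)
        if d == 0:
            out.append(" ")
        elif place % 2 == 0:
            out.append(chr(0x1D35F + d))   # U+1D360..U+1D368: unit digits 1-9 (vertical)
        else:
            out.append(chr(0x1D368 + d))   # U+1D369..U+1D371: tens digits 1-9 (horizontal)
    return "".join(reversed(out))

print(rod_numeral(231))   # vertical 2, horizontal 3, vertical 1
</syntaxhighlight>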
Displaying zeroes.
In Rod numerals, zeroes are represented by a space, which serves both as a number and a place holder value. Unlike in Hindu-Arabic numerals, there is no specific symbol to represent zero. Before the introduction of a written zero, in addition to a space to indicate no units, the character in the subsequent unit column would be rotated by 90°, to reduce the ambiguity of a single zero. For example 107 (𝍠 𝍧) and 17 (𝍩𝍧) would be distinguished by rotation, in addition to the space, though multiple zero units could lead to ambiguity, e.g. 1007 (𝍩 𝍧), and 10007 (𝍠 𝍧). In the adjacent image, the number zero is merely represented with a space.
Negative and positive numbers.
Song mathematicians used red to represent positive numbers and black for negative numbers. However, another way is to add a slash to the last place to show that the number is negative.
Decimal fraction.
The Mathematical Treatise of Sunzi used decimal fraction metrology. The unit of length was 1 "chi",
1 "chi" = 10 "cun", 1 "cun" = 10 "fen", 1 "fen" = 10 "li", 1 "li" = 10 "hao", 10 hao = 1 shi, 1 shi = 10 "hu".
1 "chi" 2 "cun" 3 "fen" 4 "li" 5 "hao" 6 "shi" 7 "hu" is laid out on counting board as
where is the unit measurement "chi".
Southern Song dynasty mathematician Qin Jiushao extended the use of decimal fraction beyond metrology. In his book "Mathematical Treatise in Nine Sections", he formally expressed 1.1446154 day as
日
He marked the unit with a word “日” (day) underneath it.
Addition.
Rod calculus works on the principle of addition. Unlike Arabic numerals, digits represented by counting rods have additive properties. The process of addition involves mechanically moving the rods without the need to memorise an addition table. This is the biggest difference from Arabic numerals, as one cannot mechanically put 1 and 2 together to form 3, or 2 and 3 together to form 5.
The adjacent image presents the steps in adding 3748 to 289:
The rods in the augend change throughout the addition, while the rods in the addend at the bottom "disappear".
Subtraction.
Without borrowing.
In situations in which no borrowing is needed, one only needs to take away the number of rods in the subtrahend from the minuend. The result of the calculation is the difference. The adjacent image shows the steps in subtracting 23 from 54.
Borrowing.
In situations in which borrowing is needed, such as 4231 − 789, one needs to use a more complicated procedure. The steps for this example are shown on the left.
Multiplication.
"Sunzi Suanjing" described in detail the algorithm of multiplication. On the left are the steps to calculate 38×76:
Division.
The animation on the left shows the steps for calculating
44.
The Sunzi algorithm for division was transmitted in toto to the Islamic world from Indian sources by al-Khwarizmi in 825 AD. Al-Khwarizmi's book was translated into Latin in the 13th century, and the Sunzi division algorithm later evolved into galley division in Europe. The division algorithm in Abu'l-Hasan al-Uqlidisi's 925 AD book "Kitab al-Fusul fi al-Hisab al-Hindi" and in the 11th-century "Principles of Hindu Reckoning" by Kushyar ibn Labban was identical to Sunzi's division algorithm.
Fractions.
If there is a remainder in a place value decimal rod calculus division, both the remainder and the divisor must be left in place, one on top of the other. In Liu Hui's notes to Jiuzhang suanshu (2nd century BCE), the number on top is called "shi" (实), while the one at the bottom is called "fa" (法). In "Sunzi Suanjing", the number on top is called "zi" (子) or "fenzi" (lit., son of fraction), and the one on the bottom is called "mu" (母) or "fenmu" (lit., mother of fraction). Fenzi and fenmu are also the modern Chinese names for numerator and denominator, respectively. As shown on the right, the remainder 1 is the numerator and the divisor 7 is the denominator, forming the fraction 1/7. The quotient of the division is 44 + 1/7.
Liu Hui used a lot of calculations with fractions in Haidao Suanjing.
This form of fraction, with the numerator on top and the denominator at the bottom without a horizontal bar in between, was transmitted to the Arab world via India in al-Khwarizmi's 825 AD book, and was in use by Abu'l-Hasan al-Uqlidisi in the 10th century and in Jamshīd al-Kāshī's 15th-century work "Arithmetic Key".
Multiplication.
3 × 5
18
Highest common factor and fraction reduction.
The algorithm for finding the highest common factor of two numbers and the reduction of fractions was laid out in Jiuzhang suanshu.
The highest common factor is found by successive division with remainders until the last two remainders are identical.
The animation on the right illustrates the algorithm for finding the highest common factor of the numerator and denominator of a fraction and reducing it.
In this case the hcf is 25.
Dividing the numerator and the denominator by 25 gives the reduced fraction.
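In modern notation the procedure is the Euclidean algorithm. A short Python sketch (the fraction 75/100 below is purely illustrative and may differ from the one in the animation, although it shares the highest common factor 25):
<syntaxhighlight lang="python">
def hcf(a, b):
    """Highest common factor by successive division with remainders."""
    while b:
        a, b = b, a % b
    return a

def reduce_fraction(num, den):
    g = hcf(num, den)
    return num // g, den // g

print(hcf(75, 100))              # 25
print(reduce_fraction(75, 100))  # (3, 4)
</syntaxhighlight>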
Interpolation.
Calendarist and mathematician He Chengtian used a fraction interpolation method, called "harmonisation of the divisor of the day", to obtain a better approximate value than the old one by iteratively adding the numerators and denominators of a "weaker" fraction to those of a "stronger" fraction. Zu Chongzhi's legendary approximation of π could be obtained with He Chengtian's method.
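One modern reading of this "adding of numerators and denominators" is the mediant operation. The Python sketch below repeatedly replaces the weaker or stronger bound on π by the mediant of the two; whether this is exactly how He Chengtian and Zu Chongzhi proceeded is not recorded, and the stopping rule (a denominator cap of 200) is an assumption made only for illustration:
<syntaxhighlight lang="python">
from fractions import Fraction
import math

def harmonise(weak, strong, target, max_denominator=200):
    """Repeatedly replace one bound by the mediant (sum of numerators over
    sum of denominators), which always lies strictly between the two bounds."""
    while True:
        mediant = Fraction(weak.numerator + strong.numerator,
                           weak.denominator + strong.denominator)
        if mediant.denominator > max_denominator:
            return weak, strong
        if mediant < target:
            weak = mediant    # mediant still too small: new weak bound
        else:
            strong = mediant  # mediant too large: new strong bound

weak, strong = harmonise(Fraction(3, 1), Fraction(22, 7), math.pi)
print(weak, strong)   # 333/106 355/113 -- the latter is Zu Chongzhi's milü
</syntaxhighlight>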
System of linear equations.
Chapter Eight, "Rectangular Arrays", of Jiuzhang suanshu provided an algorithm for solving systems of linear equations by a method of elimination:
Problem 8-1: Suppose we have 3 bundles of top quality cereal, 2 bundles of medium quality cereal, and 1 bundle of low quality cereal, with a combined weight of 39 dou. We also have 2, 3 and 1 bundles of the respective cereals amounting to 34 dou, and 1, 2 and 3 bundles of the respective cereals totaling 26 dou.
Find the weight of one bundle of each of the top, medium, and low quality cereals.
In algebra, this problem can be expressed as a system of three equations in three unknowns.
formula_0
This problem was solved in Jiuzhang suanshu with counting rods laid out on a counting board in a tabular format similar to a 3x4 matrix:
Algorithm:
The amount of one bundle of low quality cereal formula_1
From this, the amounts of one bundle of top quality and of medium quality cereal can be found easily.
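The arithmetic of the elimination can be checked with exact fractions. The following Python sketch performs ordinary Gaussian elimination on the same system; it reproduces the answers but not the column-by-column counting board layout of the original procedure:
<syntaxhighlight lang="python">
from fractions import Fraction

# Augmented matrix of the cereal problem: each row is one equation,
# the last column holds the total weight in dou.
A = [[Fraction(3), Fraction(2), Fraction(1), Fraction(39)],
     [Fraction(2), Fraction(3), Fraction(1), Fraction(34)],
     [Fraction(1), Fraction(2), Fraction(3), Fraction(26)]]
n = 3

# Forward elimination.
for col in range(n):
    for row in range(col + 1, n):
        factor = A[row][col] / A[col][col]
        A[row] = [a - factor * b for a, b in zip(A[row], A[col])]

# Back substitution.
x = [Fraction(0)] * n
for row in reversed(range(n)):
    s = sum(A[row][c] * x[c] for c in range(row + 1, n))
    x[row] = (A[row][n] - s) / A[row][row]

print(x)   # [Fraction(37, 4), Fraction(17, 4), Fraction(11, 4)] = 9 1/4, 4 1/4 and 2 3/4 dou
</syntaxhighlight>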
Extraction of Square root.
The algorithm for the extraction of square roots was described in Jiuzhang suanshu and, with minor differences in terminology, in Sunzi Suanjing.
The animation shows the algorithm for the rod calculus extraction of an approximation of the square root, formula_3, based on the algorithm in chapter 2, problem 19 of Sunzi Suanjing:
"Now there is a square area 234567, find one side of the square".
The algorithm is as follows:
Northern Song dynasty mathematician Jia Xian developed an additive-multiplicative algorithm for square root extraction, in which he replaced the traditional "doubling" of the "fang fa" by adding the "shang" digit to the "fang fa" digit, with the same effect.
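In modern terms the procedure extracts the root one decimal digit at a time. The Python sketch below follows the same digit-by-digit idea and reproduces the answer to problem 19; it is a reconstruction of the arithmetic only, not of the rod manipulations on the board:
<syntaxhighlight lang="python">
def digit_by_digit_sqrt(n):
    """Integer square root and remainder, found one decimal digit at a time."""
    digits = str(n)
    if len(digits) % 2:          # pad so the digits pair up from the left
        digits = "0" + digits
    root, remainder = 0, 0
    for i in range(0, len(digits), 2):
        remainder = remainder * 100 + int(digits[i:i + 2])
        d = 9
        while (20 * root + d) * d > remainder:   # largest digit that still fits
            d -= 1
        remainder -= (20 * root + d) * d
        root = root * 10 + d
    return root, remainder

root, rem = digit_by_digit_sqrt(234567)
print(root, rem, 2 * root)   # 484 311 968 : sqrt(234567) ≈ 484 + 311/968
</syntaxhighlight>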
Extraction of cubic root.
Jiuzhang suanshu vol. IV, "Shaoguang", provided an algorithm for the extraction of cube roots.
Problem 19: We have a volume of 1860867 cubic chi; what is the length of a side? Answer: 123 chi.
formula_4
Northern Song dynasty mathematician Jia Xian invented a method similar to a simplified form of the Horner scheme for the extraction of cube roots.
The animation at right shows Jia Xian's algorithm for solving problem 19 in Jiuzhang suanshu vol. IV.
Polynomial equation.
Northern Song dynasty mathematician Jia Xian invented the Horner scheme for solving the simple fourth-order equation of the form
formula_5
Southern Song dynasty mathematician Qin Jiushao improved Jia Xian's Horner method to solve polynomial equations up to the 10th order.
The following is the algorithm for solving
formula_6 in his Mathematical Treatise in Nine Sections, vol. 6, problem 2.
This equation was arranged bottom-up with counting rods on the counting board in tabular form.
Algorithm:
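Qin's procedure extracts the root digit by digit using repeated Horner-type transformations of the coefficients. As a modern numerical check only (not a reconstruction of the counting board steps), the sketch below combines Horner evaluation with bisection to locate the positive root that lies between 20 and 21:
<syntaxhighlight lang="python">
def horner(coeffs, x):
    """Evaluate a polynomial given as [a_n, ..., a_1, a_0] at x."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

# -x^4 + 15245 x^2 - 6262506.25
coeffs = [-1.0, 0.0, 15245.0, 0.0, -6262506.25]

# The polynomial changes sign between 20 and 21, so bisection converges there.
lo, hi = 20.0, 21.0
for _ in range(60):
    mid = (lo + hi) / 2
    if horner(coeffs, lo) * horner(coeffs, mid) <= 0:
        hi = mid
    else:
        lo = mid
print((lo + hi) / 2)   # ≈ 20.5548, i.e. the square root of 422.5
</syntaxhighlight>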
Tian Yuan shu.
Yuan dynasty mathematician Li Zhi developed rod calculus into Tian yuan shu.
Example: Li Zhi, Ceyuan haijing vol. II, problem 14, an equation in one unknown:
formula_9
元
Polynomial equations of four unknowns.
Mathematician Zhu Shijie further developed rod calculus to include polynomial equations of two to four unknowns.
For example, polynomials in three unknowns:
Equation 1:formula_10
太
Equation 2:formula_11
Equation 3:formula_12
太
After successive elimination of two unknowns, the polynomial equations in three unknowns were reduced to a polynomial equation in one unknown:
formula_13
Solving gives x = 5; this ignores three other roots, two of which coincide (a repeated root).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{cases}\n\n3x+2y+z=39 \\\\\n2x+3y+z=34 \\\\\nx+2y+3z=26\n\n\\end{cases}\n"
},
{
"math_id": 1,
"text": "=\\frac{99}{36}=2 \\frac{3}{4}"
},
{
"math_id": 2,
"text": "\\frac{1}{4}"
},
{
"math_id": 3,
"text": "\\sqrt{234567}\\approx484\\tfrac{311}{968}"
},
{
"math_id": 4,
"text": "\\sqrt[3]{1860867}=123"
},
{
"math_id": 5,
"text": "x^4=a"
},
{
"math_id": 6,
"text": "-x^4 +15245x^2-6262506.25=0"
},
{
"math_id": 7,
"text": "\\frac{32450625}{59056400}"
},
{
"math_id": 8,
"text": "x=20\\frac{1298205}{2362256}"
},
{
"math_id": 9,
"text": "-x^2-680x+96000=0"
},
{
"math_id": 10,
"text": "-y-z-y^2*x-x+xyz=0"
},
{
"math_id": 11,
"text": "-y-z+x-x^2+xz=0"
},
{
"math_id": 12,
"text": "y^2-z^2+x^2=0;"
},
{
"math_id": 13,
"text": "x^4-6x^3+4x^2+6x-5=0"
}
] | https://en.wikipedia.org/wiki?curid=11919629 |
1192008 | Wolf number | Measure of sunspot activity
The Wolf number (also known as the relative sunspot number or Zürich number) is a quantity that measures the number of sunspots and groups of sunspots present on the surface of the Sun. Historically, it was only possible to detect sunspots on the far side of the Sun indirectly using helioseismology. Since 2006, NASA's STEREO spacecraft have allowed their direct observation.
History.
Astronomers have been observing the Sun and recording information about sunspots since the advent of the telescope in 1609.
However, the idea of compiling information about the sunspot number from various observers originates with Rudolf Wolf in 1848 in Zürich, Switzerland. The series he produced initially carried his name, but it is now more commonly referred to as the international sunspot number series.
The international sunspot number series is still being produced today at the observatory of Brussels. The international number series shows an approximate periodicity of 11 years, the solar cycle, which was first found by Heinrich Schwabe in 1843, thus sometimes it is also referred to as the Schwabe cycle. The periodicity is not constant but varies roughly in the range 9.5 to 11 years. The international sunspot number series extends back to 1700 with annual values while daily values exist only since 1818.
Since 1 July 2015 a revised and updated international sunspot number series has been made available. The biggest difference is an overall increase by a factor of 1.6 to the entire series. Traditionally, a scaling of 0.6 was applied to all sunspot counts after 1893, to compensate for Alfred Wolfer's better equipment, after taking over from Wolf. This scaling has been dropped from the revised series, making modern counts closer to their raw values. Also, counts were reduced slightly after 1947 to compensate for bias introduced by a new counting method adopted that year, in which sunspots are weighted according to their size.
Calculation.
The relative sunspot number formula_0 is computed using the formula
formula_1
where
The observatory factor compensates for the differing number of recorded individual sunspots and sunspot groups by different observers. These differences in recorded values occur due to differences in instrumentation, local seeing, personal experience, and other factors between observers. Since Wolf was the primary observer for the relative sunspot number, his observatory factor was 1.
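A minimal Python sketch of this formula (the counts and the observatory factor in the example are illustrative values, not measurements from any particular observer):
<syntaxhighlight lang="python">
def wolf_number(groups, spots, k=1.0):
    """Relative sunspot number R = k (10 g + s)."""
    return k * (10 * groups + spots)

# Example: 4 groups containing 20 individual spots, seen by an observer with k = 0.6.
print(wolf_number(4, 20, k=0.6))   # 36.0
</syntaxhighlight>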
Smoothed monthly mean.
To calculate the 13-month smoothed monthly mean sunspot number, which is commonly used to calculate the minima and maxima of solar cycles, a tapered-boxcar smoothing function is used. For a given month formula_5, with a monthly sunspot number of formula_6, the smoothed monthly mean formula_7 can be expressed as
formula_8
where formula_9 is the monthly sunspot number formula_10 months away from month formula_5. The smoothed monthly mean is intended to dampen any sudden jumps in the monthly sunspot number and remove the effects of the 27-day solar rotation period.
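A minimal Python sketch of this smoothing, assuming the monthly series is stored in a list with at least six months of data on either side of the target month:
<syntaxhighlight lang="python">
def smoothed_monthly_mean(R, m):
    """13-month tapered-boxcar smoothing of a monthly sunspot number series R,
    centred on month index m (half weight on the two outermost months)."""
    total = 0.5 * R[m - 6] + 0.5 * R[m + 6]
    total += sum(R[m - 5:m + 6])   # the eleven months with full weight
    return total / 12.0

# A constant series is left unchanged by the smoothing.
print(smoothed_monthly_mean([100.0] * 13, 6))   # 100.0
</syntaxhighlight>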
Alternative series.
The accuracy of the compilation of the group sunspot number series has been questioned, motivating the development of several alternative series suggesting different behavior of sunspot group activity before the 20th century.
However, indirect indices of solar activity favor the group sunspot number series by Chatzistergos T. et al.
A different index of sunspot activity was introduced in 1998 in the form of the number of groups apparent on the solar disc.
With this index it became possible to include sunspot data acquired since 1609, the date of the invention of the telescope.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "R"
},
{
"math_id": 1,
"text": "R = k(10g + s) "
},
{
"math_id": 2,
"text": "s"
},
{
"math_id": 3,
"text": "g"
},
{
"math_id": 4,
"text": "k"
},
{
"math_id": 5,
"text": "m"
},
{
"math_id": 6,
"text": "R_m"
},
{
"math_id": 7,
"text": "R_s"
},
{
"math_id": 8,
"text": "R_s = (0.5 R_{m-6} + R_{m-5} + \\dots+ R_{m-1} + R_{m} + R_{m+1} + \\dots + R_{m+5} + 0.5 R_{m+6}) / 12 "
},
{
"math_id": 9,
"text": "R_{m+n}"
},
{
"math_id": 10,
"text": "n"
}
] | https://en.wikipedia.org/wiki?curid=1192008 |
1192012 | Green's relations | In mathematics, Green's relations are five equivalence relations that characterise the elements of a semigroup in terms of the principal ideals they generate. The relations are named for James Alexander Green, who introduced them in a paper of 1951. John Mackintosh Howie, a prominent semigroup theorist, described this work as "so all-pervading that, on encountering a new semigroup, almost the first question one asks is 'What are the Green relations like?'" (Howie 2002). The relations are useful for understanding the nature of divisibility in a semigroup; they are also valid for groups, but in this case tell us nothing useful, because groups always have divisibility.
Instead of working directly with a semigroup "S", it is convenient to define Green's relations over the monoid "S"1. ("S"1 is ""S" with an identity adjoined if necessary"; if "S" is not already a monoid, a new element is adjoined and defined to be an identity.) This ensures that principal ideals generated by some semigroup element do indeed contain that element. For an element "a" of "S", the relevant ideals are:
The L, R, and J relations.
For elements "a" and "b" of "S", Green's relations "L", "R" and "J" are defined by
That is, "a" and "b" are "L"-related if they generate the same left ideal; "R"-related if they generate the same right ideal; and "J"-related if they generate the same two-sided ideal. These are equivalence relations on "S", so each of them yields a partition of "S" into equivalence classes. The "L"-class of "a" is denoted "L""a" (and similarly for the other relations). The "L"-classes and "R"-classes can be equivalently understood as the strongly connected components of the left and right Cayley graphs of "S"1. Further, the "L", "R", and "J" relations define three preorders ≤"L", ≤"R", and ≤"J", where "a" ≤"J" "b" holds for two elements "a" and "b" of "S" if the ideal generated by "a" is included in that of "b", i.e., "S"1 "a" "S"1 ⊆ "S"1 "b" "S"1, and ≤"L" and ≤"R" are defined analogously.
Green used the lowercase blackletter formula_7, formula_8 and formula_9 for these relations, and wrote formula_10 for "a" "L" "b" (and likewise for "R" and "J"). Mathematicians today tend to use script letters such as formula_11 instead, and replace Green's modular arithmetic-style notation with the infix style used here. Ordinary letters are used for the equivalence classes.
The "L" and "R" relations are left-right dual to one another; theorems concerning one can be translated into similar statements about the other. For example, "L" is "right-compatible": if "a" "L" "b" and "c" is another element of "S", then "ac" "L" "bc". Dually, "R" is "left-compatible": if "a" "R" "b", then "ca" "R" "cb".
If "S" is commutative, then "L", "R" and "J" coincide.
The H and D relations.
The remaining relations are derived from "L" and "R". Their intersection is "H":
"a" "H" "b" if and only if "a" "L" "b" and "a" "R" "b".
This is also an equivalence relation on "S". The class "H""a" is the intersection of "L""a" and "R""a". More generally, the intersection of any "L"-class with any "R"-class is either an "H"-class or the empty set.
"Green's Theorem" states that for any formula_12-class "H" of a semigroup S either (i) formula_13 or (ii) formula_14 and "H" is a subgroup of "S". An important corollary is that the equivalence class "H""e", where "e" is an idempotent, is a subgroup of "S" (its identity is "e", and all elements have inverses), and indeed is the largest subgroup of "S" containing "e". No formula_12-class can contain more than one idempotent, thus formula_12 is "idempotent separating". In a monoid "M", the class "H"1 is traditionally called the group of units. (Beware that unit does not mean identity in this context, i.e. in general there are non-identity elements in "H"1. The "unit" terminology comes from ring theory.) For example, in the transformation monoid on "n" elements, "T""n", the group of units is the symmetric group "S""n".
Finally, "D" is defined: "a" "D" "b" if and only if there exists a "c" in "S" such that "a" "L" "c" and "c" "R" "b". In the language of lattices, "D" is the join of "L" and "R". (The join for equivalence relations is normally more difficult to define, but is simplified in this case by the fact that "a" "L" "c" and "c" "R" "b" for some "c" if and only if "a" "R" "d" and "d" "L" "b" for some "d".)
As "D" is the smallest equivalence relation containing both "L" and "R", we know that "a" "D" "b" implies "a" "J" "b"—so "J" contains "D". In a finite semigroup, "D" and "J" are the same, as also in a rational monoid. Furthermore they also coincide in any epigroup.
There is also a formulation of "D" in terms of equivalence classes, derived directly from the above definition:
"a" "D" "b" if and only if the intersection of "R""a" and "L""b" is not empty.
Consequently, the "D"-classes of a semigroup can be seen as unions of "L"-classes, as unions of "R"-classes, or as unions of "H"-classes. Clifford and Preston (1961) suggest thinking of this situation in terms of an "egg-box":
Each row of eggs represents an "R"-class, and each column an "L"-class; the eggs themselves are the "H"-classes. For a group, there is only one egg, because all five of Green's relations coincide, and make all group elements equivalent. The opposite case, found for example in the bicyclic semigroup, is where each element is in an "H"-class of its own. The egg-box for this semigroup would contain infinitely many eggs, but all eggs are in the same box because there is only one "D"-class. (A semigroup for which all elements are "D"-related is called "bisimple".)
It can be shown that within a "D"-class, all "H"-classes are the same size. For example, the transformation semigroup "T"4 contains four "D"-classes, within which the "H"-classes have 1, 2, 6, and 24 elements respectively.
Recent advances in the combinatorics of semigroups have used Green's relations to help enumerate semigroups with certain properties. A typical result (Satoh, Yama, and Tokizawa 1994) shows that there are exactly 1,843,120,128 non-equivalent semigroups of order 8, including 221,805 that are commutative; their work is based on a systematic exploration of possible "D"-classes. (By contrast, there are only five groups of order 8.)
Example.
The full transformation semigroup "T"3 consists of all functions from the set {1, 2, 3} to itself; there are 27 of these. Write ("a" "b" "c") for the function that sends 1 to "a", 2 to "b", and 3 to "c". Since "T"3 contains the identity map, (1 2 3), there is no need to adjoin an identity.
The egg-box diagram for "T"3 has three "D"-classes. They are also "J"-classes, because these relations coincide for a finite semigroup.
In "T"3, two functions are "L"-related if and only if they have the same image. Such functions appear in the same column of the table above. Likewise, the functions "f" and "g" are "R"-related if and only if
"f"("x") = "f"("y") ⇔ "g"("x") = "g"("y")
for "x" and "y" in {1, 2, 3}; such functions are in the same table row. Consequently, two functions are "D"-related if and only if their images are the same size.
The elements in bold are the idempotents. Any "H"-class containing one of these is a (maximal) subgroup. In particular, the third "D"-class is isomorphic to the symmetric group "S"3. There are also six subgroups of order 2, and three of order 1 (as well as subgroups of these subgroups). Six elements of "T"3 are not in any subgroup.
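These statements can be verified by brute force. The following Python sketch encodes the 27 maps of "T"3 as tuples over {0, 1, 2} (rather than {1, 2, 3}) and groups them by the principal one-sided ideals they generate; the composition order is chosen so that the resulting classes match the image and kernel characterisations given above:
<syntaxhighlight lang="python">
from itertools import product
from collections import defaultdict

# All 27 maps {0,1,2} -> {0,1,2}, encoded as tuples (f(0), f(1), f(2)).
T3 = list(product(range(3), repeat=3))
S1 = T3   # T_3 already contains the identity map (0, 1, 2), so S^1 = S

def compose(f, g):
    """The map x -> f(g(x))."""
    return tuple(f[g[x]] for x in range(3))

def L_key(a):
    """The principal left ideal generated by a, as a set of maps."""
    return frozenset(compose(a, s) for s in S1)

def R_key(a):
    """The principal right ideal generated by a, as a set of maps."""
    return frozenset(compose(s, a) for s in S1)

def class_sizes(key):
    classes = defaultdict(set)
    for a in T3:
        classes[key(a)].add(a)
    return sorted(len(c) for c in classes.values())

print(class_sizes(L_key))   # [1, 1, 1, 6, 6, 6, 6] -- one L-class per image
print(class_sizes(R_key))   # [3, 6, 6, 6, 6]       -- one R-class per kernel
print([sum(1 for a in T3 if len(set(a)) == r) for r in (1, 2, 3)])   # D-class sizes 3, 18, 6
</syntaxhighlight>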
Generalisations.
There are essentially two ways of generalising an algebraic theory. One is to change its definitions so that it covers more or different objects; the other, more subtle way, is to find some desirable outcome of the theory and consider alternative ways of reaching that conclusion.
Following the first route, analogous versions of Green's relations have been defined for semirings (Grillet 1970) and rings (Petro 2002). Some, but not all, of the properties associated with the relations in semigroups carry over to these cases. Staying within the world of semigroups, Green's relations can be extended to cover relative ideals, which are subsets that are only ideals with respect to a subsemigroup (Wallace 1963).
For the second kind of generalisation, researchers have concentrated on properties of bijections between "L"- and "R"- classes. If "x" "R" "y", then it is always possible to find bijections between "L""x" and "L""y" that are "R"-class-preserving. (That is, if two elements of an "L"-class are in the same "R"-class, then their images under a bijection will still be in the same "R"-class.) The dual statement for "x" "L" "y" also holds. These bijections are right and left translations, restricted to the appropriate equivalence classes. The question that arises is: how else could there be such bijections?
Suppose that Λ and Ρ are semigroups of partial transformations of some semigroup "S". Under certain conditions, it can be shown that if "x" Ρ = "y" Ρ, with "x" "ρ"1 = "y" and "y" "ρ"2 = "x", then the restrictions
"ρ"1 : Λ "x" → Λ "y"
"ρ"2 : Λ "y" → Λ "x"
are mutually inverse bijections. (Conventionally, arguments are written on the right for Λ, and on the left for Ρ.) Then the "L" and "R" relations can be defined by
"x" "L" "y" if and only if Λ "x" = Λ "y"
"x" "R" "y" if and only if "x" Ρ = "y" Ρ
and "D" and "H" follow as usual. Generalisation of "J" is not part of this system, as it plays no part in the desired property.
We call (Λ, Ρ) a "Green's pair". There are several choices of partial transformation semigroup that yield the original relations. One example would be to take Λ to be the semigroup of all left translations on "S"1, restricted to "S", and Ρ the corresponding semigroup of restricted right translations.
These definitions are due to Clark and Carruth (1980). They subsume Wallace's work, as well as various other generalised definitions proposed in the mid-1970s. The full axioms are fairly lengthy to state; informally, the most important requirements are that both Λ and Ρ should contain the identity transformation, and that elements of Λ should commute with elements of Ρ.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S^1 a = \\{sa \\mid s \\in S^1\\}"
},
{
"math_id": 1,
"text": "\\{sa \\mid s \\in S\\} \\cup \\{a\\}"
},
{
"math_id": 2,
"text": "Sa \\cup \\{a\\}"
},
{
"math_id": 3,
"text": "a S^1 = \\{as \\mid s \\in S^1\\}"
},
{
"math_id": 4,
"text": "aS \\cup \\{a\\}"
},
{
"math_id": 5,
"text": "S^1 a S^1"
},
{
"math_id": 6,
"text": "SaS \\cup aS \\cup Sa \\cup \\{a\\}"
},
{
"math_id": 7,
"text": "\\mathfrak{l}"
},
{
"math_id": 8,
"text": "\\mathfrak{r}"
},
{
"math_id": 9,
"text": "\\mathfrak{f}"
},
{
"math_id": 10,
"text": "a \\equiv b (\\mathfrak{l})"
},
{
"math_id": 11,
"text": "\\mathcal{R}"
},
{
"math_id": 12,
"text": "\\mathcal H"
},
{
"math_id": 13,
"text": "H^2 \\cap H = \\emptyset"
},
{
"math_id": 14,
"text": "H^2 \\subseteq H"
}
] | https://en.wikipedia.org/wiki?curid=1192012 |
1192013 | Whitehead manifold | In mathematics, the Whitehead manifold is an open 3-manifold that is contractible, but not homeomorphic to formula_0 J. H. C. Whitehead (1935) discovered this puzzling object while he was trying to prove the Poincaré conjecture, correcting an error in an earlier paper where he incorrectly claimed that no such manifold exists.
A contractible manifold is one that can continuously be shrunk to a point inside the manifold itself. For example, an open ball is a contractible manifold. All manifolds homeomorphic to the ball are contractible, too. One can ask whether "all" contractible manifolds are homeomorphic to a ball. For dimensions 1 and 2, the answer is classical and it is "yes". In dimension 2, it follows, for example, from the Riemann mapping theorem. Dimension 3 presents the first counterexample: the Whitehead manifold.
Construction.
Take a copy of formula_1 the three-dimensional sphere. Now find a compact unknotted solid torus formula_2 inside the sphere. (A solid torus is an ordinary three-dimensional doughnut, that is, a filled-in torus, which is topologically a circle times a disk.) The closed complement of the solid torus inside formula_3 is another solid torus.
Now take a second solid torus formula_6 inside formula_2 so that formula_6 and a tubular neighborhood of the meridian curve of formula_2 is a thickened Whitehead link.
Note that formula_6 is null-homotopic in the complement of the meridian of formula_5 This can be seen by considering formula_3 as formula_7 and the meridian curve as the "z"-axis together with formula_8 The torus formula_6 has zero winding number around the "z"-axis. Thus the necessary null-homotopy follows. Since the Whitehead link is symmetric, that is, a homeomorphism of the 3-sphere switches components, it is also true that the meridian of formula_2 is also null-homotopic in the complement of formula_4
Now embed formula_9 inside formula_6 in the same way as formula_6 lies inside formula_10 and so on; to infinity. Define "W", the Whitehead continuum, to be formula_11 or more precisely the intersection of all the formula_12 for formula_13
The Whitehead manifold is defined as formula_14 which is a non-compact manifold without boundary. It follows from our previous observation, the Hurewicz theorem, and Whitehead's theorem on homotopy equivalence, that "X" is contractible. In fact, a closer analysis involving a result of Morton Brown shows that formula_15 However, "X" is not homeomorphic to formula_0 The reason is that it is not simply connected at infinity.
The one point compactification of "X" is the space formula_16 (with "W" crunched to a point). It is not a manifold. However, formula_17 is homeomorphic to formula_18
David Gabai showed that "X" is the union of two copies of formula_19 whose intersection is also homeomorphic to formula_0
Related spaces.
More examples of open, contractible 3-manifolds may be constructed by proceeding in similar fashion and picking different embeddings of formula_20 in formula_21 in the iterative process. Each embedding should be an unknotted solid torus in the 3-sphere. The essential properties are that the meridian of formula_21 should be null-homotopic in the complement of formula_22 and in addition the longitude of formula_20 should not be null-homotopic in formula_23
Another variation is to pick several subtori at each stage instead of just one. The cones over some of these continua appear as the complements of Casson handles in a 4-ball.
The dogbone space is not a manifold but its product with formula_24 is homeomorphic to formula_18
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\R^3."
},
{
"math_id": 1,
"text": "S^3,"
},
{
"math_id": 2,
"text": "T_1"
},
{
"math_id": 3,
"text": "S^3"
},
{
"math_id": 4,
"text": "T_2."
},
{
"math_id": 5,
"text": "T_1."
},
{
"math_id": 6,
"text": "T_2"
},
{
"math_id": 7,
"text": "\\R^3 \\cup \\{\\infty\\}"
},
{
"math_id": 8,
"text": "\\infty."
},
{
"math_id": 9,
"text": "T_3"
},
{
"math_id": 10,
"text": "T_1,"
},
{
"math_id": 11,
"text": "W = T_\\infty,"
},
{
"math_id": 12,
"text": "T_k"
},
{
"math_id": 13,
"text": "k = 1,2,3,\\dots."
},
{
"math_id": 14,
"text": "X = S^3 \\setminus W,"
},
{
"math_id": 15,
"text": "X \\times \\R \\cong \\R^4."
},
{
"math_id": 16,
"text": "S^3/W"
},
{
"math_id": 17,
"text": "\\left(\\R^3/W\\right) \\times \\R"
},
{
"math_id": 18,
"text": "\\R^4."
},
{
"math_id": 19,
"text": "\\R^3"
},
{
"math_id": 20,
"text": "T_{i+1}"
},
{
"math_id": 21,
"text": "T_i"
},
{
"math_id": 22,
"text": "T_{i+1},"
},
{
"math_id": 23,
"text": "T_i \\setminus T_{i+1}."
},
{
"math_id": 24,
"text": "\\R^1"
}
] | https://en.wikipedia.org/wiki?curid=1192013 |
11924 | Game theory | Mathematical models of strategic interactions
Game theory is the study of mathematical models of strategic interactions. It has applications in many fields of social science, and is used extensively in economics, logic, systems science and computer science. Initially, game theory addressed two-person zero-sum games, in which a participant's gains or losses are exactly balanced by the losses and gains of the other participant. In the 1950s, it was extended to the study of non zero-sum games, and was eventually applied to a wide range of behavioral relations. It is now an umbrella term for the science of rational decision making in humans, animals, and computers.
Modern game theory began with the idea of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used the Brouwer fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by "Theory of Games and Economic Behavior" (1944), co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty.
Game theory was developed extensively in the 1950s, and was explicitly applied to evolution in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been widely recognized as an important tool in many fields. John Maynard Smith was awarded the Crafoord Prize for his application of evolutionary game theory in 1999, and fifteen game theorists have won the Nobel Prize in economics as of 2020, including most recently Paul Milgrom and Robert B. Wilson.
<templatestyles src="Template:TOC limit/styles.css" />
History.
Game-theoretic thought.
Game-theoretic strategy within recorded history dates back at least to Sun Tzu's guide on military strategy. In "The Art of War", he wrote
<templatestyles src="Template:Blockquote/styles.css" />Knowing the other and knowing oneself, In one hundred battles no danger,
Not knowing the other and knowing oneself, One victory for one loss,
Not knowing the other and not knowing oneself, In every battle certain defeat
Mathematical origins.
Discussions on the mathematics of games began long before the rise of modern mathematical game theory. Cardano's work "Liber de ludo aleae" ("Book on Games of Chance"), which was written around 1564 but published posthumously in 1663, sketches some basic ideas on games of chance. In the 1650s, Pascal and Huygens developed the concept of expectation on reasoning about the structure of games of chance. Pascal argued for equal division when chances are equal while Huygens extended the argument by considering strategies for a player who can make any bet with any opponent so long as its terms are equal. Huygens later published his gambling calculus as "De ratiociniis in ludo aleæ" ("On Reasoning in Games of Chance") in 1657.
In 1713, a letter attributed to Charles Waldegrave, an active Jacobite and uncle to British diplomat James Waldegrave, analyzed a game called "le her". Waldegrave provided a minimax mixed strategy solution to a two-person version of the card game, and the problem is now known as Waldegrave problem. In 1838, Antoine Augustin Cournot considered a duopoly and presented a solution that is the Nash equilibrium of the game in his ("Researches into the Mathematical Principles of the Theory of Wealth").
In 1913, Ernst Zermelo published ("On an Application of Set Theory to the Theory of the Game of Chess"), which proved that the optimal chess strategy is strictly determined. This paved the way for more general theorems.
In 1938, the Danish mathematical economist Frederik Zeuthen proved that the mathematical model had a winning strategy by using Brouwer's fixed point theorem. In his 1938 book and earlier notes, Émile Borel proved a minimax theorem for two-person zero-sum matrix games only when the pay-off matrix is symmetric and provided a solution to a non-trivial infinite game (known in English as Blotto game). Borel conjectured the non-existence of mixed-strategy equilibria in finite two-person zero-sum games, a conjecture that was proved false by von Neumann.
Birth and early developments.
Game theory emerged as a unique field when John von Neumann published the paper "On the Theory of Games of Strategy" in 1928. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. Von Neumann's work in game theory culminated in his 1944 book "Theory of Games and Economic Behavior", co-authored with Oskar Morgenstern. The second edition of this book provided an axiomatic theory of utility, which reincarnated Daniel Bernoulli's old theory of utility (of money) as an independent discipline. This foundational work contains the method for finding mutually consistent solutions for two-person zero-sum games. Subsequent work focused primarily on cooperative game theory, which analyzes optimal strategies for groups of individuals, presuming that they can enforce agreements between them about proper strategies.
In 1950, the first mathematical discussion of the prisoner's dilemma appeared, and an experiment was undertaken by notable mathematicians Merrill M. Flood and Melvin Dresher, as part of the RAND Corporation's investigations into game theory. RAND pursued the studies because of possible applications to global nuclear strategy. Around this same time, John Nash developed a criterion for mutual consistency of players' strategies known as the Nash equilibrium, applicable to a wider variety of games than the criterion proposed by von Neumann and Morgenstern. Nash proved that every finite n-player, non-zero-sum (not just two-player zero-sum) non-cooperative game has what is now known as a Nash equilibrium in mixed strategies.
Game theory experienced a flurry of activity in the 1950s, during which the concepts of the core, the extensive form game, fictitious play, repeated games, and the Shapley value were developed. The 1950s also saw the first applications of game theory to philosophy and political science.
Prize-winning achievements.
In 1965, Reinhard Selten introduced his solution concept of subgame perfect equilibria, which further refined the Nash equilibrium. Later he would introduce trembling hand perfection as well. In 1994 Nash, Selten and Harsanyi became Economics Nobel Laureates for their contributions to economic game theory.
In the 1970s, game theory was extensively applied in biology, largely as a result of the work of John Maynard Smith and his evolutionarily stable strategy. In addition, the concepts of correlated equilibrium, trembling hand perfection and common knowledge were introduced and analyzed.
In 1994, John Nash was awarded the Nobel Memorial Prize in the Economic Sciences for his contribution to game theory. Nash's most famous contribution to game theory is the concept of the Nash equilibrium, which is a solution concept for non-cooperative games. A Nash equilibrium is a set of strategies, one for each player, such that no player can improve their payoff by unilaterally changing their strategy.
In 2005, game theorists Thomas Schelling and Robert Aumann followed Nash, Selten, and Harsanyi as Nobel Laureates. Schelling worked on dynamic models, early examples of evolutionary game theory. Aumann contributed more to the equilibrium school, introducing equilibrium coarsening and correlated equilibria, and developing an extensive formal analysis of the assumption of common knowledge and of its consequences.
In 2007, Leonid Hurwicz, Eric Maskin, and Roger Myerson were awarded the Nobel Prize in Economics "for having laid the foundations of mechanism design theory". Myerson's contributions include the notion of proper equilibrium, and an important graduate text: "Game Theory, Analysis of Conflict". Hurwicz introduced and formalized the concept of incentive compatibility.
In 2012, Alvin E. Roth and Lloyd S. Shapley were awarded the Nobel Prize in Economics "for the theory of stable allocations and the practice of market design". In 2014, the Nobel went to game theorist Jean Tirole.
Different types of games.
Cooperative / non-cooperative.
A game is "cooperative" if the players are able to form binding commitments externally enforced (e.g. through contract law). A game is "non-cooperative" if players cannot form alliances or if all agreements need to be self-enforcing (e.g. through credible threats).
Cooperative games are often analyzed through the framework of "cooperative game theory", which focuses on predicting which coalitions will form, the joint actions that groups take, and the resulting collective payoffs. It is different from "non-cooperative game theory" which focuses on predicting individual players' actions and payoffs by analyzing Nash equilibria.
Cooperative game theory provides a high-level approach as it describes only the structure and payoffs of coalitions, whereas non-cooperative game theory also looks at how strategic interaction will affect the distribution of payoffs. As non-cooperative game theory is more general, cooperative games can be analyzed through the approach of non-cooperative game theory (the converse does not hold) provided that sufficient assumptions are made to encompass all the possible strategies available to players due to the possibility of external enforcement of cooperation.
Symmetric / asymmetric.
A symmetric game is a game where each player earns the same payoff when making the same choice. In other words, the identity of the player does not change the resulting game facing the other player. Many of the commonly studied 2×2 games are symmetric. The standard representations of chicken, the prisoner's dilemma, and the stag hunt are all symmetric games.
The most commonly studied asymmetric games are games where there are not identical strategy sets for both players. For instance, the ultimatum game and similarly the dictator game have different strategies for each player. It is possible, however, for a game to have identical strategies for both players, yet be asymmetric. For example, the game pictured in this section's graphic is asymmetric despite having identical strategy sets for both players.
Zero-sum / non-zero-sum.
Zero-sum games (more generally, constant-sum games) are games in which choices by players can neither increase nor decrease the available resources. In zero-sum games, the total benefit to all players in the game, for every combination of strategies, always adds to zero (more informally, a player benefits only at the equal expense of others). Poker exemplifies a zero-sum game (ignoring the possibility of the house's cut), because one wins exactly the amount one's opponents lose. Other zero-sum games include matching pennies and most classical board games including Go and chess.
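For a concrete two-player zero-sum example, the mixed-strategy equilibrium of matching pennies can be computed from the condition that the row player's mix must leave the column player indifferent between columns. The short Python sketch below is a generic 2×2 calculation, not a general-purpose solver:
<syntaxhighlight lang="python">
# Matching pennies: the row player wins 1 if the coins match, loses 1 otherwise.
payoff_row = [[1, -1],
              [-1, 1]]

# For a 2x2 zero-sum game, the row mix (p, 1-p) that makes the column player
# indifferent satisfies p*a11 + (1-p)*a21 = p*a12 + (1-p)*a22.
a11, a12 = payoff_row[0]
a21, a22 = payoff_row[1]
p = (a22 - a21) / (a11 - a12 - a21 + a22)
print(p)   # 0.5 -> play heads and tails with equal probability
</syntaxhighlight>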
Many games studied by game theorists (including the famed prisoner's dilemma) are non-zero-sum games, because the outcome has net results greater or less than zero. Informally, in non-zero-sum games, a gain by one player does not necessarily correspond with a loss by another.
Furthermore, "constant-sum games" correspond to activities like theft and gambling, but not to the fundamental economic situation in which there are potential gains from trade. It is possible to transform any constant-sum game into a (possibly asymmetric) zero-sum game by adding a dummy player (often called "the board") whose losses compensate the players' net winnings.
Simultaneous / sequential.
Simultaneous games are games where both players move simultaneously, or instead the later players are unaware of the earlier players' actions (making them "effectively" simultaneous). Sequential games (or dynamic games) are games where players do not make decisions simultaneously, and player's earlier actions affect the outcome and decisions of other players. This need not be perfect information about every action of earlier players; it might be very little knowledge. For instance, a player may know that an earlier player did not perform one particular action, while they do not know which of the other available actions the first player actually performed.
The difference between simultaneous and sequential games is captured in the different representations discussed above. Often, normal form is used to represent simultaneous games, while extensive form is used to represent sequential ones. The transformation of extensive to normal form is one way, meaning that multiple extensive form games correspond to the same normal form. Consequently, notions of equilibrium for simultaneous games are insufficient for reasoning about sequential games; see subgame perfection.
In short, the differences between sequential and simultaneous games are as follows:
Perfect information and imperfect information.
An important subset of sequential games consists of games of perfect information. A game with perfect information means that all players, at every move in the game, know the previous history of the game and the moves previously made by all other players. An imperfect information game is played when the players do not know all moves already made by the opponent such as a simultaneous move game. Examples of perfect-information games include tic-tac-toe, checkers, chess, and Go.
Many card games are games of imperfect information, such as poker and bridge. Perfect information is often confused with complete information, which is a similar concept pertaining to the common knowledge of each player's sequence, strategies, and payoffs throughout gameplay. Complete information requires that every player know the strategies and payoffs available to the other players but not necessarily the actions taken, whereas perfect information is knowledge of all aspects of the game and players. Games of incomplete information can be reduced, however, to games of imperfect information by introducing "moves by nature".
Bayesian game.
One of the assumptions of the Nash equilibrium is that every player has correct beliefs about the actions of the other players. However, there are many situations in game theory where participants do not fully understand the characteristics of their opponents. Negotiators may be unaware of their opponent's valuation of the object of negotiation, companies may be unaware of their opponent's cost functions, combatants may be unaware of their opponent's strengths, and jurors may be unaware of their colleague's interpretation of the evidence at trial. In some cases, participants may know the character of their opponent well, but may not know how well their opponent knows his or her own character.
Bayesian game means a strategic game with incomplete information. For a strategic game, decision makers are players, and every player has a group of actions. A core part of the imperfect information specification is the set of states. Every state completely describes a collection of characteristics relevant to the player such as their preferences and details about them. There must be a state for every set of features that some player believes may exist.
For example, consider a game in which Player 1 is unsure whether Player 2 would rather date her or get away from her, while Player 2 understands Player 1's preferences as before. To be specific, suppose that Player 1 believes that Player 2 wants to date her with probability 1/2 and wants to get away from her with probability 1/2 (this evaluation probably comes from Player 1's experience: she faces players who want to date her half of the time and players who want to avoid her half of the time). Because of the probability involved, the analysis of this situation requires an understanding of the players' preferences over the resulting lottery, even though people are only interested in pure strategic equilibrium.
Combinatorial games.
Games in which the difficulty of finding an optimal strategy stems from the multiplicity of possible moves are called combinatorial games. Examples include chess and Go. Games that involve imperfect information may also have a strong combinatorial character, for instance backgammon. There is no unified theory addressing combinatorial elements in games. There are, however, mathematical tools that can solve some particular problems and answer some general questions.
Games of perfect information have been studied in combinatorial game theory, which has developed novel representations, e.g. surreal numbers, as well as combinatorial and algebraic (and sometimes non-constructive) proof methods to solve games of certain types, including "loopy" games that may result in infinitely long sequences of moves. These methods address games with higher combinatorial complexity than those usually considered in traditional (or "economic") game theory. A typical game that has been solved this way is Hex. A related field of study, drawing from computational complexity theory, is game complexity, which is concerned with estimating the computational difficulty of finding optimal strategies.
Research in artificial intelligence has addressed both perfect and imperfect information games that have very complex combinatorial structures (like chess, go, or backgammon) for which no provable optimal strategies have been found. The practical solutions involve computational heuristics, like alpha–beta pruning or use of artificial neural networks trained by reinforcement learning, which make games more tractable in computing practice.
Discrete and continuous games.
Much of game theory is concerned with finite, discrete games that have a finite number of players, moves, events, outcomes, etc. Many concepts can be extended, however. Continuous games allow players to choose a strategy from a continuous strategy set. For instance, Cournot competition is typically modeled with players' strategies being any non-negative quantities, including fractional quantities.
Differential games.
Differential games such as the continuous pursuit and evasion game are continuous games where the evolution of the players' state variables is governed by differential equations. The problem of finding an optimal strategy in a differential game is closely related to the optimal control theory. In particular, there are two types of strategies: the open-loop strategies are found using the Pontryagin maximum principle while the closed-loop strategies are found using Bellman's Dynamic Programming method.
A particular case of differential games are the games with a random time horizon. In such games, the terminal time is a random variable with a given probability distribution function. Therefore, the players maximize the mathematical expectation of the cost function. It was shown that the modified optimization problem can be reformulated as a discounted differential game over an infinite time interval.
Evolutionary game theory.
Evolutionary game theory studies players who adjust their strategies over time according to rules that are not necessarily rational or farsighted. In general, the evolution of strategies over time according to such rules is modeled as a Markov chain with a state variable such as the current strategy profile or how the game has been played in the recent past. Such rules may feature imitation, optimization, or survival of the fittest.
In biology, such models can represent evolution, in which offspring adopt their parents' strategies and parents who play more successful strategies (i.e. corresponding to higher payoffs) have a greater number of offspring. In the social sciences, such models typically represent strategic adjustment by players who play a game many times within their lifetime and, consciously or unconsciously, occasionally adjust their strategies.
Stochastic outcomes (and relation to other fields).
Individual decision problems with stochastic outcomes are sometimes considered "one-player games". They may be modeled using similar tools within the related disciplines of decision theory, operations research, and areas of artificial intelligence, particularly AI planning (with uncertainty) and multi-agent system. Although these fields may have different motivators, the mathematics involved are substantially the same, e.g. using Markov decision processes (MDP).
Stochastic outcomes can also be modeled in terms of game theory by adding a randomly acting player who makes "chance moves" ("moves by nature"). This player is not typically considered a third player in what is otherwise a two-player game, but merely serves to provide a roll of the dice where required by the game.
For some problems, different approaches to modeling stochastic outcomes may lead to different solutions. For example, the difference in approach between MDPs and the minimax solution is that the latter considers the worst-case over a set of adversarial moves, rather than reasoning in expectation about these moves given a fixed probability distribution. The minimax approach may be advantageous where stochastic models of uncertainty are not available, but may also be overestimating extremely unlikely (but costly) events, dramatically swaying the strategy in such scenarios if it is assumed that an adversary can force such an event to happen. (See Black swan theory for more discussion on this kind of modeling issue, particularly as it relates to predicting and limiting losses in investment banking.)
General models that include all elements of stochastic outcomes, adversaries, and partial or noisy observability (of moves by other players) have also been studied. The "gold standard" is considered to be partially observable stochastic game (POSG), but few realistic problems are computationally feasible in POSG representation.
Metagames.
These are games the play of which is the development of the rules for another game, the target or subject game. Metagames seek to maximize the utility value of the rule set developed. The theory of metagames is related to mechanism design theory.
The term metagame analysis is also used to refer to a practical approach developed by Nigel Howard, whereby a situation is framed as a strategic game in which stakeholders try to realize their objectives by means of the options available to them. Subsequent developments have led to the formulation of confrontation analysis.
Mean field game theory.
Mean field game theory is the study of strategic decision making in very large populations of small interacting agents. This class of problems was considered in the economics literature by Boyan Jovanovic and Robert W. Rosenthal, in the engineering literature by Peter E. Caines, and by mathematicians Pierre-Louis Lions and Jean-Michel Lasry.
Representation of games.
The games studied in game theory are well-defined mathematical objects. To be fully defined, a game must specify the following elements: the "players" of the game, the "information" and "actions" available to each player at each decision point, and the "payoffs" for each outcome. (Eric Rasmusen refers to these four "essential elements" by the acronym "PAPI".) A game theorist typically uses these elements, along with a solution concept of their choosing, to deduce a set of equilibrium strategies for each player such that, when these strategies are employed, no player can profit by unilaterally deviating from their strategy. These equilibrium strategies determine an equilibrium to the game—a stable state in which either one outcome occurs or a set of outcomes occur with known probability.
Most cooperative games are presented in the characteristic function form, while the extensive and the normal forms are used to define noncooperative games.
Extensive form.
The extensive form can be used to formalize games with a time sequencing of moves. Extensive form games can be visualised using game trees (as pictured here). Here each vertex (or node) represents a point of choice for a player. The player is specified by a number listed by the vertex. The lines out of the vertex represent a possible action for that player. The payoffs are specified at the bottom of the tree. The extensive form can be viewed as a multi-player generalization of a decision tree. To solve any extensive form game, backward induction must be used. It involves working backward up the game tree to determine what a rational player would do at the last vertex of the tree, what the player with the previous move would do given that the player with the last move is rational, and so on until the first vertex of the tree is reached.
The game pictured consists of two players. The way this particular game is structured (i.e., with sequential decision making and perfect information), "Player 1" "moves" first by choosing either F or U (fair or unfair). Next in the sequence, "Player 2", who has now observed "Player 1"'s move, can choose to play either A or R (accept or reject). Once "Player 2" has made their choice, the game is considered finished and each player gets their respective payoff, represented in the image as two numbers, where the first number represents Player 1's payoff, and the second number represents Player 2's payoff. Suppose that "Player 1" chooses U and then "Player 2" chooses A: "Player 1" then gets a payoff of "eight" (which in real-world terms can be interpreted in many ways, the simplest of which is in terms of money but could mean things such as eight days of vacation or eight countries conquered or even eight more opportunities to play the same game against other players) and "Player 2" gets a payoff of "two".
The extensive form can also capture simultaneous-move games and games with imperfect information. To represent it, either a dotted line connects different vertices to represent them as being part of the same information set (i.e. the players do not know at which point they are), or a closed line is drawn around them. (See example in the imperfect information section.)
Normal form.
The normal (or strategic form) game is usually represented by a matrix which shows the players, strategies, and payoffs (see the example to the right). More generally it can be represented by any function that associates a payoff for each player with every possible combination of actions. In the accompanying example there are two players; one chooses the row and the other chooses the column. Each player has two strategies, which are specified by the number of rows and the number of columns. The payoffs are provided in the interior. The first number is the payoff received by the row player (Player 1 in our example); the second is the payoff for the column player (Player 2 in our example). Suppose that Player 1 plays "Up" and that Player 2 plays "Left". Then Player 1 gets a payoff of 4, and Player 2 gets 3.
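As an illustration, the following Python sketch encodes a 2×2 normal-form game as a pair of payoff matrices and searches for pure-strategy Nash equilibria by checking best responses. Only the (Up, Left) payoffs of (4, 3) come from the example above; the remaining entries are hypothetical.

```python
import numpy as np

# Rows are Player 1's strategies (Up, Down); columns are Player 2's (Left, Right).
# Only the (Up, Left) payoffs (4, 3) come from the example; the rest are made up.
payoff_p1 = np.array([[4, 1],
                      [2, 3]])
payoff_p2 = np.array([[3, 2],
                      [4, 1]])

def pure_nash_equilibria(p1, p2):
    """Return all cells where neither player can gain by deviating unilaterally."""
    cells = []
    rows, cols = p1.shape
    for i in range(rows):
        for j in range(cols):
            # Player 1 cannot do better by switching rows, Player 2 by switching columns.
            if p1[i, j] == p1[:, j].max() and p2[i, j] == p2[i, :].max():
                cells.append((i, j))
    return cells

print(pure_nash_equilibria(payoff_p1, payoff_p2))  # [(0, 0)] -> (Up, Left)
```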
When a game is presented in normal form, it is presumed that each player acts simultaneously or, at least, without knowing the actions of the other. If players have some information about the choices of other players, the game is usually presented in extensive form.
Every extensive-form game has an equivalent normal-form game; however, the transformation to normal form may result in an exponential blowup in the size of the representation, making it computationally impractical.
Characteristic function form.
In cooperative game theory the characteristic function lists the payoff of each coalition. The origin of this formulation is in John von Neumann and Oskar Morgenstern's book.
Formally, a characteristic function is a function formula_0 from the set of all possible coalitions of players to a set of payments, and also satisfies formula_1. The function describes how much collective payoff a set of players can gain by forming a coalition.
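For illustration, a characteristic function can be represented directly as a mapping from coalitions to payments. The sketch below uses hypothetical coalition values for three players and checks superadditivity, a common (though not universal) property of such functions; none of the numbers come from the text above.

```python
from itertools import chain, combinations

# A toy three-player cooperative game; the coalition values are illustrative only.
players = frozenset({"A", "B", "C"})
v = {
    frozenset(): 0,          # the characteristic function must satisfy v(empty set) = 0
    frozenset({"A"}): 1,
    frozenset({"B"}): 1,
    frozenset({"C"}): 2,
    frozenset({"A", "B"}): 4,
    frozenset({"A", "C"}): 5,
    frozenset({"B", "C"}): 5,
    frozenset({"A", "B", "C"}): 9,
}

def is_superadditive(v, players):
    """Check v(S ∪ T) >= v(S) + v(T) for all disjoint coalitions S and T."""
    subsets = [frozenset(s) for s in chain.from_iterable(
        combinations(players, r) for r in range(len(players) + 1))]
    return all(v[s | t] >= v[s] + v[t]
               for s in subsets for t in subsets if not (s & t))

print(is_superadditive(v, players))  # True for these values
```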
Alternative game representations.
Alternative game representation forms are used for some subclasses of games or adjusted to the needs of interdisciplinary research. In addition to classical game representations, some of the alternative representations also encode time related aspects.
General and applied uses.
As a method of applied mathematics, game theory has been used to study a wide variety of human and animal behaviors. It was initially developed in economics to understand a large collection of economic behaviors, including behaviors of firms, markets, and consumers. The first use of game-theoretic analysis was by Antoine Augustin Cournot in 1838 with his solution of the Cournot duopoly. The use of game theory in the social sciences has expanded, and game theory has been applied to political, sociological, and psychological behaviors as well.
Although pre-twentieth-century naturalists such as Charles Darwin made game-theoretic kinds of statements, the use of game-theoretic analysis in biology began with Ronald Fisher's studies of animal behavior during the 1930s. This work predates the name "game theory", but it shares many important features with this field. The developments in economics were later applied to biology largely by John Maynard Smith in his 1982 book "Evolution and the Theory of Games".
In addition to being used to describe, predict, and explain behavior, game theory has also been used to develop theories of ethical or normative behavior and to prescribe such behavior. In economics and philosophy, scholars have applied game theory to help in the understanding of good or proper behavior. Game-theoretic approaches have also been suggested in the philosophy of language and philosophy of science. Game-theoretic arguments of this type can be found as far back as Plato. An alternative version of game theory, called chemical game theory, represents the player's choices as metaphorical chemical reactant molecules called "knowlecules". Chemical game theory then calculates the outcomes as equilibrium solutions to a system of chemical reactions.
Description and modeling.
The primary use of game theory is to describe and model how human populations behave. Some scholars believe that by finding the equilibria of games they can predict how actual human populations will behave when confronted with situations analogous to the game being studied. This particular view of game theory has been criticized. It is argued that the assumptions made by game theorists are often violated when applied to real-world situations. Game theorists usually assume players act rationally, but in practice, human rationality and/or behavior often deviates from the model of rationality as used in game theory. Game theorists respond by comparing their assumptions to those used in physics. Thus while their assumptions do not always hold, they can treat game theory as a reasonable scientific ideal akin to the models used by physicists. However, empirical work has shown that in some classic games, such as the centipede game, guess 2/3 of the average game, and the dictator game, people regularly do not play Nash equilibria. There is an ongoing debate regarding the importance of these experiments and whether the analysis of the experiments fully captures all aspects of the relevant situation.
Some game theorists, following the work of John Maynard Smith and George R. Price, have turned to evolutionary game theory in order to resolve these issues. These models presume either no rationality or bounded rationality on the part of players. Despite the name, evolutionary game theory does not necessarily presume natural selection in the biological sense. Evolutionary game theory includes both biological as well as cultural evolution and also models of individual learning (for example, fictitious play dynamics).
Prescriptive or normative analysis.
Some scholars see game theory not as a predictive tool for the behavior of human beings, but as a suggestion for how people ought to behave. Since a strategy corresponding to a Nash equilibrium of a game constitutes one's best response to the actions of the other players – provided they are in (the same) Nash equilibrium – playing a strategy that is part of a Nash equilibrium seems appropriate. This normative use of game theory has also come under criticism.
Use of game theory in economics.
Game theory is a major method used in mathematical economics and business for modeling competing behaviors of interacting agents. Applications include a wide array of economic phenomena and approaches, such as auctions, bargaining, mergers and acquisitions pricing, fair division, duopolies, oligopolies, social network formation, agent-based computational economics, general equilibrium, mechanism design, and voting systems; and across such broad areas as experimental economics, behavioral economics, information economics, industrial organization, and political economy.
This research usually focuses on particular sets of strategies known as "solution concepts" or "equilibria". A common assumption is that players act rationally. In non-cooperative games, the most famous of these is the Nash equilibrium. A set of strategies is a Nash equilibrium if each represents a best response to the other strategies. If all the players are playing the strategies in a Nash equilibrium, they have no unilateral incentive to deviate, since their strategy is the best they can do given what others are doing.
The payoffs of the game are generally taken to represent the utility of individual players.
A prototypical paper on game theory in economics begins by presenting a game that is an abstraction of a particular economic situation. One or more solution concepts are chosen, and the author demonstrates which strategy sets in the presented game are equilibria of the appropriate type. Economists and business professors suggest two primary uses (noted above): "descriptive" and "prescriptive".
Application in managerial economics.
Game theory also has an extensive use in a specific branch or stream of economics – Managerial Economics. One important usage of it in the field of managerial economics is in analyzing strategic interactions between firms. For example, firms may be competing in a market with limited resources, and game theory can help managers understand how their decisions impact their competitors and the overall market outcomes. Game theory can also be used to analyze cooperation between firms, such as in forming strategic alliances or joint ventures. Another use of game theory in managerial economics is in analyzing pricing strategies. For example, firms may use game theory to determine the optimal pricing strategy based on how they expect their competitors to respond to their pricing decisions. Overall, game theory serves as a useful tool for analyzing strategic interactions and decision making in the context of managerial economics.
Use of game theory in business.
The Chartered Institute of Procurement & Supply (CIPS) promotes knowledge and use of game theory within the context of business procurement. CIPS and TWS Partners have conducted a series of surveys designed to explore the understanding, awareness and application of game theory among procurement professionals. Some of the main findings in their third annual survey (2019) include:
Use of game theory in project management.
Sensible decision-making is critical for the success of projects. In project management, game theory is used to model the decision-making process of players, such as investors, project managers, contractors, sub-contractors, governments and customers. Quite often, these players have competing interests, and sometimes their interests are directly detrimental to other players, making project management scenarios well-suited to be modeled by game theory.
Piraveenan (2019) in his review provides several examples where game theory is used to model project management scenarios. For instance, an investor typically has several investment options, and each option will likely result in a different project, and thus one of the investment options has to be chosen before the project charter can be produced. Similarly, any large project involving subcontractors, for instance, a construction project, has a complex interplay between the main contractor (the project manager) and subcontractors, or among the subcontractors themselves, which typically has several decision points. For example, if there is an ambiguity in the contract between the contractor and subcontractor, each must decide how hard to push their case without jeopardizing the whole project, and thus their own stake in it. Similarly, when projects from competing organizations are launched, the marketing personnel have to decide what is the best timing and strategy to market the project, or its resultant product or service, so that it can gain maximum traction in the face of competition. In each of these scenarios, the required decisions depend on the decisions of other players who, in some way, have competing interests to the interests of the decision-maker, and thus can ideally be modeled using game theory.
Piraveenan summarises that two-player games are predominantly used to model project management scenarios, and based on the identity of these players, five distinct types of games are used in project management.
In terms of types of games, both cooperative as well as non-cooperative, normal-form as well as extensive-form, and zero-sum as well as non-zero-sum are used to model various project management scenarios.
Political science.
The application of game theory to political science is focused in the overlapping areas of fair division, political economy, public choice, war bargaining, positive political theory, and social choice theory. In each of these areas, researchers have developed game-theoretic models in which the players are often voters, states, special interest groups, and politicians.
Early examples of game theory applied to political science are provided by Anthony Downs. In his 1957 book "An Economic Theory of Democracy", he applies the Hotelling firm location model to the political process. In the Downsian model, political candidates commit to ideologies on a one-dimensional policy space. Downs first shows how the political candidates will converge to the ideology preferred by the median voter if voters are fully informed, but then argues that voters choose to remain rationally ignorant which allows for candidate divergence. Game theory was applied in 1962 to the Cuban Missile Crisis during the presidency of John F. Kennedy.
It has also been proposed that game theory explains the stability of any form of political government. Taking the simplest case of a monarchy, for example, the king, being only one person, does not and cannot maintain his authority by personally exercising physical control over all or even any significant number of his subjects. Sovereign control is instead explained by the recognition by each citizen that all other citizens expect each other to view the king (or other established government) as the person whose orders will be followed. Coordinating communication among citizens to replace the sovereign is effectively barred, since conspiracy to replace the sovereign is generally punishable as a crime. Thus, in a process that can be modeled by variants of the prisoner's dilemma, during periods of stability no citizen will find it rational to move to replace the sovereign, even if all the citizens know they would be better off if they were all to act collectively.
A game-theoretic explanation for democratic peace is that public and open debate in democracies sends clear and reliable information regarding their intentions to other states. In contrast, it is difficult to know the intentions of nondemocratic leaders, what effect concessions will have, and if promises will be kept. Thus there will be mistrust and unwillingness to make concessions if at least one of the parties in a dispute is a non-democracy.
However, game theory predicts that two countries may still go to war even if their leaders are cognizant of the costs of fighting. War may result from asymmetric information; two countries may have incentives to misrepresent the amount of military resources they have on hand, rendering them unable to settle disputes agreeably without resorting to fighting. Moreover, war may arise because of commitment problems: if two countries wish to settle a dispute via peaceful means, but each wishes to go back on the terms of that settlement, they may have no choice but to resort to warfare. Finally, war may result from issue indivisibilities.
Game theory could also help predict a nation's responses when there is a new rule or law to be applied to that nation. One example is Peter John Wood's (2013) research looking into what nations could do to help reduce climate change. Wood thought this could be accomplished by making treaties with other nations to reduce greenhouse gas emissions. However, he concluded that this idea could not work because it would create a prisoner's dilemma for the nations.
Use of game theory in defence science and technology.
Game theory has been used extensively to model decision-making scenarios relevant to defence applications. Most studies that have applied game theory in defence settings are concerned with Command and Control Warfare, and can be further classified into studies dealing with (i) Resource Allocation Warfare (ii) Information Warfare (iii) Weapons Control Warfare, and (iv) Adversary Monitoring Warfare. Many of the problems studied are concerned with sensing and tracking, for example a surface ship trying to track a hostile submarine and the submarine trying to evade being tracked, and the interdependent decision making that takes place with regards to bearing, speed, and the sensor technology activated by both vessels. Ho et al. provide a concise summary of the state-of-the-art with regards to the use of game theory in defence applications and highlight the benefits and limitations of game theory in the considered scenarios.
Use of game theory in biology.
Unlike those in economics, the payoffs for games in biology are often interpreted as corresponding to fitness. In addition, the focus has been less on equilibria that correspond to a notion of rationality and more on ones that would be maintained by evolutionary forces. The best-known equilibrium in biology is known as the "evolutionarily stable strategy" (ESS), first introduced by John Maynard Smith and George R. Price in 1973. Although its initial motivation did not involve any of the mental requirements of the Nash equilibrium, every ESS is a Nash equilibrium.
In biology, game theory has been used as a model to understand many different phenomena. It was first used to explain the evolution (and stability) of the approximate 1:1 sex ratios. Ronald Fisher (1930) suggested that the 1:1 sex ratios are a result of evolutionary forces acting on individuals who could be seen as trying to maximize their number of grandchildren.
Additionally, biologists have used evolutionary game theory and the ESS to explain the emergence of animal communication. The analysis of signaling games and other communication games has provided insight into the evolution of communication among animals. For example, the mobbing behavior of many species, in which a large number of prey animals attack a larger predator, seems to be an example of spontaneous emergent organization. Ants have also been shown to exhibit feed-forward behavior akin to fashion (see Paul Ormerod's "Butterfly Economics").
Biologists have used the game of chicken to analyze fighting behavior and territoriality.
According to Maynard Smith, in the preface to "Evolution and the Theory of Games", "paradoxically, it has turned out that game theory is more readily applied to biology than to the field of economic behaviour for which it was originally designed". Evolutionary game theory has been used to explain many seemingly incongruous phenomena in nature.
One such phenomenon is known as biological altruism. This is a situation in which an organism appears to act in a way that benefits other organisms and is detrimental to itself. This is distinct from traditional notions of altruism because such actions are not conscious, but appear to be evolutionary adaptations to increase overall fitness. Examples can be found in species ranging from vampire bats that regurgitate blood they have obtained from a night's hunting and give it to group members who have failed to feed, to worker bees that care for the queen bee for their entire lives and never mate, to vervet monkeys that warn group members of a predator's approach, even when it endangers that individual's chance of survival. All of these actions increase the overall fitness of a group, but occur at a cost to the individual.
Evolutionary game theory explains this altruism with the idea of kin selection. Altruists discriminate between the individuals they help and favor relatives. Hamilton's rule explains the evolutionary rationale behind this selection with the equation c < b × r, where the cost c to the altruist must be less than the benefit b to the recipient multiplied by the coefficient of relatedness r. The more closely related two organisms are, the more the incidence of altruism increases, because they share many of the same alleles. This means that the altruistic individual, by ensuring that the alleles of its close relative are passed on through survival of its offspring, can forgo the option of having offspring itself because the same number of alleles are passed on. For example, helping a sibling (in diploid animals) has a coefficient of <templatestyles src="Fraction/styles.css" />1⁄2, because (on average) an individual shares half of the alleles in its sibling's offspring. Ensuring that enough of a sibling's offspring survive to adulthood precludes the necessity of the altruistic individual producing offspring. The coefficient values depend heavily on the scope of the playing field; for example, if the choice of whom to favor includes all genetic living things, not just all relatives, and the differences between all humans are assumed to account for only approximately 1% of the diversity in the playing field, then a coefficient that was <templatestyles src="Fraction/styles.css" />1⁄2 in the smaller field becomes 0.995. Similarly, if it is considered that information other than that of a genetic nature (e.g. epigenetics, religion, science, etc.) persists through time, the playing field becomes larger still, and the discrepancies smaller.
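A minimal numeric sketch of Hamilton's rule, using made-up cost and benefit values together with the coefficient of relatedness of 1⁄2 for full siblings mentioned above:

```python
# Hamilton's rule: an altruistic act is favored when c < b * r.
# The cost and benefit values below are illustrative, not empirical.
def altruism_favored(cost, benefit, relatedness):
    """Return True if Hamilton's rule c < b * r is satisfied."""
    return cost < benefit * relatedness

# Helping a full sibling in a diploid species: r = 1/2.
print(altruism_favored(cost=1.0, benefit=3.0, relatedness=0.5))   # True:  1.0 < 1.5
print(altruism_favored(cost=2.0, benefit=3.0, relatedness=0.5))   # False: 2.0 >= 1.5
```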
Computer science and logic.
Game theory has come to play an increasingly important role in logic and in computer science. Several logical theories have a basis in game semantics. In addition, computer scientists have used games to model interactive computations. Also, game theory provides a theoretical basis to the field of multi-agent systems.
Separately, game theory has played a role in online algorithms; in particular, the k-server problem, which has in the past been referred to as "games with moving costs" and "request-answer games". Yao's principle is a game-theoretic technique for proving lower bounds on the computational complexity of randomized algorithms, especially online algorithms.
The emergence of the Internet has motivated the development of algorithms for finding equilibria in games, markets, computational auctions, peer-to-peer systems, and security and information markets. Algorithmic game theory and within it algorithmic mechanism design combine computational algorithm design and analysis of complex systems with economic theory.
Philosophy.
Game theory has been put to several uses in philosophy. Responding to two papers by W.V.O. Quine (1960, 1967), David Lewis (1969) used game theory to develop a philosophical account of convention. In so doing, he provided the first analysis of common knowledge and employed it in analyzing play in coordination games. In addition, he first suggested that one can understand meaning in terms of signaling games. This later suggestion has been pursued by several philosophers since Lewis. Following Lewis's game-theoretic account of conventions, Edna Ullmann-Margalit (1977) and Bicchieri (2006) have developed theories of social norms that define them as Nash equilibria that result from transforming a mixed-motive game into a coordination game.
Game theory has also challenged philosophers to think in terms of interactive epistemology: what it means for a collective to have common beliefs or knowledge, and what are the consequences of this knowledge for the social outcomes resulting from the interactions of agents. Philosophers who have worked in this area include Bicchieri (1989, 1993), Skyrms (1990), and Stalnaker (1999).
The synthesis of game theory with ethics was championed by R. B. Braithwaite. The hope was that rigorous mathematical analysis of game theory might help formalize the more imprecise philosophical discussions. However, this expectation has been realized only to a limited extent.
In ethics, some (most notably David Gauthier, Gregory Kavka, and Jean Hampton) authors have attempted to pursue Thomas Hobbes' project of deriving morality from self-interest. Since games like the prisoner's dilemma present an apparent conflict between morality and self-interest, explaining why cooperation is required by self-interest is an important component of this project. This general strategy is a component of the general social contract view in political philosophy (for examples, see and ).
Other authors have attempted to use evolutionary game theory in order to explain the emergence of human attitudes about morality and corresponding animal behaviors. These authors look at several games including the prisoner's dilemma, stag hunt, and the Nash bargaining game as providing an explanation for the emergence of attitudes about morality (see, e.g., Skyrms (1996, 2004) and Sober and Wilson (1998)).
Epidemiology.
Since the decision to take a vaccine for a particular disease is often made by individuals, who may consider a range of factors and parameters in making this decision (such as the incidence and prevalence of the disease, perceived and real risks associated with contracting the disease, mortality rate, perceived and real risks associated with vaccination, and financial cost of vaccination), game theory has been used to model and predict vaccination uptake in a society.
Artificial Intelligence and Machine Learning.
Game theory has multiple applications in the field of AI/ML. It is often used in developing autonomous systems that can make complex decisions in uncertain environments. Other areas where game theory is applied in the AI/ML context include multi-agent system formation, reinforcement learning, and mechanism design. By using game theory to model the behavior of other agents and anticipate their actions, AI/ML systems can make better decisions and operate more effectively.
Well known examples of games.
Prisoner's dilemma.
William Poundstone described the game in his 1993 book Prisoner's Dilemma:
Two members of a criminal gang, A and B, are arrested and imprisoned. Each prisoner is in solitary confinement with no means of communication with their partner. The principal charge would lead to a sentence of ten years in prison; however, the police do not have the evidence for a conviction. They plan to sentence both to two years in prison on a lesser charge but offer each prisoner a Faustian bargain: If one of them confesses to the crime of the principal charge, betraying the other, they will be pardoned and free to leave while the other must serve the entirety of the sentence instead of just two years for the lesser charge.
The dominant strategy (and therefore the best response to any possible opponent strategy) is to betray the other, which aligns with the sure-thing principle. However, both prisoners staying silent would yield a greater reward for both of them than mutual betrayal.
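A small Python sketch of the dilemma described above. The sentence for mutual betrayal is not spelled out in the text, so the value used here (8 years each) is an assumption for illustration.

```python
# Years in prison for prisoner A, indexed by (A's move, B's move).
# The (betray, betray) outcome is not given in the text above; 8 years is assumed.
YEARS = {
    ("silent", "silent"): 2,
    ("silent", "betray"): 10,
    ("betray", "silent"): 0,
    ("betray", "betray"): 8,
}

def best_reply(opponent_move):
    """A's move that minimizes A's own prison time against a fixed opponent move."""
    return min(("silent", "betray"), key=lambda my: YEARS[(my, opponent_move)])

# Betraying is the best reply whatever the other prisoner does (dominant strategy),
# yet mutual silence (2 years each) beats mutual betrayal (8 years each).
print(best_reply("silent"), best_reply("betray"))  # betray betray
```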
Battle of the sexes.
The "battle of the sexes" is a term used to describe the perceived conflict between men and women in various areas of life, such as relationships, careers, and social roles. This conflict is often portrayed in popular culture, such as movies and television shows, as a humorous or dramatic competition between the genders. This conflict can be depicted in a game theory framework. This is an example of non-cooperative games.
An example of the "battle of the sexes" can be seen in the portrayal of relationships in popular media, where men and women are often depicted as being fundamentally different and in conflict with each other. For instance, in some romantic comedies, the male and female protagonists are shown as having opposing views on love and relationships, and they have to overcome these differences in order to be together.
In this game, there are two pure strategy Nash equilibria, corresponding to the two outcomes in which the players successfully coordinate on one of the options. There is also a mixed strategy Nash equilibrium, in which each player chooses their strategy randomly with particular probabilities. However, in the context of the "battle of the sexes" game, the assumption is usually made that the game is played in pure strategies.
Ultimatum game.
The ultimatum game is a game that has become a popular instrument of economic experiments. An early description is by Nobel laureate John Harsanyi in 1961.
One player, the proposer, is endowed with a sum of money. The proposer is tasked with splitting it with another player, the responder (who knows what the total sum is). Once the proposer communicates his decision, the responder may accept it or reject it. If the responder accepts, the money is split per the proposal; if the responder rejects, both players receive nothing. Both players know in advance the consequences of the responder accepting or rejecting the offer. The game demonstrates how social acceptance, fairness, and generosity influence the players' decisions.
The ultimatum game has a variant, the dictator game. They are mostly identical, except that in the dictator game the responder has no power to reject the proposer's offer.
Trust game.
The Trust Game is an experiment designed to measure trust in economic decisions. It is also called "the investment game" and is designed to investigate trust and demonstrate its importance rather than the "rationality" of self-interest. The game was designed by Joyce Berg, John Dickhaut and Kevin McCabe in 1995.
In the game, one player (the investor) is given a sum of money and must decide how much of it to give to another player (the trustee). The amount given is then tripled by the experimenter. The trustee then decides how much of the tripled amount to return to the investor. If the trustee were completely self-interested, they should return nothing. However, that is not what is observed when the experiment is conducted. The outcomes suggest that people are willing to place trust, by risking some amount of money, in the belief that there will be reciprocity.
Cournot Competition.
The Cournot competition model involves players independently and simultaneously choosing quantities of a homogeneous product to produce, where marginal cost can be different for each firm and each firm's payoff is its profit. The production costs are public information, and each firm aims to find its profit-maximizing quantity based on what it believes the other firm will produce. In this game the firms would jointly prefer to produce at the monopoly quantity, but each has a strong incentive to deviate and produce more, which decreases the market-clearing price. For example, firms may be tempted to deviate from the monopoly quantity if there is a low monopoly quantity and high price, with the aim of increasing production to maximize profit. However, this option does not provide the highest payoff, as a firm's ability to maximize profits depends on its market share and the elasticity of the market demand. The Cournot equilibrium is reached when each firm operates on its reaction function with no incentive to deviate, as it is playing the best response to the other firm's output. Within the game, firms reach the Nash equilibrium when the Cournot equilibrium is achieved.
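As a rough illustration, the following Python sketch computes a Cournot equilibrium by iterating the firms' reaction functions. The linear inverse demand function and the cost figures are assumptions for the example, not part of the model description above.

```python
# Cournot duopoly sketch under an assumed linear inverse demand P = a - b*(q1 + q2)
# and constant marginal costs; the functional form and numbers are illustrative.
a, b = 100.0, 1.0          # demand intercept and slope (assumed)
c1, c2 = 10.0, 16.0        # marginal costs of firm 1 and firm 2 (assumed)

def best_response(rival_q, own_cost):
    """Profit-maximizing quantity given the rival's output (the reaction function)."""
    return max((a - own_cost - b * rival_q) / (2 * b), 0.0)

# Iterate the reaction functions until neither firm wants to deviate.
q1 = q2 = 0.0
for _ in range(200):
    q1, q2 = best_response(q2, c1), best_response(q1, c2)

price = a - b * (q1 + q2)
print(round(q1, 2), round(q2, 2), round(price, 2))   # 32.0 26.0 42.0
# Matches the closed-form Cournot equilibrium q1 = (a - 2*c1 + c2) / (3*b), etc.
```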
Bertrand Competition.
The Bertrand competition model assumes homogeneous products and a constant marginal cost, and players choose prices. The equilibrium of price competition is where the price is equal to marginal cost, assuming complete information about the competitors' costs. At any price above marginal cost, firms have an incentive to deviate, because the firm offering the homogeneous product at a lower price will gain all of the market share, a situation known as a cost advantage.
See also.
Lists
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
*reprinted edition: | [
{
"math_id": 0,
"text": " v : 2^N \\to \\mathbb{R} "
},
{
"math_id": 1,
"text": " v( \\emptyset ) = 0 "
}
] | https://en.wikipedia.org/wiki?curid=11924 |
1192521 | RC oscillator | Linear electronic oscillator circuits, which generate a sinusoidal output signal, are composed of an amplifier and a frequency selective element, a filter. A linear oscillator circuit which uses an RC network, a combination of resistors and capacitors, for its frequency selective part is called an RC oscillator.
Description.
RC oscillators are a type of feedback oscillator; they consist of an amplifying device, a transistor, vacuum tube, or op-amp, with some of its output energy fed back into its input through a network of resistors and capacitors, an RC network, to achieve positive feedback, causing it to generate an oscillating sinusoidal voltage. They are used to produce lower frequencies, mostly audio frequencies, in such applications as audio signal generators and electronic musical instruments. At radio frequencies, another type of feedback oscillator, the LC oscillator is used, but at frequencies below 100 kHz the size of the inductors and capacitors needed for the LC oscillator become cumbersome, and RC oscillators are used instead. Their lack of bulky inductors also makes them easier to integrate into microelectronic devices. Since the oscillator's frequency is determined by the value of resistors and capacitors, which vary with temperature, RC oscillators do not have as good frequency stability as crystal oscillators.
The frequency of oscillation is determined by the Barkhausen criterion, which says that the circuit will only oscillate at frequencies for which the phase shift around the feedback loop is equal to 360° (2π radians) or a multiple of 360°, and the loop gain (the amplification around the feedback loop) is equal to one. The purpose of the feedback RC network is to provide the correct phase shift at the desired oscillating frequency so the loop has 360° phase shift, so the sine wave, after passing through the loop will be in phase with the sine wave at the beginning and reinforce it, resulting in positive feedback. The amplifier provides gain to compensate for the energy lost as the signal passes through the feedback network, to create sustained oscillations. As long as the gain of the amplifier is high enough that the total gain around the loop is unity or higher, the circuit will generally oscillate.
In RC oscillator circuits which use a single inverting amplifying device, such as a transistor, tube, or an op amp with the feedback applied to the inverting input, the amplifier provides 180° of the phase shift, so the RC network must provide the other 180°. Since each capacitor can provide a maximum of 90° of phase shift, RC oscillators require at least two frequency-determining capacitors in the circuit (two poles), and most have three or more, with a comparable number of resistors.
This makes tuning the circuit to different frequencies more difficult than in other types such as the LC oscillator, in which the frequency is determined by a single LC circuit so only one element must be varied. Although the frequency can be varied over a small range by adjusting a single circuit element, to tune an RC oscillator over a wide range two or more resistors or capacitors must be varied in unison, requiring them to be "ganged" together mechanically on the same shaft. The oscillation frequency is proportional to the inverse of the capacitance or resistance, whereas in an LC oscillator the frequency is proportional to inverse square root of the capacitance or inductance. So a much wider frequency range can be covered by a given variable capacitor in an RC oscillator. For example, a variable capacitor that could be varied over a 9:1 capacitance range will give an RC oscillator a 9:1 frequency range, but in an LC oscillator it will give only a 3:1 range.
Some examples of common RC oscillator circuits are listed below:
Phase-shift oscillator.
In the phase-shift oscillator the feedback network is three identical cascaded RC sections. In the simplest design the capacitors and resistors in each section have the same value formula_0 and formula_1. Then at the oscillation frequency each RC section contributes 60° phase shift for a total of 180°. The oscillation frequency is
formula_2
The feedback network has an attenuation of 1/29, so the op-amp must have a gain of 29 to give a loop gain of one for the circuit to oscillate
formula_3
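For example, with assumed component values the oscillation frequency and required feedback resistor follow directly from the two formulas above. A minimal Python sketch (the R and C values are arbitrary examples, not from the text):

```python
from math import pi, sqrt

# Arbitrary example component values.
R = 10e3      # ohms
C = 10e-9     # farads

# Oscillation frequency of the three-section RC phase-shift oscillator:
# f = 1 / (2*pi*R*C*sqrt(6))
f = 1 / (2 * pi * R * C * sqrt(6))

# The RC network attenuates by 1/29, so the amplifier needs a gain of 29,
# e.g. a feedback resistor of 29*R around the op-amp.
R_fb = 29 * R

print(f"f ≈ {f:.0f} Hz, R_fb = {R_fb:.0f} Ω")   # ≈ 650 Hz with these values
```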
Twin-T oscillator.
Another common design is the "Twin-T" oscillator as it uses two "T" RC circuits operated in parallel. One circuit is an R-C-R "T" which acts as a low-pass filter. The second circuit is a C-R-C "T" which operates as a high-pass filter. Together, these circuits form a bridge which is tuned at the desired frequency of oscillation. The signal in the C-R-C branch of the Twin-T filter is advanced and the signal in the R-C-R branch is delayed, so they may cancel one another at frequency formula_4 if formula_5; if the filter is connected as negative feedback to an amplifier, and x > 2, the amplifier becomes an oscillator. (Note: formula_6.)
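A corresponding sketch for the Twin-T null frequency, again with arbitrary example values:

```python
from math import pi

# Arbitrary example values for the Twin-T network, not from the text.
R = 10e3      # ohms
C = 10e-9     # farads

# Null (oscillation) frequency of the Twin-T network: f = 1 / (2*pi*R*C)
f = 1 / (2 * pi * R * C)
print(f"f ≈ {f:.0f} Hz")   # ≈ 1592 Hz with these values
```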
Quadrature oscillator.
The quadrature oscillator uses two cascaded op-amp integrators in a feedback loop, one with the signal applied to the inverting input, or two integrators and an inverter. The advantage of this circuit is that the sinusoidal outputs of the two op-amps are 90° out of phase (in quadrature). This is useful in some communication circuits.
It is possible to stabilize a quadrature oscillator by squaring the sine and cosine outputs, adding them together (Pythagorean trigonometric identity), subtracting a constant, and applying the difference to a multiplier that adjusts the loop gain around an inverter. Such circuits have a near-instant amplitude response to the constant input and extremely low distortion.
Low distortion oscillators.
The Barkhausen criterion mentioned above does not determine the amplitude of oscillation. An oscillator circuit with only "linear" components is unstable with respect to amplitude. As long as the loop gain is exactly one, the amplitude of the sine wave would be constant, but the slightest increase in gain, due to a drift in the value of components will cause the amplitude to increase exponentially without limit. Similarly, the slightest decrease will cause the sine wave to die out exponentially to zero. Therefore, all practical oscillators must have a nonlinear component in the feedback loop, to reduce the gain as the amplitude increases, leading to stable operation at the amplitude where the loop gain is unity.
In most ordinary oscillators, the nonlinearity is simply the saturation (clipping) of the amplifier as the amplitude of the sine wave approaches the power supply rails. The oscillator is designed to have a small-signal loop gain greater than one. The higher gain allows an oscillator to start by exponentially amplifying some ever-present noise.
As the peaks of the sine wave approach the supply rails, the saturation of the amplifier device flattens (clips) the peaks, reducing the gain. For example, the oscillator might have a loop gain of 3 for small signals, but that loop gain instantaneously drops to zero when the output reaches one of the power supply rails. The net effect is that the oscillator amplitude will stabilize when the average gain over a cycle is one. If the average loop gain is greater than one, the output amplitude increases until the nonlinearity reduces the average gain to one; if the average loop gain is less than one, then the output amplitude decreases until the average gain is one. The nonlinearity that reduces the gain may also be more subtle than running into a power supply rail.
The result of this gain averaging is some harmonic distortion in the output signal. If the small-signal gain is just a little bit more than one, then only a small amount of gain compression is needed, so there won't be much harmonic distortion. If the small-signal gain is much more than one, then significant distortion will be present. However the oscillator must have gain significantly above one to start reliably.
So in oscillators that must produce a very low-distortion sine wave, a system that keeps the gain roughly constant during the entire cycle is used. A common design uses an incandescent lamp or a thermistor in the feedback circuit. These oscillators exploit the fact that the resistance of the lamp's tungsten filament increases in proportion to its temperature (a thermistor works in a similar fashion). The lamp both measures the output amplitude and controls the oscillator gain at the same time. The oscillator's signal level heats the filament. If the level is too high, then the filament temperature gradually increases, the resistance increases, and the loop gain falls (thus decreasing the oscillator's output level). If the level is too low, the lamp cools down and increases the gain. The 1939 HP200A oscillator uses this technique. Modern variations may use explicit level detectors and gain-controlled amplifiers.
Wien bridge oscillator.
One of the most common gain-stabilized circuits is the Wien bridge oscillator. In this circuit, two RC circuits are used, one with the RC components in series and one with the RC components in parallel. The Wien Bridge is often used in audio signal generators because it can be easily tuned using a two-section variable capacitor or a two section variable potentiometer (which is more easily obtained than a variable capacitor suitable for generation at low frequencies). The archetypical HP200A audio oscillator is a Wien Bridge oscillator.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\scriptstyle R\\;=\\;R1\\;=\\;R2\\;=\\;R3"
},
{
"math_id": 1,
"text": "\\scriptstyle C\\;=\\;C1\\;=\\;C2\\;=\\;C3"
},
{
"math_id": 2,
"text": "f = \\frac{1}{2\\pi RC\\sqrt{6}}"
},
{
"math_id": 3,
"text": "R_\\mathrm{fb} = 29\\cdot R"
},
{
"math_id": 4,
"text": "f=\\frac{1}{2\\pi RC}"
},
{
"math_id": 5,
"text": "x=2"
},
{
"math_id": 6,
"text": "x = C2/C1 = R1/R2"
}
] | https://en.wikipedia.org/wiki?curid=1192521 |
1192757 | Skellam distribution | The Skellam distribution is the discrete probability distribution of the difference formula_0 of two statistically independent random variables formula_1 and formula_2 each Poisson-distributed with respective expected values formula_3 and formula_4. It is useful in describing the statistics of the difference of two images with simple photon noise, as well as describing the point spread distribution in sports where all scored points are equal, such as baseball, hockey and soccer.
The distribution is also applicable to a special case of the difference of dependent Poisson random variables, but just the obvious case where the two variables have a common additive random contribution which is cancelled by the differencing: see Karlis & Ntzoufras (2003) for details and an application.
The probability mass function for the Skellam distribution for a difference formula_5 between two independent Poisson-distributed random variables with means formula_3 and formula_4 is given by:
formula_6
where "Ik"("z") is the modified Bessel function of the first kind. Since "k" is an integer we have that "Ik"("z")="I|k|"("z").
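The probability mass function can be evaluated directly from this formula. The sketch below uses SciPy's modified Bessel function and, as a cross-check, SciPy's built-in Skellam distribution; the chosen means are arbitrary.

```python
import numpy as np
from scipy.special import iv          # modified Bessel function of the first kind, I_v(z)
from scipy.stats import skellam       # SciPy's built-in Skellam distribution, for cross-checking

def skellam_pmf(k, mu1, mu2):
    """Probability that the difference of two independent Poisson counts equals k."""
    return (np.exp(-(mu1 + mu2))
            * (mu1 / mu2) ** (k / 2.0)
            * iv(abs(k), 2.0 * np.sqrt(mu1 * mu2)))

mu1, mu2 = 3.0, 1.5     # arbitrary example means
for k in (-2, 0, 3):
    print(k, skellam_pmf(k, mu1, mu2), skellam.pmf(k, mu1, mu2))
# The two columns agree; the pmf is largest near k = mu1 - mu2 = 1.5.
```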
Derivation.
The probability mass function of a Poisson-distributed random variable with mean μ is given by
formula_7
for formula_8 (and zero otherwise). The Skellam probability mass function for the difference of two independent counts formula_5 is the convolution of two Poisson distributions: (Skellam, 1946)
formula_9
Since the Poisson distribution is zero for negative values of the count formula_10, the second sum is only taken for those terms where formula_11 and formula_12. It can be shown that the above sum implies that
formula_13
so that:
formula_14
where "I" k(z) is the modified Bessel function of the first kind. The special case for formula_15 is given by Irwin (1937):
formula_16
Using the limiting values of the modified Bessel function for small arguments, we can recover the Poisson distribution as a special case of the Skellam distribution for formula_17.
Properties.
As it is a discrete probability function, the Skellam probability mass function is normalized:
formula_18
We know that the probability generating function (pgf) for a Poisson distribution is:
formula_19
It follows that the pgf, formula_20, for a Skellam probability mass function will be:
formula_21
Notice that the form of the probability-generating function implies that the sum or difference of any number of independent Skellam-distributed variables is again Skellam-distributed. It is sometimes claimed that any linear combination of two Skellam-distributed variables is again Skellam-distributed, but this is clearly not true since any multiplier other than formula_22 would change the support of the distribution and alter the pattern of moments in a way that no Skellam distribution can satisfy.
The moment-generating function is given by:
formula_23
which yields the raw moments "m""k" . Define:
formula_24
formula_25
Then the raw moments "m""k" are
formula_26
formula_27
formula_28
The central moments "M" "k" are
formula_29
formula_30
formula_31
The mean, variance, skewness, and kurtosis excess are respectively:
formula_32
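These moment formulas can be checked by simulation, for example by drawing two independent Poisson samples and differencing them; the means below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2 = 4.0, 2.5                      # illustrative Poisson means
k = rng.poisson(mu1, 1_000_000) - rng.poisson(mu2, 1_000_000)

# Compare the sample moments with the formulas above:
# mean = mu1 - mu2, variance = mu1 + mu2, skewness = (mu1 - mu2) / (mu1 + mu2)**1.5
print(k.mean(), mu1 - mu2)                            # both ≈ 1.5
print(k.var(), mu1 + mu2)                             # both ≈ 6.5
print(((k - k.mean())**3).mean() / k.std()**3,        # both ≈ 0.09
      (mu1 - mu2) / (mu1 + mu2)**1.5)
```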
The cumulant-generating function is given by:
formula_33
which yields the cumulants:
formula_34
formula_35
For the special case when μ1 = μ2, an
asymptotic expansion of the modified Bessel function of the first kind yields for large μ:
formula_36
(Abramowitz & Stegun 1972, p. 377). Also, for this special case, when "k" is also large, and of order of the square root of 2μ, the distribution tends to a normal distribution:
formula_37
These special results can easily be extended to the more general case of different means.
Bounds on weight above zero.
If formula_38, with formula_39, then
formula_40
Details can be found in Poisson distribution#Poisson races | [
{
"math_id": 0,
"text": "N_1-N_2"
},
{
"math_id": 1,
"text": "N_1"
},
{
"math_id": 2,
"text": "N_2,"
},
{
"math_id": 3,
"text": "\\mu_1"
},
{
"math_id": 4,
"text": "\\mu_2"
},
{
"math_id": 5,
"text": "K=N_1-N_2"
},
{
"math_id": 6,
"text": "\n p(k;\\mu_1,\\mu_2) = \\Pr\\{K=k\\} = e^{-(\\mu_1+\\mu_2)}\n \\left({\\mu_1\\over\\mu_2}\\right)^{k/2}I_{k}(2\\sqrt{\\mu_1\\mu_2})\n"
},
{
"math_id": 7,
"text": "\n p(k;\\mu)={\\mu^k\\over k!}e^{-\\mu}.\\,\n "
},
{
"math_id": 8,
"text": "k \\ge 0"
},
{
"math_id": 9,
"text": "\n\\begin{align}\n p(k;\\mu_1,\\mu_2)\n & =\\sum_{n=-\\infty}^\\infty\n p(k+n;\\mu_1)p(n;\\mu_2) \\\\\n & =e^{-(\\mu_1+\\mu_2)}\\sum_{n=\\max(0,-k)}^\\infty\n {{\\mu_1^{k+n}\\mu_2^n}\\over{n!(k+n)!}}\n\\end{align}\n "
},
{
"math_id": 10,
"text": "(p(N<0;\\mu)=0)"
},
{
"math_id": 11,
"text": "n\\ge0"
},
{
"math_id": 12,
"text": "n+k\\ge0"
},
{
"math_id": 13,
"text": "\\frac{p(k;\\mu_1,\\mu_2)}{p(-k;\\mu_1,\\mu_2)}=\\left(\\frac{\\mu_1}{\\mu_2}\\right)^k"
},
{
"math_id": 14,
"text": "\n p(k;\\mu_1,\\mu_2)= e^{-(\\mu_1+\\mu_2)}\n \\left({\\mu_1\\over\\mu_2}\\right)^{k/2}I_{|k|}(2\\sqrt{\\mu_1\\mu_2})\n "
},
{
"math_id": 15,
"text": "\\mu_1=\\mu_2(=\\mu)"
},
{
"math_id": 16,
"text": "\n p\\left(k;\\mu,\\mu\\right) = e^{-2\\mu}I_{|k|}(2\\mu).\n "
},
{
"math_id": 17,
"text": "\\mu_2=0"
},
{
"math_id": 18,
"text": "\n \\sum_{k=-\\infty}^\\infty p(k;\\mu_1,\\mu_2)=1.\n "
},
{
"math_id": 19,
"text": "\n G\\left(t;\\mu\\right)= e^{\\mu(t-1)}.\n "
},
{
"math_id": 20,
"text": "G(t;\\mu_1,\\mu_2)"
},
{
"math_id": 21,
"text": "\n\\begin{align}\nG(t;\\mu_1,\\mu_2) & = \\sum_{k=-\\infty}^\\infty p(k;\\mu_1,\\mu_2)t^k \\\\[4pt]\n& = G\\left(t;\\mu_1\\right)G\\left(1/t;\\mu_2\\right) \\\\[4pt]\n& = e^{-(\\mu_1+\\mu_2)+\\mu_1 t+\\mu_2/t}.\n\\end{align}\n"
},
{
"math_id": 22,
"text": "\\pm 1"
},
{
"math_id": 23,
"text": "M\\left(t;\\mu_1,\\mu_2\\right) = G(e^t;\\mu_1,\\mu_2) = \\sum_{k=0}^\\infty { t^k \\over k!}\\,m_k"
},
{
"math_id": 24,
"text": "\\Delta\\ \\stackrel{\\mathrm{def}}{=}\\ \\mu_1-\\mu_2\\,"
},
{
"math_id": 25,
"text": "\\mu\\ \\stackrel{\\mathrm{def}}{=}\\ (\\mu_1+\\mu_2)/2.\\,"
},
{
"math_id": 26,
"text": "m_1=\\left.\\Delta\\right.\\,"
},
{
"math_id": 27,
"text": "m_2=\\left.2\\mu+\\Delta^2\\right.\\,"
},
{
"math_id": 28,
"text": "m_3=\\left.\\Delta(1+6\\mu+\\Delta^2)\\right.\\,"
},
{
"math_id": 29,
"text": "M_2=\\left.2\\mu\\right.,\\,"
},
{
"math_id": 30,
"text": "M_3=\\left.\\Delta\\right.,\\,"
},
{
"math_id": 31,
"text": "M_4=\\left.2\\mu+12\\mu^2\\right..\\,"
},
{
"math_id": 32,
"text": "\n\\begin{align}\n\\operatorname E(n) & = \\Delta, \\\\[4pt]\n\\sigma^2 & =2\\mu, \\\\[4pt]\n\\gamma_1 & =\\Delta/(2\\mu)^{3/2}, \\\\[4pt]\n\\gamma_2 & = 1/2.\n\\end{align}\n"
},
{
"math_id": 33,
"text": "\n K(t;\\mu_1,\\mu_2)\\ \\stackrel{\\mathrm{def}}{=}\\ \\ln(M(t;\\mu_1,\\mu_2))\n = \\sum_{k=0}^\\infty { t^k \\over k!}\\,\\kappa_k\n "
},
{
"math_id": 34,
"text": "\\kappa_{2k}=\\left.2\\mu\\right."
},
{
"math_id": 35,
"text": "\\kappa_{2k+1}=\\left.\\Delta\\right. ."
},
{
"math_id": 36,
"text": "\n p(k;\\mu,\\mu)\\sim\n {1\\over\\sqrt{4\\pi\\mu}}\\left[1+\\sum_{n=1}^\\infty\n (-1)^n{\\{4k^2-1^2\\}\\{4k^2-3^2\\}\\cdots\\{4k^2-(2n-1)^2\\}\n \\over n!\\,2^{3n}\\,(2\\mu)^n}\\right].\n "
},
{
"math_id": 37,
"text": "\n p(k;\\mu,\\mu)\\sim\n {e^{-k^2/4\\mu}\\over\\sqrt{4\\pi\\mu}}.\n "
},
{
"math_id": 38,
"text": "X \\sim \\operatorname{Skellam} (\\mu_1, \\mu_2) "
},
{
"math_id": 39,
"text": "\\mu_1 < \\mu_2"
},
{
"math_id": 40,
"text": "\n\\frac{\\exp(-(\\sqrt{\\mu_1} -\\sqrt{\\mu_2})^2 )}{(\\mu_1 + \\mu_2)^2} - \\frac{e^{-(\\mu_1 + \\mu_2)}}{2\\sqrt{\\mu_1 \\mu_2}} - \\frac{e^{-(\\mu_1 + \\mu_2)}}{4\\mu_1 \\mu_2} \\leq \\Pr\\{X \\geq 0\\} \\leq \\exp (- (\\sqrt{\\mu_1} -\\sqrt{\\mu_2})^2)\n"
}
] | https://en.wikipedia.org/wiki?curid=1192757 |
11930108 | Volatility (finance) | Degree of variation of a trading price series over time
In finance, volatility (usually denoted by "σ") is the degree of variation of a trading price series over time, usually measured by the standard deviation of logarithmic returns.
Historic volatility measures a time series of past market prices. Implied volatility looks forward in time, being derived from the market price of a market-traded derivative (in particular, an option).
Volatility terminology.
Volatility as described here refers to the actual volatility, more specifically:
Now turning to implied volatility, we have:
For a financial instrument whose price follows a Gaussian random walk, or Wiener process, the width of the distribution increases as time increases. This is because there is an increasing probability that the instrument's price will be farther away from the initial price as time increases. However, rather than increase linearly, the volatility increases with the square-root of time as time increases, because some fluctuations are expected to cancel each other out, so the most likely deviation after twice the time will not be twice the distance from zero.
Since observed price changes do not follow Gaussian distributions, others such as the Lévy distribution are often used. These can capture attributes such as "fat tails".
Volatility is a statistical measure of dispersion around the average of any random variable such as market parameters etc.
Mathematical definition.
For any fund that evolves randomly with time, volatility is defined as the standard deviation of a sequence of random variables, each of which is the return of the fund over some corresponding sequence of (equally sized) times.
Thus, "annualized" volatility "σ"annually is the standard deviation of an instrument's yearly logarithmic returns.
The generalized volatility "σ""T" for time horizon "T" in years is expressed as:
formula_0
Therefore, if the daily logarithmic returns of a stock have a standard deviation of "σ"daily and the time period of returns is "P" in trading days, the annualized volatility is
formula_1
so
formula_2
A common assumption is that "P" = 252 trading days in any given year. Then, if "σ"daily = 0.01, the annualized volatility is
formula_3
The monthly volatility (i.e. formula_4 of a year) is
formula_5
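A small Python sketch of this annualization, assuming "P" = 252 trading days and a daily volatility of 0.01 as above; the simulated Gaussian returns are only an illustration, not a model of real prices.

```python
import numpy as np

rng = np.random.default_rng(1)
P = 252                                   # assumed number of trading days per year
sigma_daily = 0.01                        # daily volatility of log returns (illustrative)

# Simulated daily log returns with the given volatility (a toy Gaussian example).
daily_log_returns = rng.normal(0.0, sigma_daily, size=P * 10)

sigma_daily_hat = daily_log_returns.std(ddof=1)
sigma_annual = sigma_daily_hat * np.sqrt(P)        # sigma_annually = sigma_daily * sqrt(P)
sigma_monthly = sigma_daily_hat * np.sqrt(P / 12)  # 1/12 of a year

print(f"annualized ≈ {sigma_annual:.3f}, monthly ≈ {sigma_monthly:.3f}")
# With sigma_daily = 0.01 these come out near 0.159 (15.9%) and 0.046, as in the text.
```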
The formulas used above to convert returns or volatility measures from one time period to another assume a particular underlying model or process. These formulas are accurate extrapolations of a random walk, or Wiener process, whose steps have finite variance. However, more generally, for natural stochastic processes, the precise relationship between volatility measures for different time periods is more complicated. Some use the Lévy stability exponent "α" to extrapolate natural processes:
formula_6
If "α" = 2 the Wiener process scaling relation is obtained, but some people believe "α" < 2 for financial activities such as stocks, indexes and so on. This was discovered by Benoît Mandelbrot, who looked at cotton prices and found that they followed a Lévy alpha-stable distribution with "α" = 1.7. (See New Scientist, 19 April 1997.)
Volatility origin.
Much research has been devoted to modeling and forecasting the volatility of financial returns, and yet few theoretical models explain how volatility comes to exist in the first place.
Roll (1984) shows that volatility is affected by market microstructure. Glosten and Milgrom (1985) shows that at least one source of volatility can be explained by the liquidity provision process. When market makers infer the possibility of adverse selection, they adjust their trading ranges, which in turn increases the band of price oscillation.
In September 2019, JPMorgan Chase determined the effect of US President Donald Trump's tweets, and called it the Volfefe index combining volatility and the covfefe meme.
Volatility for investors.
Volatility matters to investors for at least eight reasons, several of which are alternative statements of the same feature or are directly consequent on each other:
Volatility versus direction.
Volatility does not measure the direction of price changes, merely their dispersion. This is because when calculating standard deviation (or variance), all differences are squared, so that negative and positive differences are combined into one quantity. Two instruments with different volatilities may have the same expected return, but the instrument with higher volatility will have larger swings in values over a given period of time.
For example, a lower volatility stock may have an expected (average) return of 7%, with annual volatility of 5%. Ignoring compounding effects, this would indicate returns from approximately negative 3% to positive 17% most of the time (19 times out of 20, or 95% via a two standard deviation rule). A higher volatility stock, with the same expected return of 7% but with annual volatility of 20%, would indicate returns from approximately negative 33% to positive 47% most of the time (19 times out of 20, or 95%). These estimates assume a normal distribution; in reality stock price movements are found to be leptokurtotic (fat-tailed).
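The two ranges quoted above follow from a two-standard-deviation rule; a minimal sketch:

```python
# Two-standard-deviation (≈95%) ranges for the annual returns described above,
# ignoring compounding and assuming normally distributed returns.
for mean, vol in [(0.07, 0.05), (0.07, 0.20)]:
    low, high = mean - 2 * vol, mean + 2 * vol
    print(f"mean {mean:.0%}, vol {vol:.0%}: roughly {low:+.0%} to {high:+.0%}")
# mean 7%, vol 5%:  roughly -3% to +17%
# mean 7%, vol 20%: roughly -33% to +47%
```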
Volatility over time.
Although the Black-Scholes equation assumes predictable constant volatility, this is not observed in real markets. Amongst more realistic models are Emanuel Derman and Iraj Kani's and Bruno Dupire's local volatility, Poisson process where volatility jumps to new levels with a predictable frequency, and the increasingly popular Heston model of stochastic volatility.
It is common knowledge that many types of assets experience periods of high and low volatility. That is, during some periods, prices go up and down quickly, while during other times they barely move at all. In the foreign exchange market, price changes are seasonally heteroskedastic with periods of one day and one week.
Periods when prices fall quickly (a crash) are often followed by prices going down even more, or going up by an unusual amount. Also, a time when prices rise quickly (a possible bubble) may often be followed by prices going up even more, or going down by an unusual amount.
Most typically, extreme movements do not appear 'out of nowhere'; they are presaged by larger movements than usual or by known uncertainty in specific future events. This is termed autoregressive conditional heteroskedasticity. Whether such large movements have the same direction, or the opposite, is more difficult to say. And an increase in volatility does not always presage a further increase—the volatility may simply go back down again.
Measures of volatility depend not only on the period over which it is measured, but also on the selected time resolution, as the information flow between short-term and long-term traders is asymmetric. As a result, volatility measured with high resolution contains information that is not covered by low resolution volatility and vice versa.
The risk-parity-weighted volatility of three assets (gold, Treasury bonds and the Nasdaq) acting as a proxy for the market portfolio appears to have had a low point of 4% in the summer of 2014, when it turned upwards from that reading for the 8th time since 1974.
Alternative measures of volatility.
Some authors point out that realized volatility and implied volatility are backward- and forward-looking measures, respectively, and do not reflect current volatility. To address that issue, alternative ensemble measures of volatility have been suggested. One such measure is defined as the standard deviation of ensemble returns instead of time series of returns. Another considers the regular sequence of directional changes as a proxy for the instantaneous volatility.
Volatility as it relates to options trading.
One method of measuring volatility, often used by quantitative option trading firms, divides volatility into two components: clean volatility, the amount caused by standard events like daily transactions and general noise, and dirty (or event) volatility, the amount caused by specific events like earnings or policy announcements. For instance, a company like Microsoft would have clean volatility from people buying and selling on a daily basis, but dirty volatility from events like quarterly earnings or a possible anti-trust announcement.
Breaking down volatility into two components is useful in order to accurately price how much an option is worth, especially when identifying what events may contribute to a swing. The job of fundamental analysts at market makers and option trading boutique firms typically entails trying to assign numeric values to these components.
Implied volatility parametrisation.
There exist several known parametrisations of the implied volatility surface: Schonbucher, SVI and gSVI.
Crude volatility estimation.
Using a simplification of the above formula it is possible to estimate annualized volatility based solely on approximate observations. Suppose you notice that a market price index, which has a current value near 10,000, has moved about 100 points a day, on average, for many days. This would constitute a 1% daily movement, up or down.
To annualize this, you can use the "rule of 16", that is, multiply by 16 to get 16% as the annual volatility. The rationale for this is that 16 is the square root of 256, which is approximately the number of trading days in a year (252). This also uses the fact that the standard deviation of the sum of "n" independent variables (with equal standard deviations) is √n times the standard deviation of the individual variables.
However, importantly, this does not capture (or in some cases may give excessive weight to) occasional large movements in market price which occur less frequently than once a year.
The average magnitude of the observations is merely an approximation of the standard deviation of the market index. Assuming that the market index daily changes are normally distributed with mean zero and standard deviation "σ", the expected value of the magnitude of the observations is √(2/π)"σ" = 0.798"σ". The net effect is that this crude approach underestimates the true volatility by about 20%.
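Both points above are easy to confirm on simulated data. The following is a minimal sketch (it assumes NumPy is available and uses simulated normally distributed daily moves with an assumed true volatility of 0.01); it recovers the factor √(2/π) ≈ 0.798 and shows the roughly 20% underestimate when the "rule of 16" is applied to the crude average-move estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_true = 0.01                              # assumed true daily volatility
moves = rng.normal(0.0, sigma_true, 100_000)   # simulated daily index moves (as fractions)

mean_abs_move = np.abs(moves).mean()           # the crude "average daily movement"
print(mean_abs_move / sigma_true)              # ~0.798, i.e. sqrt(2/pi)

# "rule of 16" applied to the crude estimate versus the true annualized figure
print(mean_abs_move * 16, sigma_true * 16)     # ~0.128 vs 0.16, about 20% too low
```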
Estimate of compound annual growth rate (CAGR).
Consider the Taylor series:
formula_7
Taking only the first two terms one has:
formula_8
Volatility thus mathematically represents a drag on the CAGR (formalized as the "volatility tax"). Realistically, most financial assets have negative skewness and leptokurtosis, so this formula tends to be over-optimistic. Some people use the formula:
formula_9
for a rough estimate, where "k" is an empirical factor (typically five to ten).
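As a rough numerical illustration of the two-term estimate, the sketch below simulates yearly returns and compares the realized geometric growth rate with AR − σ²/2. It is only a sketch under stated assumptions: NumPy is available, returns are i.i.d. normal with the 7% mean and 20% volatility used in the earlier example, and normality is an idealization (real assets are skewed and fat-tailed, as noted above).

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.07, 0.20                             # assumed arithmetic mean return and volatility
returns = rng.normal(mu, sigma, 100_000)           # simulated yearly simple returns

cagr_exact = np.expm1(np.mean(np.log1p(returns)))  # realized geometric average growth rate
cagr_approx = mu - 0.5 * sigma**2                  # two-term estimate AR - sigma^2 / 2

print(cagr_exact, cagr_approx)                     # both close to 0.05 for these inputs
```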
Criticisms of volatility forecasting models.
Despite the sophisticated composition of most volatility forecasting models, critics claim that their predictive power is similar to that of plain-vanilla measures, such as simple past volatility especially out-of-sample, where different data are used to estimate the models and to test them. Other works have agreed, but claim critics failed to correctly implement the more complicated models. Some practitioners and portfolio managers seem to completely ignore or dismiss volatility forecasting models. For example, Nassim Taleb famously titled one of his "Journal of Portfolio Management" papers "We Don't Quite Know What We are Talking About When We Talk About Volatility". In a similar note, Emanuel Derman expressed his disillusion with the enormous supply of empirical models unsupported by theory. He argues that, while "theories are attempts to uncover the hidden principles underpinning the world around us, as Albert Einstein did with his theory of relativity", we should remember that "models are metaphors – analogies that describe one thing relative to another".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma_\\text{T} = \\sigma_\\text{annually} \\sqrt{T}."
},
{
"math_id": 1,
"text": "\\sigma_\\text{annually} = \\sigma_\\text{daily} \\sqrt{P}."
},
{
"math_id": 2,
"text": "\\sigma_\\text{T} = \\sigma_\\text{daily} \\sqrt{PT}."
},
{
"math_id": 3,
"text": "\\sigma_\\text{annually} = 0.01 \\sqrt{252} = 0.1587."
},
{
"math_id": 4,
"text": "T = \\tfrac{1}{12}"
},
{
"math_id": 5,
"text": "\\sigma_\\text{monthly} = 0.01 \\sqrt{\\tfrac{252}{12}} = 0.0458."
},
{
"math_id": 6,
"text": "\\sigma_T = T^{1/\\alpha} \\sigma.\\,"
},
{
"math_id": 7,
"text": "\\log(1+y) = y - \\tfrac{1}{2}y^2 + \\tfrac{1}{3}y^3 - \\tfrac{1}{4}y^4 + \\cdots "
},
{
"math_id": 8,
"text": "\\mathrm{CAGR} \\approx \\mathrm{AR} - \\tfrac{1}{2}\\sigma^2"
},
{
"math_id": 9,
"text": "\\mathrm{CAGR} \\approx \\mathrm{AR} - \\tfrac{1}{2}k\\sigma^2"
}
] | https://en.wikipedia.org/wiki?curid=11930108 |
11930843 | Infinity (philosophy) | Philosophical concept
In philosophy and theology, infinity is explored in articles under headings such as the Absolute, God, and Zeno's paradoxes.
In Greek philosophy, for example in Anaximander, 'the Boundless' is the origin of all that is. He took the beginning or first principle to be an endless, unlimited primordial mass (ἄπειρον, "apeiron"). The Jain metaphysics and mathematics were the first to define and delineate different "types" of infinities. The work of the mathematician Georg Cantor first placed infinity into a coherent mathematical framework. Keenly aware of his departure from traditional wisdom, Cantor also presented a comprehensive historical and philosophical discussion of infinity. In Christian theology, for example in the work of Duns Scotus, the infinite nature of God invokes a sense of being without constraint, rather than a sense of being unlimited in quantity.
Early thinking.
Greek.
Anaximander.
An early engagement with the idea of infinity was made by Anaximander who considered infinity to be a foundational and primitive basis of reality. Anaximander was the first in the Greek philosophical tradition to propose that the universe was infinite.
Anaxagoras.
Anaxagoras (500–428 BCE) was of the opinion that matter of the universe had an innate capacity for infinite division.
The Atomists.
A group of thinkers of ancient Greece (later identified as the Atomists) all similarly considered matter to be made of an infinite number of structures as considered by imagining dividing or separating matter from itself an infinite number of times.
Aristotle and after.
Aristotle (384–322 BCE) is credited with being the root of a field of thought that influenced subsequent thinking for more than a millennium, through his rejection of the idea of actual infinity.
In Book 3 of his work entitled "Physics", Aristotle deals with the concept of infinity in terms of his notion of actuality and of potentiality.
<templatestyles src="Template:Blockquote/styles.css" />... It is always possible to think of a larger number: for the number of times a magnitude can be bisected is infinite. Hence the infinite is potential, never actual; the number of parts that can be taken always surpasses any assigned number.
This is often called potential infinity; however, there are two ideas mixed up with this. One is that it is always possible to find a number of things that surpasses any given number, even if there are not actually such things. The other is that we may quantify over infinite sets without restriction. For example, formula_0, which reads, "for any integer n, there exists an integer m > n such that P(m)". The second view is found in a clearer form by medieval writers such as William of Ockham:
<templatestyles src="Template:Blockquote/styles.css" />"Sed omne continuum est actualiter existens. Igitur quaelibet pars sua est vere existens in rerum natura. Sed partes continui sunt infinitae quia non tot quin plures, igitur partes infinitae sunt actualiter existentes."<br><br>But every continuum is actually existent. Therefore any of its parts is really existent in nature. But the parts of the continuum are infinite because there are not so many that there are not more, and therefore the infinite parts are actually existent.
The parts are actually there, in some sense. However, in this view, no infinite magnitude can have a number, for whatever number we can imagine, there is always a larger one: "There are not so many (in number) that there are no more."
Aristotle's views on the continuum foreshadow some topological aspects of modern mathematical theories of the continuum. Aristotle's emphasis on the connectedness of the continuum may have inspired—in different ways—modern philosophers and mathematicians such as Charles Sanders Peirce, Cantor, and L. E. J. Brouwer.
Among the scholastics, Aquinas also argued against the idea that infinity could be in any sense complete or a totality.
Aristotle deals with infinity in the context of the prime mover, in Book 7 of the same work, the reasoning of which was later studied and commented on by Simplicius.
Roman.
Plotinus.
Plotinus considered infinity during the 3rd century AD.
Simplicius.
Simplicius (c. 490–560 AD) thought the concept "Mind" was infinite.
Augustine.
Augustine thought infinity to be "incomprehensible for the human mind".
Early Indian thinking.
The Jain upanga āgama Surya Prajnapti (c. 400 BC) classifies all numbers into three sets: enumerable, innumerable, and infinite. Each of these was further subdivided into three orders:
The Jains were the first to discard the idea that all infinities were the same or equal. They recognized different types of infinities: infinite in length (one dimension), infinite in area (two dimensions), infinite in volume (three dimensions), and infinite perpetually (infinite number of dimensions).
According to Singh (1987), Joseph (2000) and Agrawal (2000), the highest enumerable number "N" of the Jains corresponds to the modern concept of aleph-null formula_1 (the cardinal number of the infinite set of integers 1, 2, ...), the smallest cardinal transfinite number. The Jains also defined a whole system of infinite cardinal numbers, of which the highest enumerable number "N" is the smallest.
In the Jaina work on the theory of sets, two basic types of infinite numbers are distinguished. On both physical and ontological grounds, a distinction was made between "asaṃkhyāta" ("countless, innumerable") and "ananta" ("endless, unlimited"), between rigidly bounded and loosely bounded infinities.
Views from the Renaissance to modern times.
Galileo.
Galileo Galilei (February 15, 1564 – January 8, 1642) discussed the example of comparing the square numbers {1, 4, 9, 16, ...} with the natural numbers {1, 2, 3, 4, ...} as follows:
1 → 1<br> 2 → 4<br> 3 → 9<br> 4 → 16<br> …
It appeared by this reasoning as though a "set" (Galileo did not use the terminology) which is naturally smaller than the "set" of which it is a part (since it does not contain all the members) is in some sense the same "size". Galileo found no way around this problem:
<templatestyles src="Template:Blockquote/styles.css" />So far as I see we can only infer that the totality of all numbers is infinite, that the number of squares is infinite, and that the number of their roots is infinite; neither is the number of squares less than the totality of all numbers, nor the latter greater than the former; and finally the attributes "equal," "greater," and "less," are not applicable to infinite, but only to finite, quantities.
The idea that size can be measured by one-to-one correspondence is today known as Hume's principle, although Hume, like Galileo, believed the principle could not be applied to the infinite. The same concept, applied by Georg Cantor, is used in relation to infinite sets.
Thomas Hobbes.
Famously, the ultra-empiricist Hobbes (April 5, 1588 – December 4, 1679) tried to defend the idea of a potential infinity in light of the discovery, by Evangelista Torricelli, of a figure (Gabriel's Horn) whose surface area is infinite, but whose volume is finite. This motivation of Hobbes came too late, however, as curves having infinite length yet bounding finite areas were known much earlier.
John Locke.
Locke (August 29, 1632 – October 28, 1704) in common with most of the empiricist philosophers, also believed that we can have no proper idea of the infinite. They believed all our ideas were derived from sense data or "impressions," and since all sensory impressions are inherently finite, so too are our thoughts and ideas. Our idea of infinity is merely negative or privative.
<templatestyles src="Template:Blockquote/styles.css" />Whatever "positive" ideas we have in our minds of any space, duration, or number, let them be never so great, they are still finite; but when we suppose an inexhaustible remainder, from which we remove all bounds, and wherein we allow the mind an endless progression of thought, without ever completing the idea, there we have our idea of infinity... yet when we would frame in our minds the idea of an infinite space or duration, that idea is very obscure and confused, because it is made up of two parts very different, if not inconsistent. For let a man frame in his mind an idea of any space or number, as great as he will, it is plain the mind rests and terminates in that idea; which is contrary to the idea of infinity, which consists in a supposed endless progression.
He considered that in considerations on the subject of eternity, which he classified as an infinity, humans are likely to make mistakes.
Modern philosophical views.
Modern discussion of the infinite is now regarded as part of set theory and mathematics. Contemporary philosophers of mathematics engage with the topic of infinity and generally acknowledge its role in mathematical practice. Although set theory is now widely accepted, this was not always so. Influenced by L. E. J. Brouwer and verificationism in part, Wittgenstein (April 26, 1889 – April 29, 1951) made an impassioned attack upon axiomatic set theory, and upon the idea of the actual infinite, during his "middle period".
<templatestyles src="Template:Blockquote/styles.css" />Does the relation formula_2 correlate the class of all numbers with one of its subclasses? No. It correlates any arbitrary number with another, and in that way we arrive at infinitely many pairs of classes, of which one is correlated with the other, but which are never related as class and subclass. Neither is this infinite process itself in some sense or other such a pair of classes... In the superstition that formula_2 correlates a class with its subclass, we merely have yet another case of ambiguous grammar.
Unlike the traditional empiricists, he thought that the infinite was in some way given to sense experience.
<templatestyles src="Template:Blockquote/styles.css" />... I can see in space the possibility of any finite experience... we recognize [the] essential infinity of space in its smallest part." "[Time] is infinite in the same sense as the three-dimensional space of sight and movement is infinite, even if in fact I can only see as far as the walls of my room.
<templatestyles src="Template:Blockquote/styles.css" />... what is infinite about endlessness is only the endlessness itself.
Emmanuel Levinas.
The philosopher Emmanuel Levinas (January 12, 1906 – December 25, 1995) uses infinity to designate that which cannot be defined or reduced to knowledge or power. In Levinas' magnum opus Totality and Infinity he says:
<templatestyles src="Template:Blockquote/styles.css" />...infinity is produced in the relationship of the same with the other, and how the particular and the personal, which are unsurpassable, as it were magnetize the very field in which the production of infinity is enacted...
The idea of infinity is not an incidental notion forged by a subjectivity to reflect the case of an entity encountering on the outside nothing that limits it, overflowing every limit, and thereby infinite. The production of the infinite entity is inseparable from the idea of infinity, for it is precisely in the disproportion between the idea of infinity and the infinity of which it is the idea that this exceeding of limits is produced. The idea of infinity is the mode of being, the infinition, of infinity... All knowing qua intentionality already presupposes the idea of infinity, which is preeminently non-adequation.
Levinas also wrote a work entitled "Philosophy and the Idea of Infinity", which was published during 1957.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\forall n \\in \\mathbb{Z} (\\exists m \\in \\mathbb{Z} [m > n \\wedge P(m)] )"
},
{
"math_id": 1,
"text": "\\aleph_0"
},
{
"math_id": 2,
"text": "m = 2n"
}
] | https://en.wikipedia.org/wiki?curid=11930843 |
11934455 | Primitive part and content | In algebra, the content of a nonzero polynomial with integer coefficients (or, more generally, with coefficients in a unique factorization domain) is the greatest common divisor of its coefficients. The primitive part of such a polynomial is the quotient of the polynomial by its content. Thus a polynomial is the product of its primitive part and its content, and this factorization is unique up to the multiplication of the content by a unit of the ring of the coefficients (and the multiplication of the primitive part by the inverse of the unit).
A polynomial is primitive if its content equals 1. Thus the primitive part of a polynomial is a primitive polynomial.
Gauss's lemma for polynomials states that the product of primitive polynomials (with coefficients in the same unique factorization domain) also is primitive. This implies that the content and the primitive part of the product of two polynomials are, respectively, the product of the contents and the product of the primitive parts.
As the computation of greatest common divisors is generally much easier than polynomial factorization, the first step of a polynomial factorization algorithm is generally the computation of its primitive part–content factorization (see ). Then the factorization problem is reduced to factorize separately the content and the primitive part.
Content and primitive part may be generalized to polynomials over the rational numbers, and, more generally, to polynomials over the field of fractions of a unique factorization domain. This makes essentially equivalent the problems of computing greatest common divisors and factorization of polynomials over the integers and of polynomials over the rational numbers.
Over the integers.
For a polynomial with integer coefficients, the content may be either the greatest common divisor of the coefficients or its additive inverse. The choice is arbitrary, and may depend on a further convention, which is commonly that the leading coefficient of the primitive part be positive.
For example, the content of formula_0 may be either 2 or −2, since 2 is the greatest common divisor of −12, 30, and −20. If one chooses 2 as the content, the primitive part of this polynomial is
formula_1
and thus the primitive-part-content factorization is
formula_2
For aesthetic reasons, one often prefers choosing a negative content, here −2, giving the primitive-part-content factorization
formula_3
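The computation in this example needs only integer greatest common divisors, so it is easy to script. The following is a minimal Python sketch (the function name and the coefficient-list convention, highest degree first, are choices of this sketch); the sign of the content is picked so that the primitive part has a positive leading coefficient, matching the convention discussed above.

```python
from math import gcd

def content_and_primitive_part(coeffs):
    """Content and primitive part of a nonzero integer polynomial.

    coeffs lists integer coefficients from highest degree down,
    e.g. -12x^3 + 30x - 20  ->  [-12, 0, 30, -20].
    """
    c = 0
    for a in coeffs:
        c = gcd(c, a)            # gcd of all coefficients (math.gcd ignores signs)
    if coeffs[0] < 0:            # choose the sign so the primitive part has a positive lead
        c = -c
    return c, [a // c for a in coeffs]

print(content_and_primitive_part([-12, 0, 30, -20]))   # (-2, [6, 0, -15, 10])
```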
Properties.
In the remainder of this article, we consider polynomials over a unique factorization domain "R", which can typically be the ring of integers, or a polynomial ring over a field. In "R", greatest common divisors are well defined, and are unique up to multiplication by a unit of "R".
The content "c"("P") of a polynomial "P" with coefficients in "R" is the greatest common divisor of its coefficients, and, as such, is defined up to multiplication by a unit. The primitive part pp("P") of "P" is the quotient "P"/"c"("P") of "P" by its content; it is a polynomial with coefficients in "R", which is unique up to multiplication by a unit. If the content is changed by multiplication by a unit "u", then the primitive part must be changed by dividing it by the same unit, in order to keep the equality
formula_4
which is called the primitive-part-content factorization of "P".
The main properties of the content and the primitive part are results of Gauss's lemma, which asserts that the product of two primitive polynomials is primitive, where a polynomial is primitive if 1 is the greatest common divisor of its coefficients. This implies:
formula_5
formula_6
formula_7
formula_8
The last property implies that the computation of the primitive-part-content factorization of a polynomial reduces the computation of its complete factorization to the separate factorization of the content and the primitive part. This is generally interesting, because the computation of the prime-part-content factorization involves only greatest common divisor computation in "R", which is usually much easier than factorization.
Over the rationals.
The primitive-part-content factorization may be extended to polynomials with rational coefficients as follows.
Given a polynomial "P" with rational coefficients, by rewriting its coefficients with the same common denominator "d", one may rewrite "P" as
formula_9
where Q is a polynomial with integer coefficients.
The content of "P" is the quotient by "d" of the content of "Q", that is
formula_10
and the primitive part of "P" is the primitive part of "Q":
formula_11
It is easy to show that this definition does not depend on the choice of the common denominator, and that the primitive-part-content factorization remains valid:
formula_12
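The rational case reduces to the integer one exactly as described: clear denominators, then take the content and primitive part of the resulting integer polynomial. A minimal sketch building on the integer routine above (the function name is illustrative, and math.lcm requires Python 3.9 or later):

```python
from fractions import Fraction
from math import gcd, lcm          # math.lcm needs Python 3.9+

def rational_content_and_primitive_part(coeffs):
    """coeffs: Fraction coefficients of a nonzero polynomial, highest degree first."""
    d = lcm(*(a.denominator for a in coeffs))   # a common denominator d
    q = [int(a * d) for a in coeffs]            # Q = d*P has integer coefficients
    c = 0
    for a in q:
        c = gcd(c, a)
    if q[0] < 0:
        c = -c
    return Fraction(c, d), [a // c for a in q]  # c(P) = c(Q)/d,  pp(P) = pp(Q)

# P = (1/2)x^2 + (3/4)x  =  (1/4) * (2x^2 + 3x)
print(rational_content_and_primitive_part([Fraction(1, 2), Fraction(3, 4), Fraction(0)]))
# (Fraction(1, 4), [2, 3, 0])
```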
This shows that every polynomial over the rationals is associated with a unique primitive polynomial over the integers, and that the Euclidean algorithm allows the computation of this primitive polynomial.
A consequence is that factoring polynomials over the rationals is equivalent to factoring primitive polynomials over the integers. As polynomials with coefficients in a field are more common than polynomials with integer coefficients, it may seem that this equivalence may be used for factoring polynomials with integer coefficients. In fact, the truth is exactly the opposite: every known efficient algorithm for factoring polynomials with rational coefficients uses this equivalence for reducing the problem modulo some prime number "p" (see Factorization of polynomials).
This equivalence is also used for computing greatest common divisors of polynomials, although the Euclidean algorithm is defined for polynomials with rational coefficients. In fact, in this case, the Euclidean algorithm requires one to compute the reduced form of many fractions, and this makes the Euclidean algorithm less efficient than algorithms which work only with polynomials over the integers (see Polynomial greatest common divisor).
Over a field of fractions.
The results of the preceding section remain valid if the ring of integers and the field of rationals are respectively replaced by any unique factorization domain "R" and its field of fractions "K".
This is typically used for factoring multivariate polynomials, and for proving that a polynomial ring over a unique factorization domain is also a unique factorization domain.
Unique factorization property of polynomial rings.
A polynomial ring over a field is a unique factorization domain. The same is true for a polynomial ring over a unique factorization domain. To prove this, it suffices to consider the univariate case, as the general case may be deduced by induction on the number of indeterminates.
The unique factorization property is a direct consequence of Euclid's lemma: If an irreducible element divides a product, then it divides one of the factors. For univariate polynomials over a field, this results from Bézout's identity, which itself results from the Euclidean algorithm.
So, let "R" be a unique factorization domain, which is not a field, and "R"["X"] the univariate polynomial ring over "R". An irreducible element "r" in "R"["X"] is either an irreducible element in "R" or an irreducible primitive polynomial.
If "r" is in "R" and divides a product formula_13 of two polynomials, then it divides the content formula_14 Thus, by Euclid's lemma in "R", it divides one of the contents, and therefore one of the polynomials.
If "r" is not "R", it is a primitive polynomial (because it is irreducible). Then Euclid's lemma in "R"["X"] results immediately from Euclid's lemma in "K"["X"], where "K" is the field of fractions of "R".
Factorization of multivariate polynomials.
For factoring a multivariate polynomial over a field or over the integers, one may consider it as a univariate polynomial with coefficients in a polynomial ring with one less indeterminate. Then the factorization is reduced to factorizing separately the primitive part and the content. As the content has one less indeterminate, it may be factorized by applying the method recursively. For factorizing the primitive part, the standard method consists of substituting integers to the indeterminates of the coefficients in a way that does not change the degree in the remaining variable, factorizing the resulting univariate polynomial, and lifting the result to a factorization of the primitive part. | [
{
"math_id": 0,
"text": "-12x^3+30x-20"
},
{
"math_id": 1,
"text": "-6x^3+15x-10 = \\frac{-12x^3+30x-20}{2},"
},
{
"math_id": 2,
"text": "-12x^3+30x-20 = 2 (-6x^3+15x-10)."
},
{
"math_id": 3,
"text": "-12x^3+30x-20 =-2 (6x^3-15x+10)."
},
{
"math_id": 4,
"text": "P = c(P) \\operatorname{pp}(P),"
},
{
"math_id": 5,
"text": "c(P_1 P_2) = c(P_1) c(P_2)."
},
{
"math_id": 6,
"text": " \\operatorname{pp}(P_1 P_2) = \\operatorname{pp}(P_1) \\operatorname{pp}(P_2)."
},
{
"math_id": 7,
"text": " c(\\operatorname{gcd}(P_1, P_2)) = \\operatorname{gcd}(c(P_1), c(P_2))."
},
{
"math_id": 8,
"text": " \\operatorname{pp}(\\operatorname{gcd}(P_1, P_2)) = \\operatorname{gcd}(\\operatorname{pp}(P_1), \\operatorname{pp}(P_2))."
},
{
"math_id": 9,
"text": "P=\\frac{Q}{d},"
},
{
"math_id": 10,
"text": "c(P)=\\frac{c(Q)}{d},"
},
{
"math_id": 11,
"text": "\\operatorname{pp}(P) = \\operatorname{pp}(Q)."
},
{
"math_id": 12,
"text": "P=c(P)\\operatorname{pp}(P)."
},
{
"math_id": 13,
"text": "P_1P_2"
},
{
"math_id": 14,
"text": "c(P_1P_2) = c(P_1)c(P_2)."
}
] | https://en.wikipedia.org/wiki?curid=11934455 |
1193525 | Normal order | Type of operator ordering in quantum field theory
In quantum field theory a product of quantum fields, or equivalently their creation and annihilation operators, is usually said to be normal ordered (also called Wick order) when all creation operators are to the left of all annihilation operators in the product. The process of putting a product into normal order is called normal ordering (also called Wick ordering). The terms antinormal order and antinormal ordering are analogously defined, where the annihilation operators are placed to the left of the creation operators.
Normal ordering of a product of quantum fields or creation and annihilation operators can also be defined in many other ways. Which definition is most appropriate depends on the expectation values needed for a given calculation. Most of this article uses the most common definition of normal ordering as given above, which is appropriate when taking expectation values using the vacuum state of the creation and annihilation operators.
The process of normal ordering is particularly important for a quantum mechanical Hamiltonian. When quantizing a classical Hamiltonian there is some freedom when choosing the operator order, and these choices lead to differences in the ground state energy. This is also why the process can be used to eliminate the infinite vacuum energy of a quantum field.
Notation.
If formula_0 denotes an arbitrary product of creation and/or annihilation operators (or equivalently, quantum fields), then the normal ordered form of formula_0 is denoted by formula_1.
An alternative notation is formula_2.
Note that normal ordering is a concept that only makes sense for products of operators. Attempting to apply normal ordering to a sum of operators is not useful as normal ordering is not a linear operation.
Bosons.
Bosons are particles which satisfy Bose–Einstein statistics. We will now examine the normal ordering of bosonic creation and annihilation operator products.
Single bosons.
If we start with only one type of boson there are two operators of interest:
These satisfy the commutator relationship
formula_5
formula_6
formula_7
where formula_8 denotes the commutator. We may rewrite the last one as: formula_9
Examples.
1. We'll consider the simplest case first. This is the normal ordering of formula_10:
formula_11
The expression formula_12 has not been changed because it is "already" in normal order - the creation operator formula_13 is already to the left of the annihilation operator formula_14.
2. A more interesting example is the normal ordering of formula_15:
formula_16
Here the normal ordering operation has "reordered" the terms by placing formula_3 to the left of formula_4.
These two results can be combined with the commutation relation obeyed by formula_4 and formula_3 to get
formula_17
or
formula_18
This equation is used in defining the contractions used in Wick's theorem.
3. An example with multiple operators is:
formula_19
4. A simple example shows that normal ordering cannot be extended by linearity from the monomials to all operators in a self-consistent way. Assume that we can apply the commutation relations to obtain:
formula_20
Then, by linearity,
formula_21
a contradiction.
The implication is that normal ordering is not a linear function on operators, but on the free algebra generated by the operators, i.e. the operators do not satisfy the canonical commutation relations while inside the normal ordering (or any other ordering operator like time-ordering, etc).
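The identities in examples 1 and 2 above, in particular formula_18, can be checked numerically with truncated matrix representations of the single-mode operators formula_4 and formula_3. The following is only a minimal sketch (it assumes NumPy is available; the truncation size is an arbitrary choice, and the commutator identity holds exactly only away from the truncation edge):

```python
import numpy as np

N = 8                                          # assumed Fock-space truncation
b = np.diag(np.sqrt(np.arange(1, N)), k=1)     # annihilation operator: <n-1|b|n> = sqrt(n)
bd = b.conj().T                                # creation operator

product = b @ bd                               # b b†
normal_ordered = bd @ b                        # :b b†: = b† b  (just reorder, no extra term)

# b b† - :b b†: = 1, checked away from the truncation edge
print(np.allclose((product - normal_ordered)[:-1, :-1], np.eye(N - 1)))   # True
```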
Multiple bosons.
If we now consider formula_22 different bosons there are formula_23 operators:
Here formula_27.
These satisfy the commutation relations:
formula_28
formula_29
formula_30
where formula_31 and formula_32 denotes the Kronecker delta.
These may be rewritten as:
formula_33
formula_34
formula_35
Examples.
1. For two different bosons (formula_36) we have
formula_37
formula_38
2. For three different bosons (formula_39) we have
formula_40
Notice that since (by the commutation relations) formula_41 the order in which we write the annihilation operators does not matter.
formula_42
formula_43
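Since normal ordering of a bosonic word is simply a reordering (no commutators are applied inside the ordering symbol, and the relative order within each group is immaterial), it is straightforward to encode symbolically. A minimal sketch, with an illustrative encoding of each factor as a (mode, is_dagger) pair:

```python
def normal_order_bosons(word):
    """word: operator factors read left to right, each encoded as (mode, is_dagger),
    e.g.  b2 b1† b3  ->  [(2, False), (1, True), (3, False)]."""
    creators = [f for f in word if f[1]]
    annihilators = [f for f in word if not f[1]]
    return creators + annihilators                 # all daggered operators to the left

print(normal_order_bosons([(2, False), (1, True), (3, False)]))
# [(1, True), (2, False), (3, False)]   i.e.  b1† b2 b3
```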
Bosonic operator functions.
Normal ordering of bosonic operator functions formula_44, with occupation number operator formula_45, can be accomplished using (falling) factorial powers formula_46 and Newton series instead of Taylor series:
It is easy to show that factorial powers formula_47 are equal to normal-ordered (raw) powers formula_48 and are therefore normal ordered by construction,
formula_49
such that the Newton series expansion
formula_50
of an operator function formula_51, with formula_52-th forward difference formula_53 at formula_54, is always normal ordered. Here, the eigenvalue equation formula_55 relates formula_56 and formula_57.
As a consequence, the normal-ordered Taylor series of an arbitrary function formula_44 is equal to the Newton series of an associated function formula_51, fulfilling
formula_58
if the series coefficients of the Taylor series of formula_59, with continuous formula_60, match the coefficients of the Newton series of formula_61, with integer formula_57,
formula_62
with formula_52-th partial derivative formula_63 at formula_64.
The functions formula_65 and formula_66 are related through the so-called normal-order transform formula_67 according to
formula_68
which can be expressed in terms of the Mellin transform formula_69, see for details.
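The statement that factorial powers of the number operator coincide with normal-ordered raw powers, formula_49, can again be checked with truncated matrices. A minimal sketch (NumPy assumed; the truncation size and the exponent are arbitrary choices):

```python
import numpy as np

N, k = 12, 3
b = np.diag(np.sqrt(np.arange(1, N)), k=1)
bd = b.conj().T
n = bd @ b                                                 # number operator

falling_power = n @ (n - np.eye(N)) @ (n - 2 * np.eye(N))  # n(n-1)(n-2)
normal_ordered_power = np.linalg.matrix_power(bd, k) @ np.linalg.matrix_power(b, k)

print(np.allclose(falling_power, normal_ordered_power))    # True
```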
Fermions.
Fermions are particles which satisfy Fermi–Dirac statistics. We will now examine the normal ordering of fermionic creation and annihilation operator products.
Single fermions.
For a single fermion there are two operators of interest:
These satisfy the anticommutator relationships
formula_72
formula_73
formula_74
where formula_75 denotes the anticommutator. These may be rewritten as
formula_76
formula_77
formula_78
To define the normal ordering of a product of fermionic creation and annihilation operators we must take into account the number of interchanges between neighbouring operators. We get a minus sign for each such interchange.
Examples.
1. We again start with the simplest cases:
formula_79
This expression is already in normal order so nothing is changed. In the reverse case, we introduce a minus sign because we have to change the order of two operators:
formula_80
These can be combined, along with the anticommutation relations, to show
formula_81
or
formula_82
This equation, which is in the same form as the bosonic case above, is used in defining the contractions used in Wick's theorem.
2. The normal order of any more complicated cases gives zero because there will be at least one creation or annihilation operator appearing twice. For example:
formula_83
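The single-fermion relations used in these examples can be verified with 2×2 matrices acting on the basis {|0⟩, |1⟩}. The representation below is standard, but its use here is only an illustrative sketch (NumPy assumed):

```python
import numpy as np

f = np.array([[0.0, 1.0],     # annihilation operator on the basis {|0>, |1>}
              [0.0, 0.0]])
fd = f.T                       # creation operator

print(np.allclose(f @ f, 0), np.allclose(fd @ fd, 0))   # True True   (f^2 = (f†)^2 = 0)
print(np.allclose(f @ fd + fd @ f, np.eye(2)))          # True        ({f, f†} = 1)
print(np.allclose(f @ fd, np.eye(2) - fd @ f))          # True        (f f† = 1 - f† f)
```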
Multiple fermions.
For formula_22 different fermions there are formula_23 operators:
Here formula_27.
These satisfy the anti-commutation relations:
formula_86
formula_87
formula_88
where formula_31 and formula_32 denotes the Kronecker delta.
These may be rewritten as:
formula_89
formula_90
formula_91
When calculating the normal order of products of fermion operators we must take into account the number of interchanges of neighbouring operators required to rearrange the expression. It is as if we pretend the creation and annihilation operators anticommute and then we reorder the expression to ensure the creation operators are on the left and the annihilation operators are on the right - all the time taking account of the anticommutation relations.
Examples.
1. For two different fermions (formula_36) we have
formula_92
Here the expression is already normal ordered so nothing changes.
formula_93
Here we introduce a minus sign because we have interchanged the order of two operators.
formula_94
Note that the order in which we write the operators here, unlike in the bosonic case, "does matter".
2. For three different fermions (formula_39) we have
formula_95
Notice that since (by the anticommutation relations) formula_96 the order in which we write the operators "does matter" in this case.
Similarly we have
formula_97
formula_98
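For words of fermionic operators on distinct modes, normal ordering can be encoded by tracking the parity of the permutation that moves the creation operators to the left; a repeated operator gives zero, as in the single-fermion example above. A minimal sketch using the same illustrative (mode, is_dagger) encoding as before; it keeps the relative order within the creators and within the annihilators, which is one of the equivalent conventional choices:

```python
def normal_order_fermions(word):
    """Return (sign, reordered word), or (0, []) if any operator is repeated."""
    if len(set(word)) != len(word):
        return 0, []                                   # e.g. : f f† f f† : = 0
    factors, sign = list(word), 1
    for i in range(len(factors)):                      # bubble creators to the left,
        for j in range(len(factors) - 1 - i):          # flipping the sign at each swap
            if (not factors[j][1]) and factors[j + 1][1]:
                factors[j], factors[j + 1] = factors[j + 1], factors[j]
                sign = -sign
    return sign, factors

print(normal_order_fermions([(3, False), (2, False), (1, True)]))
# (1, [(1, True), (3, False), (2, False)])   i.e.  : f3 f2 f1† : = + f1† f3 f2
```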
Uses in quantum field theory.
The vacuum expectation value of a normal ordered product of creation and annihilation operators is zero. This is because, denoting the vacuum state by formula_99, the creation and annihilation operators satisfy
formula_100
(here formula_101 and formula_102 are creation and annihilation operators (either bosonic or fermionic)).
Let formula_0 denote a non-empty product of creation and annihilation operators. Although this may satisfy
formula_103
we have
formula_104
Normal ordered operators are particularly useful when defining a quantum mechanical Hamiltonian. If the Hamiltonian of a theory is in normal order then the ground state energy will be zero:
formula_105.
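These statements can be illustrated numerically with the truncated single-mode representation used earlier: the vacuum expectation value of formula_10 (already normal ordered) vanishes, while that of formula_15 does not, and normal ordering the symmetrized oscillator Hamiltonian removes the zero-point energy. A minimal sketch (NumPy assumed; the frequency ω = 1 and the truncation are arbitrary choices, and normal ordering is applied monomial by monomial, as is conventional):

```python
import numpy as np

N, omega = 10, 1.0
b = np.diag(np.sqrt(np.arange(1, N)), k=1)
bd = b.conj().T
vac = np.zeros(N); vac[0] = 1.0                 # the vacuum state |0>

print(vac @ (b @ bd) @ vac)                     # 1.0 -> <0| b b† |0> is nonzero
print(vac @ (bd @ b) @ vac)                     # 0.0 -> <0| :b b†: |0> = <0| b† b |0> = 0

H = 0.5 * omega * (b @ bd + bd @ b)             # (ω/2)(b b† + b† b) = ω(b†b + 1/2)
H_normal = omega * (bd @ b)                     # normal ordering each monomial gives ω b† b
print(vac @ H @ vac, vac @ H_normal @ vac)      # 0.5 0.0
```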
Free fields.
With two free fields φ and χ,
formula_106
where formula_99 is again the vacuum state. Each of the two terms on the right hand side typically blows up in the limit as y approaches x but the difference between them has a well-defined limit. This allows us to define :φ(x)χ(x):.
Wick's theorem.
Wick's theorem states the relationship between the time ordered product of formula_57 fields and a sum of normal ordered products. This may be expressed for formula_57 even as
formula_107
where the summation is over all the distinct ways in which one may pair up fields. The result for formula_57 odd looks the same except for the last line which reads
formula_108
This theorem provides a simple method for computing vacuum expectation values of time ordered products of operators and was the motivation behind the introduction of normal ordering.
Alternative definitions.
The most general definition of normal ordering involves splitting all quantum fields into two parts (for example see Evans and Steer 1996)
formula_109.
In a product of fields, the fields are split into the two parts and the formula_110 parts are moved so as to be always to the left of all the formula_111 parts. In the usual case considered in the rest of the article, the formula_110 contains only creation operators, while the formula_111 contains only annihilation operators. As this is a mathematical identity, one can split fields in any way one likes. However, for this to be a useful procedure one demands that the normal ordered product of "any" combination of fields has zero expectation value
formula_112
It is also important for practical calculations that all the commutators (anti-commutator for fermionic fields) of all formula_113 and formula_114 are all c-numbers. These two properties mean that we can apply Wick's theorem in the usual way, turning expectation values of time-ordered products of fields into products of c-number pairs, the contractions. In this generalised setting, the contraction is defined to be the difference between the time-ordered product and the normal ordered product of a pair of fields.
The simplest example is found in the context of thermal quantum field theory (Evans and Steer 1996). In this case the expectation values of interest are statistical ensemble averages: traces over all states weighted by formula_115. For instance, for a single bosonic quantum harmonic oscillator we have that the thermal expectation value of the number operator is simply the Bose–Einstein distribution
formula_116
So here the number operator formula_10 is normal ordered in the usual sense used in the rest of the article yet its thermal expectation values are non-zero. Applying Wick's theorem and doing calculations with the usual normal ordering in this thermal context is possible but computationally impractical. The solution is to define a different ordering, such that the formula_113 and formula_114 are "linear combinations" of the original annihilation and creation operators. The combinations are chosen to ensure that the thermal expectation values of normal ordered products are always zero so the split chosen will depend on the temperature.
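The Bose–Einstein value quoted above is easy to confirm with a truncated trace. A minimal numerical check (NumPy assumed; β = ω = 1 and the truncation are arbitrary choices, and the truncation error is exponentially small):

```python
import numpy as np

N, beta, omega = 60, 1.0, 1.0                   # assumed truncation and parameters
n = np.arange(N)                                # eigenvalues of the number operator b† b

weights = np.exp(-beta * omega * n)             # Boltzmann weights e^{-βω n}
thermal_n = (weights * n).sum() / weights.sum() # Tr(e^{-βω b†b} b†b) / Tr(e^{-βω b†b})

print(thermal_n, 1.0 / np.expm1(beta * omega))  # both ≈ 0.5820, the Bose–Einstein value
```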
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\hat{O}"
},
{
"math_id": 1,
"text": "\\mathopen{:} \\hat{O} \\mathclose{:}"
},
{
"math_id": 2,
"text": " \\mathcal{N}(\\hat{O})"
},
{
"math_id": 3,
"text": "\\hat{b}^\\dagger"
},
{
"math_id": 4,
"text": "\\hat{b}"
},
{
"math_id": 5,
"text": "\\left[\\hat{b}^\\dagger, \\hat{b}^\\dagger \\right]_- = 0"
},
{
"math_id": 6,
"text": "\\left[\\hat{b}, \\hat{b} \\right]_- = 0"
},
{
"math_id": 7,
"text": "\\left[\\hat{b}, \\hat{b}^\\dagger \\right]_- = 1"
},
{
"math_id": 8,
"text": "\\left[ A, B \\right]_- \\equiv AB - BA"
},
{
"math_id": 9,
"text": "\\hat{b}\\, \\hat{b}^\\dagger = \\hat{b}^\\dagger\\, \\hat{b} + 1."
},
{
"math_id": 10,
"text": "\\hat{b}^\\dagger \\hat{b}"
},
{
"math_id": 11,
"text": " {:\\,}\\hat{b}^\\dagger \\, \\hat{b}{\\,:} = \\hat{b}^\\dagger \\, \\hat{b}. "
},
{
"math_id": 12,
"text": "\\hat{b}^\\dagger \\, \\hat{b}"
},
{
"math_id": 13,
"text": "(\\hat{b}^\\dagger)"
},
{
"math_id": 14,
"text": "(\\hat{b})"
},
{
"math_id": 15,
"text": "\\hat{b} \\, \\hat{b}^\\dagger "
},
{
"math_id": 16,
"text": " {:\\,}\\hat{b} \\, \\hat{b}^\\dagger{\\,:} = \\hat{b}^\\dagger \\, \\hat{b}. "
},
{
"math_id": 17,
"text": " \\hat{b} \\, \\hat{b}^\\dagger = \\hat{b}^\\dagger \\, \\hat{b} + 1 = {:\\,}\\hat{b} \\, \\hat{b}^\\dagger{\\,:} \\; + 1."
},
{
"math_id": 18,
"text": " \\hat{b} \\, \\hat{b}^\\dagger - {:\\,}\\hat{b} \\, \\hat{b}^\\dagger{\\,:} = 1."
},
{
"math_id": 19,
"text": " {:\\,}\\hat{b}^\\dagger \\, \\hat{b} \\, \\hat{b} \\, \\hat{b}^\\dagger \\, \\hat{b} \\, \\hat{b}^\\dagger \\, \\hat{b}{\\,:} = \\hat{b}^\\dagger \\, \\hat{b}^\\dagger \\, \\hat{b}^\\dagger \\, \\hat{b} \\, \\hat{b} \\, \\hat{b} \\, \\hat{b} = (\\hat{b}^\\dagger)^3 \\, \\hat{b}^4."
},
{
"math_id": 20,
"text": " {:\\,}\\hat{b} \\hat{b}^\\dagger{\\,:} = {:\\,}1 + \\hat{b}^\\dagger \\hat{b}{\\,:}."
},
{
"math_id": 21,
"text": "{:\\,}1 + \\hat{b}^\\dagger \\hat{b}{\\,:} = {:\\,}1{\\,:} + {:\\,}\\hat{b}^\\dagger \\hat{b}{\\,:} = \n1 + \\hat{b}^\\dagger \\hat{b} \\ne \\hat{b}^\\dagger \\hat{b}={:\\,}\\hat{b} \\hat{b}^\\dagger{\\,:},"
},
{
"math_id": 22,
"text": "N"
},
{
"math_id": 23,
"text": "2 N"
},
{
"math_id": 24,
"text": "\\hat{b}_i^\\dagger"
},
{
"math_id": 25,
"text": "i^{th}"
},
{
"math_id": 26,
"text": "\\hat{b}_i"
},
{
"math_id": 27,
"text": "i = 1,\\ldots,N"
},
{
"math_id": 28,
"text": "\\left[\\hat{b}_i^\\dagger, \\hat{b}_j^\\dagger \\right]_- = 0 "
},
{
"math_id": 29,
"text": "\\left[\\hat{b}_i, \\hat{b}_j \\right]_- = 0 "
},
{
"math_id": 30,
"text": "\\left[\\hat{b}_i, \\hat{b}_j^\\dagger \\right]_- = \\delta_{ij} "
},
{
"math_id": 31,
"text": "i,j = 1,\\ldots,N"
},
{
"math_id": 32,
"text": "\\delta_{ij}"
},
{
"math_id": 33,
"text": "\\hat{b}_i^\\dagger \\, \\hat{b}_j^\\dagger = \\hat{b}_j^\\dagger \\, \\hat{b}_i^\\dagger "
},
{
"math_id": 34,
"text": "\\hat{b}_i \\, \\hat{b}_j = \\hat{b}_j \\, \\hat{b}_i "
},
{
"math_id": 35,
"text": "\\hat{b}_i \\,\\hat{b}_j^\\dagger = \\hat{b}_j^\\dagger \\,\\hat{b}_i + \\delta_{ij}."
},
{
"math_id": 36,
"text": "N=2"
},
{
"math_id": 37,
"text": " : \\hat{b}_1^\\dagger \\,\\hat{b}_2 : \\,= \\hat{b}_1^\\dagger \\,\\hat{b}_2 "
},
{
"math_id": 38,
"text": " : \\hat{b}_2 \\, \\hat{b}_1^\\dagger : \\,= \\hat{b}_1^\\dagger \\,\\hat{b}_2 "
},
{
"math_id": 39,
"text": "N=3"
},
{
"math_id": 40,
"text": " : \\hat{b}_1^\\dagger \\,\\hat{b}_2 \\,\\hat{b}_3 : \\,= \\hat{b}_1^\\dagger \\,\\hat{b}_2 \\,\\hat{b}_3"
},
{
"math_id": 41,
"text": "\\hat{b}_2 \\,\\hat{b}_3 = \\hat{b}_3 \\,\\hat{b}_2"
},
{
"math_id": 42,
"text": " : \\hat{b}_2 \\, \\hat{b}_1^\\dagger \\, \\hat{b}_3 : \\,= \\hat{b}_1^\\dagger \\,\\hat{b}_2 \\, \\hat{b}_3 "
},
{
"math_id": 43,
"text": " : \\hat{b}_3 \\hat{b}_2 \\, \\hat{b}_1^\\dagger : \\,= \\hat{b}_1^\\dagger \\,\\hat{b}_2 \\, \\hat{b}_3 "
},
{
"math_id": 44,
"text": "f(\\hat n)"
},
{
"math_id": 45,
"text": "\\hat n=\\hat b\\vphantom{\\hat n}^\\dagger \\hat b"
},
{
"math_id": 46,
"text": "\\hat n^{\\underline{k}}=\\hat n(\\hat n-1)\\cdots(\\hat n-k+1)"
},
{
"math_id": 47,
"text": "\\hat n^{\\underline{k}}"
},
{
"math_id": 48,
"text": "\\hat n^{k}"
},
{
"math_id": 49,
"text": "\n\\hat{n}^{\\underline{k}}\n = \\hat b\\vphantom{\\hat n}^{\\dagger k} \\hat b\\vphantom{\\hat n}^k\n = {:\\,}\\hat n^k{\\,:},\n"
},
{
"math_id": 50,
"text": "\n\\tilde f(\\hat n) = \\sum_{k=0}^\\infty \\Delta_n^k \\tilde f(0) \\, \\frac{\\hat n^{\\underline{k}}}{k!}\n"
},
{
"math_id": 51,
"text": "\\tilde f(\\hat n)"
},
{
"math_id": 52,
"text": "k"
},
{
"math_id": 53,
"text": "\\Delta_n^k \\tilde f(0)"
},
{
"math_id": 54,
"text": "n=0"
},
{
"math_id": 55,
"text": "\\hat n |n\\rangle = n |n\\rangle"
},
{
"math_id": 56,
"text": "\\hat n"
},
{
"math_id": 57,
"text": "n"
},
{
"math_id": 58,
"text": " \n\\tilde f(\\hat n) = {:\\,} f(\\hat n) {\\,:},\n"
},
{
"math_id": 59,
"text": "f(x)"
},
{
"math_id": 60,
"text": "x"
},
{
"math_id": 61,
"text": "\\tilde f(n)"
},
{
"math_id": 62,
"text": "\n\\begin{align}\n f(x) &= \\sum_{k=0}^\\infty F_k \\, \\frac{x^k }{k!}, \\\\\n\\tilde f(n) &= \\sum_{k=0}^\\infty F_k \\, \\frac{n^{\\underline{k}}}{k!}, \\\\\nF_k &= \\partial_x^k f(0) = \\Delta_n^k \\tilde f(0),\n\\end{align}\n"
},
{
"math_id": 63,
"text": "\\partial_x^k f(0)"
},
{
"math_id": 64,
"text": "x=0"
},
{
"math_id": 65,
"text": "f"
},
{
"math_id": 66,
"text": "\\tilde f"
},
{
"math_id": 67,
"text": "\\mathcal N[f]"
},
{
"math_id": 68,
"text": "\n\\begin{align}\n\\tilde f(n) &= \\mathcal N_x[f(x)](n) \\\\\n &= \\frac{1}{\\Gamma(-n)} \\int_{-\\infty}^0 \\mathrm d x \\, e^x \\, f(x) \\, (-x)^{-(n+1)} \\\\\n &= \\frac{1}{\\Gamma(-n)}\\mathcal M_{-x}[e^{x} f(x)](-n),\n\\end{align}\n"
},
{
"math_id": 69,
"text": "\\mathcal M"
},
{
"math_id": 70,
"text": "\\hat{f}^\\dagger"
},
{
"math_id": 71,
"text": "\\hat{f}"
},
{
"math_id": 72,
"text": "\\left[\\hat{f}^\\dagger, \\hat{f}^\\dagger \\right]_+ = 0"
},
{
"math_id": 73,
"text": "\\left[\\hat{f}, \\hat{f} \\right]_+ = 0"
},
{
"math_id": 74,
"text": "\\left[\\hat{f}, \\hat{f}^\\dagger \\right]_+ = 1"
},
{
"math_id": 75,
"text": "\\left[A, B \\right]_+ \\equiv AB + BA"
},
{
"math_id": 76,
"text": "\\hat{f}^\\dagger\\, \\hat{f}^\\dagger = 0 "
},
{
"math_id": 77,
"text": "\\hat{f} \\,\\hat{f} = 0 "
},
{
"math_id": 78,
"text": "\\hat{f} \\,\\hat{f}^\\dagger = 1 - \\hat{f}^\\dagger \\,\\hat{f} ."
},
{
"math_id": 79,
"text": " : \\hat{f}^\\dagger \\, \\hat{f} : \\,= \\hat{f}^\\dagger \\, \\hat{f} "
},
{
"math_id": 80,
"text": " : \\hat{f} \\, \\hat{f}^\\dagger : \\,= -\\hat{f}^\\dagger \\, \\hat{f} "
},
{
"math_id": 81,
"text": " \\hat{f} \\, \\hat{f}^\\dagger \\,= 1 - \\hat{f}^\\dagger \\, \\hat{f} = 1 + :\\hat{f} \\,\\hat{f}^\\dagger :"
},
{
"math_id": 82,
"text": " \\hat{f} \\, \\hat{f}^\\dagger - : \\hat{f} \\, \\hat{f}^\\dagger : = 1."
},
{
"math_id": 83,
"text": " : \\hat{f}\\,\\hat{f}^\\dagger \\, \\hat{f} \\hat{f}^\\dagger : \\,= -\\hat{f}^\\dagger \\,\\hat{f}^\\dagger \\,\\hat{f}\\,\\hat{f} = 0 "
},
{
"math_id": 84,
"text": "\\hat{f}_i^\\dagger"
},
{
"math_id": 85,
"text": "\\hat{f}_i"
},
{
"math_id": 86,
"text": "\\left[\\hat{f}_i^\\dagger, \\hat{f}_j^\\dagger \\right]_+ = 0 "
},
{
"math_id": 87,
"text": "\\left[\\hat{f}_i, \\hat{f}_j \\right]_+ = 0 "
},
{
"math_id": 88,
"text": "\\left[\\hat{f}_i, \\hat{f}_j^\\dagger \\right]_+ = \\delta_{ij} "
},
{
"math_id": 89,
"text": "\\hat{f}_i^\\dagger \\, \\hat{f}_j^\\dagger = -\\hat{f}_j^\\dagger \\, \\hat{f}_i^\\dagger "
},
{
"math_id": 90,
"text": "\\hat{f}_i \\, \\hat{f}_j = -\\hat{f}_j \\, \\hat{f}_i "
},
{
"math_id": 91,
"text": "\\hat{f}_i \\,\\hat{f}_j^\\dagger = \\delta_{ij} - \\hat{f}_j^\\dagger \\,\\hat{f}_i ."
},
{
"math_id": 92,
"text": " : \\hat{f}_1^\\dagger \\,\\hat{f}_2 : \\,= \\hat{f}_1^\\dagger \\,\\hat{f}_2 "
},
{
"math_id": 93,
"text": " : \\hat{f}_2 \\, \\hat{f}_1^\\dagger : \\,= -\\hat{f}_1^\\dagger \\,\\hat{f}_2 "
},
{
"math_id": 94,
"text": " : \\hat{f}_2 \\, \\hat{f}_1^\\dagger \\, \\hat{f}^\\dagger_2 : \\,= \\hat{f}_1^\\dagger \\, \\hat{f}_2^\\dagger \\,\\hat{f}_2 = -\\hat{f}_2^\\dagger \\, \\hat{f}_1^\\dagger \\,\\hat{f}_2 "
},
{
"math_id": 95,
"text": " : \\hat{f}_1^\\dagger \\, \\hat{f}_2 \\, \\hat{f}_3 : \\,= \\hat{f}_1^\\dagger \\,\\hat{f}_2 \\,\\hat{f}_3 = -\\hat{f}_1^\\dagger \\,\\hat{f}_3 \\,\\hat{f}_2"
},
{
"math_id": 96,
"text": "\\hat{f}_2 \\,\\hat{f}_3 = -\\hat{f}_3 \\,\\hat{f}_2"
},
{
"math_id": 97,
"text": " : \\hat{f}_2 \\, \\hat{f}_1^\\dagger \\, \\hat{f}_3 : \\,= -\\hat{f}_1^\\dagger \\,\\hat{f}_2 \\, \\hat{f}_3 = \\hat{f}_1^\\dagger \\,\\hat{f}_3 \\, \\hat{f}_2"
},
{
"math_id": 98,
"text": " : \\hat{f}_3 \\hat{f}_2 \\, \\hat{f}_1^\\dagger : \\,= \\hat{f}_1^\\dagger \\,\\hat{f}_3 \\, \\hat{f}_2 = -\\hat{f}_1^\\dagger \\,\\hat{f}_2 \\, \\hat{f}_3 "
},
{
"math_id": 99,
"text": "|0\\rangle"
},
{
"math_id": 100,
"text": "\\langle 0 | \\hat{a}^\\dagger = 0 \\qquad \\textrm{and} \\qquad \\hat{a} |0\\rangle = 0"
},
{
"math_id": 101,
"text": "\\hat{a}^\\dagger"
},
{
"math_id": 102,
"text": "\\hat{a}"
},
{
"math_id": 103,
"text": "\\langle 0 | \\hat{O} | 0 \\rangle \\neq 0,"
},
{
"math_id": 104,
"text": "\\langle 0 | :\\hat{O}: | 0 \\rangle = 0."
},
{
"math_id": 105,
"text": "\\langle 0 |\\hat{H}|0\\rangle = 0"
},
{
"math_id": 106,
"text": ":\\phi(x)\\chi(y):\\,\\,=\\phi(x)\\chi(y)-\\langle 0|\\phi(x)\\chi(y)| 0\\rangle"
},
{
"math_id": 107,
"text": "\\begin{align}\nT\\left[\\phi(x_1)\\cdots \\phi(x_n)\\right]=&:\\phi(x_1)\\cdots \\phi(x_n):\n+\\sum_\\textrm{perm}\\langle 0 |T\\left[\\phi(x_1)\\phi(x_2)\\right]|0\\rangle :\\phi(x_3)\\cdots \\phi(x_n):\\\\\n&+\\sum_\\textrm{perm}\\langle 0 |T\\left[\\phi(x_1)\\phi(x_2)\\right]|0\\rangle \\langle 0 |T\\left[\\phi(x_3)\\phi(x_4)\\right]|0\\rangle:\\phi(x_5)\\cdots \\phi(x_n):\\\\ \n\\vdots \\\\\n&+\\sum_\\textrm{perm}\\langle 0 |T\\left[\\phi(x_1)\\phi(x_2)\\right]|0\\rangle\\cdots \\langle 0 |T\\left[\\phi(x_{n-1})\\phi(x_n)\\right]|0\\rangle\n\\end{align}"
},
{
"math_id": 108,
"text": "\n\\sum_\\text{perm}\\langle 0 |T\\left[\\phi(x_1)\\phi(x_2)\\right]|0\\rangle\\cdots\\langle 0 | T\\left[\\phi(x_{n-2})\\phi(x_{n-1})\\right]|0\\rangle\\phi(x_n).\n"
},
{
"math_id": 109,
"text": "\\phi_i(x)=\\phi^+_i(x)+\\phi^-_i(x)"
},
{
"math_id": 110,
"text": "\\phi^+(x)"
},
{
"math_id": 111,
"text": "\\phi^-(x)"
},
{
"math_id": 112,
"text": "\\langle :\\phi_1(x_1)\\phi_2(x_2)\\ldots\\phi_n(x_n):\\rangle=0"
},
{
"math_id": 113,
"text": "\\phi^+_i"
},
{
"math_id": 114,
"text": "\\phi^-_j"
},
{
"math_id": 115,
"text": "\\exp (-\\beta \\hat{H})"
},
{
"math_id": 116,
"text": "\\langle\\hat{b}^\\dagger \\hat{b}\\rangle\n = \\frac{\\mathrm{Tr} (e^{-\\beta \\omega \\hat{b}^\\dagger \\hat{b}} \\hat{b}^\\dagger \\hat{b} )}{\\mathrm{Tr} (e^{-\\beta \\omega \\hat{b}^\\dagger \\hat{b} })}\n = \\frac{1}{e^{\\beta \\omega}-1}\n"
}
] | https://en.wikipedia.org/wiki?curid=1193525 |
1193823 | Jacobian conjecture | On invertibility of polynomial maps (mathematics)
In mathematics, the Jacobian conjecture is a famous unsolved problem concerning polynomials in several variables. It states that if a polynomial function from an "n"-dimensional space to itself has Jacobian determinant which is a non-zero constant, then the function has a polynomial inverse. It was first conjectured in 1939 by Ott-Heinrich Keller, and widely publicized by Shreeram Abhyankar, as an example of a difficult question in algebraic geometry that can be understood using little beyond a knowledge of calculus.
The Jacobian conjecture is notorious for the large number of attempted proofs that turned out to contain subtle errors. As of 2018, there are no plausible claims to have proved it. Even the two-variable case has resisted all efforts. There are currently no known compelling reasons for believing the conjecture to be true, and according to van den Essen there are some suspicions that the conjecture is in fact false for large numbers of variables (equally, there is no compelling evidence to support these suspicions). The Jacobian conjecture is number 16 in Stephen Smale's 1998 list of Mathematical Problems for the Next Century.
The Jacobian determinant.
Let "N" > 1 be a fixed integer and consider polynomials "f"1, ..., "f""N" in variables "X"1, ..., "X""N" with coefficients in a field "k". Then we define a vector-valued function "F": "kN" → "k""N" by setting:
"F"("X"1, ..., "X""N") = ("f"1("X"1, ...,"X""N")..., "f""N"("X"1...,"X""N")).
Any map "F": "kN" → "k""N" arising in this way is called a polynomial mapping.
The Jacobian determinant of "F", denoted by "JF", is defined as the determinant of the "N" × "N" Jacobian matrix consisting of the partial derivatives of "fi" with respect to "Xj":
formula_0
then "JF" is itself a polynomial function of the "N" variables "X"1, ..., "XN".
Formulation of the conjecture.
It follows from the multivariable chain rule that if "F" has a polynomial inverse function "G": "kN" → "kN", then "JF" has a polynomial reciprocal, so is a nonzero constant. The Jacobian conjecture is the following partial converse:
Jacobian conjecture: Let "k" have characteristic 0. If "JF" is a non-zero constant, then "F" has an inverse function "G": "kN" → "kN" which is regular, meaning its components are polynomials.
According to van den Essen, the problem was first conjectured by Keller in 1939 for the limited case of two variables and integer coefficients.
The obvious analogue of the Jacobian conjecture fails if "k" has characteristic "p" > 0 even for one variable. The characteristic of a field, if it is not zero, must be prime, so at least 2. The polynomial "x" − "x""p" has derivative 1 − "p" "x""p"−1, which is 1 (because "p" = 0 in "k") but it has no inverse function. However, Kossivi Adjamagbo suggested extending the Jacobian conjecture to characteristic "p" > 0 by adding the hypothesis that "p" does not divide the degree of the field extension "k"("X") / "k"("F").
The existence of a polynomial inverse is obvious if "F" is simply a set of functions linear in the variables, because then the inverse will also be a set of linear functions. A simple non-linear example is given by
formula_1
formula_2
so that the Jacobian determinant is
formula_3
In this case the inverse exists as the polynomials
formula_4
formula_5
But if we modify "F" slightly, to
formula_6
formula_2
then the determinant is
formula_7
which is not constant, and the Jacobian conjecture does not apply.
The function still has an inverse:
formula_8
formula_9
but the expression for "x" is not a polynomial.
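These two examples can be checked with a computer algebra system. The following is a minimal sketch using SymPy (the choice of library is an assumption of the sketch): it computes both Jacobian determinants and verifies that the stated polynomial inverse of the first map composes to the identity.

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v')

F1 = sp.Matrix([x**2 + y + x, x**2 + y])            # (u, v) as functions of (x, y)
print(F1.jacobian([x, y]).det())                     # 1, a non-zero constant

G1 = sp.Matrix([u - v, v - (u - v)**2])              # the stated polynomial inverse
print(sp.simplify(G1.subs({u: F1[0], v: F1[1]})))    # Matrix([[x], [y]])

F2 = sp.Matrix([2*x**2 + y, x**2 + y])               # the modified map
print(F2.jacobian([x, y]).det())                     # 2*x, not constant
```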
The condition "JF" ≠ 0 is related to the inverse function theorem in multivariable calculus. In fact for smooth functions (and so in particular for polynomials) a smooth local inverse function to "F" exists at every point where "JF" is non-zero. For example, the map x → "x" + "x"3 has a smooth global inverse, but the inverse is not polynomial.
Results.
Stuart Sui-Sheng Wang proved the Jacobian conjecture for polynomials of degree 2. Hyman Bass, Edwin Connell, and David Wright showed that the general case follows from the special case where the polynomials are of degree 3, or even more specifically, of cubic homogeneous type, meaning of the form "F" = ("X"1 + "H"1, ..., "X""n" + "H""n"), where each "H""i" is either zero or a homogeneous cubic. Ludwik Drużkowski showed that one may further assume that the map is of cubic linear type, meaning that the nonzero "H""i" are cubes of homogeneous linear polynomials. Drużkowski's reduction seems to be one of the most promising ways forward. These reductions introduce additional variables and so are not available for fixed "N".
Edwin Connell and Lou van den Dries proved that if the Jacobian conjecture is false, then it has a counterexample with integer coefficients and Jacobian determinant 1. In consequence, the Jacobian conjecture is true either for all fields of characteristic 0 or for none. For fixed dimension "N", it is true if it holds for at least one algebraically closed field of characteristic 0.
Let "k"["X"] denote the polynomial ring "k"["X"1, ..., "X""n"] and "k"["F"] denote the "k"-subalgebra generated by "f"1, ..., "f""n". For a given "F", the Jacobian conjecture is true if, and only if, "k"["X"]
"k"["F"]. Keller (1939) proved the birational case, that is, where the two fields "k"("X") and "k"("F") are equal. The case where "k"("X") is a Galois extension of "k"("F") was proved by Andrew Campbell for complex maps and in general by Michael Razar and, independently, by David Wright. Tzuong-Tsieng Moh checked the conjecture for polynomials of degree at most 100 in two variables.
Michiel de Bondt and Arno van den Essen and Ludwik Drużkowski independently showed that it is enough to prove the Jacobian Conjecture for complex maps of cubic homogeneous type with a symmetric Jacobian matrix, and further showed that the conjecture holds for maps of cubic linear type with a symmetric Jacobian matrix, over any field of characteristic 0.
The strong real Jacobian conjecture was that a real polynomial map with a nowhere vanishing Jacobian determinant has a smooth global inverse. That is equivalent to asking whether such a map is topologically a proper map, in which case it is a covering map of a simply connected manifold, hence invertible. Sergey Pinchuk constructed two variable counterexamples of total degree 35 and higher.
It is well known that the Dixmier conjecture implies the Jacobian conjecture. Conversely, it is shown by Yoshifumi Tsuchimoto and independently by Alexei Belov-Kanel and Maxim Kontsevich that the Jacobian conjecture for "2N" variables implies the Dixmier conjecture in "N" dimensions. A self-contained and purely algebraic proof of the last implication is also given by Kossivi Adjamagbo and Arno van den Essen who also proved in the same paper that these two conjectures are equivalent to the Poisson conjecture.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "J_F = \\left | \\begin{matrix} \\frac{\\partial f_1}{\\partial X_1} & \\cdots & \\frac{\\partial f_1}{\\partial X_N} \\\\\n\\vdots & \\ddots & \\vdots \\\\\n\\frac{\\partial f_N}{\\partial X_1} & \\cdots & \\frac{\\partial f_N}{\\partial X_N} \\end{matrix} \\right |,"
},
{
"math_id": 1,
"text": "u=x^2+y+x"
},
{
"math_id": 2,
"text": "v=x^2+y"
},
{
"math_id": 3,
"text": "J_F = \\left | \\begin{matrix} 1+2x & 1 \\\\\n2x & 1 \\end{matrix} \\right |\n= (1+2x)(1) - (1)2x = 1.\n"
},
{
"math_id": 4,
"text": "x=u-v"
},
{
"math_id": 5,
"text": "y=v-(u-v)^2."
},
{
"math_id": 6,
"text": "u=2x^2+y"
},
{
"math_id": 7,
"text": "J_F = \\left | \\begin{matrix} 4x & 1 \\\\\n2x & 1 \\end{matrix} \\right |\n= (4x)(1) - 2x(1) = 2x,\n"
},
{
"math_id": 8,
"text": "x=\\sqrt{u-v}"
},
{
"math_id": 9,
"text": "y=2v-u,"
}
] | https://en.wikipedia.org/wiki?curid=1193823 |
11938767 | Polynomial chaos | Polynomial chaos (PC), also called polynomial chaos expansion (PCE) and Wiener chaos expansion, is a method for representing a random variable in terms of a polynomial function of other random variables. The polynomials are chosen to be orthogonal with respect to the joint probability distribution of these random variables. Note that despite its name, PCE has no immediate connections to chaos theory. The word "chaos" here should be understood as "random".
PCE was first introduced in 1938 by Norbert Wiener using Hermite polynomials to model stochastic processes with Gaussian random variables. It was introduced to the physics and engineering community by R. Ghanem and P. D. Spanos in 1991 and generalized to other orthogonal polynomial families by D. Xiu and G. E. Karniadakis in 2002. Mathematically rigorous proofs of existence and convergence of generalized PCE were given by O. G. Ernst and coworkers in 2011.
PCE has found widespread use in engineering and the applied sciences because it makes it possible to deal with probabilistic uncertainty in the parameters of a system. In particular, PCE has been used as a surrogate model to facilitate uncertainty quantification analyses. PCE has also been widely used in stochastic finite element analysis and to determine the evolution of uncertainty in a dynamical system when there is probabilistic uncertainty in the system parameters.
Main principles.
Polynomial chaos expansion (PCE) provides a way to represent a random variable formula_0 with finite variance (i.e., formula_1) as a function of an formula_2-dimensional random vector formula_3, using a polynomial basis that is orthogonal with respect to the distribution of this random vector. The prototypical PCE can be written as:
formula_4
In this expression, formula_5 is a coefficient and formula_6 denotes a polynomial basis function. Depending on the distribution of formula_3, different PCE types are distinguished.
Hermite polynomial chaos.
The original PCE formulation used by Norbert Wiener was limited to the case where formula_3 is a random vector with a Gaussian distribution. Considering only the one-dimensional case (i.e., formula_7 and formula_8), the polynomial basis functions orthogonal with respect to the Gaussian distribution are the formula_9-th degree Hermite polynomials formula_10. The PCE of formula_0 can then be written as:
formula_11.
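As a concrete illustration, the coefficients of this one-dimensional expansion can be computed by numerical projection onto the Hermite basis. The sketch below is illustrative only: the choice of model Y = exp(X) and of truncation order are assumptions made here, not part of the formulation above. For this particular model the coefficients have the closed form exp(1/2)/i!, so the result of the quadrature can be checked directly.
```python
# A minimal sketch of a one-dimensional Hermite PCE by numerical projection.
# The model g(X) = exp(X) and the truncation order are illustrative choices.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial

g = np.exp                     # example model Y = g(X), X ~ N(0, 1)
order = 6                      # truncation order of the expansion

# Gauss-Hermite quadrature for the probabilists' weight exp(-x**2/2)
nodes, weights = hermegauss(30)
weights = weights / np.sqrt(2.0 * np.pi)    # normalise so the weights sum to 1

# Projection: c_i = E[g(X) He_i(X)] / E[He_i(X)**2], with E[He_i(X)**2] = i!
coeffs = []
for i in range(order + 1):
    basis_i = np.zeros(i + 1)
    basis_i[i] = 1.0                        # coefficient vector selecting He_i
    He_i = hermeval(nodes, basis_i)
    coeffs.append(np.sum(weights * g(nodes) * He_i) / factorial(i))

print(np.array(coeffs))
print(np.exp(0.5) / np.array([factorial(i) for i in range(order + 1)]))  # closed form for g = exp
```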
Generalized polynomial chaos.
Xiu (in his PhD under Karniadakis at Brown University) generalized the result of Cameron–Martin to various continuous and discrete distributions using orthogonal polynomials from the so-called Askey-scheme and demonstrated formula_12 convergence in the corresponding Hilbert functional space. This is popularly known as the generalized polynomial chaos (gPC) framework. The gPC framework has been applied to applications including stochastic fluid dynamics, stochastic finite elements, solid mechanics, nonlinear estimation, the evaluation of finite word-length effects in non-linear fixed-point digital systems and probabilistic robust control. It has been demonstrated that gPC based methods are computationally superior to Monte-Carlo based methods in a number of applications. However, the method has a notable limitation. For large numbers of random variables, polynomial chaos becomes very computationally expensive and Monte-Carlo methods are typically more feasible.
Arbitrary polynomial chaos.
More recently, polynomial chaos expansion has been generalized towards the arbitrary polynomial chaos expansion (aPC), a so-called data-driven generalization of the PC. Like all polynomial chaos expansion techniques, aPC approximates the dependence of simulation model output on model parameters by expansion in an orthogonal polynomial basis. The aPC generalizes chaos expansion techniques towards arbitrary distributions with arbitrary probability measures, which can be either discrete, continuous, or discretized continuous and can be specified either analytically (as probability density/cumulative distribution functions), numerically as histogram or as raw data sets. The aPC at finite expansion order only demands the existence of a finite number of moments and does not require the complete knowledge or even existence of a probability density function. This avoids the necessity to assign parametric probability distributions that are not sufficiently supported by limited available data. Alternatively, it allows modellers to choose the shapes of their statistical assumptions freely, without technical constraints. Investigations indicate that the aPC shows an exponential convergence rate and converges faster than classical polynomial chaos expansion techniques. These techniques are still under development, but their potential impact on computational fluid dynamics (CFD) models is considerable.
Polynomial chaos and incomplete statistical information.
In many practical situations, only incomplete and inaccurate statistical knowledge on uncertain input parameters is available. Fortunately, to construct a finite-order expansion, only some partial information on the probability measure is required, which can be simply represented by a finite number of statistical moments. Any order of expansion is only justified if accompanied by reliable statistical information on input data. Thus, incomplete statistical information limits the utility of high-order polynomial chaos expansions.
Polynomial chaos and non-linear prediction.
Polynomial chaos can be utilized in the prediction of non-linear functionals of Gaussian stationary increment processes conditioned on their past realizations. Specifically, such prediction is obtained by deriving the chaos expansion of the functional with respect to a special basis for the Gaussian Hilbert space generated by the process, with the property that each basis element is either measurable or independent with respect to the given samples. For example, this approach leads to an easy prediction formula for the Fractional Brownian motion.
Bayesian polynomial chaos.
In a non-intrusive setting, the estimation of the expansion coefficients formula_5 for a given set of basis functions formula_6 can be considered as a Bayesian regression problem by constructing a surrogate model. This approach has benefits in that analytical expressions for the data evidence (in the sense of Bayesian inference) as well as the uncertainty of the expansion coefficients are available. The evidence then can be used as a measure for the selection of expansion terms and pruning of the series (see also Bayesian model comparison). The uncertainty of the expansion coefficients can be used to assess the quality and trustworthiness of the PCE, and furthermore the impact of this assessment on the actual quantity of interest formula_0.
Let formula_13 be a set of formula_14 pairs of input-output data that is used to estimate the expansion coefficients formula_5. Let formula_2 be the data matrix with elements formula_15, let formula_16 be the set of formula_17 output data written in vector form, and let formula_18 be the set of expansion coefficients in vector form. Under the assumption that the uncertainty of the PCE is of Gaussian type with unknown variance and a scale-invariant prior, the expectation value formula_19 for the expansion coefficients is
formula_20
With formula_21, then the covariance of the coefficients is
formula_22
where formula_23 is the minimal misfit and formula_24 is the identity matrix. The uncertainty of the estimate for the coefficient formula_25 is then given by formula_26. Thus the uncertainty of the estimate for expansion coefficients can be obtained with simple vector-matrix multiplications. For a given input probability density function formula_27, it has been shown that the second moment of the quantity of interest is then simply
formula_28
This equation amounts to the matrix-vector multiplications above plus the marginalization with respect to formula_3. The first term formula_29 determines the primary uncertainty of the quantity of interest formula_30, as obtained based on the PCE used as a surrogate. The second term formula_31 constitutes an additional inferential uncertainty (often of mixed aleatoric-epistemic type) in the quantity of interest formula_30 that is due to a finite uncertainty of the PCE. If enough data is available, in terms of quality and quantity, it can be shown that formula_32 becomes negligibly small and formula_31 becomes small. This can be judged by simply building the ratio of the two terms, e.g. formula_33. This ratio quantifies the amount of the PCE's own uncertainty in the total uncertainty and is in the interval formula_34. E.g., if formula_35, then half of the uncertainty stems from the PCE itself, and actions can be taken to improve the PCE or to gather more data. If formula_36, then the PCE's uncertainty is low and the PCE may be deemed trustworthy.
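The estimates above amount to a few lines of linear algebra. The following sketch is illustrative only: the data-generating model, the noise level, the sample size and the use of a one-dimensional orthonormal Hermite basis are assumptions made here, and the design matrix is stored with one row per sample. For an orthonormal basis the two contributions to the second moment reduce to a sum of squared mean coefficients and the trace of their covariance.
```python
# A numerical sketch of the coefficient estimates, their covariance and the
# trust ratio discussed above.  The "simulation" model, its noise and the
# orthonormal Hermite basis are illustrative assumptions.
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from math import factorial

rng = np.random.default_rng(0)
Ns, Np = 200, 4                                   # samples and PCE terms
X = rng.standard_normal(Ns)                       # input samples
Y = np.exp(X) + 0.1 * rng.standard_normal(Ns)     # noisy model output

def psi(i, x):                                    # orthonormal Hermite basis
    c = np.zeros(i + 1)
    c[i] = 1.0
    return hermeval(x, c) / np.sqrt(factorial(i))

M = np.column_stack([psi(i, X) for i in range(Np)])   # one row per sample

H = np.linalg.inv(M.T @ M)
c_mean = H @ M.T @ Y                              # <c> = (M^T M)^-1 M^T Y
chi2_min = Y @ (Y - M @ c_mean)                   # minimal misfit
cov_c = chi2_min / (Ns - Np - 2) * H              # Cov(c_m, c_n)

# With an orthonormal basis the two contributions to <Y^2> become
I1 = np.sum(c_mean ** 2)                          # primary surrogate uncertainty
I2 = np.trace(cov_c)                              # inferential uncertainty of the PCE
print(c_mean)
print(np.sqrt(np.diag(cov_c)))                    # one-sigma coefficient uncertainties
print("trust ratio I1/(I1+I2):", I1 / (I1 + I2))
```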
In a Bayesian surrogate model selection, the probability for a particular surrogate model, i.e. a particular set formula_37 of expansion coefficients formula_5 and basis functions formula_6 , is given by the evidence of the data formula_38,
formula_39
where formula_40 is the Gamma function, formula_41 is the determinant of formula_42, formula_17 is the number of data points, and formula_43 is the solid angle in formula_44 dimensions, where formula_44 is the number of terms in the PCE.
Analogous findings can be transferred to the computation of PCE-based sensitivity indices. Similar results can be obtained for Kriging.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Y"
},
{
"math_id": 1,
"text": "\\operatorname{Var}(Y)<\\infty"
},
{
"math_id": 2,
"text": "M"
},
{
"math_id": 3,
"text": "\\mathbf{X}"
},
{
"math_id": 4,
"text": "Y = \\sum_{i\\in\\mathbb{N}}c_{i}\\Psi_{i}(\\mathbf{X})."
},
{
"math_id": 5,
"text": "c_{i}"
},
{
"math_id": 6,
"text": "\\Psi_{i}"
},
{
"math_id": 7,
"text": "M=1"
},
{
"math_id": 8,
"text": "\\mathbf{X}=X"
},
{
"math_id": 9,
"text": "i"
},
{
"math_id": 10,
"text": "H_i"
},
{
"math_id": 11,
"text": "Y = \\sum_{i\\in\\mathbb{N}}c_{i}H_{i}(X)"
},
{
"math_id": 12,
"text": "L_2"
},
{
"math_id": 13,
"text": "D= \\{\\mathbf{X}^{(j)}, Y^{(j)}\\}"
},
{
"math_id": 14,
"text": "j = 1,...,N_s"
},
{
"math_id": 15,
"text": "[M]_{ij} = \\Psi_i(\\mathbf{X}^{(j)})"
},
{
"math_id": 16,
"text": "\\vec Y = (Y^{(1)},..., Y^{(j)},...,Y^{(N_s)})^T"
},
{
"math_id": 17,
"text": "N_s"
},
{
"math_id": 18,
"text": "\\vec c = (c_1,...,c_i,...,c_{N_p})^T"
},
{
"math_id": 19,
"text": "\\langle \\cdot \\rangle"
},
{
"math_id": 20,
"text": "\\langle \\vec c \\rangle = (M^T \\;M)^{-1}\\; M^T\\; \\vec Y"
},
{
"math_id": 21,
"text": "H = (M^T M)^{-1}"
},
{
"math_id": 22,
"text": "\\text{Cov}(c_m, c_n) = \\frac{\\chi_{\\text{min}}^2}{N_s-N_p-2} H_{m,n}"
},
{
"math_id": 23,
"text": "\\chi_{\\text{min}}^2= \\vec Y^T( \\mathrm{I}-M\\; H^{-1}M^T) \\;\\vec Y"
},
{
"math_id": 24,
"text": "\\mathrm{I}"
},
{
"math_id": 25,
"text": "n"
},
{
"math_id": 26,
"text": "\\text{Var}(c_m) = \\text{Cov}(c_m, c_m) "
},
{
"math_id": 27,
"text": " p(\\mathbf{X}) "
},
{
"math_id": 28,
"text": "\\langle Y^2 \\rangle = \\underbrace{\\sum_{m,m'} \\int\\Psi_m (\\mathbf{X}) \\Psi_{m'} (\\mathbf{X} )\n\\langle c_m\\rangle \\langle c_{m'} \\rangle p(\\mathbf{X})\\; dV_{\\mathbf{X}} } _{=I_1}\n+ \\underbrace{ \\sum_{m,m'} \\int\\Psi_m (\\mathbf{X}) \\Psi_{m'} (\\mathbf{X} )\\; \\text{Cov}(c_m, c_{m'})\\; p(\\mathbf{X})\\; dV_{\\mathbf{X}}} _{=I_2} "
},
{
"math_id": 29,
"text": "I_1"
},
{
"math_id": 30,
"text": "Y "
},
{
"math_id": 31,
"text": "I_2"
},
{
"math_id": 32,
"text": "\\text{Var}(c_m) "
},
{
"math_id": 33,
"text": "\\frac{I_1}{I_1+I_2}"
},
{
"math_id": 34,
"text": "[0,1]"
},
{
"math_id": 35,
"text": "\\frac{I_1}{I_1+I_2} \\approx 0.5"
},
{
"math_id": 36,
"text": "\\frac{I_1}{I_1+I_2} \\approx 1"
},
{
"math_id": 37,
"text": "S"
},
{
"math_id": 38,
"text": "Z_S"
},
{
"math_id": 39,
"text": "Z_S = \\Omega_{N_p} \\mid H \\mid^{-1/2} (\\chi^2_{\\text{min}})^{-\\frac{N_s-N_p}{2}}\n\\frac{\\Gamma\\big(\\frac{N_p}{2}\\big) \\Gamma \\big( \\frac{N_s-N_p}{2}\\big)}{\\Gamma \\big(\\frac{N_s}{2}\\big)}"
},
{
"math_id": 40,
"text": "\\Gamma"
},
{
"math_id": 41,
"text": "\\mid H \\mid"
},
{
"math_id": 42,
"text": "H"
},
{
"math_id": 43,
"text": "\\Omega_{N_p}"
},
{
"math_id": 44,
"text": "N_p"
}
] | https://en.wikipedia.org/wiki?curid=11938767 |
11939373 | Cartan–Hadamard theorem | On the structure of complete Riemannian manifolds of non-positive sectional curvature
In mathematics, the Cartan–Hadamard theorem is a statement in Riemannian geometry concerning the structure of complete Riemannian manifolds of non-positive sectional curvature. The theorem states that the universal cover of such a manifold is diffeomorphic to a Euclidean space via the exponential map at any point. It was first proved by Hans Carl Friedrich von Mangoldt for surfaces in 1881, and independently by Jacques Hadamard in 1898. Élie Cartan generalized the theorem to Riemannian manifolds in 1928 (; ; ). The theorem was further generalized to a wide class of metric spaces by Mikhail Gromov in 1987; detailed proofs were published by for metric spaces of non-positive curvature and by for general locally convex metric spaces.
Riemannian geometry.
The Cartan–Hadamard theorem in conventional Riemannian geometry asserts that the universal covering space of a connected complete Riemannian manifold of non-positive sectional curvature is diffeomorphic to R"n". In fact, for complete manifolds of non-positive curvature, the exponential map based at any point of the manifold is a covering map.
The theorem holds also for Hilbert manifolds in the sense that the exponential map of a non-positively curved geodesically complete connected manifold is a covering map (; ). Completeness here is understood in the sense that the exponential map is defined on the whole tangent space of a point.
Metric geometry.
In metric geometry, the Cartan–Hadamard theorem is the statement that the universal cover of a connected non-positively curved complete metric space "X" is a Hadamard space. In particular, if "X" is simply connected then it is a geodesic space in the sense that any two points are connected by a unique minimizing geodesic, and hence contractible.
A metric space "X" is said to be non-positively curved if every point "p" has a neighborhood "U" in which any two points are joined by a geodesic, and for any point "z" in "U" and constant speed geodesic γ in "U", one has
formula_0
This inequality may be usefully thought of in terms of a geodesic triangle Δ = "z"γ(0)γ(1). The left-hand side is the square distance from the vertex "z" to the midpoint of the opposite side. The right-hand side represents the square distance from the vertex to the midpoint of the opposite side in a Euclidean triangle having the same side lengths as Δ. This condition, called the CAT(0) condition is an abstract form of Toponogov's triangle comparison theorem.
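In Euclidean space the inequality holds with equality (it reduces to the classical median-length formula), making flat space the borderline case of non-positive curvature. A small numerical sketch with arbitrary points illustrates this.
```python
# Check that the CAT(0) comparison inequality is an equality in flat space.
import numpy as np

rng = np.random.default_rng(0)
a, b, z = rng.standard_normal((3, 3))     # three arbitrary points in R^3
m = 0.5 * (a + b)                         # gamma(1/2): midpoint of the segment from a to b

lhs = np.sum((z - m) ** 2)
rhs = 0.5 * np.sum((z - a) ** 2) + 0.5 * np.sum((z - b) ** 2) - 0.25 * np.sum((a - b) ** 2)
print(np.isclose(lhs, rhs))               # True: equality in the Euclidean case
```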
Generalization to locally convex spaces.
The assumption of non-positive curvature can be weakened, although with a correspondingly weaker conclusion. Call a metric space "X" convex if, for any two constant speed minimizing geodesics "a"("t") and "b"("t"), the function
formula_1
is a convex function of "t". A metric space is then locally convex if every point has a neighborhood that is convex in this sense. The Cartan–Hadamard theorem for locally convex spaces states:
In particular, the universal covering of such a space is contractible. The convexity of the distance function along a pair of geodesics is a well-known consequence of non-positive curvature of a metric space, but it is not equivalent .
Significance.
The Cartan–Hadamard theorem provides an example of a local-to-global correspondence in Riemannian and metric geometry: namely, a local condition (non-positive curvature) and a global condition (simple-connectedness) together imply a strong global property (contractibility); or in the Riemannian case, diffeomorphism with Rn.
The metric form of the theorem demonstrates that a non-positively curved polyhedral cell complex is aspherical. This fact is of crucial importance for modern geometric group theory.
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " d(z,\\gamma(1/2))^2 \\le \\frac{1}{2}d(z,\\gamma(0))^2 + \\frac{1}{2}d(z,\\gamma(1))^2 - \\frac{1}{4}d(\\gamma(0),\\gamma(1))^2."
},
{
"math_id": 1,
"text": "t\\mapsto d(a(t),b(t))"
}
] | https://en.wikipedia.org/wiki?curid=11939373 |
11940501 | Macaulay brackets | Macaulay brackets are a notation used to describe the ramp function
formula_0
A popular alternative transcription uses angle brackets, "viz." formula_1.
Another commonly used notation is formula_2+ or formula_3+ for the positive part of formula_2, which avoids conflicts with formula_4 for set notation.
In engineering.
Macaulay's notation is commonly used in the static analysis of bending moments of a beam. This is useful because shear forces applied on a member render the shear and moment diagram discontinuous. Macaulay's notation also provides an easy way of integrating these discontinuous curves to give bending moments, angular deflection, and so on. For engineering purposes, angle brackets are often used to denote the use of Macaulay's method.
formula_5 formula_6
The above example simply states that the function takes the value formula_7 for all "x" values larger than "a". With this, all the forces acting on a beam can be added, with their respective points of action being the value of "a".
A particular case is the unit step function,
formula_8
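For numerical work the bracket can be implemented directly from its definition. The sketch below is illustrative only; the function name and test values are arbitrary.
```python
# Macaulay bracket <x - a>**n for integer n >= 0 (illustrative helper).
def macaulay(x, a, n=1):
    """Return <x - a>**n, i.e. (x - a)**n for x >= a and 0 otherwise."""
    return (x - a) ** n if x >= a else 0.0

# The ramp <x - 2>**1 and the unit step <x - 2>**0:
print([macaulay(x, 2.0, 1) for x in (1.0, 2.0, 3.0, 4.5)])   # [0.0, 0.0, 1.0, 2.5]
print([macaulay(x, 2.0, 0) for x in (1.0, 3.0)])             # [0.0, 1.0]
```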
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\{x\\} = \\begin{cases} 0, & x < 0 \\\\ x, & x \\ge 0. \\end{cases}"
},
{
"math_id": 1,
"text": "\\langle x \\rangle"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "(x)"
},
{
"math_id": 4,
"text": "\\{...\\}"
},
{
"math_id": 5,
"text": "\\{x-a\\}^n = \\begin{cases} 0, & x < a \\\\ (x-a)^n, & x \\ge a. \\end{cases}"
},
{
"math_id": 6,
"text": " (n \\ge 0)"
},
{
"math_id": 7,
"text": "(x-a)^n"
},
{
"math_id": 8,
"text": "\\langle x-a\\rangle^0 \\equiv \\{x-a\\}^0 = \\begin{cases} 0, & x < a \\\\ 1, & x > a. \\end{cases}"
}
] | https://en.wikipedia.org/wiki?curid=11940501 |
11942555 | Singularity function | Class of discontinuous functions
Singularity functions are a class of discontinuous functions that contain singularities, i.e., they are discontinuous at their singular points. Singularity functions have been heavily studied in the field of mathematics under the alternative names of generalized functions and distribution theory. The functions are notated with brackets, as formula_0 where "n" is an integer. The "formula_1" are often referred to as singularity brackets. The functions are defined as:
where: "δ"("x") is the Dirac delta function, also called the unit impulse. The first derivative of "δ"("x") is also called the unit doublet. The function formula_2 is the Heaviside step function: "H"("x") = 0 for "x" < 0 and "H"("x") = 1 for "x" > 0. The value of "H"(0) will depend upon the particular convention chosen for the Heaviside step function. Note that this will only be an issue for "n" = 0 since the functions contain a multiplicative factor of "x" − "a" for "n" > 0.
formula_3 is also called the Ramp function.
Integration.
Integrating formula_4 can be done in a convenient way in which the constant of integration is automatically included so the result will be 0 at "x" = "a".
formula_5
Example beam calculation.
The deflection of a simply supported beam, as shown in the diagram, with constant cross-section and elastic modulus, can be found using Euler–Bernoulli beam theory. Here, we are using the sign convention of downward forces and sagging bending moments being positive.
Load distribution:
formula_6
Shear force:
formula_7
formula_8
Bending moment:
formula_9
formula_10
Slope:
formula_11
Because the slope is not zero at x = 0, a constant of integration, c, is added
formula_12
Deflection:
formula_13
formula_14
The boundary condition u = 0 at x = 4 m allows us to solve for c = −7 Nm2
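The deflection found above is straightforward to evaluate numerically. In the sketch below the helper function and its name are illustrative, and EI is treated as a unit scale factor purely so that plain numbers can be printed; the boundary conditions of the simply supported beam are reproduced.
```python
# Evaluate the deflection u(x) obtained in the example above.
def bracket(x, a, n):
    """Singularity function <x - a>**n for n >= 0."""
    return (x - a) ** n if x >= a else 0.0

def deflection(x, EI=1.0, c=-7.0):
    # u = (1/EI) * (0.5<x>^3 - 0.25<x-2>^4 + 1.5<x-4>^3 + 0.25<x-4>^4 + c*x)
    return (0.5 * bracket(x, 0.0, 3)
            - 0.25 * bracket(x, 2.0, 4)
            + 1.5 * bracket(x, 4.0, 3)
            + 0.25 * bracket(x, 4.0, 4)
            + c * x) / EI

print(deflection(0.0))   # 0.0   (support at x = 0)
print(deflection(4.0))   # 0.0   (support at x = 4 m, since 28 + 4*(-7) = 0)
print(deflection(2.0))   # -10.0, i.e. -10 N*m**3 / EI at midspan
```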
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\langle x-a\\rangle ^n"
},
{
"math_id": 1,
"text": "\\langle \\rangle"
},
{
"math_id": 2,
"text": "H(x)"
},
{
"math_id": 3,
"text": "\\langle x-a\\rangle^1"
},
{
"math_id": 4,
"text": "\\langle x-a \\rangle^n"
},
{
"math_id": 5,
"text": "\\int\\langle x-a \\rangle^n dx = \\begin{cases}\n\\langle x-a \\rangle^{n+1}, & n< 0 \\\\\n\\frac{\\langle x-a \\rangle^{n+1}}{n+1}, & n \\ge 0\n\\end{cases}"
},
{
"math_id": 6,
"text": "w=-3\\text{ N}\\langle x-0 \\rangle^{-1}\\ +\\ 6\\text{ Nm}^{-1}\\langle x-2\\text{ m} \\rangle^0\\ -\\ 9\\text{ N}\\langle x-4\\text{ m}\\rangle^{-1}\\ -\\ 6\\text{ Nm}^{-1}\\langle x-4\\text{ m} \\rangle^0\\ "
},
{
"math_id": 7,
"text": "S=\\int w\\, dx"
},
{
"math_id": 8,
"text": "S=-3\\text{ N}\\langle x-0\\rangle^0\\ +\\ 6\\text{ Nm}^{-1}\\langle x-2\\text{ m}\\rangle^1\\ -\\ 9\\text{ N}\\langle x-4\\text{ m}\\rangle^0\\ -\\ 6\\text{ Nm}^{-1}\\langle x-4\\text{ m}\\rangle^1\\,"
},
{
"math_id": 9,
"text": "M = -\\int S\\, dx"
},
{
"math_id": 10,
"text": "M=3\\text{ N}\\langle x-0\\rangle^1\\ -\\ 3\\text{ Nm}^{-1}\\langle x-2\\text{ m}\\rangle^2\\ +\\ 9\\text{ N}\\langle x-4\\text{ m} \\rangle^1\\ +\\ 3\\text{ Nm}^{-1}\\langle x-4\\text{ m}\\rangle^2\\,"
},
{
"math_id": 11,
"text": "u'=\\frac{1}{EI}\\int M\\, dx"
},
{
"math_id": 12,
"text": "u'=\\frac{1}{EI}\\left(\\frac{3}{2}\\text{ N}\\langle x-0\\rangle^2\\ -\\ 1\\text{ Nm}^{-1}\\langle x-2\\text{ m}\\rangle^3\\ +\\ \\frac{9}{2}\\text{ N}\\langle x-4\\text{ m}\\rangle^2\\ +\\ 1\\text{ Nm}^{-1}\\langle x-4\\text{ m}\\rangle^3\\ +\\ c\\right)\\,"
},
{
"math_id": 13,
"text": "u=\\int u'\\, dx"
},
{
"math_id": 14,
"text": "u=\\frac{1}{EI}\\left(\\frac{1}{2}\\text{ N}\\langle x-0\\rangle^3\\ -\\ \\frac{1}{4}\\text{ Nm}^{-1}\\langle x-2\\text{ m}\\rangle^4\\ +\\ \\frac{3}{2}\\text{ N}\\langle x-4\\text{ m}\\rangle^3\\ +\\ \\frac{1}{4}\\text{ Nm}^{-1}\\langle x-4\\text{ m}\\rangle^4\\ +\\ cx\\right)\\,"
}
] | https://en.wikipedia.org/wiki?curid=11942555 |
1194259 | Influence diagram | Visual representation of a decision-making problem
An influence diagram (ID) (also called a relevance diagram, decision diagram or a decision network) is a compact graphical and mathematical representation of a decision situation. It is a generalization of a Bayesian network, in which not only probabilistic inference problems but also decision making problems (following the maximum expected utility criterion) can be modeled and solved.
ID was first developed in the mid-1970s by decision analysts with an intuitive semantic that is easy to understand. It is now widely adopted and is becoming an alternative to the decision tree, which typically suffers from exponential growth in the number of branches with each variable modeled. ID is directly applicable in team decision analysis, since it allows incomplete sharing of information among team members to be modeled and solved explicitly. Extensions of ID also find their use in game theory as an alternative representation of the game tree.
Semantics.
An ID is a directed acyclic graph with three types (plus one subtype) of node and three types of arc (or arrow) between nodes.
Nodes:
*"Decision node" (corresponding to each decision to be made) is drawn as a rectangle.
*"Uncertainty node" (corresponding to each uncertainty to be modeled) is drawn as an oval.
*"Deterministic node" (corresponding to special kind of uncertainty that its outcome is deterministically known whenever the outcome of some other uncertainties are also known) is drawn as a double oval.
*"Value node" (corresponding to each component of additively separable Von Neumann-Morgenstern utility function) is drawn as an octagon (or diamond).
Arcs:
*"Functional arcs" (ending in value node) indicate that one of the components of additively separable utility function is a function of all the nodes at their tails.
*"Conditional arcs" (ending in uncertainty node) indicate that the uncertainty at their heads is probabilistically conditioned on all the nodes at their tails.
*"Conditional arcs" (ending in deterministic node) indicate that the uncertainty at their heads is deterministically conditioned on all the nodes at their tails.
*"Informational arcs" (ending in decision node) indicate that the decision at their heads is made with the outcome of all the nodes at their tails known beforehand.
Given a properly structured ID:
*Decision nodes and incoming information arcs collectively state the "alternatives" (what can be done when the outcome of certain decisions and/or uncertainties are known beforehand)
*Uncertainty/deterministic nodes and incoming conditional arcs collectively model the "information" (what are known and their probabilistic/deterministic relationships)
*Value nodes and incoming functional arcs collectively quantify the "preference" (how things are preferred over one another).
"Alternative, information, and preference" are termed "decision basis" in decision analysis, they represent three required components of any valid decision situation.
Formally, the semantics of an influence diagram is based on sequential construction of nodes and arcs, which implies a specification of all conditional independencies in the diagram. The specification is defined by the formula_0-separation criterion of Bayesian networks. According to these semantics, every node is probabilistically independent of its non-successor nodes given the outcome of its immediate predecessor nodes. Likewise, a missing arc between non-value node formula_1 and non-value node formula_2 implies that there exists a set of non-value nodes formula_3, e.g., the parents of formula_2, that renders formula_2 independent of formula_1 given the outcome of the nodes in formula_3.
Example.
Consider the simple influence diagram representing a situation where a decision-maker is planning their vacation.
*There is 1 decision node ("Vacation Activity"), 2 uncertainty nodes ("Weather Condition, Weather Forecast"), and 1 value node ("Satisfaction").
*There are 2 functional arcs (ending in "Satisfaction"), 1 conditional arc (ending in "Weather Forecast"), and 1 informational arc (ending in "Vacation Activity").
*Functional arcs ending in "Satisfaction" indicate that "Satisfaction" is a utility function of "Weather Condition" and "Vacation Activity". In other words, their satisfaction can be quantified if they know what the weather is like and what their choice of activity is. (Note that they do not value "Weather Forecast" directly)
*Conditional arc ending in "Weather Forecast" indicates their belief that "Weather Forecast" and "Weather Condition" can be dependent.
*Informational arc ending in "Vacation Activity" indicates that they will only know "Weather Forecast", not "Weather Condition", when making their choice. In other words, actual weather will be known after they make their choice, and only forecast is what they can count on at this stage.
*It also follows semantically, for example, that "Vacation Activity" is independent of (irrelevant to) "Weather Condition" given that "Weather Forecast" is known.
Applicability to value of information.
The above example highlights the power of the influence diagram in representing an extremely important concept in decision analysis known as the value of information. Consider the following three scenarios;
*Scenario 1: The decision-maker could make their "Vacation Activity" decision while knowing what "Weather Condition" will be like. This corresponds to adding extra informational arc from "Weather Condition" to "Vacation Activity" in the above influence diagram.
*Scenario 2: The original influence diagram as shown above.
*Scenario 3: The decision-maker makes their decision without even knowing the "Weather Forecast". This corresponds to removing informational arc from "Weather Forecast" to "Vacation Activity" in the above influence diagram.
Scenario 1 is the best possible scenario for this decision situation since there is no longer any uncertainty on what they care about ("Weather Condition") when making their decision. Scenario 3, however, is the worst possible scenario for this decision situation since they need to make their decision without any hint ("Weather Forecast") on what they care about ("Weather Condition") will turn out to be.
The decision-maker is usually better off (definitely no worse off, on average) moving from scenario 3 to scenario 2 through the acquisition of new information. The most they should be willing to pay for such a move is called the value of information on "Weather Forecast", which is essentially the value of imperfect information on "Weather Condition".
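These three scenarios can be made concrete with a small numerical sketch. All probabilities and utilities below are invented purely for illustration and are not part of the example above; the computation enumerates the expected utility of the best decision in each scenario and thereby exhibits the value of the forecast and the value of perfect information.
```python
# Expected utilities for scenarios 1-3 of the vacation example (all numbers assumed).
conditions = {"sunny": 0.7, "rainy": 0.3}                           # P(Weather Condition)
p_forecast = {("sunny", "sunny"): 0.8, ("rainy", "sunny"): 0.2,     # P(Forecast | Condition)
              ("sunny", "rainy"): 0.2, ("rainy", "rainy"): 0.8}
utility = {("outdoor", "sunny"): 100, ("outdoor", "rainy"): 0,      # Satisfaction(Activity, Condition)
           ("indoor", "sunny"): 40, ("indoor", "rainy"): 50}
activities = ["outdoor", "indoor"]

# Scenario 3: decide without any information.
eu_no_info = max(sum(p * utility[(a, c)] for c, p in conditions.items()) for a in activities)

# Scenario 1: decide knowing the actual Weather Condition (perfect information).
eu_perfect = sum(p * max(utility[(a, c)] for a in activities) for c, p in conditions.items())

# Scenario 2: decide knowing only the Weather Forecast (the diagram above).
eu_forecast = 0.0
for f in ("sunny", "rainy"):
    p_f = sum(p_forecast[(f, c)] * conditions[c] for c in conditions)    # P(Forecast = f)
    posterior = {c: p_forecast[(f, c)] * conditions[c] / p_f for c in conditions}
    eu_forecast += p_f * max(sum(posterior[c] * utility[(a, c)] for c in conditions)
                             for a in activities)

print(eu_no_info, eu_forecast, eu_perfect)                            # roughly 70.0, 73.6, 85.0
print("value of the forecast:", eu_forecast - eu_no_info)             # ~3.6
print("value of perfect information:", eu_perfect - eu_no_info)       # 15.0
```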
The applicability of this simple ID and the value of information concept is tremendous, especially in medical decision making when most decisions have to be made with imperfect information about their patients, diseases, etc.
Related concepts.
Influence diagrams are hierarchical and can be defined either in terms of their structure or in greater detail in terms of the functional and numerical relation between diagram elements. An ID that is consistently defined at all levels—structure, function, and number—is a well-defined mathematical representation and is referred to as a "well-formed influence diagram" (WFID). WFIDs can be evaluated using reversal and removal operations to yield answers to a large class of probabilistic, inferential, and decision questions. More recent techniques have been developed by artificial intelligence researchers concerning Bayesian network inference (belief propagation).
An influence diagram having only uncertainty nodes (i.e., a Bayesian network) is also called a relevance diagram. An arc connecting node "A" to "B" implies not only that ""A" is relevant to "B", but also that "B" is relevant to "A"" (i.e., relevance is a symmetric relationship).
See also.
<templatestyles src="Div col/styles.css"/> | [
{
"math_id": 0,
"text": "d"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "Y"
},
{
"math_id": 3,
"text": "Z"
}
] | https://en.wikipedia.org/wiki?curid=1194259 |
11944410 | Trimaximal mixing | Fermion mixing configuration
Trimaximal mixing (also known as threefold maximal mixing) refers to the highly symmetric, maximally CP-violating, formula_0 fermion mixing configuration, characterised by a unitary matrix (formula_1) having all its elements equal in modulus
(formula_2, formula_3) as may be written, e.g.:
formula_4
where formula_5 and formula_6
are the complex cube roots of unity. In the standard PDG convention, trimaximal mixing corresponds to: formula_7, formula_8 and formula_9. The Jarlskog formula_10-violating parameter formula_11 takes its extremal value formula_12.
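The defining properties quoted above are easy to verify numerically. The sketch below constructs the matrix, checks unitarity and the equal moduli of its elements, and evaluates the Jarlskog invariant from one quartet of elements (the particular quartet chosen here is an illustrative convention; any choice gives the same modulus).
```python
# Numerical check of the trimaximal mixing matrix: unitarity, equal moduli
# 1/sqrt(3), and |J| = 1/(6*sqrt(3)).
import numpy as np

w = np.exp(2j * np.pi / 3)                    # complex cube root of unity
U = np.array([[1, 1, 1],
              [w, 1, np.conj(w)],
              [np.conj(w), 1, w]]) / np.sqrt(3)

print(np.allclose(U @ U.conj().T, np.eye(3)))        # True: U is unitary
print(np.allclose(np.abs(U), 1 / np.sqrt(3)))        # True: all elements have equal modulus

J = np.imag(U[0, 0] * U[1, 1] * np.conj(U[0, 1]) * np.conj(U[1, 0]))   # one quartet
print(abs(J), 1 / (6 * np.sqrt(3)))                  # both ~0.0962
```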
Originally proposed as a candidate lepton mixing matrix, and actively studied as such (and even as a candidate quark mixing matrix), trimaximal mixing is now definitively ruled-out as a phenomenologically viable lepton mixing scheme by neutrino oscillation experiments, especially the Chooz reactor experiment, in favour of the no longer tenable (related) tribimaximal mixing scheme.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "3 \\times 3"
},
{
"math_id": 1,
"text": "U"
},
{
"math_id": 2,
"text": " |U_{ai}|=1/\\sqrt{3}"
},
{
"math_id": 3,
"text": "a,i=1,2,3"
},
{
"math_id": 4,
"text": "\nU= \n\\begin{bmatrix}\n\\frac{1}{\\sqrt{3}} & \\frac{1}{\\sqrt{3}} & \\frac{1}{\\sqrt{3}} \\\\\n\\frac{\\omega}{\\sqrt{3}} & \\frac{1}{\\sqrt{3}} & \\frac{\\bar{\\omega}}{\\sqrt{3}} \\\\ \n\\frac{\\bar{\\omega}}{\\sqrt{3}} & \\frac{1}{\\sqrt{3}} & \\frac{\\omega}{\\sqrt{3}} \n\\end{bmatrix}\n\\Rightarrow (|U_{i\\alpha}|^2)=\n\\begin{bmatrix}\n\\frac{1}{3} & \\frac{1}{3} & \\frac{1}{3} \\\\\n\\frac{1}{3} & \\frac{1}{3} & \\frac{1}{3} \\\\ \n\\frac{1}{3} & \\frac{1}{3} & \\frac{1}{3} \n\\end{bmatrix}\n"
},
{
"math_id": 5,
"text": "\\omega=\\exp(i2\\pi/3)"
},
{
"math_id": 6,
"text": "\\bar{\\omega}=\\exp(-i2\\pi/3)"
},
{
"math_id": 7,
"text": "\\theta_{12}=\\theta_{23}=\\pi/4"
},
{
"math_id": 8,
"text": "\\theta_{13}=\\sin^{-1}(1/\\sqrt{3})"
},
{
"math_id": 9,
"text": "\\delta=\\pi/2"
},
{
"math_id": 10,
"text": "CP"
},
{
"math_id": 11,
"text": "J"
},
{
"math_id": 12,
"text": "|J|=1/(6\\sqrt{3})"
}
] | https://en.wikipedia.org/wiki?curid=11944410 |
11944793 | Telegraph process | Memoryless continuous-time stochastic process that shows two distinct values
In probability theory, the telegraph process is a memoryless continuous-time stochastic process that shows two distinct values. It models burst noise (also called popcorn noise or random telegraph signal). If the two possible values that a random variable can take are "formula_0" and "formula_1", then the process can be described by the following master equations:
formula_2
and
formula_3
where formula_4 is the transition rate for going from state formula_0 to state formula_1 and formula_5 is the transition rate for going from state formula_1 to state formula_0. The process is also known under the names Kac process (after mathematician Mark Kac), and dichotomous random process.
Solution.
The master equation is compactly written in a matrix form by introducing a vector formula_6,
formula_7
where
formula_8
is the transition rate matrix. The formal solution is constructed from the initial condition formula_9 (which specifies that at formula_10 the state is formula_11) by
formula_12.
It can be shown that
formula_13
where formula_14 is the identity matrix and formula_15 is the average transition rate. As formula_16, the solution approaches a stationary distribution formula_17 given by
formula_18
Properties.
Knowledge of an initial state decays exponentially. Therefore, for a time formula_19, the process will reach the following stationary values, denoted by subscript "s":
Mean:
formula_20
Variance:
formula_21
One can also calculate a correlation function:
formula_22
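These stationary values can be checked with a direct simulation of the process. In the sketch below the two values, the rates and the run length are illustrative choices; the time-averaged mean and variance of a long trajectory are compared with the formulas above.
```python
# Simulate the telegraph process and compare time averages with the
# stationary formulas (values, rates and run length are illustrative).
import numpy as np

rng = np.random.default_rng(1)
c1, c2 = 1.0, -1.0
lam1, lam2 = 2.0, 1.0            # rates c1 -> c2 and c2 -> c1
T = 10_000.0                     # total simulated time

t, state = 0.0, c1
times, values = [0.0], [state]
while t < T:
    rate = lam1 if state == c1 else lam2
    t += rng.exponential(1.0 / rate)         # exponential holding time
    state = c2 if state == c1 else c1
    times.append(min(t, T))
    values.append(state)

dt = np.diff(times)                          # each value is held over the preceding interval
held = np.array(values[:-1])
mean_emp = np.sum(held * dt) / T
var_emp = np.sum((held - mean_emp) ** 2 * dt) / T

mean_th = (c1 * lam2 + c2 * lam1) / (lam1 + lam2)
var_th = (c1 - c2) ** 2 * lam1 * lam2 / (lam1 + lam2) ** 2
print(mean_emp, mean_th)                     # both close to -1/3
print(var_emp, var_th)                       # both close to 8/9
```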
Application.
This random process finds wide application in model building: | [
{
"math_id": 0,
"text": "c_1"
},
{
"math_id": 1,
"text": "c_2"
},
{
"math_id": 2,
"text": "\\partial_t P(c_1, t|x, t_0)=-\\lambda_1 P(c_1, t|x, t_0)+\\lambda_2 P(c_2, t|x, t_0)"
},
{
"math_id": 3,
"text": "\\partial_t P(c_2, t|x, t_0)=\\lambda_1 P(c_1, t|x, t_0)-\\lambda_2 P(c_2, t|x, t_0)."
},
{
"math_id": 4,
"text": "\\lambda_1"
},
{
"math_id": 5,
"text": "\\lambda_2"
},
{
"math_id": 6,
"text": "\\mathbf{P}=[P(c_1, t|x, t_0),P(c_2, t|x, t_0)]"
},
{
"math_id": 7,
"text": "\\frac{d\\mathbf P}{dt}=W\\mathbf P"
},
{
"math_id": 8,
"text": "W=\\begin{pmatrix}\n-\\lambda_1 & \\lambda_2 \\\\\n\\lambda_1 & -\\lambda_2\n\\end{pmatrix}"
},
{
"math_id": 9,
"text": "\\mathbf{P}(0)"
},
{
"math_id": 10,
"text": "t=t_0"
},
{
"math_id": 11,
"text": "x"
},
{
"math_id": 12,
"text": "\\mathbf{P}(t) = e^{Wt}\\mathbf{P}(0)"
},
{
"math_id": 13,
"text": "e^{Wt}= I+ W\\frac{(1-e^{-2\\lambda t})}{2\\lambda}"
},
{
"math_id": 14,
"text": "I"
},
{
"math_id": 15,
"text": "\\lambda=(\\lambda_1+\\lambda_2)/2"
},
{
"math_id": 16,
"text": "t\\rightarrow \\infty"
},
{
"math_id": 17,
"text": "\\mathbf{P}(t\\rightarrow \\infty)=\\mathbf{P}_s"
},
{
"math_id": 18,
"text": "\\mathbf{P}_s= \\frac{1}{2\\lambda}\\begin{pmatrix}\n\\lambda_2 \\\\\n\\lambda_1\n\\end{pmatrix}"
},
{
"math_id": 19,
"text": "t\\gg (2\\lambda)^{-1}"
},
{
"math_id": 20,
"text": "\\langle X \\rangle_s = \\frac {c_1\\lambda_2+c_2\\lambda_1}{\\lambda_1+\\lambda_2}."
},
{
"math_id": 21,
"text": " \\operatorname{var} \\{ X \\}_s = \\frac {(c_1-c_2)^2\\lambda_1\\lambda_2}{(\\lambda_1+\\lambda_2)^2}."
},
{
"math_id": 22,
"text": "\\langle X(t),X(u)\\rangle_s = e^{-2\\lambda |t-u|}\\operatorname{var} \\{ X \\}_s."
}
] | https://en.wikipedia.org/wiki?curid=11944793 |
11945196 | Hydrogen isocyanide | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Hydrogen isocyanide is a chemical with the molecular formula HNC. It is a minor tautomer of hydrogen cyanide (HCN). Its importance in the field of astrochemistry is linked to its ubiquity in the interstellar medium.
Nomenclature.
Both "hydrogen isocyanide" and "azanylidyniummethanide" are correct IUPAC names for HNC. There is no preferred IUPAC name. The second one is according to the "substitutive nomenclature rules", derived from the "parent hydride" azane () and the anion methanide ().
Molecular properties.
Hydrogen isocyanide (HNC) is a linear triatomic molecule with C∞v point group symmetry. It is a zwitterion and an isomer of hydrogen cyanide (HCN). Both HNC and HCN have large, similar dipole moments, with "μ"HNC = 3.05 Debye and "μ"HCN = 2.98 Debye respectively. These large dipole moments facilitate the easy observation of these species in the interstellar medium.
HNC−HCN tautomerism.
As HNC is higher in energy than HCN by 3920 cm−1 (46.9 kJ/mol), one might assume that the two would have an equilibrium ratio formula_0 at temperatures below 100 Kelvin of 10−25. However, observations show a very different conclusion; formula_1 is much higher than 10−25, and is in fact on the order of unity in cold environments. This is because of the potential energy path of the tautomerization reaction; there is an activation barrier on the order of roughly 12,000 cm−1 for the tautomerization to occur, which corresponds to a temperature at which HNC would already have been destroyed by neutral-neutral reactions.
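The quoted equilibrium estimate is just a Boltzmann factor. The short sketch below reproduces its order of magnitude, using the standard value of the Boltzmann constant in wavenumber units (about 0.695 cm−1 per kelvin).
```python
# Back-of-the-envelope check of the ~10**-25 equilibrium ratio quoted above.
import math

dE_cm = 3920.0           # HNC-HCN energy difference in cm^-1
T = 100.0                # temperature in kelvin
kT_cm = 0.695 * T        # Boltzmann constant ~0.695 cm^-1 per kelvin

print(f"{math.exp(-dE_cm / kT_cm):.1e}")   # ~3e-25, i.e. on the order of 10**-25
```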
Spectral properties.
In practice, HNC is almost exclusively observed astronomically using the "J" = 1→0 transition. This transition occurs at ~90.66 GHz, which is a point of good visibility in the atmospheric window, thus making astronomical observations of HNC particularly simple. Many other related species (including HCN) are observed in roughly the same window.
Significance in the interstellar medium.
HNC is intricately linked to the formation and destruction of numerous other molecules of importance in the interstellar medium—aside from the obvious partners HCN, protonated hydrogen cyanide (HCNH+), and cyanide (CN), HNC is linked to the abundances of many other compounds, either directly or through a few degrees of separation. As such, an understanding of the chemistry of HNC leads to an understanding of countless other species—HNC is an integral piece in the complex puzzle representing interstellar chemistry.
Furthermore, HNC (alongside HCN) is a commonly used tracer of dense gas in molecular clouds. Aside from the potential to use HNC to investigate gravitational collapse as the means of star formation, HNC abundance (relative to the abundance of other nitrogenous molecules) can be used to determine the evolutionary stage of protostellar cores.
The HCO+/HNC line ratio is used to good effect as a measure of density of gas. This information provides great insight into the mechanisms of the formation of (Ultra-)Luminous Infrared Galaxies ((U)LIRGs), as it provides data on the nuclear environment, star formation, and even black hole fueling. Furthermore, the HNC/HCN line ratio is used to distinguish between photodissociation regions and X-ray-dissociation regions on the basis that [HNC]/[HCN] is roughly unity in the former, but greater than unity in the latter.
The study of HNC is relatively straightforward, which is a major motivation for its research. Its J = 1→0 transition occurs in a clear portion of the atmospheric window, and it has numerous isotopomers that are easily studied. Additionally, its large dipole moment makes observations particularly simple. Moreover, HNC is a fundamentally simple molecule in its molecular nature. This makes the study of the reaction pathways that lead to its formation and destruction a good means of obtaining insight to the workings of these reactions in space. Furthermore, the study of the tautomerization of HNC to HCN (and vice versa), which has been studied extensively, has been suggested as a model by which more complicated isomerization reactions can be studied.
Chemistry in the interstellar medium.
HNC is found primarily in dense molecular clouds, though it is ubiquitous in the interstellar medium. Its abundance is closely linked to the abundances of other nitrogen-containing compounds. HNC is formed primarily through the dissociative recombination of HNCH+ and H2NC+, and it is destroyed primarily through ion-neutral reactions with H3+ and C+. Rate calculations were done at 3.16 × 105 years, which is considered early time, and at 20 K, which is a typical temperature for dense molecular clouds.
These four reactions are merely the four most dominant, and thus the most significant in the formation of the HNC abundances in dense molecular clouds; there are dozens more reactions for the formation and destruction of HNC. Though these reactions primarily lead to various protonated species, HNC is linked closely to the abundances of many other nitrogen containing molecules, for example, NH3 and CN. The abundance HNC is also inexorably linked to the abundance of HCN, and the two tend to exist in a specific ratio based on the environment. This is because the reactions that form HNC can often also form HCN, and vice versa, depending on the conditions in which the reaction occurs, and also that there exist isomerization reactions for the two species.
Astronomical detections.
HCN (not HNC) was first detected in June 1970 by L. E. Snyder and D. Buhl using the 36-foot radio telescope of the National Radio Astronomy Observatory. The main molecular isotope, H12C14N, was observed via its "J" = 1→0 transition at 88.6 GHz in six different sources: W3 (OH), Orion A, Sgr A(NH3A), W49, W51, DR 21(OH). A secondary molecular isotope, H13C14N, was observed via its "J" = 1→0 transition at 86.3 GHz in only two of these sources: Orion A and Sgr A(NH3A). HNC was then later detected extragalactically in 1988 using the IRAM 30-m telescope at the Pico de Veleta in Spain. It was observed via its "J" = 1→0 transition at 90.7 GHz toward IC 342.
A number of detections have been made towards the end of confirming the temperature dependence of the abundance ratio of [HNC]/[HCN]. A strong fit between temperature and the abundance ratio would allow observers to spectroscopically detect the ratio and then extrapolate the temperature of the environment, thus gaining great insight into the environment of the species. The abundance ratio of rare isotopes of HNC and HCN along the OMC-1 varies by more than an order of magnitude in warm regions versus cold regions. In 1992, the abundances of HNC, HCN, and deuterated analogs along the OMC-1 ridge and core were measured and the temperature dependence of the abundance ratio was confirmed. A survey of the W 3 Giant Molecular Cloud in 1997 showed over 24 different molecular isotopes, comprising over 14 distinct chemical species, including HNC, HN13C, and H15NC. This survey further confirmed the temperature dependence of the abundance ratio, [HNC]/[HCN], this time even confirming the dependence of the isotopomers.
These are not the only detections of importance of HNC in the interstellar medium. In 1997, HNC was observed along the TMC-1 ridge and its abundance relative to HCO+ was found to be constant along the ridge—this led credence to the reaction pathway that posits that HNC is derived initially from HCO+. One significant astronomical detection that demonstrated the practical use of observing HNC occurred in 2006, when abundances of various nitrogenous compounds (including HN13C and H15NC) were used to determine the stage of evolution of the protostellar core Cha-MMS1 based on the relative magnitudes of the abundances.
On 11 August 2014, astronomers released studies, using the Atacama Large Millimeter/Submillimeter Array (ALMA) for the first time, that detailed the distribution of HCN, HNC, H2CO, and dust inside the comae of comets C/2012 F6 (Lemmon) and C/2012 S1 (ISON).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\left ( \\frac{[HNC]}{[HCN]} \\right )_{eq}"
},
{
"math_id": 1,
"text": "\\left ( \\frac{[HNC]}{[HCN]} \\right )_{observed}"
}
] | https://en.wikipedia.org/wiki?curid=11945196 |
11945602 | Cascade algorithm | In the mathematical topic of wavelet theory, the cascade algorithm is a numerical method for calculating function values of the basic scaling and wavelet functions of a discrete wavelet transform using an iterative algorithm. It starts from values on a coarse sequence of sampling points and produces values for successively more densely spaced sequences of sampling points. Because it applies the same operation over and over to the output of the previous application, it is known as the "cascade algorithm".
Successive approximation.
The iterative algorithm generates successive approximations to ψ("t") or φ("t") from {"h"} and {"g"} filter coefficients. If the algorithm converges to a fixed point, then that fixed point is the basic scaling function or wavelet.
The iterations are defined by
formula_0
For the "k"th iteration, where an initial φ(0)("t") must be given.
The frequency-domain estimate of the basic scaling function is given by
formula_1
and the limit can be viewed as an infinite product in the form
formula_2
If such a limit exists, the spectrum of the scaling function is
formula_3
The limit does not depend on the initial shape assumed for φ(0)("t"). This algorithm converges reliably to φ("t"), even if it is discontinuous.
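A discrete version of this iteration is short to implement. The sketch below refines sample values of the scaling function on a dyadic grid for the Daubechies-4 filter; the filter choice, the number of refinement steps and the initial guess are illustrative, and convolving with an upsampled copy of the filter is one of several equivalent ways of writing the recursion on a grid.
```python
# Cascade iteration for the Daubechies-4 scaling filter.  phi holds samples
# of phi^(k) on the grid of spacing 2**-k over the support [0, 3]; each step
# convolves with the filter h upsampled by 2**k and rescales by sqrt(2).
import numpy as np

s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * np.sqrt(2.0))   # D4, sum(h) = sqrt(2)

phi = np.array([1.0, 0.0, 0.0, 0.0])      # crude initial guess (samples of the box on [0, 1))

for k in range(8):                        # 8 refinement steps
    h_up = np.zeros((len(h) - 1) * 2**k + 1)
    h_up[::2**k] = h                      # insert 2**k - 1 zeros between the taps
    phi = np.sqrt(2.0) * np.convolve(phi, h_up)

x = np.linspace(0.0, 3.0, len(phi))       # dyadic sample points after the refinements
print(np.sum(phi) * (x[1] - x[0]))        # 1.0: Riemann sum, consistent with a unit integral
# Plotting phi against x shows the familiar, jagged Daubechies-4 scaling function.
```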
From this scaling function, the wavelet can be generated from
formula_4
Successive approximation can also be derived in the frequency domain. | [
{
"math_id": 0,
"text": "\\varphi^{(k+1)}(t)=\\sum_{n=0}^{N-1} h[n] \\sqrt 2 \\varphi^{(k)} (2t-n)"
},
{
"math_id": 1,
"text": "\\Phi^{(k+1)}(\\omega)= \\frac {1} {\\sqrt 2} H\\left( \\frac {\\omega} {2}\\right) \\Phi^{(k)}\\left(\\frac {\\omega} {2}\\right)"
},
{
"math_id": 2,
"text": "\\Phi^{(\\infty)}(\\omega)= \\prod_{k=1}^{\\infty} \\frac {1} {\\sqrt 2} H\\left( \\frac {\\omega} {2^k}\\right) \\Phi^{(\\infty)}(0)."
},
{
"math_id": 3,
"text": "\\Phi(\\omega)= \\prod_{k=1}^\\infty \\frac {1} {\\sqrt 2} H\\left( \\frac {\\omega} {2^k}\\right) \\Phi^{(\\infty)}(0)"
},
{
"math_id": 4,
"text": "\\psi(t)= \\sum_{n=- \\infty}^{\\infty} g[n]{\\sqrt 2} \\varphi^{(k)} (2t-n)."
}
] | https://en.wikipedia.org/wiki?curid=11945602 |
11945733 | Radial turbine | Type of turbine
A radial turbine is a turbine in which the flow of the working fluid is radial to the shaft. The difference between axial and radial turbines consists in the way the fluid flows through the components (compressor and turbine). Whereas for an axial turbine the rotor is 'impacted' by the fluid flow, for a radial turbine, the flow is smoothly orientated perpendicular to the rotation axis, and it drives the turbine in the same way water drives a watermill. The result is less mechanical stress (and less thermal stress, in case of hot working fluids) which enables a radial turbine to be simpler, more robust, and more efficient (in a similar power range) when compared to axial turbines. When it comes to high power ranges (above 5 MW) the radial turbine is no longer competitive (due to its heavy and expensive rotor) and the efficiency becomes similar to that of the axial turbines.
Advantages and challenges.
Compared to an axial flow turbine, a radial turbine can employ a relatively higher pressure ratio (≈4) per stage with lower flow rates. Thus these machines fall in the lower specific speed and power ranges. For high temperature applications rotor blade cooling in radial stages is not as easy as in axial turbine stages. Variable angle nozzle blades can give higher stage efficiencies in a radial turbine stage even at off-design point operation. In the family of water turbines, the Francis turbine is a very well-known IFR (inward-flow radial) turbine which generates much greater power with a relatively large impeller.
Components of radial turbines.
The radial and tangential components of the absolute velocity c2 are cr2 and cθ2, respectively. The relative velocity of the flow and the peripheral speed of the rotor are w2 and u2 respectively. The air angle at the rotor blade entry is given by
formula_0
Enthalpy and entropy diagram.
The stagnation state of the gas at the nozzle entry is represented by point 01. The gas expands adiabatically in the nozzles from a pressure p1 to p2 with an increase in its velocity from c1 to c2. Since this is an energy transformation process, the stagnation enthalpy remains constant but the stagnation pressure decreases (p01 > p02) due to losses.
The energy transfer accompanied by an energy transformation process occurs in the rotor.
Spouting velocity.
A reference velocity (c0) known as the isentropic velocity, spouting velocity or stage terminal velocity is defined as that velocity which will be obtained during an isentropic expansion of the gas between the entry and exit pressures of the stage.
formula_1
Stage efficiency.
The total-to-static efficiency is based on this value of work.
formula_2
Degree of reaction.
The relative pressure or enthalpy drop in the nozzle and rotor blades are determined by the degree of reaction of the stage. This is defined by
formula_3
The two quantities within the parentheses in the numerator may have the same or opposite signs. This, besides other factors, would also govern the value of reaction. The stage reaction decreases as Cθ2 increases because this results in a large proportion of the stage enthalpy drop to occur in the nozzle ring.
Stage losses.
The stage work is less than the isentropic stage enthalpy drop on account of aerodynamic losses in the stage. The actual output at the turbine shaft is equal to the stage work minus the losses due to rotor disc and bearing friction.
Blade to gas speed ratio.
The blade-to-gas speed ratio can be expressed in terms of the isentropic stage terminal velocity c0.
formula_4
for
β2 = 90°
σs ≈ 0.707
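The spouting velocity and the corresponding optimum blade speed are easily evaluated for representative numbers. In the sketch below the gas properties, inlet stagnation temperature and stage pressure ratio are assumed values chosen only for illustration.
```python
# Order-of-magnitude evaluation of c0 and u2 for a 90-degree IFR stage
# (all input values below are assumed, for illustration only).
import math

cp = 1148.0       # J/(kg K), specific heat of the hot gas (assumed)
gamma = 1.333     # ratio of specific heats (assumed)
T01 = 1100.0      # K, inlet stagnation temperature (assumed)
pr = 2.0          # p01/p3, stage pressure ratio (assumed)

# Spouting velocity c0 = sqrt(2 cp T01 (1 - (p3/p01)**((gamma-1)/gamma)))
c0 = math.sqrt(2.0 * cp * T01 * (1.0 - (1.0 / pr) ** ((gamma - 1.0) / gamma)))

sigma_s = 1.0 / math.sqrt(2.0)   # blade-to-gas speed ratio for beta2 = 90 degrees
u2 = sigma_s * c0

print(f"c0 = {c0:.0f} m/s, u2 = {u2:.0f} m/s")
```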
Outward-flow radial stages.
In outward flow radial turbine stages, the flow of the gas or steam occurs from smaller to larger diameters. The stage consists of a pair of fixed and moving blades. The increasing area of cross-section at larger diameters accommodates the expanding gas.
This configuration did not become popular with the steam and gas turbines. The only one which is employed more commonly is the Ljungstrom double rotation type turbine. It consists of rings of cantilever blades projecting from two discs rotating in opposite directions. The relative peripheral velocity of blades in two adjacent rows, with respect to each other, is high. This gives a higher value of enthalpy drop per stage.
Nikola Tesla's bladeless radial turbine.
In the early 1900s, Nikola Tesla developed and patented his bladeless Tesla turbine. One of the difficulties with bladed turbines is the complex and highly precise requirements for balancing and manufacturing the bladed rotor. The blades are subject to corrosion and cavitation. Tesla attacked this problem by substituting a series of closely spaced disks for the blades of the rotor. The working fluid flows between the disks and transfers its energy to the rotor by means of the boundary layer effect or adhesion and viscosity rather than by impulse or reaction. Tesla stated his turbine could realize incredibly high efficiencies using steam. There has been no documented evidence of Tesla turbines achieving the efficiencies Tesla claimed. They have been found to have low overall efficiencies in the role of a turbine or pump. In recent decades there has been further research into bladeless turbines and the development of patented designs that work with corrosive/abrasive and hard-to-pump materials such as ethylene glycol, fly ash, blood, rocks, and even live fish.
{
"math_id": 0,
"text": "\\,\\tan{\\beta_2} =\\frac{c_{r2}}{c_{\\theta 2} - u_2}"
},
{
"math_id": 1,
"text": "\\,C_0 = \\sqrt{2C_p\\,T_{01}\\,\\left(1 - \\left(\\frac{p_3}{p_{01}}\\right)^\\frac{\\gamma - 1}{\\gamma}\\right)}"
},
{
"math_id": 2,
"text": "\\begin{align}\n \\eta_\\text{ts} &= \\frac{h_{01} - h_{03}}{h_{01} - h_{3ss}}\n = \\frac{\\psi\\,u_2^2}{C_p\\,T_{01}\\left(1 - \\left(\\frac{p_3}{p_{01}}\\right)^\\frac{\\gamma - 1}{\\gamma}\\right)}\n\\end{align}"
},
{
"math_id": 3,
"text": "R = \\frac{\\text{static enthalpy drop in rotor}}{\\text{stagnation enthalpy drop in stage}}"
},
{
"math_id": 4,
"text": "\\,\\sigma_s = \\frac{u_2}{c_0} = [2 (1 + \\phi_2 \\cot{\\beta_2})]^{-\\frac{1}{2}}"
}
] | https://en.wikipedia.org/wiki?curid=11945733 |
1194729 | Heine–Cantor theorem | In mathematics, the Heine–Cantor theorem states that a continuous function between two metric spaces is uniformly continuous if its domain is compact.
The theorem is named after Eduard Heine and Georg Cantor.
<templatestyles src="Math_theorem/styles.css" />
Heine–Cantor theorem — If formula_0 is a continuous function between two metric spaces formula_1 and formula_2, and formula_1 is compact, then formula_3 is uniformly continuous.
An important special case of the Heine–Cantor theorem is that every continuous function from a closed bounded interval to the real numbers is uniformly continuous.
<templatestyles src="Math_proof/styles.css" />Proof of Heine–Cantor theorem
Suppose that formula_1 and formula_2 are two metric spaces with metrics formula_4 and formula_5, respectively. Suppose further that a function formula_6 is continuous and formula_7 is compact. We want to show that formula_3 is uniformly continuous, that is, for every positive real number formula_8 there exists a positive real number formula_9 such that for all points formula_10 in the function domain formula_1, formula_11 implies that formula_12.
Consider some positive real number formula_8. By continuity, for any point formula_13 in the domain formula_1, there exists some positive real number formula_14 such that formula_15 when formula_16, i.e., the fact that formula_17 is within formula_18 of formula_13 implies that formula_19 is within formula_20 of formula_21.
Let formula_22 be the open formula_23-neighborhood of formula_13, i.e. the set
formula_24
Since each point formula_13 is contained in its own formula_22, we find that the collection formula_25 is an open cover of formula_1. Since formula_1 is compact, this cover has a finite subcover formula_26 where formula_27. Each of these open sets has an associated radius formula_28. Let us now define formula_29, i.e. the minimum radius of these open sets. Since we have a finite number of positive radii, this minimum formula_30 is well-defined and positive. We now show that this formula_30 works for the definition of uniform continuity.
Suppose that formula_11 for any two formula_10 in formula_1. Since the sets formula_31 form an open (sub)cover of our space formula_1, we know that formula_13 must lie within one of them, say formula_31. Then we have that formula_32. The triangle inequality then implies that
formula_33
implying that formula_13 and formula_17 are both at most formula_34 away from formula_35. By definition of formula_34, this implies that formula_36 and formula_37 are both less than formula_38. Applying the triangle inequality then yields the desired
formula_39
For an alternative proof in the case of formula_40, a closed interval, see the article Non-standard calculus. | [
{
"math_id": 0,
"text": "f \\colon M \\to N"
},
{
"math_id": 1,
"text": "M"
},
{
"math_id": 2,
"text": "N"
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "d_M"
},
{
"math_id": 5,
"text": "d_N"
},
{
"math_id": 6,
"text": "f: M \\to N"
},
{
"math_id": 7,
"text": " M "
},
{
"math_id": 8,
"text": "\\varepsilon > 0"
},
{
"math_id": 9,
"text": "\\delta > 0"
},
{
"math_id": 10,
"text": "x, y"
},
{
"math_id": 11,
"text": "d_M(x,y) < \\delta"
},
{
"math_id": 12,
"text": "d_N(f(x), f(y)) < \\varepsilon"
},
{
"math_id": 13,
"text": "x"
},
{
"math_id": 14,
"text": "\\delta_x > 0"
},
{
"math_id": 15,
"text": "d_N(f(x),f(y)) < \\varepsilon/2"
},
{
"math_id": 16,
"text": "d_M(x,y) < \\delta _x"
},
{
"math_id": 17,
"text": "y"
},
{
"math_id": 18,
"text": "\\delta_x"
},
{
"math_id": 19,
"text": "f(y)"
},
{
"math_id": 20,
"text": "\\varepsilon / 2"
},
{
"math_id": 21,
"text": "f(x)"
},
{
"math_id": 22,
"text": "U_x"
},
{
"math_id": 23,
"text": "\\delta_x/2"
},
{
"math_id": 24,
"text": "U_x = \\left\\{ y \\mid d_M(x,y) < \\frac 1 2 \\delta_x \\right\\}."
},
{
"math_id": 25,
"text": "\\{U_x \\mid x \\in M\\}"
},
{
"math_id": 26,
"text": "\\{U_{x_1}, U_{x_2}, \\ldots, U_{x_n}\\}"
},
{
"math_id": 27,
"text": "x_1, x_2, \\ldots, x_n \\in M"
},
{
"math_id": 28,
"text": "\\delta_{x_i}/2"
},
{
"math_id": 29,
"text": "\\delta = \\min_{1 \\leq i \\leq n} \\delta_{x_i}/2"
},
{
"math_id": 30,
"text": "\\delta"
},
{
"math_id": 31,
"text": "U_{x_i}"
},
{
"math_id": 32,
"text": "d_M(x, x_i) < \\frac{1}{2}\\delta_{x_i}"
},
{
"math_id": 33,
"text": "d_M(x_i, y) \\leq d_M(x_i, x) + d_M(x, y) < \\frac{1}{2} \\delta_{x_i} + \\delta \\leq \\delta_{x_i},"
},
{
"math_id": 34,
"text": "\\delta_{x_i}"
},
{
"math_id": 35,
"text": "x_i"
},
{
"math_id": 36,
"text": "d_N(f(x_i),f(x))"
},
{
"math_id": 37,
"text": "d_N(f(x_i), f(y))"
},
{
"math_id": 38,
"text": "\\varepsilon/2"
},
{
"math_id": 39,
"text": "d_N(f(x), f(y)) \\leq d_N(f(x_i), f(x)) + d_N(f(x_i), f(y)) < \\frac{\\varepsilon}{2} + \\frac{\\varepsilon}{2} = \\varepsilon."
},
{
"math_id": 40,
"text": "M = [a, b]"
}
] | https://en.wikipedia.org/wiki?curid=1194729 |
1195063 | Convertible security | Stock trading reference
A convertible security is a financial instrument whose holder has the right to convert it into another security of the same issuer. Most convertible securities are convertible bonds or preferred stocks that pay regular interest and can be converted into shares of the issuer's common stock. Convertible securities typically include other embedded options, such as call or put options. Consequently, determining the value of convertible securities can be a complex exercise. The complex valuation issue may attract specialized professional investors, including arbitrageurs and hedge funds who try to exploit disparities in the relationship between the price of the convertible security and the underlying common stock.
Types.
Types of convertible securities include:
Characteristics of convertible bonds and preferred stock.
Because convertibles are a hybrid security, their market price can be affected by both movements in interest rates (like a conventional bond) and the company's stock price (because of the embedded option to convert to the underlying stock). The minimum price at which a convertible bond will trade is based on its fixed income characteristics: the stream of coupon payments and eventual maturity at par value. This is known as its "bond equivalent" or "straight bond" value. The price of the convertible bond will not drop below straight value if the stock price declines. In return for this degree of protection, investors who purchase a convertible bond rather than the underlying stock typically pay a premium over the stock's current market price.
The price that the convertible investor effectively pays for the right to convert to common stock is called the "market conversion price", and is calculated as shown below. The "conversion ratio" - the number of shares the investor receives when exchanging the bond for common stock - is specified in the bond's indenture.
formula_0
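As a rough illustration of this arithmetic (the bond price, conversion ratio and stock price below are hypothetical figures, not taken from any real issue), the calculation might be sketched as:

```python
# Hypothetical figures for illustration only.
bond_price = 1050.0        # market price of the convertible bond
conversion_ratio = 20.0    # shares received per bond on conversion
stock_price = 48.0         # current market price of the underlying stock

# Market conversion price = bond price / conversion ratio
market_conversion_price = bond_price / conversion_ratio

# Premium paid per share relative to buying the stock outright
conversion_premium = market_conversion_price - stock_price

print(f"Market conversion price: {market_conversion_price:.2f}")   # 52.50
print(f"Premium over stock price: {conversion_premium:.2f}")       # 4.50
```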
Once the actual market price of the underlying stock exceeds the market conversion price embedded in the convertible, any further rise in the stock price will drive up the convertible security's price by at least the same percentage. Thus, the market conversion price can be thought of as a "break-even point."
If the price of the stock decreases to the point that the straight bond value is much greater than the conversion value, the convertible will trade much like a straight bond. This is referred to as a "bond equivalent" or "busted convertible."
Advantages to the investor.
Convertible bonds generally provide a higher current yield than common stock due to their fixed income features and superior claim to the assets of the company in the event of default. If the value of the underlying common stock rises, the value of the convertible should rise as well. The investor can benefit from the stock's upside movement by selling the bond without converting it to stock. Alternatively, the value of the common stock could fall, but in that case the convertible's price will decline only to the point where it provides an acceptable return as a bond equivalent.
Disadvantages to the investor.
Most convertibles contain a call provision that allows the issuer to force conversion to the common stock. Such a provision limits the value associated with potential growth in the stock price.
Convertibles typically have a lower yield than a nonconvertible, because the investor is receiving an additional right: that of conversion to the underlying stock. However, if the issuer's business does not grow and prosper, the investor has an opportunity cost associated with lost current yield compared to a nonconvertible, and a capital loss if the convertible security's price drops below the price the investor paid to purchase it.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{market\\ conversion\\ price} = \\mathsf{market\\ price\\ of\\ convertible\\ bond} / \\mathsf{conversion\\ ratio}"
}
] | https://en.wikipedia.org/wiki?curid=1195063 |
1195146 | Radionuclide angiography | Nuclear medicine imaging the ventricles of the heart
Radionuclide angiography is an area of nuclear medicine which specialises in imaging to show the functionality of the right and left ventricles of the heart, thus allowing informed diagnostic intervention in heart failure. It involves use of a radiopharmaceutical, injected into a patient, and a gamma camera for acquisition. A MUGA scan (multigated acquisition) involves an acquisition triggered (gated) at different points of the cardiac cycle. MUGA scanning is also called equilibrium radionuclide angiocardiography, radionuclide ventriculography (RNVG), or gated blood pool imaging, as well as SYMA scanning (synchronized multigated acquisition scanning).
This mode of imaging uniquely provides a cine type of image of the beating heart, and allows the interpreter to determine the efficiency of the individual heart valves and chambers. MUGA/Cine scanning represents a robust adjunct to the now more common echocardiogram. Both methods, along with other inexpensive models, support the mathematics used to derive cardiac output ("Q") and the ejection fraction produced by the myocardium in systole. The advantage of a MUGA scan over an echocardiogram or an angiogram is its accuracy. An echocardiogram measures the shortening fraction of the ventricle and is limited by the user's ability. Furthermore, an angiogram is invasive and, often, more expensive. A MUGA scan provides a more accurate representation of cardiac ejection fraction.
History.
The MUGA scan was first introduced in the early 1970s and quickly became accepted as the preferred technique for measurement of left ventricular ejection fraction (LVEF) with a high degree of accuracy. Several early studies demonstrated an excellent correlation of MUGA-derived LVEF with values obtained by cardiac catheterization contrast ventriculography.
Purpose.
Radionuclide ventriculography is done to evaluate coronary artery disease (CAD), valvular heart disease, congenital heart diseases, cardiomyopathy, and other cardiac disorders. MUGA is typically ordered for the following patients:
Radionuclide ventriculography gives a much more precise measurement of left ventricular ejection fraction (LVEF) than a transthoracic echocardiogram (TTE). A transthoracic echocardiogram is highly operator-dependent, so radionuclide ventriculography is a more reproducible measurement of LVEF. Its primary use today is in monitoring cardiac function in patients receiving certain chemotherapeutic agents (anthracyclines: doxorubicin or daunorubicin) which are cardiotoxic. The chemotherapy dose is often determined by the patient's cardiac function. In this setting, a measurement of ejection fraction that is much more accurate than a transthoracic echocardiogram can provide is necessary.
Procedure.
The MUGA scan is performed by labeling the patient's red blood pool with a radioactive tracer, technetium-99m-pertechnetate (Tc-99m), and measuring radioactivity over the anterior chest as the radioactive blood flows through the large vessels and the heart chambers.
The introduction of the radioactive marker can either take place "in vivo" or "in vitro".
In the in vivo method, stannous (tin) ions are injected into the patient's bloodstream. A subsequent intravenous injection of the radioactive substance, technetium-99m-pertechnetate, labels the red blood cells "in vivo". With an administered activity of about 800 MBq, the effective radiation dose is about 6 mSv.
In the "in vitro" method, some of the patient's blood is drawn and the stannous ions (in the form of stannous chloride) are injected into the drawn blood. The technetium is subsequently added to the mixture as in the "in vivo" method.
In both cases, the stannous chloride reduces the technetium ion and prevents it from leaking out of the red blood cells during the procedure.
The "in vivo" technique is more convenient for the majority of patients since it is less time-consuming and less costly and more than 80 percent of the injected radionuclide usually binds to red blood cells with this approach. Red blood cell binding of the radioactive tracer is generally more efficient than "in vitro" labeling, and it is preferred in patients with indwelling intravenous catheters to decrease the adherence of Tc-99m to the catheter wall and increase the efficiency of blood pool labeling.
The patient is placed under a gamma camera, which detects the low-level 140 keV gamma radiation being given off by Technetium-99m (99mTc). As the gamma camera images are acquired, the patient's heart beat is used to 'gate' the acquisition. The final result is a series of images of the heart (usually sixteen), one at each stage of the cardiac cycle.
Depending on the objectives of the test, the doctor may decide to perform either a resting or a stress MUGA. During the resting MUGA, the patient lies stationary, whereas during a stress MUGA, the patient is asked to exercise during the scan. The stress MUGA measures the heart performance during exercise and is usually performed to assess the impact of a suspected coronary artery disease. In some cases, a nitroglycerin MUGA may be performed, where nitroglycerin (a vasodilator) is administered prior to the scan.
The resulting images show that the volumetrically derived blood pools in the chambers of the heart and timed images may be computationally interpreted to calculate the ejection fraction and injection fraction of the heart. The Massardo method can be used to calculate ventricle volumes. This nuclear medicine scan yields an accurate, inexpensive and easily reproducible means of measuring and monitoring the ejection and injection fractions of the ventricles, which are one of many of the important clinical metrics in assessing global heart performance.
Radiation exposure.
It exposes patients to less radiation than do comparable chest x-ray studies. However, the radioactive material is retained in the patient for several days after the test, during which sophisticated radiation alarms may be triggered, such as in airports. Radionuclide ventriculography has largely been replaced by echocardiography, which is less expensive, and does not require radiation exposure.
Results.
Normal results.
In normal subjects, the left ventricular ejection fraction (LVEF) should be about 50% (range, 50–80%). There should be no area of abnormal wall motion (hypokinesis, akinesis or dyskinesis). Abnormalities in cardiac function may be manifested as a decrease in LVEF and/or the presence of abnormalities in global and regional wall motion. For normal subjects, peak filling rates should be between 2.4 and 3.6 end-diastolic volumes (EDV) per second, and the time to peak filling rate should be 135–212 ms.
Abnormal results.
An uneven distribution of technetium in the heart indicates that the patient has coronary artery disease, a cardiomyopathy, or blood shunting within the heart. Abnormalities in a resting MUGA usually indicate a heart attack, while those that occur during exercise usually indicate ischemia. In a stress MUGA, patients with coronary artery disease may exhibit a decrease in ejection fraction.
For a patient that has had a heart attack, or is suspected of having another disease that affects the heart muscle, this scan can help pinpoint the position in the heart that has sustained damage as well as assess the degree of damage. MUGA scans are also used to evaluate heart function prior to and while receiving certain chemotherapies (e.g. doxorubicin (Adriamycin)) or immunotherapy (specifically, herceptin) that have a known effect on heart function.
Massardo method.
The Massardo method is one of a number of approaches for estimating the volume of the ventricles and thus ultimately the ejection fraction. Recall that a MUGA scan is a nuclear imaging method involving the injection of a radioactive isotope (Tc-99m) that acquires gated 2D images of the heart using a SPECT scanner. The pixel values in such an image represent the number of counts (nuclear decays) detected from within that region in a given time interval. The Massardo method enables a 3D volume to be estimated from such a 2D image of decay counts via:
formula_0 ,
where formula_1 is the pixel dimension and formula_2 is the ratio of total counts within the ventricle to the number of counts within the brightest (hottest) pixel. The Massardo method relies on two assumptions: (i) the ventricle is spherical and (ii) the radioactivity is homogeneously distributed.
The ejection fraction, formula_3, can then be calculated:
formula_4,
where the EDV (end-diastolic volume) is the volume of blood within the ventricle immediately before a contraction and the ESV (end-systolic volume) is the volume of blood remaining in the ventricle at the end of a contraction. The ejection fraction is hence the fraction of the end-diastolic volume that is ejected with each beat.
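A minimal sketch of how the Massardo volume estimate and the ejection fraction formula combine in practice, assuming made-up pixel sizes and count values rather than clinical data:

```python
def massardo_volume(pixel_size_cm, total_counts, hottest_pixel_counts):
    """Estimate ventricular volume (ml) from a gated blood-pool image
    using the Massardo relation V = 1.38 * M**3 * r**1.5."""
    r = total_counts / hottest_pixel_counts
    return 1.38 * pixel_size_cm ** 3 * r ** 1.5

def ejection_fraction(edv, esv):
    """Ejection fraction (%) from end-diastolic and end-systolic volumes."""
    return (edv - esv) / edv * 100.0

# Made-up example values for the end-diastolic and end-systolic frames.
edv = massardo_volume(pixel_size_cm=0.6, total_counts=46000, hottest_pixel_counts=700)
esv = massardo_volume(pixel_size_cm=0.6, total_counts=24000, hottest_pixel_counts=600)
print(f"EDV {edv:.0f} ml, ESV {esv:.0f} ml, EF {ejection_fraction(edv, esv):.0f}%")
```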
The Siemens Intevo SPECT scanners employ the Massardo method in their MUGA scans. Other methods for estimating ventricular volume exist, but the Massardo method is sufficiently accurate and simple to perform, avoiding the need for blood samples, attenuation corrections or decay corrections.
Derivation.
Define the ratio formula_5 as the ratio of counts within the chamber of the heart to the counts in the hottest pixel:
formula_6 .
Assuming that the activity is homogeneously distributed, the total count is proportional to the volume. The maximum pixel count is thus proportional to the length of the longest axis perpendicular to the collimator, formula_7, times the cross-sectional area of a pixel, formula_8. We can thus write:
formula_9 ,
where formula_10 is some constant of proportionality with units counts/cmformula_11. The total counts, formula_12, can be written formula_13 where formula_14 is the volume of the ventricle and formula_10 is the same constant of proportionality since we are assuming a homogeneous distribution of activity. The Massardo method now makes the simplification that the ventricle is spherical in shape, giving
formula_15 ,
where formula_16 is the diameter of the sphere and is thus equivalent to formula_7 above. This allows us to express the ratio formula_5 as
formula_17 ,
finally giving the diameter of the ventricle in terms of formula_5, i.e. counts, alone:
formula_18.
From this, the volume of the ventricle in terms of counts alone is simply
formula_19.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " V = 1.38 M^3 r^{\\frac{3}{2}} "
},
{
"math_id": 1,
"text": " M "
},
{
"math_id": 2,
"text": " r "
},
{
"math_id": 3,
"text": " E_f "
},
{
"math_id": 4,
"text": " E_f(\\%) = \\frac{ \\text{EDV - ESV} }{ \\text{EDV} } \\times 100 "
},
{
"math_id": 5,
"text": "r"
},
{
"math_id": 6,
"text": " r = \\frac{ \\text{Total counts within the chamber} }{ \\text{Total counts in the hottest pixel} } = \\frac{ N_t }{N_m} "
},
{
"math_id": 7,
"text": "D_m"
},
{
"math_id": 8,
"text": "M^2"
},
{
"math_id": 9,
"text": " N_m = K M^2 D_m "
},
{
"math_id": 10,
"text": "K"
},
{
"math_id": 11,
"text": "^3"
},
{
"math_id": 12,
"text": "N_t"
},
{
"math_id": 13,
"text": "N_t = K V_t "
},
{
"math_id": 14,
"text": "V_t"
},
{
"math_id": 15,
"text": " N_t = K \\left( \\frac{\\pi}{6} \\right) D^3 "
},
{
"math_id": 16,
"text": "D"
},
{
"math_id": 17,
"text": " r = \\frac{N_t}{N_m} = \\frac{\\pi D^2}{6 M^2} "
},
{
"math_id": 18,
"text": " D^2 = \\left( \\frac{6}{\\pi} \\right) M^2 r "
},
{
"math_id": 19,
"text": " V_t = \\sqrt{ \\frac{6}{\\pi} } M^3 r^{\\frac{3}{2}} \\approx 1.38 M^3 r^{\\frac{3}{2}} "
}
] | https://en.wikipedia.org/wiki?curid=1195146 |
11952246 | ECF grading system | The ECF grading system was the rating system formerly used by the English Chess Federation. A rating produced by the system was known as an ECF grading.
The English Chess Federation did not switch to the international standard Elo rating system until 2020.
History.
The grading system was first published in 1958, devised by Richard W. B. Clarke, father of politician Charles Clarke. Grades were updated on a six-monthly cycle between 2012 and 2020, based on results up to the end of June and December; before 2012 grades were published annually. In July 2020 the English Chess Federation moved to publishing ratings monthly using a modified Elo system.
Calculation of rating.
Every competitive game played under the ECF system gives each player a "performance grade" (a score in points), which is later averaged over the cycle or year to produce the "personal grade" that the player carries into all matches in the following cycle or year:<br>
formula_0
Thus if Player A, graded 160, beats Player B, graded 140, then Player A records a performance grade of 190 for later annual or cyclical averaging, and Player B records 110. Drawing a whole series of 30 games against a single opponent results in a swap of personal grades. ECF grades appear to be zero-sum when a game is viewed in isolation; however, negative scores are deemed zero. A player's grade is only recalculated at the end of each six-month cycle.
The more games played, the more the end-of-cycle re-grading is affected, directly or indirectly, by this tiny inflationary effect at the bottom of the league, so ECF grades are not strictly zero-sum. Countering this, a master who retires or dies without having appreciably shed points takes their accumulated points out of circulation rather than leaving them available to competitors. The Federation has to recalibrate grades to allow for these effects, and looks at the very approximate conversion formulae to other systems in so doing.
The "unless" part describes a tempering of the grade for future totting up as to disparate matches: players of grades facing each other of more than 40 points apart are deemed be exactly 40 points different. Had Player B's grade been 100, Player A would have scored: 170, and Player B: 90. This prevents players increasing their grade by losing to much higher-graded players and also means that the stronger player notes 10 points higher for their annual points tally (grade) when winning.
At the end of a cycle, each player's performance grades for that cycle are averaged to give the personal grade used for the following period. If fewer than 30 games have been played, games from last cycle(s) are usually included in the average to make the number up to 30.
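A minimal sketch of the calculation described above, assuming the 40-point cap and simple end-of-cycle averaging as stated (the players and results are hypothetical):

```python
def performance_grade(own_grade, opponent_grade, result):
    """Performance grade for one game. result is 1 (win), 0.5 (draw) or 0 (loss).
    Opponents more than 40 points apart are deemed exactly 40 points apart."""
    diff = opponent_grade - own_grade
    capped_opponent = own_grade + max(-40, min(40, diff))
    return capped_opponent + {1: 50, 0.5: 0, 0: -50}[result]

def personal_grade(performances):
    """End-of-cycle grade: the average of the performance grades
    (in practice padded to at least 30 games with earlier results)."""
    return sum(performances) / len(performances)

# Player A (graded 160) beats a 140, draws a 150 and loses to a 210 (capped at 200):
games = [performance_grade(160, 140, 1),    # 190
         performance_grade(160, 150, 0.5),  # 150
         performance_grade(160, 210, 0)]    # 150 (200 - 50)
print(personal_grade(games))                # about 163
```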
A weakness of many other systems is the treatment of juniors. Juniors tend to improve and therefore their rating/grading lags their current strength. The ECF grading system addresses this by changing the mathematical frame of reference for those aged under 18. The system above uses each player's grade from the previous "cycle" to calculate the personal grade. For juniors this uses the previous "year" instead of the cycle (including the recalculation of the grades of junior opponents). It is this recalculation that becomes the performance grade for the final calculation for all players.
In theory a non-chess player would have a personal grade of 0; in practice negative grades exist but are set to 0 on the grading list. The weakest adult club players come in at about 40. A three-figure grade is a source of prestige among casual players, while those who seriously study the game may try to achieve a personal grade of 150. A player graded over 200 is usually well-known to rival circuits and might consider aiming for a master title. Grades far above 200 lose much of their significance as very strong players tend to play mostly in internationally rated tournaments.
The 150 Attack, a no-nonsense response to the Pirc Defence popularised by British players, derives from this system. Per Sam Collins in "Understanding the Chess Openings" this is because "even a 150-rated player could handle the White side".
Due to its inherent simplicity, a benefit it has over the Elo rating system used by FIDE is that scores can be worked out after each result without software or a calculator, with personal grades retained over a cycle of typically at least 30 games.
Before 2005 all personal grades were issued by the federation's forerunner, the British Chess Federation, and were known as BCF grades.
Conversion to and from Elo ratings.
Although the ECF grading system is mechanically very different from the Elo rating system, the ECF publishes formulae that can be used to estimate the equivalent ECF grade of a FIDE Elo rating, and vice versa:
formula_1
formula_2
In the formulae above, E is the ECF grade and F is the FIDE (Elo) rating.
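The published conversion formulae are straightforward to apply; a small sketch:

```python
def ecf_to_elo(e):
    """Estimate a FIDE (Elo) rating from an ECF grade: F = 7.5E + 700."""
    return 7.5 * e + 700

def elo_to_ecf(f):
    """Estimate an ECF grade from a FIDE (Elo) rating: E = (F - 700) / 7.5."""
    return (f - 700) / 7.5

print(ecf_to_elo(160))   # 1900.0
print(elo_to_ecf(2200))  # 200.0
```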
The ECF grading system was recalibrated in 2009. Various other conversion formulae have been used, but usually relate to the forerunner.
This conversion is used to grade games played outside Federation events where opponents lack an ECF grade. It is designed to give a best-estimate conversion and is often used by organisers of English congresses to determine qualification for grade-restricted events when a player has an Elo rating only.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{Performance Grade} = \\text{Opponent's grade for the cycle or year}\n\\begin{cases}\n+50 & \\text{if game won} \\\\\n\\pm0 & \\text{if game drawn} \\\\\n-50 & \\text{if game lost}\n\\end{cases}\n\\qquad,\\text{unless difference in grades is}\n\\ge40\n"
},
{
"math_id": 1,
"text": "E = \\frac{F - 700}{7.5}"
},
{
"math_id": 2,
"text": "F = 7.5E + 700"
}
] | https://en.wikipedia.org/wiki?curid=11952246 |
1195234 | Multiplicative group of integers modulo n | Group of units of the ring of integers modulo n
In modular arithmetic, the integers coprime (relatively prime) to "n" from the set formula_0 of "n" non-negative integers form a group under multiplication modulo "n", called the multiplicative group of integers modulo "n". Equivalently, the elements of this group can be thought of as the congruence classes, also known as "residues" modulo "n", that are coprime to "n".
Hence another name is the group of primitive residue classes modulo "n".
In the theory of rings, a branch of abstract algebra, it is described as the group of units of the ring of integers modulo "n". Here "units" refers to elements with a multiplicative inverse, which, in this ring, are exactly those coprime to "n".
This group, usually denoted formula_1, is fundamental in number theory. It is used in cryptography, integer factorization, and primality testing. It is an abelian, finite group whose order is given by Euler's totient function: formula_2 For prime "n" the group is cyclic, and in general the structure is easy to describe, but no simple general formula for finding generators is known.
Group axioms.
It is a straightforward exercise to show that, under multiplication, the set of congruence classes modulo "n" that are coprime to "n" satisfy the axioms for an abelian group.
Indeed, "a" is coprime to "n" if and only if gcd("a", "n") = 1. Integers in the same congruence class "a" ≡ "b" (mod "n") satisfy gcd("a", "n") = gcd("b", "n"); hence one is coprime to "n" if and only if the other is. Thus the notion of congruence classes modulo "n" that are coprime to "n" is well-defined.
Since gcd("a", "n") = 1 and gcd("b", "n") = 1 implies gcd("ab", "n") = 1, the set of classes coprime to "n" is closed under multiplication.
Integer multiplication respects the congruence classes, that is, "a" ≡ "a' " and "b" ≡ "b' " (mod "n") implies "ab" ≡ "a'b' " (mod "n").
This implies that the multiplication is associative, commutative, and that the class of 1 is the unique multiplicative identity.
Finally, given "a", the multiplicative inverse of "a" modulo "n" is an integer "x" satisfying "ax" ≡ 1 (mod "n").
It exists precisely when "a" is coprime to "n", because in that case gcd("a", "n") = 1 and by Bézout's lemma there are integers "x" and "y" satisfying "ax" + "ny" = 1. Notice that the equation "ax" + "ny" = 1 implies that "x" is coprime to "n", so the multiplicative inverse belongs to the group.
Notation.
The set of (congruence classes of) integers modulo "n" with the operations of addition and multiplication is a ring.
It is denoted formula_3 or formula_4 (the notation refers to taking the quotient of integers modulo the ideal formula_5 or formula_6 consisting of the multiples of "n").
Outside of number theory the simpler notation formula_7 is often used, though it can be confused with the p-adic integers when "n" is a prime number.
The multiplicative group of integers modulo "n", which is the group of units in this ring, may be written as (depending on the author) formula_8 formula_9 formula_10 formula_11 (for German "Einheit", which translates as "unit"), formula_12, or similar notations. This article uses formula_13
The notation formula_14 refers to the cyclic group of order "n".
It is isomorphic to the group of integers modulo "n" under addition.
Note that formula_3 or formula_7 may also refer to the group under addition.
For example, the multiplicative group formula_15 for a prime "p" is cyclic and hence isomorphic to the additive group formula_16, but the isomorphism is not obvious.
Structure.
The order of the multiplicative group of integers modulo "n" is the number of integers in formula_0 coprime to "n". It is given by Euler's totient function: formula_17 (sequence in the OEIS).
For prime "p", formula_18.
Cyclic case.
The group formula_1 is cyclic if and only if "n" is 1, 2, 4, "p"^"k" or 2"p"^"k", where "p" is an odd prime and "k" > 0. For all other values of "n" the group is not cyclic.
This was first proved by Gauss.
This means that for these "n":
formula_19 where formula_20
By definition, the group is cyclic if and only if it has a generator "g" (a generating set {"g"} of size one), that is, the powers formula_21 give all possible residues modulo "n" coprime to "n" (the first formula_22 powers formula_23 give each exactly once).
A generator of formula_1 is called a primitive root modulo "n".
If there is any generator, then there are formula_24 of them.
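A brute-force sketch (practical only for small "n") that finds the generators by checking element orders:

```python
from math import gcd

def element_order(a, n):
    """Multiplicative order of a modulo n (a must be coprime to n)."""
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

def primitive_roots(n):
    """All generators of the group of units modulo n,
    i.e. elements whose order equals the group order."""
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    phi = len(units)
    return [a for a in units if element_order(a, n) == phi]

print(primitive_roots(14))   # [3, 5] -> the group modulo 14 is cyclic
print(primitive_roots(16))   # []     -> the group modulo 16 is not cyclic
```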
Powers of 2.
Modulo 1 any two integers are congruent, i.e., there is only one congruence class, [0], coprime to 1. Therefore, formula_25 is the trivial group with φ(1) = 1 element. Because of its trivial nature, the case of congruences modulo 1 is generally ignored and some authors choose not to include the case of "n" = 1 in theorem statements.
Modulo 2 there is only one coprime congruence class, [1], so formula_26 is the trivial group.
Modulo 4 there are two coprime congruence classes, [1] and [3], so formula_27 the cyclic group with two elements.
Modulo 8 there are four coprime congruence classes, [1], [3], [5] and [7]. The square of each of these is 1, so formula_28 the Klein four-group.
Modulo 16 there are eight coprime congruence classes [1], [3], [5], [7], [9], [11], [13] and [15]. formula_29 is the 2-torsion subgroup (i.e., the square of each element is 1), so formula_30 is not cyclic. The powers of 3, formula_31 are a subgroup of order 4, as are the powers of 5, formula_32 Thus formula_33
The pattern shown by 8 and 16 holds for higher powers 2^"k", "k" > 2: formula_34 is the 2-torsion subgroup, so formula_35 cannot be cyclic, and the powers of 3 are a cyclic subgroup of order 2^("k" − 2), so: formula_36
General composite numbers.
By the fundamental theorem of finite abelian groups, the group formula_1 is isomorphic to a direct product of cyclic groups of prime power orders.
More specifically, the Chinese remainder theorem says that if formula_37 then the ring formula_3 is the direct product of the rings corresponding to each of its prime power factors:
formula_38
Similarly, the group of units formula_1 is the direct product of the groups corresponding to each of the prime power factors:
formula_39
For each odd prime power formula_40 the corresponding factor formula_41 is the cyclic group of order formula_42, which may further factor into cyclic groups of prime-power orders.
For powers of 2 the factor formula_43 is not cyclic unless "k" = 0, 1, 2, but factors into cyclic groups as described above.
The order of the group formula_22 is the product of the orders of the cyclic groups in the direct product.
The exponent of the group, that is, the least common multiple of the orders in the cyclic groups, is given by the Carmichael function formula_44 (sequence in the OEIS).
In other words, formula_44 is the smallest number such that for each "a" coprime to "n", formula_45 holds.
It divides formula_22 and is equal to it if and only if the group is cyclic.
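A brute-force sketch (for small "n" only) that computes the exponent directly from its definition and compares it with the group order:

```python
from math import gcd
from itertools import count

def carmichael(n):
    """Carmichael function: the smallest m with a**m = 1 (mod n)
    for every a coprime to n, found by brute force."""
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    for m in count(1):
        if all(pow(a, m, n) == 1 for a in units):
            return m

def euler_phi(n):
    """Order of the group of units modulo n."""
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

for n in (15, 16, 20, 21):
    print(n, euler_phi(n), carmichael(n))
# The exponent divides the order; they are equal exactly when the group is cyclic.
```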
Subgroup of false witnesses.
If "n" is composite, there exists a proper subgroup of formula_46, called the "group of false witnesses", comprising the solutions of the equation formula_47, the elements which, raised to the power "n" − 1, are congruent to 1 modulo "n". Fermat's Little Theorem states that for "n = p" a prime, this group consists of all formula_48; thus for "n" composite, such residues "x" are "false positives" or "false witnesses" for the primality of "n". The number "x =" 2 is most often used in this basic primality check, and "n =" 341 = 11 × 31 is notable since formula_49, and "n =" 341 is the smallest composite number for which "x =" 2 is a false witness to primality. In fact, the false witnesses subgroup for 341 contains 100 elements, and is of index 3 inside the 300-element group formula_50.
Examples.
"n" = 9.
The smallest example with a nontrivial subgroup of false witnesses is 9 = 3 × 3. There are 6 residues coprime to 9: 1, 2, 4, 5, 7, 8. Since 8 is congruent to −1 modulo 9, it follows that 8^8 is congruent to 1 modulo 9. So 1 and 8 are false positives for the "primality" of 9 (since 9 is not actually prime). These are in fact the only ones, so the subgroup {1,8} is the subgroup of false witnesses. The same argument shows that "n" − 1 is a "false witness" for any odd composite "n".
"n" = 91.
For "n" = 91 (= 7 × 13), there are formula_51 residues coprime to 91, half of them (i.e., 36 of them) are false witnesses of 91, namely 1, 3, 4, 9, 10, 12, 16, 17, 22, 23, 25, 27, 29, 30, 36, 38, 40, 43, 48, 51, 53, 55, 61, 62, 64, 66, 68, 69, 74, 75, 79, 81, 82, 87, 88, and 90, since for these values of "x", "x"90 is congruent to 1 mod 91.
"n" = 561.
"n" = 561 (= 3 × 11 × 17) is a Carmichael number, thus "s"560 is congruent to 1 modulo 561 for any integer "s" coprime to 561. The subgroup of false witnesses is, in this case, not proper; it is the entire group of multiplicative units modulo 561, which consists of 320 residues.
Examples.
This table shows the cyclic decomposition of formula_1 and a generating set for "n" ≤ 128. The decomposition and generating sets are not unique; for example,
formula_52
(but formula_53). The table below lists the shortest decomposition (among those, the lexicographically first is chosen – this guarantees isomorphic groups are listed with the same decompositions). The generating set is also chosen to be as short as possible, and for "n" with primitive root, the smallest primitive root modulo "n" is listed.
For example, take formula_54. Then formula_55 means that the order of the group is 8 (i.e., there are 8 numbers less than 20 and coprime to it); formula_56 means the order of each element divides 4, that is, the fourth power of any number coprime to 20 is congruent to 1 (mod 20). The set {3,19} generates the group, which means that every element of formula_54 is of the form 3^"a" × 19^"b" (where "a" is 0, 1, 2, or 3, because the element 3 has order 4, and similarly "b" is 0 or 1, because the element 19 has order 2).
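The claims about the group modulo 20 can be verified by direct enumeration; a small sketch:

```python
from itertools import product
from math import gcd

n = 20
units = {a for a in range(1, n) if gcd(a, n) == 1}
generated = {(pow(3, a, n) * pow(19, b, n)) % n for a, b in product(range(4), range(2))}
print(sorted(units))       # [1, 3, 7, 9, 11, 13, 17, 19]
print(generated == units)  # True: every unit is 3**a * 19**b (mod 20)
```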
Smallest primitive root mod "n" are (0 if no root exists)
0, 1, 2, 3, 2, 5, 3, 0, 2, 3, 2, 0, 2, 3, 0, 0, 3, 5, 2, 0, 0, 7, 5, 0, 2, 7, 2, 0, 2, 0, 3, 0, 0, 3, 0, 0, 2, 3, 0, 0, 6, 0, 3, 0, 0, 5, 5, 0, 3, 3, 0, 0, 2, 5, 0, 0, 0, 3, 2, 0, 2, 3, 0, 0, 0, 0, 2, 0, 0, 0, 7, 0, 5, 5, 0, 0, 0, 0, 3, 0, 2, 7, 2, 0, 0, 3, 0, 0, 3, 0, ... (sequence in the OEIS)
Numbers of the elements in a minimal generating set of mod "n" are
1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 2, 1, 1, 2, 2, 1, 1, 1, 2, 2, 1, 1, 3, 1, 1, 1, 2, 1, 2, 1, 2, 2, 1, 2, 2, 1, 1, 2, 3, 1, 2, 1, 2, 2, 1, 1, 3, 1, 1, 2, 2, 1, 1, 2, 3, 2, 1, 1, 3, 1, 1, 2, 2, 2, 2, 1, 2, 2, 2, 1, 3, 1, 1, 2, 2, 2, 2, 1, 3, 1, 1, 1, 3, 2, 1, 2, 3, 1, 2, ... (sequence in the OEIS)
Notes.
<templatestyles src="Reflist/styles.css" />
References.
The "Disquisitiones Arithmeticae" has been translated from Gauss's Ciceronian Latin into English and German. The German edition includes all of his papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes. | [
{
"math_id": 0,
"text": "\\{0,1,\\dots,n-1\\}"
},
{
"math_id": 1,
"text": "(\\mathbb{Z}/n\\mathbb{Z})^\\times"
},
{
"math_id": 2,
"text": "|(\\mathbb{Z}/n\\mathbb{Z})^\\times|=\\varphi(n)."
},
{
"math_id": 3,
"text": "\\mathbb{Z}/n\\mathbb{Z}"
},
{
"math_id": 4,
"text": "\\mathbb{Z}/(n)"
},
{
"math_id": 5,
"text": "n\\mathbb{Z}"
},
{
"math_id": 6,
"text": "(n)"
},
{
"math_id": 7,
"text": "\\mathbb{Z}_n"
},
{
"math_id": 8,
"text": "(\\mathbb{Z}/n\\mathbb{Z})^\\times,"
},
{
"math_id": 9,
"text": "(\\mathbb{Z}/n\\mathbb{Z})^*,"
},
{
"math_id": 10,
"text": "\\mathrm{U}(\\mathbb{Z}/n\\mathbb{Z}),"
},
{
"math_id": 11,
"text": "\\mathrm{E}(\\mathbb{Z}/n\\mathbb{Z})"
},
{
"math_id": 12,
"text": "\\mathbb{Z}_n^*"
},
{
"math_id": 13,
"text": "(\\mathbb{Z}/n\\mathbb{Z})^\\times."
},
{
"math_id": 14,
"text": "\\mathrm{C}_n"
},
{
"math_id": 15,
"text": "(\\mathbb{Z}/p\\mathbb{Z})^\\times"
},
{
"math_id": 16,
"text": "\\mathbb{Z}/(p-1)\\mathbb{Z}"
},
{
"math_id": 17,
"text": "| (\\mathbb{Z}/n\\mathbb{Z})^\\times|=\\varphi(n)"
},
{
"math_id": 18,
"text": "\\varphi(p)=p-1"
},
{
"math_id": 19,
"text": " (\\mathbb{Z}/n\\mathbb{Z})^\\times \\cong \\mathrm{C}_{\\varphi(n)},"
},
{
"math_id": 20,
"text": "\\varphi(p^k)=\\varphi(2 p^k)=p^k - p^{k-1}."
},
{
"math_id": 21,
"text": "g^0,g^1,g^2,\\dots,"
},
{
"math_id": 22,
"text": "\\varphi(n)"
},
{
"math_id": 23,
"text": "g^0,\\dots,g^{\\varphi(n)-1}"
},
{
"math_id": 24,
"text": "\\varphi(\\varphi(n))"
},
{
"math_id": 25,
"text": "(\\mathbb{Z}/1\\,\\mathbb{Z})^\\times \\cong \\mathrm{C}_1"
},
{
"math_id": 26,
"text": "(\\mathbb{Z}/2\\mathbb{Z})^\\times \\cong \\mathrm{C}_1"
},
{
"math_id": 27,
"text": "(\\mathbb{Z}/4\\mathbb{Z})^\\times \\cong \\mathrm{C}_2,"
},
{
"math_id": 28,
"text": "(\\mathbb{Z}/8\\mathbb{Z})^\\times \\cong \\mathrm{C}_2 \\times \\mathrm{C}_2,"
},
{
"math_id": 29,
"text": "\\{\\pm 1, \\pm 7\\}\\cong \\mathrm{C}_2 \\times \\mathrm{C}_2,"
},
{
"math_id": 30,
"text": "(\\mathbb{Z}/16\\mathbb{Z})^\\times"
},
{
"math_id": 31,
"text": "\\{1, 3, 9, 11\\}"
},
{
"math_id": 32,
"text": "\\{1, 5, 9, 13\\}."
},
{
"math_id": 33,
"text": "(\\mathbb{Z}/16\\mathbb{Z})^\\times \\cong \\mathrm{C}_2 \\times \\mathrm{C}_4."
},
{
"math_id": 34,
"text": "\\{\\pm 1, 2^{k-1} \\pm 1\\}\\cong \\mathrm{C}_2 \\times \\mathrm{C}_2,"
},
{
"math_id": 35,
"text": "(\\mathbb{Z}/2^k\\mathbb{Z})^\\times "
},
{
"math_id": 36,
"text": "(\\mathbb{Z}/2^k\\mathbb{Z})^\\times \\cong \\mathrm{C}_2 \\times \\mathrm{C}_{2^{k-2}}."
},
{
"math_id": 37,
"text": "\\;\\;n=p_1^{k_1}p_2^{k_2}p_3^{k_3}\\dots, \\;"
},
{
"math_id": 38,
"text": "\\mathbb{Z}/n\\mathbb{Z} \\cong \\mathbb{Z}/{p_1^{k_1}}\\mathbb{Z}\\; \\times \\;\\mathbb{Z}/{p_2^{k_2}}\\mathbb{Z} \\;\\times\\; \\mathbb{Z}/{p_3^{k_3}}\\mathbb{Z}\\dots\\;\\;"
},
{
"math_id": 39,
"text": "(\\mathbb{Z}/n\\mathbb{Z})^\\times\\cong (\\mathbb{Z}/{p_1^{k_1}}\\mathbb{Z})^\\times \\times (\\mathbb{Z}/{p_2^{k_2}}\\mathbb{Z})^\\times \\times (\\mathbb{Z}/{p_3^{k_3}}\\mathbb{Z})^\\times \\dots\\;."
},
{
"math_id": 40,
"text": "p^{k}"
},
{
"math_id": 41,
"text": "(\\mathbb{Z}/{p^{k}}\\mathbb{Z})^\\times"
},
{
"math_id": 42,
"text": "\\varphi(p^k)=p^k - p^{k-1}"
},
{
"math_id": 43,
"text": "(\\mathbb{Z}/{2^{k}}\\mathbb{Z})^\\times"
},
{
"math_id": 44,
"text": "\\lambda(n)"
},
{
"math_id": 45,
"text": "a^{\\lambda(n)} \\equiv 1 \\pmod n"
},
{
"math_id": 46,
"text": "\\mathbb{Z}_n^\\times"
},
{
"math_id": 47,
"text": "x^{n-1}=1"
},
{
"math_id": 48,
"text": "x\\in \\mathbb{Z}_p^\\times"
},
{
"math_id": 49,
"text": "2^{341-1}\\equiv 1 \\mod 341"
},
{
"math_id": 50,
"text": "\\mathbb{Z}_{341}^\\times"
},
{
"math_id": 51,
"text": "\\varphi(91)=72"
},
{
"math_id": 52,
"text": " \\displaystyle \\begin{align}(\\mathbb{Z}/35\\mathbb{Z})^\\times & \\cong (\\mathbb{Z}/5\\mathbb{Z})^\\times \\times (\\mathbb{Z}/7\\mathbb{Z})^\\times \\cong \\mathrm{C}_4 \\times \\mathrm{C}_6 \\cong \\mathrm{C}_4 \\times \\mathrm{C}_2 \\times \\mathrm{C}_3 \\cong \\mathrm{C}_2 \\times \\mathrm{C}_{12} \\cong (\\mathbb{Z}/4\\mathbb{Z})^\\times \\times (\\mathbb{Z}/13\\mathbb{Z})^\\times \\\\ & \\cong (\\mathbb{Z}/52\\mathbb{Z})^\\times \\end{align} "
},
{
"math_id": 53,
"text": "\\not\\cong \\mathrm{C}_{24} \\cong \\mathrm{C}_8 \\times \\mathrm{C}_3"
},
{
"math_id": 54,
"text": "(\\mathbb{Z}/20\\mathbb{Z})^\\times"
},
{
"math_id": 55,
"text": "\\varphi(20)=8"
},
{
"math_id": 56,
"text": "\\lambda(20)=4"
}
] | https://en.wikipedia.org/wiki?curid=1195234 |
11953 | History of geometry | Historical development of geometry
Geometry (from the ; "geo-" "earth", "-metron" "measurement") arose as the field of knowledge dealing with spatial relationships. Geometry was one of the two fields of pre-modern mathematics, the other being the study of numbers (arithmetic).
Classic geometry was focused in compass and straightedge constructions. Geometry was revolutionized by Euclid, who introduced mathematical rigor and the axiomatic method still in use today. His book, "The Elements" is widely considered the most influential textbook of all time, and was known to all educated people in the West until the middle of the 20th century.
In modern times, geometric concepts have been generalized to a high level of abstraction and complexity, and have been subjected to the methods of calculus and abstract algebra, so that many modern branches of the field are barely recognizable as the descendants of early geometry. (See Areas of mathematics and Algebraic geometry.)
Early geometry.
The earliest recorded beginnings of geometry can be traced to early peoples, such as the ancient Indus Valley (see Harappan mathematics) and ancient Babylonia (see Babylonian mathematics) from around 3000 BC. Early geometry was a collection of empirically discovered principles concerning lengths, angles, areas, and volumes, which were developed to meet some practical need in surveying, construction, astronomy, and various crafts. Among these were some surprisingly sophisticated principles, and a modern mathematician might be hard put to derive some of them without the use of calculus and algebra. For example, both the Egyptians and the Babylonians were aware of versions of the Pythagorean theorem about 1500 years before Pythagoras and the Indian Sulba Sutras around 800 BC contained the first statements of the theorem; the Egyptians had a correct formula for the volume of a frustum of a square pyramid.
Egyptian geometry.
The ancient Egyptians knew that they could approximate the area of a circle as follows:
Area of Circle ≈ [ (Diameter) × 8/9 ]^2.
Problem 50 of the Ahmes papyrus uses these methods to calculate the area of a circle, according to a rule that the area is equal to the square of 8/9 of the circle's diameter. This assumes that π is 4×(8/9)^2 (or 3.160493...), with an error of about 0.6 percent. This value was slightly less accurate than the calculations of the Babylonians (25/8 = 3.125, within 0.53 percent), but was not otherwise surpassed until Archimedes' approximation of 211875/67441 = 3.14163, which had an error of just over 1 in 10,000.
Ahmes knew of the modern 22/7 as an approximation for π, and used it to split a hekat, hekat × 22/7 × 7/22 = hekat; however, Ahmes continued to use the traditional 256/81 value for π for computing his hekat volume found in a cylinder.
Problem 48 involved using a square with side 9 units. This square was cut into a 3x3 grid. The diagonals of the corner squares were used to make an irregular octagon with an area of 63 units. This gave a second value for π of 3.111...
The two problems together indicate a range of values for π between 3.11 and 3.16.
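A quick numerical check of the two values implied by Problems 48 and 50, written in modern notation:

```python
import math

d = 9.0                          # diameter used in Problem 48
print((8 / 9 * d) ** 2)          # Problem 50 rule: area 64 for a circle of diameter 9
print(math.pi * (d / 2) ** 2)    # true area, about 63.62
print(4 * (8 / 9) ** 2)          # implied value of pi, 256/81 = 3.1604...
print(4 * 63 / d ** 2)           # Problem 48's octagon of area 63 implies pi = 3.111...
```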
Problem 14 in the Moscow Mathematical Papyrus gives the only ancient example finding the volume of a frustum of a pyramid, describing the correct formula:
formula_0
where "a" and "b" are the base and top side lengths of the truncated pyramid and "h" is the height.
Babylonian geometry.
The Babylonians may have known the general rules for measuring areas and volumes. They measured the circumference of a circle as three times the diameter and the area as one-twelfth the square of the circumference, which would be correct if "π" is estimated as 3. The volume of a cylinder was taken as the product of the base and the height, however, the volume of the frustum of a cone or a square pyramid was incorrectly taken as the product of the height and half the sum of the bases. The Pythagorean theorem was also known to the Babylonians. Also, there was a recent discovery in which a tablet used "π" as 3 and 1/8. The Babylonians are also known for the Babylonian mile, which was a measure of distance equal to about seven miles today. This measurement for distances eventually was converted to a time-mile used for measuring the travel of the Sun, therefore, representing time. There have been recent discoveries showing that ancient Babylonians may have discovered astronomical geometry nearly 1400 years before Europeans did.
Vedic India geometry.
The Indian Vedic period had a tradition of geometry, mostly expressed in the construction of elaborate altars.
Early Indian texts (1st millennium BC) on this topic include the "Satapatha Brahmana" and the "Śulba Sūtras".
The "Śulba Sūtras" has been described as "the earliest extant verbal expression of the Pythagorean Theorem in the world, although it had already been known to the Old Babylonians." They make use of Pythagorean triples, which are particular cases of Diophantine equations.
According to mathematician S. G. Dani, the Babylonian cuneiform tablet Plimpton 322 written c. 1850 BC "contains fifteen Pythagorean triples with quite large entries, including (13500, 12709, 18541) which is a primitive triple, indicating, in particular, that there was sophisticated understanding on the topic" in Mesopotamia in 1850 BC. "Since these tablets predate the Sulbasutras period by several centuries, taking into account the contextual appearance of some of the triples, it is reasonable to expect that similar understanding would have been there in India." Dani goes on to say:
As the main objective of the "Sulvasutras" was to describe the constructions of altars and the geometric principles involved in them, the subject of Pythagorean triples, even if it had been well understood may still not have featured in the "Sulvasutras". The occurrence of the triples in the "Sulvasutras" is comparable to mathematics that one may encounter in an introductory book on architecture or another similar applied area, and would not correspond directly to the overall knowledge on the topic at that time. Since, unfortunately, no other contemporaneous sources have been found it may never be possible to settle this issue satisfactorily.
Greek geometry.
Thales and Pythagoras.
Thales (635–543 BC) of Miletus (now in southwestern Turkey), was the first to whom deduction in mathematics is attributed. There are five geometric propositions for which he wrote deductive proofs, though his proofs have not survived. Pythagoras (582–496 BC) of Ionia, and later, Italy, then colonized by Greeks, may have been a student of Thales, and traveled to Babylon and Egypt. The theorem that bears his name may not have been his discovery, but he was probably one of the first to give a deductive proof of it. He gathered a group of students around him to study mathematics, music, and philosophy, and together they discovered most of what high school students learn today in their geometry courses. In addition, they made the profound discovery of incommensurable lengths and irrational numbers.
Plato.
Plato (427–347 BC) was a philosopher, highly esteemed by the Greeks. There is a story that he had inscribed above the entrance to his famous school, "Let none ignorant of geometry enter here." However, the story is considered to be untrue. Though he was not a mathematician himself, his views on mathematics had great influence. Mathematicians thus accepted his belief that geometry should use no tools but compass and straightedge – never measuring instruments such as a marked ruler or a protractor, because these were a workman's tools, not worthy of a scholar. This dictum led to a deep study of possible compass and straightedge constructions, and three classic construction problems: how to use these tools to trisect an angle, to construct a cube twice the volume of a given cube, and to construct a square equal in area to a given circle. The proofs of the impossibility of these constructions, finally achieved in the 19th century, led to important principles regarding the deep structure of the real number system. Aristotle (384–322 BC), Plato's greatest pupil, wrote a treatise on methods of reasoning used in deductive proofs (see Logic) which was not substantially improved upon until the 19th century.
Hellenistic geometry.
Euclid.
Euclid (c. 325–265 BC), of Alexandria, probably a student at the Academy founded by Plato, wrote a treatise in 13 books (chapters), titled "The Elements of Geometry", in which he presented geometry in an ideal axiomatic form, which came to be known as Euclidean geometry. The treatise is not a compendium of all that the Hellenistic mathematicians knew at the time about geometry; Euclid himself wrote eight more advanced books on geometry. We know from other references that Euclid's was not the first elementary geometry textbook, but it was so much superior that the others fell into disuse and were lost. He was brought to the university at Alexandria by Ptolemy I, King of Egypt.
"The Elements" began with definitions of terms, fundamental geometric principles (called "axioms" or "postulates"), and general quantitative principles (called "common notions") from which all the rest of geometry could be logically deduced. Following are his five axioms, somewhat paraphrased to make the English easier to read.
Concepts, that are now understood as algebra, were expressed geometrically by Euclid, a method referred to as Greek geometric algebra.
Archimedes.
Archimedes (287–212 BC), of Syracuse, Sicily, when it was a Greek city-state, was one of the most famous mathematicians of the Hellenistic period. He is known for his formulation of a hydrostatic principle (known as Archimedes' principle) and for his works on geometry, including Measurement of the Circle and On Conoids and Spheroids. His work On Floating Bodies is the first known work on hydrostatics, of which Archimedes is recognized as the founder. Renaissance translations of his works, including the ancient commentaries, were enormously influential in the work of some of the best mathematicians of the 17th century, notably René Descartes and Pierre de Fermat.
After Archimedes.
After Archimedes, Hellenistic mathematics began to decline. There were a few minor stars yet to come, but the golden age of geometry was over. Proclus (410–485), author of "Commentary on the First Book of Euclid", was one of the last important players in Hellenistic geometry. He was a competent geometer, but more importantly, he was a superb commentator on the works that preceded him. Much of that work did not survive to modern times, and is known to us only through his commentary. The Roman Republic and Empire that succeeded and absorbed the Greek city-states produced excellent engineers, but no mathematicians of note.
The great Library of Alexandria was later burned. There is a growing consensus among historians that the Library of Alexandria likely suffered from several destructive events, but that the destruction of Alexandria's pagan temples in the late 4th century was probably the most severe and final one. The evidence for that destruction is the most definitive and secure. Caesar's invasion may well have led to the loss of some 40,000–70,000 scrolls in a warehouse adjacent to the port (as Luciano Canfora argues, they were likely copies produced by the Library intended for export), but it is unlikely to have affected the Library or Museum, given that there is ample evidence that both existed later.
Civil wars, decreasing investments in maintenance and acquisition of new scrolls and generally declining interest in non-religious pursuits likely contributed to a reduction in the body of material available in the Library, especially in the 4th century. The Serapeum was certainly destroyed by Theophilus in 391, and the Museum and Library may have fallen victim to the same campaign.
Classical Indian geometry.
In the Bakhshali manuscript, there is a handful of geometric problems (including problems about volumes of irregular solids). The Bakhshali manuscript also "employs a decimal place value system with a dot for zero." Aryabhata's "Aryabhatiya" (499) includes the computation of areas and volumes.
Brahmagupta wrote his astronomical work "Brāhma Sphuṭa Siddhānta" in 628. Chapter 12, containing 66 Sanskrit verses, was divided into two sections: "basic operations" (including cube roots, fractions, ratio and proportion, and barter) and "practical mathematics" (including mixture, mathematical series, plane figures, stacking bricks, sawing of timber, and piling of grain). In the latter section, he stated his famous theorem on the diagonals of a cyclic quadrilateral:
Brahmagupta's theorem: If a cyclic quadrilateral has diagonals that are perpendicular to each other, then the perpendicular line drawn from the point of intersection of the diagonals to any side of the quadrilateral always bisects the opposite side.
Chapter 12 also included a formula for the area of a cyclic quadrilateral (a generalization of Heron's formula), as well as a complete description of rational triangles ("i.e." triangles with rational sides and rational areas).
Brahmagupta's formula: The area, "A", of a cyclic quadrilateral with sides of lengths "a", "b", "c", "d", respectively, is given by
formula_1
where "s", the semiperimeter, given by: formula_2
Brahmagupta's Theorem on rational triangles: A triangle with rational sides formula_3 and rational area is of the form:
formula_4
for some rational numbers formula_5 and formula_6.
Parameshvara Nambudiri was the first mathematician to give a formula for the radius of the circle circumscribing a cyclic quadrilateral. The expression is sometimes attributed to Lhuilier [1782], 350 years later. With the sides of the cyclic quadrilateral being "a, b, c," and "d", the radius "R" of the circumscribed circle is:
formula_7
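Both formulae can be checked on a simple example; the sketch below uses the standard forms of Brahmagupta's area formula and the circumradius expression attributed to Parameshvara, tested on a 3 × 4 rectangle (a cyclic quadrilateral with area 12 and circumradius 2.5):

```python
from math import sqrt

def brahmagupta_area(a, b, c, d):
    """Area of a cyclic quadrilateral with sides a, b, c, d (Brahmagupta's formula)."""
    s = (a + b + c + d) / 2
    return sqrt((s - a) * (s - b) * (s - c) * (s - d))

def circumradius(a, b, c, d):
    """Radius of the circle through the vertices of a cyclic quadrilateral
    (the expression attributed to Parameshvara)."""
    s = (a + b + c + d) / 2
    return sqrt((a * b + c * d) * (a * c + b * d) * (a * d + b * c)
                / ((s - a) * (s - b) * (s - c) * (s - d))) / 4

print(brahmagupta_area(3, 4, 3, 4))   # 12.0
print(circumradius(3, 4, 3, 4))       # 2.5, half the diagonal of the rectangle
```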
Chinese geometry.
The first definitive work (or at least oldest existent) on geometry in China was the "Mo Jing", the Mohist canon of the early philosopher Mozi (470–390 BC). It was compiled years after his death by his followers around the year 330 BC. Although the "Mo Jing" is the oldest existent book on geometry in China, there is the possibility that even older written material existed. However, due to the infamous Burning of the Books in a political maneuver by the Qin Dynasty ruler Qin Shihuang (r. 221–210 BC), multitudes of written literature created before his time were purged. In addition, the "Mo Jing" presents geometrical concepts in mathematics that are perhaps too advanced not to have had a previous geometrical base or mathematic background to work upon.
The "Mo Jing" described various aspects of many fields associated with physical science, and provided a small wealth of information on mathematics as well. It provided an 'atomic' definition of the geometric point, stating that a line is separated into parts, and the part which has no remaining parts (i.e. cannot be divided into smaller parts) and thus forms the extreme end of a line is a point. Much like Euclid's first and third definitions and Plato's 'beginning of a line', the "Mo Jing" stated that "a point may stand at the end (of a line) or at its beginning like a head-presentation in childbirth. (As to its invisibility) there is nothing similar to it." Similar to the atomists of Democritus, the "Mo Jing" stated that a point is the smallest unit, and cannot be cut in half, since 'nothing' cannot be halved. It stated that two lines of equal length will always finish at the same place, while providing definitions for the "comparison of lengths" and for "parallels", along with principles of space and bounded space. It also described the fact that planes without the quality of thickness cannot be piled up since they cannot mutually touch. The book provided definitions for circumference, diameter, and radius, along with the definition of volume.
The Han Dynasty (202 BC – 220 AD) period of China witnessed a new flourishing of mathematics. One of the oldest Chinese mathematical texts to present geometric progressions was the "Suàn shù shū" of 186 BC, during the Western Han era. The mathematician, inventor, and astronomer Zhang Heng (78–139 AD) used geometrical formulas to solve mathematical problems. Although rough estimates for pi (π) were given in the "Zhou Li" (compiled in the 2nd century BC), it was Zhang Heng who was the first to make a concerted effort at creating a more accurate formula for pi. Zhang Heng approximated pi as 730/232 (or approx 3.1466), although he used another formula of pi in finding a spherical volume, using the square root of 10 (or approx 3.162) instead. Zu Chongzhi (429–500 AD) improved the accuracy of the approximation of pi to between 3.1415926 and 3.1415927, with 355⁄113 (密率, Milü, detailed approximation) and 22⁄7 (约率, Yuelü, rough approximation) being the other notable approximation. In comparison to later works, the formula for pi given by the French mathematician Franciscus Vieta (1540–1603) fell halfway between Zu's approximations.
"The Nine Chapters on the Mathematical Art".
"The Nine Chapters on the Mathematical Art", the title of which first appeared by 179 AD on a bronze inscription, was edited and commented on by the 3rd century mathematician Liu Hui from the Kingdom of Cao Wei. This book included many problems where geometry was applied, such as finding surface areas for squares and circles, the volumes of solids in various three-dimensional shapes, and included the use of the Pythagorean theorem. The book provided illustrated proof for the Pythagorean theorem, contained a written dialogue between of the earlier Duke of Zhou and Shang Gao on the properties of the right angle triangle and the Pythagorean theorem, while also referring to the astronomical gnomon, the circle and square, as well as measurements of heights and distances. The editor Liu Hui listed pi as 3.141014 by using a 192 sided polygon, and then calculated pi as 3.14159 using a 3072 sided polygon. This was more accurate than Liu Hui's contemporary Wang Fan, a mathematician and astronomer from Eastern Wu, would render pi as 3.1555 by using 142⁄45. Liu Hui also wrote of mathematical surveying to calculate distance measurements of depth, height, width, and surface area. In terms of solid geometry, he figured out that a wedge with rectangular base and both sides sloping could be broken down into a pyramid and a tetrahedral wedge. He also figured out that a wedge with trapezoid base and both sides sloping could be made to give two tetrahedral wedges separated by a pyramid. Furthermore, Liu Hui described Cavalieri's principle on volume, as well as Gaussian elimination. From the "Nine Chapters", it listed the following geometrical formulas that were known by the time of the Former Han Dynasty (202 BCE – 9 CE).
Areas for the
<templatestyles src="Col-begin/styles.css"/>
Volumes for the
<templatestyles src="Col-begin/styles.css"/>
Continuing the geometrical legacy of ancient China, there were many later figures to come, including the famed astronomer and mathematician Shen Kuo (1031–1095 CE), Yang Hui (1238–1298) who discovered Pascal's Triangle, Xu Guangqi (1562–1633), and many others.
Islamic Golden Age.
Thābit ibn Qurra, using what he called the method of reduction and composition, provided two different general proofs of the Pythagorean theorem for all triangles, before which proofs had existed only for the special case of the right triangle.
A 2007 paper in the journal "Science" suggested that girih tiles possessed properties consistent with self-similar fractal quasicrystalline tilings such as the Penrose tilings.
Renaissance.
The transmission of the Greek Classics to medieval Europe via the Arabic literature of the 9th to 10th century "Islamic Golden Age" began in the 10th century and culminated in the Latin translations of the 12th century.
A copy of Ptolemy's "Almagest" was brought back to Sicily by Henry Aristippus (d. 1162), as a gift from the Emperor to King William I (r. 1154–1166). An anonymous student at Salerno travelled to Sicily and translated the "Almagest" as well as several works by Euclid from Greek to Latin. Although the Sicilians generally translated directly from the Greek, when Greek texts were not available, they would translate from Arabic. Eugenius of Palermo (d. 1202) translated Ptolemy's "Optics" into Latin, drawing on his knowledge of all three languages in the task.
The rigorous deductive methods of geometry found in Euclid's "Elements of Geometry" were relearned, and further development of geometry in the styles of both Euclid (Euclidean geometry) and Khayyam (algebraic geometry) continued, resulting in an abundance of new theorems and concepts, many of them very profound and elegant.
Advances in the treatment of perspective were made in Renaissance art of the 14th to 15th century which went beyond what had been achieved in antiquity.
In Renaissance architecture of the "Quattrocento", concepts of architectural order were explored and rules were formulated. A prime example of this is the Basilica di San Lorenzo in Florence by Filippo Brunelleschi (1377–1446).
In c. 1413 Filippo Brunelleschi demonstrated the geometrical method of perspective, used today by artists, by painting the outlines of various Florentine buildings onto a mirror.
Soon after, nearly every artist in Florence and in Italy used geometrical perspective in their paintings, notably Masolino da Panicale and Donatello. Melozzo da Forlì first used the technique of upward foreshortening (in Rome, Loreto, Forlì and others), and was celebrated for that. Not only was perspective a way of showing depth, it was also a new method of composing a painting. Paintings began to show a single, unified scene, rather than a combination of several.
As shown by the quick proliferation of accurate perspective paintings in Florence, Brunelleschi likely understood (with help from his friend the mathematician Toscanelli), but did not publish, the mathematics behind perspective. Decades later, his friend Leon Battista Alberti wrote "De pictura" (1435/1436), a treatise on proper methods of showing distance in painting based on Euclidean geometry. Alberti was also trained in the science of optics through the school of Padua and under the influence of Biagio Pelacani da Parma who studied Alhazen's "Optics".
Piero della Francesca elaborated on Della Pittura in his "De Prospectiva Pingendi" in the 1470s. Alberti had limited himself to figures on the ground plane and giving an overall basis for perspective. Della Francesca fleshed it out, explicitly covering solids in any area of the picture plane. Della Francesca also started the now common practice of using illustrated figures to explain the mathematical concepts, making his treatise easier to understand than Alberti's. Della Francesca was also the first to accurately draw the Platonic solids as they would appear in perspective.
Perspective remained, for a while, the domain of Florence. Jan van Eyck, among others, was unable to create a consistent structure for the converging lines in paintings, as in London's The Arnolfini Portrait, because he was unaware of the theoretical breakthrough just then occurring in Italy. However he achieved very subtle effects by manipulations of scale in his interiors. Gradually, and partly through the movement of academies of the arts, the Italian techniques became part of the training of artists across Europe, and later other parts of the world.
The culmination of these Renaissance traditions finds its ultimate synthesis in the research of the architect, geometer, and optician Girard Desargues on perspective, optics and projective geometry.
The "Vitruvian Man" by Leonardo da Vinci (c. 1490) depicts a man in two superimposed positions with his arms and legs apart and inscribed in a circle and square. The drawing is based on the correlations of ideal human proportions with geometry described by the ancient Roman architect Vitruvius in Book III of his treatise "De Architectura".
Modern geometry.
The 17th century.
In the early 17th century, there were two important developments in geometry. The first and most important was the creation of analytic geometry, or geometry with coordinates and equations, by René Descartes (1596–1650) and Pierre de Fermat (1601–1665). This was a necessary precursor to the development of calculus and a precise quantitative science of physics. The second geometric development of this period was the systematic study of projective geometry by Girard Desargues (1591–1661). Projective geometry is the study of geometry without measurement, just the study of how points align with each other. There had been some early work in this area by Hellenistic geometers, notably Pappus (c. 340). The greatest flowering of the field occurred with Jean-Victor Poncelet (1788–1867).
In the late 17th century, calculus was developed independently and almost simultaneously by Isaac Newton (1642–1727) and Gottfried Wilhelm Leibniz (1646–1716). This was the beginning of a new field of mathematics now called analysis. Though not itself a branch of geometry, it is applicable to geometry, and it solved two families of problems that had long been almost intractable: finding tangent lines to odd curves, and finding areas enclosed by those curves. The methods of calculus reduced these problems mostly to straightforward matters of computation.
The 18th and 19th centuries.
Non-Euclidean geometry.
The very old problem of proving Euclid's Fifth Postulate, the "Parallel Postulate", from his first four postulates had never been forgotten. Beginning not long after Euclid, many attempted demonstrations were given, but all were later found to be faulty, through allowing into the reasoning some principle which itself had not been proved from the first four postulates. Though Omar Khayyám was also unsuccessful in proving the parallel postulate, his criticisms of Euclid's theories of parallels and his proof of properties of figures in non-Euclidean geometries contributed to the eventual development of non-Euclidean geometry. By 1700 a great deal had been discovered about what can be proved from the first four, and what the pitfalls were in attempting to prove the fifth. Saccheri, Lambert, and Legendre each did excellent work on the problem in the 18th century, but still fell short of success. In the early 19th century, Gauss, Johann Bolyai, and Lobachevsky, each independently, took a different approach. Beginning to suspect that it was impossible to prove the Parallel Postulate, they set out to develop a self-consistent geometry in which that postulate was false. In this they were successful, thus creating the first non-Euclidean geometry. By 1854, Bernhard Riemann, a student of Gauss, had applied methods of calculus in a ground-breaking study of the intrinsic (self-contained) geometry of all smooth surfaces, and thereby found a different non-Euclidean geometry. This work of Riemann later became fundamental for Einstein's theory of relativity.
It remained to be proved mathematically that the non-Euclidean geometry was just as self-consistent as Euclidean geometry, and this was first accomplished by Beltrami in 1868. With this, non-Euclidean geometry was established on an equal mathematical footing with Euclidean geometry.
While it was now known that different geometric theories were mathematically possible, the question remained, "Which one of these theories is correct for our physical space?" The mathematical work revealed that this question must be answered by physical experimentation, not mathematical reasoning, and uncovered the reason why the experimentation must involve immense (interstellar, not earth-bound) distances. With the development of relativity theory in physics, this question became vastly more complicated.
Introduction of mathematical rigor.
All the work related to the Parallel Postulate revealed that it was quite difficult for a geometer to separate his logical reasoning from his intuitive understanding of physical space, and, moreover, revealed the critical importance of doing so. Careful examination had uncovered some logical inadequacies in Euclid's reasoning, and some unstated geometric principles to which Euclid sometimes appealed. This critique paralleled the crisis occurring in calculus and analysis regarding the meaning of infinite processes such as convergence and continuity. In geometry, there was a clear need for a new set of axioms, which would be complete, and which in no way relied on pictures we draw or on our intuition of space. Such axioms, now known as Hilbert's axioms, were given by David Hilbert in 1899 in his book "Grundlagen der Geometrie" ("Foundations of Geometry").
Analysis situs, or topology.
In the mid-18th century, it became apparent that certain progressions of mathematical reasoning recurred when similar ideas were studied on the number line, in two dimensions, and in three dimensions. Thus the general concept of a metric space was created so that the reasoning could be done in more generality, and then applied to special cases. This method of studying calculus- and analysis-related concepts came to be known as analysis situs, and later as topology. The important topics in this field were properties of more general figures, such as connectedness and boundaries, rather than properties like straightness, and precise equality of length and angle measurements, which had been the focus of Euclidean and non-Euclidean geometry. Topology soon became a separate field of major importance, rather than a sub-field of geometry or analysis.
Geometry of more than 3 dimensions.
The 19th century saw the development of the general concept of Euclidean space by Ludwig Schläfli, who extended Euclidean geometry beyond three dimensions. He discovered all the higher-dimensional analogues of the Platonic solids, finding that there are exactly six such regular convex polytopes in dimension four, and three in all higher dimensions.
In 1878 William Kingdon Clifford introduced what is now termed geometric algebra, unifying William Rowan Hamilton's quaternions with Hermann Grassmann's algebra and revealing the geometric nature of these systems, especially in four dimensions. The operations of geometric algebra have the effect of mirroring, rotating, translating, and mapping the geometric objects that are being modeled to new positions.
The 20th century.
Developments in algebraic geometry included the study of curves and surfaces over finite fields as demonstrated by the works of among others André Weil, Alexander Grothendieck, and Jean-Pierre Serre as well as over the real or complex numbers. Finite geometry itself, the study of spaces with only finitely many points, found applications in coding theory and cryptography. With the advent of the computer, new disciplines such as computational geometry or digital geometry deal with geometric algorithms, discrete representations of geometric data, and so forth.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V = \\frac{1}{3} h(a^2 + ab + b^2)"
},
{
"math_id": 1,
"text": " A = \\sqrt{(s-a)(s-b)(s-c)(s-d)}"
},
{
"math_id": 2,
"text": " s=\\frac{a+b+c+d}{2}."
},
{
"math_id": 3,
"text": "a, b, c "
},
{
"math_id": 4,
"text": "a = \\frac{u^2}{v}+v, \\ \\ b=\\frac{u^2}{w}+w, \\ \\ c=\\frac{u^2}{v}+\\frac{u^2}{w} - (v+w) "
},
{
"math_id": 5,
"text": "u, v, "
},
{
"math_id": 6,
"text": " w "
},
{
"math_id": 7,
"text": " R = \\sqrt {\\frac{(ab + cd)(ac + bd)(ad + bc)}{(- a + b + c + d)(a - b + c + d)(a + b - c + d)(a + b + c - d)}}."
}
] | https://en.wikipedia.org/wiki?curid=11953 |
1195300 | Degree (temperature) | Term used in defining several scales of temperature
The term degree is used in several scales of temperature, with the notable exception of the kelvin, the primary unit of temperature for engineering and the physical sciences. The degree symbol ° is usually used, followed by the initial letter of the unit; for example, "°C" for degree Celsius. A degree can be defined as a set change in temperature measured against a given scale; for example, one degree Celsius is one-hundredth of the temperature change between the point at which water starts to change from its solid to its liquid state and the point at which it starts to change from its liquid to its gaseous state.
Scales of temperature measured in degrees.
Common scales of temperature measured in degrees:
Unlike the degree Fahrenheit and degree Celsius, the kelvin is no longer referred to or written as a degree (but was before 1967). The kelvin is the primary unit of temperature measurement in the physical sciences, but is often used in conjunction with the degree Celsius, which has the same magnitude.
Other scales of temperature:
Kelvin.
The "degree Kelvin" (°K) is a former name and symbol for the SI unit of temperature on the thermodynamic (absolute) temperature scale. Since 1967, it has been known simply as the kelvin, with symbol K (without a degree symbol). Degree absolute (°A) is obsolete terminology, often referring specifically to the kelvin but sometimes the degree Rankine as well.
Temperature conversions.
All three of the major temperature scales are related through a linear equation, and so the conversion between any of them is relatively straightforward. For instance, any Celsius temperature "c" °C can be calculated from a corresponding Fahrenheit temperature "f" °F or absolute temperature "k" K.
formula_0
The equations above may also be rearranged to solve for formula_1 or formula_2, to give
formula_3
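A minimal sketch of these conversions in code (the function name and example values are illustrative, not taken from any standard library):
<syntaxhighlight lang="python">
def celsius_from(fahrenheit=None, kelvin=None):
    """Return the Celsius temperature corresponding to a Fahrenheit or kelvin value."""
    if fahrenheit is not None:
        return 5.0 / 9.0 * (fahrenheit - 32.0)
    if kelvin is not None:
        return kelvin - 273.15
    raise ValueError("supply either fahrenheit= or kelvin=")

print(celsius_from(fahrenheit=212.0))   # 100.0 (water boils)
print(celsius_from(kelvin=273.15))      # 0.0 (water freezes)
</syntaxhighlight>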
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\nc \\;=\\; \\frac{5}{9}(f - 32) \\;=\\; k-273.15\n\\end{align}"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "\\begin{align}\nf \\;&=\\; \\frac{9}{5}c + 32 \\;=\\; \\frac{9}{5} (k-273.15) + 32\\\\\nk \\;&=\\; c+273.15 \\;=\\; \\frac{5}{9}(f - 32) + 273.15\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=1195300 |
1195441 | David Wright | American baseball player (born 1982)
David Allen Wright (born December 20, 1982) is an American former professional baseball third baseman who spent his entire 14-year Major League Baseball (MLB) career with the New York Mets. Chosen by the Mets in the 2001 MLB draft, he made his MLB debut on July 21, 2004 at Shea Stadium. Wright was nicknamed "Captain America" after his performance in the 2013 World Baseball Classic where he led the tournament with 10 RBI and a .438 batting average.
Wright is a seven-time All-Star, a two-time Gold Glove Award winner, a two-time Silver Slugger Award winner, and a member of the 30–30 club. One of the most beloved players in franchise history, Wright is the Mets' all time leader in career plate appearances and holds many other franchise records for position players. He was named captain of the Mets in 2013, becoming the fourth captain in the team's history. Wright is the third player to play in at least 10 MLB seasons and play his entire MLB career with the Mets.
Throughout the latter half of Wright's career, he was plagued by injuries, most notably spinal stenosis, as well as additional ailments in his neck and shoulder. After missing significant time from 2015 to 2018 and receiving word from doctors that his spinal stenosis would not improve, Wright announced that 2018 would be his final season as an active player. Wright finished his major league career with a .296 career batting average, 242 home runs, and 970 runs batted in. Upon completion of his playing career, Wright was named a special advisor in the Mets front office.
Early life.
Wright was born in Norfolk, Virginia, the oldest of four sons of Rhon, a police officer in the Norfolk Police Department, and Elisa Wright. Wright grew up a Mets fan due to his proximity to the Class AAA Norfolk Tides, whose stadium was ten minutes from his home. Wright took hitting lessons alongside fellow future Major Leaguer Michael Cuddyer in elementary school and played on teams with Ryan Zimmerman, Mark Reynolds, B.J. Upton and Justin Upton during high school. Wright played baseball at Hickory High School in Chesapeake, Virginia. He committed to play college baseball at Georgia Tech before his senior year of high school. As a high school senior, he had a .538 batting average.
Professional career.
Minor leagues.
The New York Mets selected Wright in the 2001 MLB draft during the supplemental round as compensation for the Mets' loss of Mike Hampton to the Colorado Rockies in free agency. Wright was selected after future teammate Aaron Heilman who had been selected in the first round. According to then-Mets manager Bobby Valentine, Wright had caught the attention of coach Tom Robson who had actually been sent "down to Virginia to scout someone else."
Wright progressed steadily in his first three years of minor league play, winning the Sterling award for best player on the class A St. Lucie Mets in 2003. In 2004, he quickly rose from the Double-A Binghamton Mets to the Triple-A Norfolk Tides to the major leagues.
New York Mets.
2004.
On July 21, 2004, Wright made his major league debut starting at third base against the Montreal Expos. The next day, on July 22, Wright picked up his first career hit, a double, off of the Montreal Expos' pitcher Zach Day. Wright finished his rookie season with a .293 batting average, 14 home runs and 40 RBI in 263 at-bats in 69 games, and was voted as the This Year in Baseball Awards Rookie of the Year.
2005.
In 2005, the 22-year-old Wright played in 160 games and batted .306 with 27 home runs, 102 RBIs, 99 runs scored, 42 doubles, and 17 stolen bases, leading the team in average, runs, on-base percentage, slugging percentage, RBI, doubles, and finishing second in home runs to Cliff Floyd (34). Wright was also in the top 10 in the National League for average, hits, total bases, RBI, extra-base hits, and runs. Wright's 24 errors tied him with Troy Glaus for the most errors by a third baseman in the major leagues.
Wright made an over-the-shoulder barehanded catch during the seventh inning of a game at Petco Park against the San Diego Padres on August 9, 2005. With one out in the inning, Brian Giles hit a broken-bat blooper beyond the edge of the outfield grass. Wright, retreating quickly with his back to home plate, extended his bare right hand and caught the ball cleanly while crashing to the field. Wright maintained control of the ball after landing hard on the outfield grass. The sellout crowd at Petco Park acknowledged the splendor of the catch with a standing ovation lasting several minutes. This play was voted the "This Year in Baseball Play of the Year."
2006.
In 2006, Wright was named National League Co-Player of the Week for June 12–18 along with teammate José Reyes. For the month, Wright batted .327 with 10 home runs and 29 RBIs.
Wright also provided his share of heroics throughout the 2006 season. His first game-winning hit occurred on May 5 with a 2-out double just out of the reach of a chasing Andruw Jones in the bottom of the 14th inning off Jorge Sosa to defeat the Atlanta Braves, 8–7. Two weeks later on May 19, he hit a walk-off single off Yankees closer Mariano Rivera that just sailed over the head of center fielder Johnny Damon as the Mets rallied to beat the Yankees in the first game of the 2006 Subway Series, 7–6. He capped off the month on Memorial Day, May 29, with a single to the wall in left-center field off Arizona Diamondbacks closer José Valverde scoring José Reyes from first base as the Mets defeated Arizona, 8–7. Wright also made a game-saving stop at third base of a would-be game-tying single by Mike Lieberthal for the final out of a 4–3 Mets victory over Philadelphia on August 5.
Wright was voted on to his first MLB All-Star Game as the starting third baseman for the NL. During the 2006 season, Wright collected 74 RBIs before the All-Star Break, breaking the Mets record previously held by Mike Piazza, who had 72 RBIs in 2000. Wright also participated in the 2006 Home Run Derby, reaching the final round but finishing second to Ryan Howard of the Philadelphia Phillies. He hit 22 home runs in the contest, including 16 in the first round, the third-highest total in any one round in the history of the Home Run Derby. The following night, he hit a home run in his first All-Star Game at-bat off American League starting pitcher Kenny Rogers.
Wright ranked among the club's top three hitters in all offensive categories for the 2006 Mets, who scored the second-most runs in the National League. Fans at Shea Stadium routinely greeted Wright's performances with chants of "M-V-P, M-V-P." According to then-teammate Tom Glavine, "He's probably been our most clutch hitter over the first half of the season and he's certainly thrown his hat into the MVP talks."
On August 6, 2006, Wright signed a 6-year contract extension with the Mets worth $55 million, as well as a $1.5 million signing bonus. The contract paid him $1 million in 2007, $5 million in 2008, $7.5 million in 2009, $10 million in 2010, $14 million in 2011, and $15 million in 2012. The contract also contained a club option for 2013 worth $16 million. Wright also announced that he would donate $1.5 million to the Mets Foundation over the course of the contract.
The Mets captured the NL East title in 2006 and returned to the playoffs for the first time since 2000. Wright struggled in his first postseason, going 4–25 (.160) in the Mets' NLCS loss to the St. Louis Cardinals, and batting a mere .216 in 10 postseason games.
Wright participated in the 2006 Major League Baseball Japan All-Star Series along with teammates José Reyes, Julio Franco, and John Maine.
2007.
As of April 21, 2007, Wright had a hitting streak of 26 regular-season games; the previous team record was 24, held by Mike Piazza and Hubie Brooks. He had a hit in the 12 final regular-season games of the 2006 season, and had a hit in all of the first 14 games of the 2007 season. Wright's hit streak of 26 regular-season games ended on April 21, 2007, against the Atlanta Braves at Shea Stadium.
On September 16, 2007, Wright became the 29th player in baseball history to join the 30–30 club, after hitting a 7th inning solo home run against the Philadelphia Phillies at Shea Stadium. He is only the third player to reach this milestone before his 25th birthday, and only the third Met to reach this milestone in club history, the other two being Howard Johnson and Darryl Strawberry.
Wright finished the 2007 season with a .325 batting average, 30 home runs, 107 RBI, and 34 stolen bases. Wright was awarded the 2007 Gold Glove and the Silver Slugger Award at third base. He also was fourth in the NL MVP voting receiving 182 votes.
2008.
Wright began the year with two doubles, including a bases-clearing double, in finishing 2–4 with three RBIs in the Mets' Opening Day victory over the Marlins, 7–2. With the RBIs, Wright already halfway matched his RBI production from the preceding April. In the final game of the series, Wright went 3–5 with a 3-run home run. On April 13, Wright hit his 100th career home run, a solo shot off of Milwaukee Brewers pitcher Jeff Suppan.
Wright hit the first walk-off home run of his career on August 7, 2008.
On November 5, 2008, Wright was announced as the recipient of the Rawlings' Gold Glove Award for third basemen. It was the second consecutive year in which Wright won the award. His teammate, Carlos Beltrán, also won the award for center fielders. He also won his second Silver Slugger Award.
On December 22, 2008, Wright was announced as a member of Team USA in the 2009 World Baseball Classic (WBC) to be held in March 2009. The third base position was taken by Alex Rodriguez in the 2006 WBC.
Wright finished seventh in the voting for the 2008 NL MVP award after finishing the season with a .302 batting average with 33 home runs and 124 RBI.
2009.
Wright hit the first Mets home run in Citi Field history on Monday, April 13, 2009, Citi Field's Opening Day. The three-run home run was hit off San Diego Padres pitcher Walter Silva in the 5th inning.
In mid-August, when 10 Mets players were on the disabled list, Wright was added to the list after sustaining an injury on August 15. Wright suffered a concussion when he was hit in the head with a fastball by San Francisco Giants pitcher Matt Cain. He was admitted to the Hospital for Special Surgery, where he underwent a precautionary CT scan, which turned out negative. The following day he left the hospital diagnosed with post-concussion symptoms. He was then placed on the disabled list for the first time in his career.
Despite the injury to Wright, Mets General Manager Omar Minaya stated that there were no plans to shut him down for the remainder of the season. In fact, Wright was activated from the disabled list on September 1 and started at third base against the Colorado Rockies that evening. In that game, Wright wore a new style of batting helmet (the Rawlings S100). He abandoned that helmet after wearing it in that one game. Wright explained that he found the helmet uncomfortable. "It's the last thing I need to be worrying about in the box is trying to shove it on my head. So I wanted to go back to the old one and just wait to see if there's going to be any adjustments made." Wright experienced a decline in production after a potent campaign in 2008. His home run total dropped to 10, while his RBI total fell to 72, after hitting 33 home runs and 124 RBIs the previous season.
2010.
Wright, along with José Reyes, arrived at the Mets' spring training camp in Port St. Lucie, Florida two weeks early to get a head start on preparing themselves after a disappointing 2009 campaign. Wright came into camp heavier than he was in previous seasons, adding more muscle to his body.
On Opening Day, Wright hit a two-run home run off of the Marlins' Josh Johnson in his first at-bat of the season. On April 27 in the second game of a doubleheader against the Los Angeles Dodgers, Wright reached the 1,000 hit mark against pitcher Ramón Troncoso in the bottom of the 5th inning as he hit a two-out RBI single scoring Center Fielder Ángel Pagán and giving the Mets a 4–3 lead. In the following inning, Wright hit a three-run triple to the right-center field wall, scoring Pagán, Luis Castillo, and José Reyes, and giving the Mets a 10–3 lead at the time. The Mets won the game 10–5. On May 20, he hit a three-run double after Mets manager Jerry Manuel gave Wright a day off. By June 25, Wright had 12 home runs, which led the team, and was batting .294 with 57 RBIs to lead the N.L. On July 4, 2010, Wright was named the starting third baseman for the National League in the 2010 Major League Baseball All-Star Game, making it Wright's fifth consecutive All-Star Game appearance. On July 6, Wright was named the June 2010 National League Player of the Month after he hit .404 with 11 doubles, 6 home runs, and 29 RBIs. On July 13, 2010, Wright collected 2 hits and a stolen base in the 81st All-Star Game in Anaheim, bringing him to 6-for-13 in All-Star Game at-bats and tying him for fifth all-time in All-Star Game batting average. As of August 20, he more than doubled his home run total from 2009.
Wright finished the 2010 season with a .283 batting average, .354 on-base percentage, 29 home runs, 103 RBIs, 69 walks, and 19 stolen bases.
2011.
On April 5, 2011, Wright singled against the Phillies' Cole Hamels for his 90th career game-winning RBI, surpassing Mike Piazza for the most in Mets history. Then on May 16, 2011, after undergoing examination by team doctors, it was announced Wright had a stress fracture in his lower back. The injury (caused by a diving tag on the Astros' Carlos Lee) forced him to spend more than two months on the disabled list. Wright was activated from the DL on July 22, 2011. That day, he went 2-for-5 and had two RBIs and scored two runs against the Florida Marlins. In his first series coming back from the DL, Wright hit 6-for-14, with one home run, three doubles and six RBIs. Wright enjoyed a career first on August 7, 2011, against the Atlanta Braves, playing shortstop for the first time in his professional career due to injuries to José Reyes (hamstring) and Daniel Murphy (MCL). In only 102 games, Wright finished the season with a .254 batting average, .345 on-base percentage, 14 home runs, 61 RBIs, 52 walks, and 13 stolen bases.
2012.
On April 5, Wright went 2-for-3 with a walk on as the Mets' Opening Day third-baseman against the Braves, where Wright drove in Andrés Torres with a single for the game-winning run in the 6th inning off of Tommy Hanson, giving the Mets the 1–0 win. Less than a week later, on April 11, Wright fractured his right pinkie while diving into first base on a pick-off attempt. After missing just three games, Wright returned to the lineup, going 3-for-5 against the Phillies. Then on April 25, Wright hit a two-run home run in a 5–1 victory against the Miami Marlins, giving him 735 career RBIs, passing Darryl Strawberry for the most in Mets franchise history. Wright broke another franchise record on June 5, when he hit a solo-shot off of Washington Nationals pitcher Jordan Zimmermann, driving himself in and reaching 736 runs. The previous record-holder was José Reyes at 735 runs. On July 1, it was announced that Wright had made his 6th All-Star team, but as a backup to Pablo Sandoval. Wright led the All-Star vote for most of the year but was overtaken in the last week of voting.
For the first half of the season, Wright was either atop or close to the top of the league in both batting average and on-base percentage, and led NL third basemen in average, OBP, slugging, hits and runs scored. After the All-Star break, the Mets had their worst stretch to that point in the season, losing six straight, but on July 19, Wright hit two home runs and had five RBIs to help the Mets end their losing streak. He hit his 200th career home run in a loss on August 24 to the Houston Astros. On September 25, 2012, Wright tied Ed Kranepool's all-time Mets hit record of 1,418 hits in a home game against the Pittsburgh Pirates. The following day, September 26, he surpassed Kranepool as the Mets' all-time hit leader with an infield single, also at home against the Pirates.
Wright finished 6th in the voting for the 2012 NL MVP Award.
Contract extension.
On November 30, 2012, Ed Coleman and WFAN reported that Wright and the Mets agreed to a 7-year contract extension worth $138 million (7 years for $122 million plus a club option for $16 million that the club picked up for the 2013 season). The contract became official on December 4 after Wright passed a physical. Wright was represented in negotiations by Seth Levinson and Sam Levinson of ACES Inc.
Wright's $138 million deal was contrary to the Mets budget-conscious policy of not giving huge contracts to players in their 30s; nonetheless general manager Sandy Alderson made an exception as he viewed Wright as a leader and role model, both on and off the field. At the same time, many agents and front-office executives suggested that had Wright waited a year and became a free agent, he could have received a deal close to $200 million. Although it had been six years since the Mets' last playoff appearance and four years since their last winning season, Alderson managed to persuade Wright to stay as the Mets' farm system had young talented pitchers.
Wright's contract had deferments which would be paid out through 2025. The Mets obtained injury insurance on the contract which kicked in after he missed 60 days, allowing the team to recoup 75 percent of his salary while he is unable to play.
2013.
After a spring training game on March 21, the Mets announced that Wright had been named the fourth team captain in Mets history, joining Keith Hernandez, Gary Carter, and John Franco.
Wright got his 1,500th career hit on June 18 against the Atlanta Braves off of Cory Gearrin.
Wright was named the National League's Home Run Derby team captain for the 2013 MLB All-Star Game. He selected Carlos González, Michael Cuddyer, and Bryce Harper as the other three participants for the National League. Gonzalez, who was injured at the time, was later replaced by Pedro Alvarez. Wright would hit five home runs in the derby, but would not move on.
On July 6, Wright was named the starting third baseman for the National League team in the 2013 MLB All-Star Game. In the 2013 MLB All-Star Game, Wright went 1-for-3. Wright also became the fourth Mets player to appear in at least seven All-Star games.
On August 3, 2013, Wright was placed on the 15-day disabled list a day after he strained his right hamstring. Upon his return from the disabled list, Wright hit two home runs in his first two games, surpassing Mike Piazza for the second-most home runs hit by a player in a Mets uniform, behind Darryl Strawberry. Wright played in 112 games in 2013, batting .307 with 18 home runs, 58 RBIs and 17 steals.
2014.
Wright was chosen as the "Face of MLB" in a contest online in February 2014. He narrowly beat out A's infielder Eric Sogard.
Wright finished second in All-Star game voting for third base to the Brewers' Aramis Ramírez, making it only the third time in 10 seasons he had been left out. Second baseman Daniel Murphy represented the Mets at the 2014 All-Star Game.
Wright's 2014 performance declined from previous years. He hit at a .269 clip, his lowest batting average since his 2011 season was shortened due to injury. Wright's .269/.324/.374 slash line was attributed to a recurring shoulder injury. Early in the season, he sustained a left rotator cuff contusion which slowed his offensive production and hurt his defense. Wright finished the year playing in 134 games with a career-low eight home runs and committed 15 errors, tied for most on the team.
2015.
On April 14, 2015, Wright suffered a strained right hamstring while stealing second base. He was placed on the 15-day disabled list. He was later diagnosed with spinal stenosis, and was expected to return towards the end of the season. David Wright was placed on the 60-day DL for his spinal stenosis on July 24, 2015, to clear up space on the 40-man roster. On August 10, the Mets sent Wright on a rehab assignment to the St. Lucie Mets.
Following Chase Utley's August trade from the Philadelphia Phillies to the Los Angeles Dodgers, Wright became the longest-tenured active player to have played his entire career with one team. His 1,516 games with the Mets had previously only trailed Utley's 1,551 games with the Phillies.
Wright returned from his injury for the Mets' August 24 game in Philadelphia. In his first at-bat, he hit a home run off Phillies pitcher Adam Morgan, the first of a club record eight home runs the Mets would hit in a 16–7 victory. His return helped propel the Mets to the 2015 World Series against the Kansas City Royals. In his only World Series appearance, he committed an error in the 14th inning that led to the winning run as the Royals took a 5–4 victory in Game 1. In Game 3, he went 2-for-4 with 4 RBIs, including his only World Series home run, a two-run home run, off Yordano Ventura as part of the Mets' 9–3 victory in their only win of the series. Wright played in only 38 games.
2016.
As a result of his spinal stenosis, Wright had to complete an extensive pre-game workout routine consisting of physical therapy, exercise, and some minimal batting and fielding drills. The entire process takes about 4–5 hours. On May 21, 2016, Wright hit a bases-loaded single off of Milwaukee Brewers pitcher Michael Blazek to break a 4–4 tie in the bottom of the 9th inning and give the Mets the victory. On June 3, Wright was placed on the disabled list due to a herniated disc in his neck. On June 16, he underwent neck surgery to repair the herniated disc. He missed the remainder of the 2016 season, having played a career-low 37 games and batted .226/.350/.438, the lowest batting average of his major league career.
2017.
On February 28, after rehabbing from his neck surgery, Wright was diagnosed with a right shoulder impingement which, at the time, jeopardized his ability to play on Opening Day. He was, however, allowed to continue hitting as the injury only affected his ability to throw. This was yet another serious injury for Wright, which prompted people to ask General Manager Sandy Alderson if he thought that Wright would ever be able to play competitively again. He responded, "I don't think we're at that point, the point where that concern is at a more heightened level. This is all part of the process of rehabilitation from the neck surgery and it's taking longer than I'm sure David would have hoped and we would hope – but it's all part of the process." On September 4, it was revealed that Wright would undergo rotator cuff surgery on his right shoulder, which prevented him from playing in the Majors all year in 2017. He underwent the surgery on September 5.
2018.
On March 13, 2018, Wright experienced even more setbacks due to lingering back and shoulder injuries. At that point, he was shut down from baseball activities completely. On June 1, he was cleared to start baseball activities again. On June 25, he participated in batting practice at Citi Field. On August 12, Wright was slated to play five innings with the Single-A St. Lucie team at Clearwater. On August 28, Wright was promoted to AAA Las Vegas to continue his rehab, and singled in his first at bat.
On September 13, the Mets announced that Wright would be activated from the disabled list for the Mets' last homestand of the season and start the September 29 game against the Marlins, his last appearance before retirement. On September 28, he grounded out as a pinch hitter against the Marlins, his first MLB appearance in over two years. In his September 29 start, he batted 0-for-1 with a walk and fielded one ground ball before being removed in the fifth inning to a lengthy ovation by fans and players alike. The Mets went on to win the ballgame, 1–0, with Austin Jackson hitting a walk-off single in the 13th inning. After the game, Wright addressed the Citi Field crowd, thanking them for supporting him throughout his career and for helping him achieve his dreams.
Career statistics.
In 1585 games over 14 seasons, Wright posted a .296 batting average (1777-for-5998) with 949 runs scored, 390 doubles, 26 triples, 242 home runs, 970 runs batted in, 196 stolen bases, 762 walks, .376 on-base percentage, .491 slugging percentage, and .867 on-base plus slugging. Defensively, he recorded a .955 fielding percentage as a third baseman. In 24 postseason games, he hit only .198 (18-for-91) with 5 doubles, 2 home runs, 13 RBI, and 15 walks.
World Baseball Classic.
Wright was selected to play third base for the United States in the 2009 World Baseball Classic. In the second round, with the United States facing elimination against Puerto Rico, Wright delivered a 9th inning walk-off hit against Fernando Cabrera to win the game for the Americans. The win guaranteed Team USA a spot in the semifinal round.
He was again selected to play third base in the 2013 World Baseball Classic. In the 2013 WBC, Wright hit a grand slam in the United States's game against Italy. It was the second time a United States player hit a grand slam in WBC play. In the second-round opener against Puerto Rico, Wright had 5 RBIs. Wright ended the tournament with the most total RBIs of any player and earned the nickname "Captain America". Wright sat out during the US loss to the Dominican Republic in the following game, citing soreness. He was later diagnosed with sore ribs and sent back to New York for further examination, ending his participation for the rest of the 2013 tournament. The US team went on to lose to Puerto Rico 4–3, resulting in an elimination. Wright was selected as the third baseman for the All-WBC team, the only American player to earn the honor.
Personal life.
Wright has maintained homes in Manhattan; Manhattan Beach, California; and Chesapeake, Virginia, where he owns a boxer named Homer. His clubhouse nicknames include "Visine" and "Hollywood".
In May 2007, Vitamin Water was sold to The Coca-Cola Company for $4.1 billion. As part of his endorsement deal, Wright was given 0.5% of the company, and thus netted approximately $20 million from the deal.
After dating model Molly Beers for several years, Wright announced in January 2013 that he and Beers were engaged to be married during the holidays. The couple were married in La Jolla, California, on December 26, 2013. They have two daughters and a son.
Wright's memoir, "The Captain", was published on October 13, 2020, by Dutton - Penguin Books. It was co-written by Wright and Anthony DiComo, who is a Mets beat writer for MLB.com.
Charitable organizations.
David Wright Foundation.
In 2005, Wright began his own charitable organization, the David Wright Foundation. Its mission is to increase awareness about multiple sclerosis and to raise money for multiple sclerosis organizations and projects. The Foundation hosted its first annual gala at the New York Stock Exchange Members' Club on December 16, 2005, donating the proceeds to two multiple sclerosis centers.
During the 2009 season, Wright and Yankees shortstop Derek Jeter represented their foundations in a competition sponsored by Delta Air Lines. Jeter had the higher batting average and received $100,000 for his foundation from Delta while Wright's foundation received $50,000.
Big League Impact.
Wright is the New York City host for Big League Impact, an eight-city fantasy football network created and led by longtime St. Louis Cardinals pitcher Adam Wainwright. In 2015, the organization raised more than $1 million in total for various charitable organizations.
Media appearances.
Delta Air Lines named an MD-88 airplane "The Wright Flight", after Wright. The plane's name, along with Wright's signature and jersey number (5), are next to the boarding door. The plane shuttles between New York, Boston and Washington. Wright is noted for his unaffected politeness and work ethic. He has been known to help participate with the Boys & Girls Clubs of America. He has developed a reputation for arriving very early to the park for games and being uncommonly accommodating with fans and reporters.
Wright was featured on the cover of "", as well as a TV commercial advertisement for the game on the PlayStation 3 game console. He has also appeared in a television commercial for Fathead, promoting the company's wall graphics.
In 2006, Wright appeared on MTV's "Total Request Live" with then-teammate Cliff Floyd. He also made an appearance on the "Late Show with David Letterman" on July 12, 2006. That same day he appeared on the cover of "Sports Illustrated" along with Mets teammates Carlos Beltrán, Paul Lo Duca, Carlos Delgado, and José Reyes.
On January 3, 2008, Wright appeared on "Celebrity Apprentice" to purchase hot dogs for charity.
Wright is a celebrity spokesman for Ford in the New York/New Jersey market.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": ""
}
] | https://en.wikipedia.org/wiki?curid=1195441 |
11956019 | Blaschke selection theorem | Any sequence of convex sets contained in a bounded set has a convergent subsequence
The Blaschke selection theorem is a result in topology and convex geometry about sequences of convex sets. Specifically, given a sequence formula_0 of convex sets contained in a bounded set, the theorem guarantees the existence of a subsequence formula_1 and a convex set formula_2 such that formula_3 converges to formula_2 in the Hausdorff metric. The theorem is named for Wilhelm Blaschke.
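The Hausdorff metric in the statement can be illustrated numerically. The sketch below is only an illustration added here, with made-up sampling choices: it estimates the Hausdorff distance between densely sampled boundaries of regular polygons inscribed in the unit circle and the circle itself, and compares the estimate with the exact value 1 − cos(π/n) for the inscribed n-gon; the distances shrink towards zero, which is the kind of convergence of convex sets the theorem concerns.
<syntaxhighlight lang="python">
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between two finite 2-D point sets (one point per row)."""
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)   # pairwise distances
    return max(D.min(axis=1).max(), D.min(axis=0).max())

def ngon_boundary(n, m=2000):
    """m points sampled along the boundary of the regular n-gon inscribed in the unit circle."""
    t = np.linspace(0.0, 1.0, m, endpoint=False)
    k = np.floor(t * n).astype(int)          # edge containing each sample
    s = (t * n - k)[:, None]                 # fractional position along that edge
    a0, a1 = 2 * np.pi * k / n, 2 * np.pi * (k + 1) / n
    p0 = np.stack([np.cos(a0), np.sin(a0)], axis=1)
    p1 = np.stack([np.cos(a1), np.sin(a1)], axis=1)
    return (1 - s) * p0 + s * p1

phi = np.linspace(0.0, 2 * np.pi, 2000, endpoint=False)
circle = np.stack([np.cos(phi), np.sin(phi)], axis=1)

for n in (4, 8, 16):
    est = hausdorff(ngon_boundary(n), circle)
    exact = 1.0 - np.cos(np.pi / n)          # exact value for the inscribed n-gon
    print(f"n = {n:2d}: sampled {est:.4f}, exact {exact:.4f}")
</syntaxhighlight>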
Application.
As an example of its use, the isoperimetric problem can be shown to have a solution. That is, there exists a curve of fixed length that encloses the maximum area possible. Other problems likewise can be shown to have a solution: | [
{
"math_id": 0,
"text": "\\{K_n\\}"
},
{
"math_id": 1,
"text": "\\{K_{n_m}\\}"
},
{
"math_id": 2,
"text": "K"
},
{
"math_id": 3,
"text": "K_{n_m}"
}
] | https://en.wikipedia.org/wiki?curid=11956019 |
11956089 | Seismic migration | Measurement process
Seismic migration is the process by which seismic events are geometrically re-located in either space or time to the location the event occurred in the subsurface rather than the location at which it was recorded at the surface, thereby creating a more accurate image of the subsurface. This process is necessary to overcome the limitations of geophysical methods imposed by areas of complex geology, such as faults, salt bodies, and folding.
Migration moves dipping reflectors to their true subsurface positions and collapses diffractions, resulting in a migrated image that typically has an increased spatial resolution and resolves areas of complex geology much better than non-migrated images. A form of migration is one of the standard data processing techniques for reflection-based geophysical methods (seismic reflection and ground-penetrating radar).
The need for migration has been understood since the beginnings of seismic exploration and the very first seismic reflection data from 1921 were migrated. Computational migration algorithms have been around for many years but they have only entered wide usage in the past 20 years because they are extremely resource-intensive. Migration can lead to a dramatic uplift in image quality so algorithms are the subject of intense research, both within the geophysical industry as well as academic circles.
Rationale.
Seismic waves are elastic waves that propagate through the Earth with a finite velocity, governed by the elastic properties of the rock in which they are travelling. At an interface between two rock types, with different acoustic impedances, the seismic energy is either refracted, reflected back towards the surface or attenuated by the medium. The reflected energy arrives at the surface and is recorded by geophones that are placed at a known distance away from the source of the waves. When a geophysicist views the recorded energy from the geophone, they know both the travel time and the distance between the source and the receiver, but not the distance down to the reflector.
In the simplest geological setting, with a single horizontal reflector, a constant velocity and a source and receiver at the same location (referred to as zero-offset, where offset is the distance between the source and receiver), the geophysicist can determine the location of the reflection event by using the relationship:
formula_0
where d is the distance, v is the seismic velocity (or rate of travel) and t is the measured time from the source to the receiver.
In this case, the distance is halved because it can be assumed that it only took one-half of the total travel time to reach the reflector from the source, then the other half to return to the receiver.
The result gives a single scalar value, which actually represents a half-sphere of possible reflection points centred on the source/receiver. It is a half-sphere, and not a full sphere, because all possibilities that occur above the surface can be ignored as unreasonable.
In the simple case of a horizontal reflector, it can be assumed that the reflection is located vertically below the source/receiver point (see diagram).
The situation is more complex in the case of a dipping reflector, as the first reflection originates from further up the direction of dip (see diagram), and therefore the travel-time plot will show a reduced dip that is defined by the "migrator's equation":
formula_1
where ξa is the "apparent dip" and ξ is the "true dip".
Zero-offset data is important to a geophysicist because the migration operation is much simpler, and can be represented by spherical surfaces. When data is acquired at non-zero offsets, the sphere becomes an ellipsoid and is much more complex to represent (both geometrically, as well as computationally).
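As a rough numerical illustration of the two relations above (the velocity, travel time and dip below are invented example values, not data from any survey):
<syntaxhighlight lang="python">
import math

v = 2500.0                    # assumed constant seismic velocity, m/s
t = 1.2                       # two-way travel time at zero offset, s
print(f"flat-reflector distance d = v*t/2 = {v * t / 2:.0f} m")      # 1500 m

# Migrator's equation: tan(apparent dip) = sin(true dip), so the dip seen on
# an unmigrated zero-offset section is always gentler than the true dip.
true_dip = math.radians(30.0)
apparent_dip = math.atan(math.sin(true_dip))
print(f"true dip 30.0 deg -> apparent dip {math.degrees(apparent_dip):.1f} deg")   # ~26.6 deg
</syntaxhighlight>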
Use.
For a geophysicist, complex geology is defined as anywhere there is an abrupt or sharp contrast in lateral and/or vertical velocity (e.g. a sudden change in rock type or lithology which causes a sharp change in seismic wave velocity).
Some examples of what a geophysicist considers complex geology are: faulting, folding, (some) fracturing, salt bodies, and unconformities. In these situations a form of migration is used called pre-stack migration (PreSM), in which all traces are migrated before being moved to zero-offset. Consequently, much more information is used, which results in a much better image, along with the fact that PreSM honours velocity changes more accurately than post-stack migration.
Types of migration.
Depending on budget, time restrictions and the subsurface geology, geophysicists can employ one of two fundamental types of migration algorithms, defined by the domain in which they are applied: time migration and depth migration.
Time migration.
Time migration is applied to seismic data in time coordinates. This type of migration makes the assumption of only mild lateral velocity variations and this breaks down in the presence of most interesting and complex subsurface structures, particularly salt. Some popularly used time migration algorithms are: Stolt migration, Gazdag and Finite-difference migration.
Depth migration.
Depth Migration is applied to seismic data in depth (regular Cartesian) coordinates, which must be calculated from seismic data in time coordinates. This method does therefore require a velocity model, making it resource-intensive because building a seismic velocity model is a long and iterative process. The significant advantage to this migration method is that it can be successfully used in areas with lateral velocity variations, which tend to be the areas that are most interesting to petroleum geologists. Some of the popularly used depth migration algorithms are Kirchhoff depth migration, Reverse Time Migration (RTM), Gaussian Beam Migration and Wave-equation migration.
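A toy example of the time-to-depth mapping that such a velocity model provides (the one-dimensional, three-layer interval-velocity model below is purely illustrative):
<syntaxhighlight lang="python">
import numpy as np

dt = 0.004                                            # time sample interval, s
v_int = np.concatenate([np.full(250, 1800.0),         # 0.0-1.0 s interval velocity, m/s
                        np.full(250, 2400.0),         # 1.0-2.0 s
                        np.full(250, 3200.0)])        # 2.0-3.0 s
depth = np.cumsum(v_int * dt / 2.0)                   # depth at the base of each time sample

for target in (1.0, 2.0, 3.0):                        # two-way times to convert, s
    i = int(round(target / dt)) - 1
    print(f"t = {target:.1f} s  ->  z = {depth[i]:.0f} m")   # 900, 2100, 3700 m
</syntaxhighlight>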
Resolution.
The goal of migration is to ultimately increase spatial resolution and one of the basic assumptions made about the seismic data is that it only shows primary reflections and all noise has been removed. In order to ensure maximum resolution (and therefore maximum uplift in image quality) the data should be sufficiently pre-processed before migration. Noise that may be easy to distinguish pre-migration could be smeared across the entire aperture length during migration, reducing image sharpness and clarity.
A further basic consideration is whether to use 2D or 3D migration. If the seismic data has an element of cross-dip (a layer that dips perpendicular to the line of acquisition) then the primary reflection will originate from out-of-plane and 2D migration cannot put the energy back to its origin. In this case, 3D migration is needed to attain the best possible image.
Modern seismic processing computers are more capable of performing 3D migration, so the question of whether to allocate resources to performing 3D migration is less of a concern.
Graphical migration.
The simplest form of migration is that of graphical migration. Graphical migration assumes a constant velocity world and zero-offset data, in which a geophysicist draws spheres or circles from the receiver to the event location for all events. The intersection of the circles then form the reflector's "true" location in time or space. An example of such can be seen in the diagram.
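The same idea can be expressed as a toy numerical procedure. The sketch below is a simplified constant-velocity illustration written for this description — the grid sizes, velocity and single synthetic diffractor are arbitrary choices, and it is not production migration code. It builds the zero-offset diffraction hyperbola of one point scatterer and then spreads every recorded sample along its semicircle of possible origins; the semicircles reinforce one another only at the true scatterer position.
<syntaxhighlight lang="python">
import numpy as np

v = 2000.0                      # assumed constant velocity, m/s
dx, dz, dt = 10.0, 10.0, 0.004  # grid spacings (m, m, s)
nx, nz, nt = 200, 150, 300

# Synthetic zero-offset section: one point diffractor at (x0, z0) recorded as
# the hyperbola t(x) = 2*sqrt((x - x0)^2 + z0^2) / v.
x = np.arange(nx) * dx
x0, z0 = 1000.0, 600.0
section = np.zeros((nt, nx))
t_hyp = 2.0 * np.hypot(x - x0, z0) / v
it = np.round(t_hyp / dt).astype(int)
ok = it < nt
section[it[ok], np.flatnonzero(ok)] = 1.0

# "Graphical" migration: spread each sample over its semicircle of radius v*t/2.
image = np.zeros((nz, nx))
for ix in range(nx):
    for its in np.flatnonzero(section[:, ix]):
        r = v * its * dt / 2.0
        z2 = r * r - (x - x[ix]) ** 2          # squared depth on the semicircle
        valid = np.flatnonzero(z2 >= 0.0)
        iz = np.round(np.sqrt(z2[valid]) / dz).astype(int)
        keep = iz < nz
        image[iz[keep], valid[keep]] += section[its, ix]

# The stacked semicircles peak near the diffractor location (~1000 m, ~600 m).
iz_pk, ix_pk = np.unravel_index(np.argmax(image), image.shape)
print("imaged point:", ix_pk * dx, "m along the line,", iz_pk * dz, "m deep")
</syntaxhighlight>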
Technical details.
Migration of seismic data is the correction of the flat-geological-layer assumption by a numerical, grid-based spatial convolution of the seismic data to account for dipping events (where geological layers are not flat). There are many approaches, such as the popular Kirchhoff migration, but it is generally accepted that processing large spatial sections (apertures) of the data at a time introduces fewer errors, and that depth migration is far superior to time migration with large dips and with complex salt bodies.
Basically, it repositions/moves the energy (seismic data) from the recorded locations to the locations with the correct common midpoint (CMP). While the seismic data is received at the proper locations originally (according to the laws of nature), these locations do not correspond with the assumed CMP for that location. Though stacking the data without the migration corrections yields a somewhat inaccurate picture of the subsurface, migration is preferred because it produces a better image of the subsurface, which is needed to drill and maintain oilfields. This process is a central step in the creation of an image of the subsurface from active source seismic data collected at the surface, seabed, boreholes, etc., and therefore is used on industrial scales by oil and gas companies and their service providers on digital computers.
Explained in another way, this process attempts to account for wave dispersion from dipping reflectors and also for the spatial and directional seismic wave speed (heterogeneity) variations, which cause wavefields (modelled by ray paths) to bend, wave fronts to cross (caustics), and waves to be recorded at positions different from those that would be expected under straight ray or other simplifying assumptions. Finally, this process often attempts to also preserve and extract the formation interface reflectivity information embedded in the seismic data amplitudes, so that they can be used to reconstruct the elastic properties of the geological formations (amplitude preservation, seismic inversion). There are a variety of migration algorithms, which can be classified by their output domain into the broad categories of time migration or depth migration, and, independently, into pre-stack migration or post-stack migration techniques. Depth migration begins with time data converted to depth data by a spatial geological velocity profile. Post-stack migration begins with seismic data which has already been stacked, and thus has already lost valuable velocity analysis information.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "d=\\frac{v t}{2},"
},
{
"math_id": 1,
"text": "\\tan \\xi_a = \\sin \\xi,"
}
] | https://en.wikipedia.org/wiki?curid=11956089 |
1196 | Angle | Figure formed by two rays meeting at a common point
In Euclidean geometry, an angle is the figure formed by two rays, called the "sides" of the angle, sharing a common endpoint, called the "vertex" of the angle.
Angles formed by two rays are also known as plane angles as they lie in the plane that contains the rays. Angles are also formed by the intersection of two planes; these are called "dihedral angles". Two intersecting curves may also define an angle, which is the angle of the rays lying tangent to the respective curves at their point of intersection.
The magnitude of an angle is called an angular measure or simply "angle". "Angle of rotation" is a measure conventionally defined as the ratio of a circular arc length to its radius, and may be a negative number. In the case of a geometric angle, the arc is centered at the vertex and delimited by the sides. In the case of a rotation, the arc is centered at the center of the rotation and delimited by any other point and its image by the rotation.
History and etymology.
The word "angle" comes from the Latin word , meaning "corner". Cognate words include the Greek () meaning "crooked, curved" and the English word "ankle". Both are connected with the Proto-Indo-European root "*ank-", meaning "to bend" or "bow".
Euclid defines a plane angle as the inclination to each other, in a plane, of two lines that meet each other and do not lie straight with respect to each other. According to the Neoplatonic metaphysician Proclus, an angle must be either a quality, a quantity, or a relationship. The first concept, angle as quality, was used by Eudemus of Rhodes, who regarded an angle as a deviation from a straight line; the second, angle as quantity, by Carpus of Antioch, who regarded it as the interval or space between the intersecting lines; Euclid adopted the third: angle as a relationship.
Identifying angles.
In mathematical expressions, it is common to use Greek letters (α, β, γ, θ, φ, . . . ) as variables denoting the size of some angle (the symbol π is typically not used for this purpose to avoid confusion with the constant denoted by that symbol). Lower case Roman letters ("a", "b", "c", . . . ) are also used. In contexts where this is not confusing, an angle may be denoted by the upper case Roman letter denoting its vertex. See the figures in this article for examples.
The three defining points may also identify angles in geometric figures. For example, the angle with vertex A formed by the rays AB and AC (that is, the half-lines from point A through points B and C) is denoted ∠BAC or formula_0. Where there is no risk of confusion, the angle may sometimes be referred to by a single vertex alone (in this case, "angle A").
Potentially, an angle denoted as, say, ∠BAC might refer to any of four angles: the clockwise angle from B to C about A, the anticlockwise angle from B to C about A, the clockwise angle from C to B about A, or the anticlockwise angle from C to B about A, where the direction in which the angle is measured determines its sign (see "Signed angles" below). However, in many geometrical situations, it is evident from the context that the positive angle less than or equal to 180 degrees is meant, and in these cases, no ambiguity arises. Otherwise, to avoid ambiguity, specific conventions may be adopted so that, for instance, ∠BAC always refers to the anticlockwise (positive) angle from B to C about A and ∠CAB to the anticlockwise (positive) angle from C to B about A.
Types.
Individual angles.
There is some common terminology for angles, whose measure is always non-negative (see "Signed angles" below).
The names, intervals, and measuring units are shown in the table below:
Vertical and <templatestyles src="Template:Visible anchor/styles.css" />adjacent angle pairs.
When two straight lines intersect at a point, four angles are formed. Pairwise, these angles are named according to their location relative to each other.
A transversal is a line that intersects a pair of (often parallel) lines and is associated with "exterior angles", "interior angles", "alternate exterior angles", "alternate interior angles", "corresponding angles", and "consecutive interior angles".
Combining angle pairs.
The angle addition postulate states that if B is in the interior of angle AOC, then
formula_1
I.e., the measure of the angle AOC is the sum of the measure of angle AOB and the measure of angle BOC.
Three special angle pairs involve the summation of angles:
Measuring angles.
The size of a geometric angle is usually characterized by the magnitude of the smallest rotation that maps one of the rays into the other. Angles of the same size are said to be "equal", "congruent", or "equal in measure".
In some contexts, such as identifying a point on a circle or describing the "orientation" of an object in two dimensions relative to a reference orientation, angles that differ by an exact multiple of a full turn are effectively equivalent. In other contexts, such as identifying a point on a spiral curve or describing an object's "cumulative rotation" in two dimensions relative to a reference orientation, angles that differ by a non-zero multiple of a full turn are not equivalent.
To measure an angle θ, a circular arc centered at the vertex of the angle is drawn, e.g., with a pair of compasses. The ratio of the length s of the arc by the radius r of the circle is the number of radians in the angle:
formula_2
Conventionally, in mathematics and the SI, the radian is treated as being equal to the dimensionless unit 1, thus being normally omitted.
The angle expressed by another angular unit may then be obtained by multiplying the angle by a suitable conversion constant of the form "k"/2π, where "k" is the measure of a complete turn expressed in the chosen unit (for example, "k" = 360° for degrees or 400 grad for gradians):
formula_3
The value of "θ" thus defined is independent of the size of the circle: if the length of the radius is changed, then the arc length changes in the same proportion, so the ratio "s"/"r" is unaltered.
Units.
Throughout history, angles have been measured in various units. These are known as angular units, with the most contemporary units being the degree ( ° ), the radian (rad), and the gradian (grad), though many others have been used throughout history. Most units of angular measurement are defined such that one turn (i.e., the angle subtended by the circumference of a circle at its centre) is equal to "n" units, for some whole number "n". Two exceptions are the radian (and its decimal submultiples) and the diameter part.
In the International System of Quantities, an angle is defined as a dimensionless quantity, and in particular, the radian unit is dimensionless. This convention impacts how angles are treated in dimensional analysis.
The following table lists some units used to represent angles.
Signed angles.
It is frequently helpful to impose a convention that allows positive and negative angular values to represent orientations and/or rotations in opposite directions or "sense" relative to some reference.
In a two-dimensional Cartesian coordinate system, an angle is typically defined by its two sides, with its vertex at the origin. The "initial side" is on the positive x-axis, while the other side or "terminal side" is defined by the measure from the initial side in radians, degrees, or turns, with "positive angles" representing rotations toward the positive y-axis and "negative angles" representing rotations toward the negative "y"-axis. When Cartesian coordinates are represented by "standard position", defined by the "x"-axis rightward and the "y"-axis upward, positive rotations are anticlockwise, and negative rotations are clockwise.
In many contexts, an angle of −"θ" is effectively equivalent to an angle of "one full turn minus "θ"". For example, an orientation represented as −45° is effectively equal to an orientation defined as 360° − 45° or 315°. Although the final position is the same, a physical rotation (movement) of −45° is not the same as a rotation of 315° (for example, the rotation of a person holding a broom resting on a dusty floor would leave visually different traces of swept regions on the floor).
In three-dimensional geometry, "clockwise" and "anticlockwise" have no absolute meaning, so the direction of positive and negative angles must be defined in terms of an orientation, which is typically determined by a normal vector passing through the angle's vertex and perpendicular to the plane in which the rays of the angle lie.
In navigation, bearings or azimuth are measured relative to north. By convention, viewed from above, bearing angles are positive clockwise, so a bearing of 45° corresponds to a north-east orientation. Negative bearings are not used in navigation, so a north-west orientation corresponds to a bearing of 315°.
Related quantities.
For an angular unit, it is definitional that the angle addition postulate holds. Some quantities related to angles where the angle addition postulate does not hold include:
Angles between curves.
The angle between a line and a curve (mixed angle) or between two intersecting curves (curvilinear angle) is defined to be the angle between the tangents at the point of intersection. Various names (now rarely, if ever, used) have been given to particular cases:—"amphicyrtic" (Gr. ἀμφί, on both sides, κυρτός, convex) or "cissoidal" (Gr. κισσός, ivy), biconvex; "xystroidal" or "sistroidal" (Gr. ξυστρίς, a tool for scraping), concavo-convex; "amphicoelic" (Gr. κοίλη, a hollow) or "angulus lunularis", biconcave.
Bisecting and trisecting angles.
The ancient Greek mathematicians knew how to bisect an angle (divide it into two angles of equal measure) using only a compass and straightedge but could only trisect certain angles. In 1837, Pierre Wantzel showed that this construction could not be performed for most angles.
Dot product and generalisations.
In the Euclidean space, the angle "θ" between two Euclidean vectors u and v is related to their dot product and their lengths by the formula
formula_4
This formula supplies an easy method to find the angle between two planes (or curved surfaces) from their normal vectors and between skew lines from their vector equations.
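For instance, a short Python sketch (using NumPy; illustrative, not part of the article) recovers θ from the dot-product formula above, clamping the cosine to guard against floating-point round-off:

```python
import numpy as np

def angle_between(u: np.ndarray, v: np.ndarray) -> float:
    """Angle in radians between two nonzero Euclidean vectors."""
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clamp to [-1, 1] so arccos never receives a value slightly out of range.
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 1.0, 0.0])
print(np.degrees(angle_between(u, v)))  # 45.0
```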
Inner product.
To define angles in an abstract real inner product space, we replace the Euclidean dot product ( · ) by the inner product formula_5, i.e.
formula_6
In a complex inner product space, the expression for the cosine above may give non-real values, so it is replaced with
formula_7
or, more commonly, using the absolute value, with
formula_8
The latter definition ignores the direction of the vectors. It thus describes the angle between one-dimensional subspaces formula_9 and formula_10 spanned by the vectors formula_11 and formula_12 correspondingly.
Angles between subspaces.
The definition of the angle between one-dimensional subspaces formula_9 and formula_10 given by
formula_13
in a Hilbert space can be extended to subspaces of finite dimensions. Given two subspaces formula_14, formula_15 with formula_16, this leads to a definition of formula_17 angles called canonical or principal angles between subspaces.
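A minimal numerical sketch of how these principal angles can be computed, assuming the subspaces are given by spanning columns (this construction is a standard one and is not taken from the article): orthonormalize each basis and take the singular values of the product of the bases, which are the cosines of the principal angles.

```python
import numpy as np

def principal_angles(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Principal angles (radians) between the column spaces of A (n x k)
    and B (n x l), k <= l; returned in ascending order."""
    Qa, _ = np.linalg.qr(A)          # orthonormal basis of span(A)
    Qb, _ = np.linalg.qr(B)          # orthonormal basis of span(B)
    sigma = np.linalg.svd(Qa.T @ Qb, compute_uv=False)
    return np.arccos(np.clip(sigma, -1.0, 1.0))

# Two planes in R^3 sharing the x-axis and tilted 90 degrees about it:
A = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # xy-plane
B = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 1.0]])   # xz-plane
print(np.degrees(principal_angles(A, B)))            # [ 0. 90.]
```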
Angles in Riemannian geometry.
In Riemannian geometry, the metric tensor is used to define the angle between two tangents. Where "U" and "V" are tangent vectors and "g""ij" are the components of the metric tensor "G",
formula_18
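As a small numerical sketch (an assumed example, not from the article), the formula above can be evaluated directly for given components of the metric tensor; with the Euclidean metric it reduces to the ordinary angle:

```python
import numpy as np

def riemannian_angle(g: np.ndarray, U: np.ndarray, V: np.ndarray) -> float:
    """Angle between tangent vectors U and V under the metric tensor g."""
    inner = lambda x, y: float(x @ g @ y)    # g_ij x^i y^j
    cos_theta = inner(U, V) / np.sqrt(abs(inner(U, U)) * abs(inner(V, V)))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# With g = identity this is the usual Euclidean angle.
g = np.eye(2)
print(np.degrees(riemannian_angle(g, np.array([1.0, 0.0]),
                                     np.array([1.0, 1.0]))))  # 45.0
```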
Hyperbolic angle.
A hyperbolic angle is an argument of a hyperbolic function just as the "circular angle" is the argument of a circular function. The comparison can be visualized as the size of the openings of a hyperbolic sector and a circular sector since the areas of these sectors correspond to the angle magnitudes in each case. Unlike the circular angle, the hyperbolic angle is unbounded. When the circular and hyperbolic functions are viewed as infinite series in their angle argument, the circular ones are just alternating series forms of the hyperbolic functions. This comparison of the two series corresponding to functions of angles was described by Leonhard Euler in "Introduction to the Analysis of the Infinite" (1748).
Angles in geography and astronomy.
In geography, the location of any point on the Earth can be identified using a "geographic coordinate system". This system specifies the latitude and longitude of any location in terms of angles subtended at the center of the Earth, using the equator and (usually) the Greenwich meridian as references.
In astronomy, a given point on the celestial sphere (that is, the apparent position of an astronomical object) can be identified using any of several "astronomical coordinate systems", where the references vary according to the particular system. Astronomers measure the "angular separation" of two stars by imagining two lines through the center of the Earth, each intersecting one of the stars. The angle between those lines and the angular separation between the two stars can be measured.
In both geography and astronomy, a sighting direction can be specified in terms of a vertical angle such as altitude /elevation with respect to the horizon as well as the azimuth with respect to north.
Astronomers also measure objects' "apparent size" as an angular diameter. For example, the full moon has an angular diameter of approximately 0.5° when viewed from Earth. One could say, "The Moon's diameter subtends an angle of half a degree." The small-angle formula can convert such an angular measurement into a distance/size ratio.
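As a rough worked example of the small-angle relation (physical size ≈ angle in radians × distance), using approximate values for the Moon that are illustrative only and not taken from the article:

```python
import math

angular_diameter_deg = 0.5     # apparent size of the full Moon (approximate)
distance_km = 384_400          # mean Earth-Moon distance (approximate)

theta_rad = math.radians(angular_diameter_deg)
physical_diameter_km = theta_rad * distance_km   # small-angle approximation
print(round(physical_diameter_km))   # ~3355 km (the actual mean diameter is ~3474 km)
```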
Other astronomical approximations include:
These measurements depend on the individual subject, and the above should be treated as rough rule of thumb approximations only.
In astronomy, right ascension and declination are usually measured in angular units, expressed in terms of time, based on a 24-hour day.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\widehat{\\rm BAC}"
},
{
"math_id": 1,
"text": " m\\angle \\mathrm{AOC} = m\\angle \\mathrm{AOB} + m\\angle \\mathrm{BOC} "
},
{
"math_id": 2,
"text": " \\theta = \\frac{s}{r} \\, \\mathrm{rad}. "
},
{
"math_id": 3,
"text": " \\theta = \\frac{k}{2\\pi} \\cdot \\frac{s}{r}. "
},
{
"math_id": 4,
"text": " \\mathbf{u} \\cdot \\mathbf{v} = \\cos(\\theta) \\left\\| \\mathbf{u} \\right\\| \\left\\| \\mathbf{v} \\right\\| ."
},
{
"math_id": 5,
"text": " \\langle \\cdot , \\cdot \\rangle "
},
{
"math_id": 6,
"text": " \\langle \\mathbf{u} , \\mathbf{v} \\rangle = \\cos(\\theta)\\ \\left\\| \\mathbf{u} \\right\\| \\left\\| \\mathbf{v} \\right\\| ."
},
{
"math_id": 7,
"text": " \\operatorname{Re} \\left( \\langle \\mathbf{u} , \\mathbf{v} \\rangle \\right) = \\cos(\\theta) \\left\\| \\mathbf{u} \\right\\| \\left\\| \\mathbf{v} \\right\\| ."
},
{
"math_id": 8,
"text": " \\left| \\langle \\mathbf{u} , \\mathbf{v} \\rangle \\right| = \\left| \\cos(\\theta) \\right| \\left\\| \\mathbf{u} \\right\\| \\left\\| \\mathbf{v} \\right\\| ."
},
{
"math_id": 9,
"text": "\\operatorname{span}(\\mathbf{u})"
},
{
"math_id": 10,
"text": "\\operatorname{span}(\\mathbf{v})"
},
{
"math_id": 11,
"text": "\\mathbf{u}"
},
{
"math_id": 12,
"text": "\\mathbf{v}"
},
{
"math_id": 13,
"text": " \\left| \\langle \\mathbf{u} , \\mathbf{v} \\rangle \\right| = \\left| \\cos(\\theta) \\right| \\left\\| \\mathbf{u} \\right\\| \\left\\| \\mathbf{v} \\right\\| "
},
{
"math_id": 14,
"text": " \\mathcal{U} "
},
{
"math_id": 15,
"text": " \\mathcal{W} "
},
{
"math_id": 16,
"text": " \\dim ( \\mathcal{U}) := k \\leq \\dim ( \\mathcal{W}) := l "
},
{
"math_id": 17,
"text": "k"
},
{
"math_id": 18,
"text": "\n\\cos \\theta = \\frac{g_{ij} U^i V^j}{\\sqrt{ \\left| g_{ij} U^i U^j \\right| \\left| g_{ij} V^i V^j \\right|}}.\n"
}
] | https://en.wikipedia.org/wiki?curid=1196 |
11960848 | D-ary heap | The "d"-ary heap or "d"-heap is a priority queue data structure, a generalization of the binary heap in which the nodes have "d" children instead of 2. Thus, a binary heap is a 2-heap, and a ternary heap is a 3-heap. According to Tarjan and Jensen et al., "d"-ary heaps were invented by Donald B. Johnson in 1975.
This data structure allows decrease priority operations to be performed more quickly than binary heaps, at the expense of slower delete minimum operations. This tradeoff leads to better running times for algorithms such as Dijkstra's algorithm in which decrease priority operations are more common than delete min operations. Additionally, "d"-ary heaps have better memory cache behavior than binary heaps, allowing them to run more quickly in practice despite having a theoretically larger worst-case running time. Like binary heaps, "d"-ary heaps are an in-place data structure that uses no additional storage beyond that needed to store the array of items in the heap.
Data structure.
The "d"-ary heap consists of an array of "n" items, each of which has a priority associated with it. These items may be viewed as the nodes in a complete "d"-ary tree, listed in breadth first traversal order: the item at position 0 of the array (using zero-based numbering) forms the root of the tree, the items at positions 1 through "d" are its children, the next "d"2 items are its grandchildren, etc. Thus, the parent of the item at position "i" (for any "i" > 0) is the item at position and its children are the items at positions "di" + 1 through "di" + "d". According to the heap property, in a min-heap, each item has a priority that is at least as large as its parent; in a max-heap, each item has a priority that is no larger than its parent.
The minimum priority item in a min-heap (or the maximum priority item in a max-heap) may always be found at position 0 of the array. To remove this item from the priority queue, the last item "x" in the array is moved into its place, and the length of the array is decreased by one. Then, while item "x" and its children do not satisfy the heap property, item "x" is swapped with one of its children (the one with the smallest priority in a min-heap, or the one with the largest priority in a max-heap), moving it downward in the tree and later in the array, until eventually the heap property is satisfied. The same downward swapping procedure may be used to increase the priority of an item in a min-heap, or to decrease the priority of an item in a max-heap.
To insert a new item into the heap, the item is appended to the end of the array, and then while the heap property is violated it is swapped with its parent, moving it upward in the tree and earlier in the array, until eventually the heap property is satisfied. The same upward-swapping procedure may be used to decrease the priority of an item in a min-heap, or to increase the priority of an item in a max-heap.
To create a new heap from an array of "n" items, one may loop over the items in reverse order, starting from the item at position "n" − 1 and ending at the item at position 0, applying the downward-swapping procedure for each item.
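The procedures described above (downward swapping after a deletion, upward swapping after an insertion, and bottom-up heap construction) might be sketched in Python roughly as follows; this is an illustrative min-heap implementation, not code from the cited sources:

```python
class DAryMinHeap:
    def __init__(self, d: int = 2, items=None):
        self.d = d
        self.a = list(items) if items else []
        # Build the heap by sifting down every position in reverse order.
        for i in range(len(self.a) - 1, -1, -1):
            self._sift_down(i)

    def _sift_up(self, i: int) -> None:
        while i > 0:
            p = (i - 1) // self.d            # parent index
            if self.a[i] >= self.a[p]:
                break
            self.a[i], self.a[p] = self.a[p], self.a[i]
            i = p

    def _sift_down(self, i: int) -> None:
        n = len(self.a)
        while True:
            first = self.d * i + 1
            if first >= n:                   # i is a leaf
                break
            c = min(range(first, min(first + self.d, n)),
                    key=lambda j: self.a[j])  # smallest child
            if self.a[c] >= self.a[i]:
                break
            self.a[i], self.a[c] = self.a[c], self.a[i]
            i = c

    def push(self, x) -> None:
        self.a.append(x)
        self._sift_up(len(self.a) - 1)

    def pop_min(self):
        top, last = self.a[0], self.a.pop()
        if self.a:
            self.a[0] = last
            self._sift_down(0)
        return top

h = DAryMinHeap(d=4, items=[5, 3, 8, 1, 9, 2])
h.push(0)
out = []
while h.a:
    out.append(h.pop_min())
print(out)   # [0, 1, 2, 3, 5, 8, 9]
```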
Analysis.
In a "d"-ary heap with "n" items in it, both the upward-swapping procedure and the downward-swapping procedure may perform as many as log"d" "n" = log "n" / log "d" swaps. In the upward-swapping procedure, each swap involves a single comparison of an item with its parent, and takes constant time. Therefore, the time to insert a new item into the heap, to decrease the priority of an item in a min-heap, or to increase the priority of an item in a max-heap, is O(log "n" / log "d"). In the downward-swapping procedure, each swap involves "d" comparisons and takes O("d") time: it takes "d" − 1 comparisons to determine the minimum or maximum of the children and then one more comparison against the parent to determine whether a swap is needed. Therefore, the time to delete the root item, to increase the priority of an item in a min-heap, or to decrease the priority of an item in a max-heap, is O("d" log "n" / log "d").
When creating a "d"-ary heap from a set of "n" items, most of the items are in positions that will eventually hold leaves of the "d"-ary tree, and no downward swapping is performed for those items. At most "n"/"d" + 1 items are non-leaves, and may be swapped downwards at least once, at a cost of O("d") time to find the child to swap them with. At most "n"/"d"2 + 1 nodes may be swapped downward two times, incurring an additional O("d") cost for the second swap beyond the cost already counted in the first term, etc. Therefore, the total amount of time to create a heap in this way is
formula_0
The exact value of the above (the worst-case number of comparisons during the construction of d-ary heap) is known to be equal to:
formula_1,
where sd(n) is the sum of all digits of the standard base-d representation of n and ed(n) is the exponent of d in the factorization of n.
This reduces to
formula_2,
for d = 2, and to
formula_3,
for d = 3.
The space usage of the "d"-ary heap, with insert and delete-min operations, is linear, as it uses no extra storage other than an array containing a list of the items in the heap. If changes to the priorities of existing items need to be supported, then one must also maintain pointers from the items to their positions in the heap, which again uses only linear storage.
Applications.
When operating on a graph with m edges and n vertices, both Dijkstra's algorithm for shortest paths and Prim's algorithm for minimum spanning trees use a min-heap in which there are "n" delete-min operations and as many as m decrease-priority operations. By using a d-ary heap with "d" = "m"/"n", the total times for these two types of operations may be balanced against each other, leading to a total time of O("m" log"m"/"n" "n") for the algorithm, an improvement over the O("m" log "n") running time of binary heap versions of these algorithms whenever the number of edges is significantly larger than the number of vertices. An alternative priority queue data structure, the Fibonacci heap, gives an even better theoretical running time of O("m" + "n" log "n"), but in practice d-ary heaps are generally at least as fast, and often faster, than Fibonacci heaps for this application.
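In a practical implementation, the balancing of operation costs described above might be expressed simply as choosing the arity from the graph's edge/vertex ratio; a hedged sketch (the function name is illustrative):

```python
def choose_arity(num_edges: int, num_vertices: int) -> int:
    """Pick d of roughly m/n (at least 2) to balance decrease-priority
    and delete-min costs in Dijkstra's or Prim's algorithm."""
    return max(2, num_edges // max(1, num_vertices))

# A dense graph with 1,000 vertices and 50,000 edges would use a 50-ary heap.
print(choose_arity(50_000, 1_000))   # 50
```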
4-heaps may perform better than binary heaps in practice, even for delete-min operations. Additionally,
a d-ary heap typically runs much faster than a binary heap for heap sizes that exceed the size of the computer's cache memory:
A binary heap typically requires more cache misses and virtual memory page faults than a d-ary heap, each one taking far more time than the extra work incurred by the additional comparisons a d-ary heap makes compared to a binary heap.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sum_{i=1}^{\\log_d n} \\left(\\frac{n}{d^i}+1\\right) O(d) = O(n)."
},
{
"math_id": 1,
"text": " \\frac{d}{d-1} (n - s_d (n)) - (d-1 - (n \\bmod d)) \\left( e_d \\left( \\left\\lfloor \\frac{n}{d} \\right\\rfloor\\right) + 1\\right) "
},
{
"math_id": 2,
"text": " 2 n - 2 s_2 (n) - e_2 (n) "
},
{
"math_id": 3,
"text": " \\frac{3}{2} (n - s_3 (n)) - 2 e_3 (n) - e_3 (n-1) "
}
] | https://en.wikipedia.org/wiki?curid=11960848 |
1196186 | Superhard material | Material with Vickers hardness exceeding 40 gigapascals
A superhard material is a material with a hardness value exceeding 40 gigapascals (GPa) when measured by the Vickers hardness test. They are virtually incompressible solids with high electron density and high bond covalency. As a result of their unique properties, these materials are of great interest in many industrial areas including, but not limited to, abrasives, polishing and cutting tools, disc brakes, and wear-resistant and protective coatings.
Diamond is the hardest known material to date, with a Vickers hardness in the range of 70–150 GPa. Diamond demonstrates both high thermal conductivity and electrically insulating properties, and much attention has been put into finding practical applications of this material. However, diamond has several limitations for mass industrial application, including its high cost and oxidation at temperatures above 800 °C. In addition, diamond dissolves in iron and forms iron carbides at high temperatures and therefore is inefficient in cutting ferrous materials including steel. Therefore, recent research of superhard materials has been focusing on compounds which would be thermally and chemically more stable than pure diamond.
The search for new superhard materials has generally taken two paths. In the first approach, researchers emulate the short, directional covalent carbon bonds of diamond by combining light elements like boron, carbon, nitrogen, and oxygen. This approach became popular in the late 1980s with the exploration of C3N4 and B-C-N ternary compounds. The second approach towards designing superhard materials incorporates these lighter elements (B, C, N, and O), but also introduces transition metals with high valence electron densities to provide high incompressibility. In this way, metals with high bulk moduli but low hardness are coordinated with small covalent-forming atoms to produce superhard materials. Tungsten carbide is an industrially-relevant manifestation of this approach, although it is not considered superhard. Alternatively, borides combined with transition metals have become a rich area of superhard research and have led to discoveries such as ReB2, OsB2, and WB4.
Superhard materials can be generally classified into two categories: intrinsic compounds and extrinsic compounds. The intrinsic group includes diamond, cubic boron nitride (c-BN), carbon nitrides, and ternary compounds such as B-N-C, which possess an innate hardness. Conversely, extrinsic materials are those that have superhardness and other mechanical properties that are determined by their microstructure rather than composition. An example of extrinsic superhard material is nanocrystalline diamond known as aggregated diamond nanorods.
Definition and mechanics of hardness.
The hardness of a material is directly related to its incompressibility, elasticity and resistance to change in shape. A superhard material has high shear modulus, high bulk modulus, and does not deform plastically. Ideally superhard materials should have a defect-free, isotropic lattice. This greatly reduces structural deformations that can lower the strength of the material. However, defects can actually strengthen some covalent structures. Traditionally, high-pressure and high-temperature (HPHT) conditions have been used to synthesize superhard materials, but recent superhard material syntheses aim at using less energy and lower cost materials.
Historically, hardness was first defined as the ability of one material to scratch another and quantified by an integer (sometimes half-integer) from 0 to 10 on the Mohs scale. This scale was, however, quickly found too discrete and non-linear. Measuring the mechanical hardness of materials then changed to using a nanoindenter (usually made of diamond) and evaluating bulk moduli, and the Brinell, Rockwell, Knoop, and Vickers scales have been developed. Whereas the Vickers scale is widely accepted as the most common test, there remain controversies on the weight load to be applied during the test. This is because Vickers hardness values are load-dependent. An indent made with 0.5 N will indicate a higher hardness value than an indent made with 50 N. This phenomenon is known as the indentation size effect (ISE). Thus, hardness values are not meaningful unless the load is also reported. Some argue that hardness values should consistently be reported in the asymptotic (high-load) region, as this is a more standardized representation of a material's hardness.
Bulk moduli, shear moduli, and elasticity are the key factors in the superhard classification process. The incompressibility of a material is quantified by the bulk modulus B, which measures the resistance of a solid to volume compression under hydrostatic stress as B = −Vdp/dV. Here V is the volume, p is pressure, and dp/dV is the partial derivative of pressure with respect to the volume. The bulk modulus test uses an indenter tool to form a permanent deformation in a material. The size of the deformation depends on the material's resistance to the volume compression made by the tool. Elements with small molar volumes and strong interatomic forces usually have high bulk moduli. Bulk modulus was the first major test of hardness and was originally shown to be correlated with the molar volume (Vm) and cohesive energy (Ec) as B ~ Ec/Vm.
Bulk modulus was believed to be a direct measure of a material's hardness but this no longer remains the dominant school of thought. For example, some alkali and noble metals (Pd, Ag) have anomalously high ratio of the bulk modulus to the Vickers or Brinell hardness. In the early 2000s, a direct relationship between bulk modulus and valence electron density was found as the more electrons were present the greater the repulsions within the structure were. Bulk modulus is still used as a preliminary measure of a material as superhard but it is now known that other properties must be taken into account.
In contrast to bulk modulus, shear modulus measures the resistance to shape change at a constant volume, taking into account the crystalline plane and direction of shear. The shear modulus G is defined as the ratio of shear stress to shear strain: G = stress/strain = F·L/(A·dx), where F is the applied force, A is the area upon which the force acts, dx is the resulting displacement and L is the initial length. The larger the shear modulus, the greater the ability of a material to resist shearing forces. Therefore, the shear modulus is a measure of rigidity. For an isotropic material, shear modulus is related to bulk modulus as G = 3B(1 − 2v)/[2(1 + v)], where v is the Poisson's ratio, which is typically ~0.1 in covalent materials. If a material contains highly directional bonds, the shear modulus will increase and give a low Poisson ratio.
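As a quick numerical check of this relation, using approximate literature values for diamond (B ≈ 443 GPa and v ≈ 0.07; these numbers are assumptions for illustration only):

```python
def shear_from_bulk(B: float, v: float) -> float:
    """Isotropic elasticity relation: G = 3B(1 - 2v) / (2(1 + v))."""
    return 3 * B * (1 - 2 * v) / (2 * (1 + v))

# Approximate inputs for diamond give a shear modulus near the reported ~535 GPa.
print(round(shear_from_bulk(443, 0.07)))   # 534
```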
A material is also considered hard if it resists plastic deformation. If a material has short covalent bonds, atomic dislocations that lead to plastic deformation are less likely to occur than in materials with longer, delocalized bonds. If a material contains many delocalized bonds it is likely to be soft. Somewhat related to hardness is another mechanical property fracture toughness, which is a material's ability to resist breakage from forceful impact (note that this concept is distinct from the notion of toughness). A superhard material is not necessarily "supertough". For example, the fracture toughness of diamond is about 7–10 MPa·m1/2, which is high compared to other gemstones and ceramic materials, but poor compared to many metals and alloys – common steels and aluminium alloys have the toughness values at least 5 times higher.
Several properties must be taken into account when evaluating a material as (super)hard. While hard materials have high bulk moduli, a high bulk modulus does not mean a material is hard. Inelastic characteristics must be considered as well, and shear modulus might even provide a better correlation with hardness than bulk modulus. Covalent materials generally have high bond-bending force constants and high shear moduli and are more likely to give superhard structures than, for example, ionic solids.
Diamond.
Diamond is an allotrope of carbon where the atoms are arranged in a modified version of face-centered cubic ("fcc") structure known as "diamond cubic". It is known for its hardness (see table above) and incompressibility and is targeted for some potential optical and electrical applications. The properties of individual natural diamonds or carbonado vary too widely for industrial purposes, and therefore synthetic diamond became a major research focus.
Synthetic diamond.
The high-pressure synthesis of diamond in 1953 in Sweden and in 1954 in the US, made possible by the development of new apparatus and techniques, became a milestone in synthesis of artificial superhard materials. The synthesis clearly showed the potential of high-pressure applications for industrial purposes and stimulated growing interest in the field. Four years after the first synthesis of artificial diamond, cubic boron nitride c-BN was obtained and found to be the second hardest solid.
Synthetic diamond can exist as a single, continuous crystal or as small polycrystals interconnected through the grain boundaries. The inherent spatial separation of these subunits causes the formation of grains, which are visible by the unaided eye due to the light absorption and scattering properties of the material.
The hardness of synthetic diamond (70–150 GPa) is very dependent on the relative purity of the crystal itself. The more perfect the crystal structure, the harder the diamond becomes. It has been reported that HPHT single crystals and nanocrystalline diamond aggregates (aggregated diamond nanorods) can be harder than natural diamond.
Historically, it was thought that synthetic diamond should be structurally perfect to be useful. This is because diamond was mainly preferred for its aesthetic qualities, and small flaws in structure and composition were visible to the naked eye. Although this is true, the properties associated with these small changes have led to interesting new potential applications of synthetic diamond. For example, nitrogen doping can enhance the mechanical strength of diamond, and heavy doping with boron (several atomic percent) makes it a superconductor.
In 2014, researchers reported on the synthesis of nano-twinned diamond with Vickers hardness values up to 200 GPa. The authors attribute the unprecedented hardness to the Hall-Petch effect, which predicts that smaller microstructural features can lead to enhanced hardness due to higher density of boundaries that stop dislocations. They achieve twins with an average thickness of 5 nm using a precursor of onion carbon nanoparticles subjected to high temperature and pressure. They also simultaneously achieve an oxidation temperature that is 200 °C higher than that of natural diamond. Higher thermal stability is relevant to industrial applications such as cutting tools, where high temperatures can lead to rapid diamond degradation.
Dense Amorphous Carbon.
A dense AM-III form of transparent amorphous carbon has a Vickers hardness of 113 GPa. This heat-treated fullerene is currently the hardest amorphous material.
Cubic boron nitride.
History.
Cubic boron nitride or c-BN was first synthesized in 1957 by Robert H. Wentorf at General Electric, shortly after the synthesis of diamond. The general process for c-BN synthesis is the dissolution of hexagonal boron nitride (h-BN) in a solvent-catalyst, usually alkali or alkaline earth metals or their nitrides, followed by spontaneous nucleation of c-BN under high pressure, high temperature (HPHT) conditions. The yield of c-BN is lower and substantially slower compared to diamond's synthetic route due to the complicated intermediate steps. Its insolubility in iron and other metal alloys makes it more useful for some industrial applications than diamond.
Pure cubic boron nitride is transparent or slightly amber. Different colors can be produced depending on defects or an excess of boron (less than 1%). Defects can be produced by doping solvent-catalysts (i.e. Li, Ca, or Mg nitrides) with Al, B, Ti, or Si. This induces a change in the morphology and color of c-BN crystals.
The result is darker and larger (500 μm) crystals with better shapes and a higher yield.
Structure and properties.
Cubic boron nitride adopts a sphalerite crystal structure, which can be constructed by replacing every two carbon atoms in diamond with one boron atom and one nitrogen atom. The short B-N bond (1.57 Å) is close to the diamond C-C bond length (1.54 Å), which results in strong covalent bonding between atoms in the same fashion as in diamond. The slight decrease in covalency for B-N bonds compared to C-C bonds reduces the hardness from ~100 GPa for diamond down to 48 GPa in c-BN. As diamond is less stable than graphite, c-BN is less stable than h-BN, but the conversion rate between those forms is negligible at room temperature.
Cubic boron nitride is insoluble in iron, nickel, and related alloys at high temperatures, but it binds well with metals due to formation of interlayers of metal borides and nitrides. It is also insoluble in most acids, but is soluble in alkaline molten salts and nitrides, such as LiOH, KOH, NaOH/Na2CO3, NaNO3 which are used to etch c-BN. Because of its stability with heat and metals, c-BN surpasses diamond in mechanical applications. The thermal conductivity of BN is among the highest of all electric insulators. In addition, c-BN consists of only light elements and has low X-ray absorptivity, capable of reducing the X-ray absorption background.
Research and development.
Due to its great chemical and mechanical robustness, c-BN has widespread application as an abrasive, such as on cutting tools and scratch resistant surfaces. Cubic boron nitride is also highly transparent to X-rays. This, along with its high strength, makes it possible to have very thin coatings of c-BN on structures that can be inspected using X-rays. Several hundred tonnes of c-BN are produced worldwide each year. By modification, Borazon, a US brand name of c-BN, is used in industrial applications to shape tools, as it can withstand temperatures greater than 2,000 °C. Cubic boron nitride-coated grinding wheels, referred to as Borazon wheels, are routinely used in the machining of hard ferrous metals, cast irons, and nickel-base and cobalt-base superalloys. Other brand names, such as Elbor and Cubonite, are marketed by Russian vendors.
New approaches in research focus on improving c-BN pressure capabilities of the devices used for c-BN synthesis. At present, the capabilities for the production of c-BN are restricted to pressures of about 6 GPa. Increasing the pressure limit will permit synthesis of larger single crystals than from the present catalytic synthesis. However, the use of solvents under supercritical conditions for c-BN synthesis has been shown to reduce pressure requirements. The high cost of c-BN still limits its application, which motivates the search for other superhard materials.
Carbon nitride.
The structure of beta carbon nitride (β-C3N4) was first proposed by Amy Liu and Marvin Cohen in 1989. It is isostructural with Si3N4 and was predicted to be harder than diamond. The calculated bond length was 1.47 Å, 5% shorter than the C-C bond length in diamond. Later calculations indicated that the shear modulus is 60% of that of diamond, and carbon nitride is less hard than c-BN.
Despite two decades of pursuit of this compound, no synthetic sample of C3N4 has validated the hardness predictions; this has been attributed to the difficulty in synthesis and the instability of C3N4. Carbon nitride is only stable at a pressure that is higher than that of the graphite-to-diamond transformation. The synthesis conditions would require extremely high pressures because carbon is four- and sixfold coordinated. In addition, C3N4 would pose problems of carbide formation if they were to be used to machine ferrous metals. Although publications have reported preparation of C3N4 at lower pressures than stated, synthetic C3N4 was not proved superhard.
Boron carbon nitride.
The similar atomic sizes of boron, carbon and nitrogen, as well as the similar structures of carbon and boron nitride polymorphs, suggest that it might be possible to synthesize a diamond-like phase containing all three elements. It is also possible to make compounds containing B-C-O, B-O-N, or B-C-O-N under high pressure, but their synthesis would be expected to require complex chemistry and, in addition, their elastic properties would be inferior to those of diamond.
Beginning in 1990, a great interest has been put in studying the possibility to synthesize dense B-C-N phases. They are expected to be thermally and chemically more stable than diamond, and harder than c-BN, and would therefore be excellent materials for high speed cutting and polishing of ferrous alloys. These characteristic properties are attributed to the diamond-like structure combined with the sp3 σ-bonds among carbon and the heteroatoms. BCxNy thin films were synthesized by chemical vapor deposition in 1972. However, data on the attempted synthesis of B-C-N dense phases reported by different authors have been contradictory. It is unclear whether the synthesis products are diamond-like solid solutions between carbon and boron nitride or just mechanical mixtures of highly dispersed diamond and c-BN. In 2001, a diamond-like-structured c-BC2N was synthesized at pressures >18 GPa and temperatures >2,200 K by a direct solid-state phase transition of graphite-like (BN)0.48C0.52. The reported Vickers and Knoop hardnesses were intermediate between diamond and c-BN, making the new phase the second hardest known material. Ternary B–C–N phases can also be made using shock-compression synthesis. It was further suggested to extend the B–C–N system to quaternary compounds with silicon included.
Metal borides.
Unlike carbon-based systems, metal borides can be easily synthesized in large quantities under ambient conditions, an important technological advantage. Most metal borides are hard; however, a few stand out among them for their particularly high hardnesses (for example, WB4, RuB2, OsB2 and ReB2).
These metal borides are still metals and not semiconductors or insulators (as indicated by their high electronic density of states at the Fermi Level); however, the additional covalent B-B and M-B bonding (M = metal) lead to high hardness. Dense heavy metals, such as osmium, rhenium, tungsten etc., are particularly apt at forming hard borides because of their high electron densities, small atomic radii, high bulk moduli, and ability to bond strongly with boron.
Osmium diboride.
Osmium diboride (OsB2) has a high bulk modulus of 395 GPa and therefore is considered as a candidate superhard material, but the maximum achieved Vickers hardness is 37 GPa, slightly below the 40 GPa limit of superhardness. A common way to synthesize OsB2 is by a solid-state metathesis reaction containing a 2:3 mixture of OsCl3:MgB2. After the MgCl2 product is washed away, X-ray diffraction indicates products of OsB2, OsB and Os. Heating this product at 1,000 °C for three days produces pure OsB2 crystalline product. OsB2 has an orthorhombic structure (space group P"mmn") with two planes of osmium atoms separated by a non-planar layer of hexagonally coordinated boron atoms; the lattice parameters are "a" = 4.684 Å, "b" = 2.872 Å and "c" = 4.096 Å. The "b" direction of the crystal is the most compressible and the "c" direction is the least compressible. This can be explained by the orthorhombic structure. When looking at the boron and osmium atoms in the "a" and "b" directions, they are arranged in a way that is offset from one another. Therefore, when they are compressed they are not pushed right up against one another. Electrostatic repulsion is the force that maximizes the materials incompressibility and so in this case the electrostatic repulsion is not taken full advantage of. When compressed in the "c" direction, the osmium and boron atoms are almost directly in line with one another and the electrostatic repulsion is therefore high, causing direction "c" to be the least compressible. This model implies that if boron is more evenly distributed throughout the lattice then incompressibility could be higher. Electron backscatter diffraction coupled with hardness measurements reveals that in the (010) plane, the crystal is 54% harder in the <100> than <001> direction. This is seen by looking at how long the indentation is along a certain direction (related to the indentations made with a Vickers hardness test). Along with the alignment of the atoms, this is also due to the short covalent B-B (1.80 Å) bonds in the <100> direction, which are absent in the <001> direction (B-B = 4.10 Å).
Rhenium borides.
Rhenium was targeted as a candidate for superhard metal borides because of its desirable physical and chemical characteristics. It has a high electron density, a small atomic radius and a high bulk modulus. When combined with boron, it makes a crystal with highly covalent bonding allowing it to be incompressible and potentially very hard. A wide array of rhenium borides have been investigated including Re3B, Re7B3, Re2B, ReB, Re2B3, Re3B7, Re2B5, ReB3 and ReB2. Each of these materials has their own set of properties and characteristics. Some show promise as superconductors and some have unique elastic and electronic properties, but the most relevant to superhard materials is ReB2.
Rhenium diboride (ReB2) is a refractory compound which was first synthesized in the 1960s, using arc melting, zone melting, or optical floating zone furnaces. An example synthesis of this material is the flux method, which is conducted by placing rhenium metal and amorphous boron in an alumina crucible with excess aluminium. This can be run with a ratio of 1:2:50 for Re:B:Al, with the excess aluminium as a growth medium. The crucible is placed in an alumina tube, inserted into a resistively heated furnace with flowing argon gas and sintered at 1,400 °C for several hours. After cooling, the aluminium is dissolved in NaOH. Each ReB2 synthesis route has its own drawbacks, and this one gives small inclusions of aluminium incorporated into the crystal lattice.
Rhenium diboride has a very high melting point approaching 2,400 °C and a highly anisotropic, layered crystal structure. Its symmetry is either hexagonal (space group P63"mc") or orthorhombic (C"mcm") depending on the phase. There, close-packed Re layers alternate with puckered triangular boron layers along the (001) plane. This can be seen above on the example of osmium diboride. The density of states for ReB2 has one of the lowest values among the metal borides, indicating strong covalent bonding and high hardness.
Owing to the anisotropic nature of this material, the hardness depends on the crystal orientation. The (002) plane contains the most covalent character and exhibits a maximum Vickers hardness value of 40.5 GPa, while the perpendicular planes were 6% lower at 38.1 GPa. These values decrease with increased load, settling at around 28 GPa each. The nanoindentation values were found to be 36.4 GPa and 34.0 GPa for the (002) and perpendicular planes respectively. The hardness values depend on the material purity and composition – the more boron the harder the boride – and the above values are for a Re:B ratio of approximately 1.00:1.85. Rhenium diboride also has a reported bulk modulus of 383 GPa and a shear modulus of 273 GPa. The hardness of rhenium diboride, like that of most other materials, also depends on the load during the test. The above values of about 40 GPa were all measured with an effective load of 0.5–1 N. At such low load, the hardness values are also overestimated for other materials, for example exceeding 100 GPa for c-BN. Other researchers, while having reproduced the high ReB2 hardness at low load, reported much lower values of 17–19 GPa at a more conventional load of 3–49 N, which makes ReB2 a hard, but not a superhard, material.
Rhenium diboride exhibits metallic conductivity which increases as temperature decreases and can be explained by a nonzero density of states due to the d and p overlap of rhenium and boron respectively. At this point, it is the only superhard material with metallic behavior. The material also exhibits relatively high thermal stability. Depending on the heating method, it will maintain its mass up to temperatures of 600–800 °C, with any drop being due to loss of absorbed water. A small loss of mass can then be seen at temperatures approaching 1,000 °C. It performs better when a slower heat ramp is utilized. Part of this small drop at around 1,000 °C was explained by the formation of a dull B2O3 coating on the surface as boron is leached out of the solid, which serves as a protective coating, thereby reducing additional boron loss. This can be easily dissolved by methanol to restore the material to its native shiny state.
Tungsten borides.
The discovery of superhard tungsten tetraboride is further evidence for the promising design approach of covalently bonding incompressible transition metals with boron. While WB4 was first synthesized and identified as the highest boride of tungsten in 1966, it was only recognized as an inexpensive superhard material in 2011.
Interestingly, lower borides of tungsten such as tungsten diboride are not superhard. Higher boron content leads to higher hardness because of the increased density of short, covalent boron-boron and boron-metal bonds. However, researchers have been able to push WB2 into the superhard regime through minority additions of other transition metals such as niobium and tantalum in the crystal structure. This mechanism of hardness enhancement is called solid solution strengthening and arises because atoms of different sizes are incorporated into the parent lattice to impede dislocation motion.
Aluminium magnesium boride.
Aluminium magnesium boride or BAM is a chemical compound of aluminium, magnesium and boron. Whereas its nominal formula is AlMgB14, the chemical composition is closer to Al0.75Mg0.75B14. It is a ceramic alloy that is highly resistive to wear and has a low coefficient of sliding friction.
Other boron-rich superhard materials.
Other hard boron-rich compounds include B4C and B6O. Amorphous a-B4C has a hardness of about 50 GPa, which is in the range of superhardness. It can be looked at as consisting of boron icosahedra-like crystals embedded in an amorphous medium. However, when studying the crystalline form of B4C, the hardness is only about 30 GPa. This crystalline form has the same stoichiometry as B13C3, which consists of boron icosahedra connected by boron and carbon atoms. Boron suboxide (B6O) has a hardness of about 35 GPa. Its structure contains eight B12 icosahedra units, which are sitting at the vertices of a rhombohedral unit cell. There are two oxygen atoms located along the (111) rhombohedral direction.
Nanostructured superhard materials.
Nanostructured superhard materials fall into the extrinsic category of superhard materials. Because molecular defects affect the superhard properties of bulk materials, the microstructure of nanostructured superhard materials gives them their unique properties. Research on synthesizing nanostructured superhard materials focuses on minimizing the microcracks that occur within the structure through grain boundary hardening. The elimination of microcracks can strengthen the material by a factor of 3 to 7 over its original strength. Grain boundary strengthening is described by the Hall-Petch equation
formula_0
Here σc is the critical fracture stress, d the crystallite size and σ0 and kgb are constants.
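A brief numerical illustration of the Hall-Petch scaling follows; the constants below are arbitrary placeholders, not measured values:

```python
import math

def hall_petch(sigma0: float, k_gb: float, d_nm: float) -> float:
    """Critical fracture stress from the Hall-Petch relation (arbitrary units)."""
    return sigma0 + k_gb / math.sqrt(d_nm)

# Reducing the crystallite size from 40 nm to 10 nm doubles the boundary-strengthening term.
for d in (40, 10):
    print(d, round(hall_petch(sigma0=1.0, k_gb=4.0, d_nm=d), 2))
# 40 -> 1.63, 10 -> 2.26
```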
If a material is brittle its strength depends mainly on the resistance to forming microcracks. The critical stress which causes the growth of a microcrack of size a0 is given by a general formula
formula_1
Here E is the Young's modulus, kcrack is a constant dependent on the nature and shape of the microcrack and the stress applied and γs the surface cohesive energy.
The average hardness of a material decreases as the crystallite size d decreases below 10 nm. Many mechanisms have been proposed for grain boundary sliding and hence material softening, but the details are still not understood. Besides grain boundary strengthening, much attention has been put into building microheterostructures, or nanostructures of two materials with very large differences in elastic moduli. Heterostructures were first proposed in 1970 and contained such highly ordered thin layers that they could not theoretically be separated by mechanical means. These highly ordered heterostructures were believed to be stronger than simple mixtures. This theory was confirmed with Al/Cu and Al/Ag structures. After the formation of Al/Cu and Al/Ag, the research was extended to multilayer systems including Cu/Ni, TiN/VN, W/WN, Hf/HfN and more. In all cases, decreasing the lattice period increased the hardness. One common form of a nanostructured material is aggregated diamond nanorods, which are harder than bulk diamond and are currently the hardest (~150 GPa) material known.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sigma_c = \\sigma_0 + \\frac{k_\\text{gb}}{\\sqrt{d}}"
},
{
"math_id": 1,
"text": "\\sigma_c = k_\\text{crack} \\sqrt{\\frac{2E\\gamma_s}{\\pi a_0}} \\propto \\frac{1}{\\sqrt{d}}"
}
] | https://en.wikipedia.org/wiki?curid=1196186 |