58097665
Nu-transform
In the theory of stochastic processes, a ν-transform is an operation that transforms a measure or a point process into a different point process. Intuitively, the ν-transform randomly relocates the points of the point process, with the type of relocation being dependent on the position of each point. Definition. For measures. Let formula_0 denote the Dirac measure on the point formula_1 and let formula_2 be a simple point measure on formula_3. This means that formula_4 for distinct formula_5 and formula_6 for every bounded set formula_7 in formula_3. Further, let formula_8 be a Markov kernel from formula_3 to formula_9. Let formula_10 be independent random elements with distribution formula_11. Then the point process formula_12 is called the ν-transform of the measure formula_2 if it is locally finite, meaning that formula_13 for every bounded set formula_7. For point processes. For a point process formula_14, a second point process formula_15 is called a formula_8-transform of formula_14 if, conditional on formula_16, the point process formula_15 is a formula_8-transform of formula_2. Properties. Stability. If formula_15 is a Cox process directed by the random measure formula_14, then the formula_8-transform of formula_15 is again a Cox process, directed by the random measure formula_17 (see Transition kernel#Composition of kernels). Therefore, the formula_8-transform of a Poisson process with intensity measure formula_2 is a Cox process directed by a random measure with distribution formula_18. Laplace transform. If formula_15 is a formula_8-transform of formula_14, then the Laplace transform of formula_15 is given by formula_19 for all bounded, positive and measurable functions formula_20.
[ { "math_id": 0, "text": " \\delta_x " }, { "math_id": 1, "text": " x " }, { "math_id": 2, "text": " \\mu " }, { "math_id": 3, "text": " S " }, { "math_id": 4, "text": " \\mu= \\sum_k \\delta_{s_k} " }, { "math_id": 5, "text": " s_k \\in S" }, { "math_id": 6, "text": " \\mu(B)< \\infty " }, { "math_id": 7, "text": " B " }, { "math_id": 8, "text": " \\nu " }, { "math_id": 9, "text": " T " }, { "math_id": 10, "text": " \\tau_k " }, { "math_id": 11, "text": " \\nu_{s_k}=\\nu(s_k,\\cdot) " }, { "math_id": 12, "text": " \\zeta = \\sum_{k} \\delta_{\\tau_k} " }, { "math_id": 13, "text": " \\zeta(B) < \\infty " }, { "math_id": 14, "text": " \\xi " }, { "math_id": 15, "text": " \\zeta " }, { "math_id": 16, "text": " \\{ \\xi=\\mu\\} " }, { "math_id": 17, "text": " \\xi \\cdot \\nu " }, { "math_id": 18, "text": " \\mu \\cdot \\nu " }, { "math_id": 19, "text": " \\mathcal L_{\\zeta}(f)= \\exp \\left( \\int \\log \\left[ \\int \\exp(-f(t)) \\mu_s(\\mathrm dt)\\right] \\xi(\\mathrm ds)\\right)" }, { "math_id": 20, "text": " f " } ]
https://en.wikipedia.org/wiki?curid=58097665
5810
Classical guitar
Member of the guitar family used in classical music. The classical guitar, also known as Spanish guitar, is a member of the guitar family used in classical music and other styles. An acoustic wooden string instrument with strings made of gut or nylon, it is a precursor of the modern steel-string acoustic and electric guitars, both of which use metal strings. Classical guitars derive from instruments such as the lute, the vihuela, and the gittern (the name being a derivative of the Greek "kithara"), which evolved into the Renaissance guitar and into the 17th and 18th-century baroque guitar. Today's "modern classical guitar" was established by the late designs of the 19th-century Spanish luthier, Antonio Torres Jurado. For a right-handed player, the traditional classical guitar has 12 frets clear of the body and is properly held up by the left leg, so that the hand that plucks or strums the strings does so near the back of the sound hole (this is called the classical position). However, the right hand may move closer to the fretboard to achieve different tonal qualities. The player typically holds the left leg higher by the use of a foot rest. The modern steel string guitar, on the other hand, usually has 14 frets clear of the body (see Dreadnought) and is commonly held with a strap around the neck and shoulder. The phrase "classical guitar" may also refer to concepts other than the instrument itself. The term "modern classical guitar" sometimes distinguishes the classical guitar from older forms of guitar, which are in their broadest sense also called "classical", or more specifically, "early guitars". Examples of early guitars include the six-string early romantic guitar (c. 1790 – 1880), and the earlier baroque guitars with five courses. The materials and the methods of classical guitar construction may vary, but the typical shape is either that of the "modern classical guitar" or that of the "historic classical guitar" similar to the early romantic guitars of Spain, France and Italy. Classical guitar strings once made of gut are now made of materials such as nylon or fluoropolymers, typically with silver-plated copper fine wire wound about the acoustically lower (d-A-E in standard tuning) strings. A guitar family tree may be identified. The flamenco guitar derives from the modern classical, but has differences in material, construction and sound. Contexts. The classical guitar has a long history, and both instrument and repertoire can be viewed from a combination of various perspectives: historical (chronological period of time), geographical, and cultural. Historical perspective. Early guitars. While "classical guitar" is today mainly associated with the modern classical guitar design, there is an increasing interest in early guitars and in understanding the link between historical repertoire and the particular period guitar that was originally used to perform this repertoire. The musicologist and author Graham Wade writes: Nowadays it is customary to play this repertoire on reproductions of instruments authentically modelled on concepts of musicological research with appropriate adjustments to techniques and overall interpretation. Thus over recent decades we have become accustomed to specialist artists with expertise in the art of vihuela (a 16th-century type of guitar popular in Spain), lute, Baroque guitar, 19th-century guitar, etc. Different types of guitars have different sound aesthetics, e.g.
different colour-spectrum characteristics (the way the sound energy is spread in the fundamental frequency and the overtones), different response, etc. These differences are due to differences in construction; for example, modern classical guitars usually use a different bracing (fan-bracing) from that used in earlier guitars (they had ladder-bracing); and a different voicing was used by the luthier. There is a historical parallel between musical styles (baroque, classical, romantic, flamenco, jazz) and the style of "sound aesthetic" of the musical instruments used, for example: Robert de Visée played a baroque guitar with a very different sound aesthetic from the guitars used by Mauro Giuliani and Luigi Legnani – they used 19th-century guitars. These guitars in turn sound different from the Torres models used by Segovia that are suited for interpretations of romantic-modern works such as Moreno Torroba. When considering the guitar from a historical perspective, the musical instrument used is as important as the musical language and style of the particular period. As an example: It is impossible to give a historically informed performance of de Visee or Corbetta (baroque guitarist-composers) on a modern classical guitar. The reason is that the baroque guitar used courses, which are two strings close together (in unison), that are plucked together. This gives baroque guitars an unmistakable sound characteristic and tonal texture that is an integral part of an interpretation. Additionally, the sound aesthetic of the baroque guitar (with its strong overtone presence) is very different from modern classical type guitars, as is shown below. Today's use of Torres and post-Torres type guitars for repertoire of all periods is sometimes critically viewed: Torres and post-Torres style modern guitars (with their fan-bracing and design) have a thick and strong tone, very suitable for modern-era repertoire. However, they are considered to emphasize the fundamental too heavily (at the expense of overtone partials) for earlier repertoire (Classical/Romantic: Carulli, Sor, Giuliani, Mertz, ...; Baroque: de Visee, ...; etc.). "Andrés Segovia presented the Spanish guitar as a versatile model for all playing styles" to the extent that, still today, "many guitarists have tunnel-vision of the world of the guitar, coming from the modern Segovia tradition". While fan-braced modern classical Torres and post-Torres style instruments coexisted with traditional ladder-braced guitars at the beginning of the 20th century, the older forms eventually fell away. Some attribute this to the popularity of Segovia, considering him "the catalyst for change toward the Spanish design and the so-called 'modern' school in the 1920s and beyond." The styles of music performed on ladder-braced guitars were becoming unfashionable—and, e.g., in Germany, more musicians were turning towards folk music (Schrammel-music and the Contraguitar). This was localized in Germany and Austria and became unfashionable again. On the other hand, Segovia was playing concerts around the world, popularizing modern classical guitar—and, in the 1920s, Spanish romantic-modern style with guitar works by Moreno Torroba, de Falla, etc. The 19th-century classical guitarist Francisco Tárrega first popularized the Torres design as a classical solo instrument. However, some maintain that Segovia's influence led to its domination over other designs. Factories around the world began producing them in large numbers. Style periods. Renaissance.
Composers of the Renaissance period who wrote for four-course guitar include Alonso Mudarra, Miguel de Fuenllana, Adrian Le Roy, Grégoire Brayssing, Guillaume de Morlaye, and Simon Gorlier. Baroque. Some well-known composers of the Baroque guitar were Gaspar Sanz, Robert de Visée, Francesco Corbetta and Santiago de Murcia. Classical and romantic. From approximately 1780 to 1850, the guitar had numerous composers and performers. Hector Berlioz studied the guitar as a teenager; Franz Schubert owned at least two and wrote for the instrument; and Ludwig van Beethoven, after hearing Giuliani play, commented that the instrument was "a miniature orchestra in itself". Niccolò Paganini was also a guitar virtuoso and composer. He once wrote: "I love the guitar for its harmony; it is my constant companion in all my travels". He also said, on another occasion: "I do not like this instrument, but regard it simply as a way of helping me to think." Francisco Tárrega. The guitarist and composer Francisco Tárrega (November 21, 1852 – December 15, 1909) was one of the great guitar virtuosos and teachers and is considered the father of modern classical guitar playing. As a professor of guitar at the conservatories of Madrid and Barcelona, he defined many elements of the modern classical technique and elevated the importance of the guitar in the classical music tradition. Modern period. At the beginning of the 1920s, Andrés Segovia popularized the guitar with tours and early phonograph recordings. Segovia collaborated with the composers Federico Moreno Torroba and Joaquín Turina with the aim of extending the guitar repertoire with new music. Segovia's tour of South America revitalized public interest in the guitar and helped the guitar music of Manuel Ponce and Heitor Villa-Lobos reach a wider audience. The composers Alexandre Tansman and Mario Castelnuovo-Tedesco were commissioned by Segovia to write new pieces for the guitar. Luiz Bonfá popularized Brazilian musical styles such as the newly created Bossa Nova, which was well received by audiences in the USA. "New music" – avant-garde. The classical guitar repertoire also includes modern contemporary works – sometimes termed "New Music" – such as Elliott Carter's "Changes", Cristóbal Halffter's "Codex I", Luciano Berio's "Sequenza XI", Maurizio Pisati's "Sette Studi", Maurice Ohana's "Si Le Jour Paraît", Sylvano Bussotti's "Rara (eco sierologico)", Ernst Krenek's "Suite für Guitarre allein, Op. 164", Franco Donatoni's "Algo: Due pezzi per chitarra", Paolo Coggiola's "Variazioni Notturne", etc. Performers known for playing this modern repertoire include Jürgen Ruck, Elena Càsoli, Leo Brouwer (when he was still performing), John Schneider, Reinbert Evers, Maria Kämmerling, Siegfried Behrend, David Starobin, Mats Scheidegger, Magnus Andersson, etc. This type of repertoire is usually performed by guitarists who have particularly chosen to focus on the avant-garde in their performances. Within the contemporary music scene itself, there are also works which are generally regarded as extreme. These include works such as Brian Ferneyhough's "Kurze Schatten II", Sven-David Sandström's "away from" and Rolf Riehm's "Toccata Orpheus", etc., which are notorious for their extreme difficulty. There are also a variety of databases documenting modern guitar works, such as Sheer Pluck and others. Background. The evolution of the classical guitar and its repertoire spans more than four centuries.
It has a history that was shaped by contributions from earlier instruments, such as the lute, the vihuela, and the baroque guitar. As one commentator has put it: The last guitarist to follow in Segovia's footsteps was Julian Bream and Julian Bream will be 73 years old on July 15th 2006. Miguel Llobet, Andrés Segovia and Julian Bream are the three performer personalities of the 20th century. Do not understand me wrong, we have many guitarists today that are very excellent performers, but none with such a distinct personality in their tone and style as Llobet, Segovia and Bream. In all instrumental areas, not just the guitar, there is a lack of individualism with a strong tendency to conformity. This I find very unfortunate since art (music, theatre or the pictorial arts) is a very individual and personal matter. History. Overview of the classical guitar's history. The origins of the modern guitar are not known with certainty. Some believe it is indigenous to Europe, while others think it is an imported instrument. Guitar-like instruments appear in ancient carvings and statues recovered from Egyptian, Sumerian, and Babylonian civilizations. This means that contemporary Iranian instruments such as the tanbur and setar are distantly related to the European guitar, as they all derive ultimately from the same ancient origins, but by very different historical routes and influences. Gitterns called "guitars" had been in use since the 13th century, but their construction and tuning were different from those of modern guitars. The period from the 1500s to the 1800s saw the most changes to the guitar. Renaissance guitar. Alonso de Mudarra's book "Tres Libros de Música", published in Spain in 1546, contains the earliest known written pieces for a four-course guitarra. This four-course "guitar" was popular in France, Spain, and Italy. In France this instrument gained popularity among aristocrats. A considerable volume of music was published in Paris from the 1550s to the 1570s: Simon Gorlier's Le Troysième Livre... mis en tablature de Guiterne was published in 1551. In 1551 Adrian Le Roy also published his Premier Livre de Tablature de Guiterne, and in the same year he also published Briefve et facile instruction pour apprendre la tablature a bien accorder, conduire, et disposer la main sur la Guiterne. Robert Ballard, Grégoire Brayssing from Augsburg, and Guillaume Morlaye (c. 1510 – c. 1558) significantly contributed to its repertoire. Morlaye's Le Premier Livre de Chansons, Gaillardes, Pavannes, Bransles, Almandes, Fantasies – which has a four-course instrument illustrated on its title page – was published in partnership with Michel Fedenzat, and among other music, they published six books of tablature by lutenist Albert de Rippe (who was very likely Guillaume's teacher). Vihuela. The written history of the classical guitar can be traced back to the early 16th century with the development of the "vihuela" in Spain. While the lute was then becoming popular in other parts of Europe, the Spaniards did not take to it well because of its association with the Moors. Instead, the lute-like vihuela appeared with two more strings that gave it more range and complexity. In its most developed form, the vihuela was a guitar-like instrument with six double strings made of gut, tuned like a modern classical guitar with the exception of the third string, which was tuned half a step lower. It has a high sound and is rather large to hold.
Few have survived, and most of what is known today comes from diagrams and paintings. "Early romantic guitar" or "Guitar during the Classical music era". The earliest extant six-string guitar is believed to have been built in 1779 by Gaetano Vinaccia (1759 – after 1831) in Naples, Italy; however, the date on the label is a little ambiguous. The Vinaccia family of luthiers is known for developing the mandolin. This guitar has been examined and does not show tell-tale signs of modifications from a double-course guitar. The authenticity of guitars allegedly produced before the 1790s is often in question. This also corresponds to when Moretti's 6-string method appeared, in 1792. Modern classical guitar. The modern classical guitar was developed in the 19th century by Antonio de Torres Jurado, Ignacio Fleta, Hermann Hauser Sr., and Robert Bouchet. The Spanish luthier and player Antonio de Torres gave the modern classical guitar its definitive form, with a broadened body, increased waist curve, thinned belly, and improved internal bracing. The modern classical guitar replaced an older form for the accompaniment of song and dance called flamenco, and a modified version, known as the flamenco guitar, was created. Technique. The fingerstyle is used fervently on the modern classical guitar. The thumb traditionally plucks the bass – or root note – whereas the fingers ring the melody and its accompanying parts. Often classical guitar technique involves the use of the nails of the right hand to pluck the notes. Noted players were: Francisco Tárrega, Emilio Pujol, Andrés Segovia, Julian Bream, Agustín Barrios, and John Williams (guitarist). Performance. The modern classical guitar is usually played in a seated position, with the instrument resting on the left lap – and the left foot placed on a footstool. Alternatively – if a footstool is not used – a "guitar support" can be placed between the guitar and the left lap (the support usually attaches to the instrument's side with suction cups). (There are of course exceptions, with some performers choosing to hold the instrument another way.) Right-handed players use the fingers of the right hand to pluck the strings, with the thumb plucking from the top of a string downwards (downstroke) and the other fingers plucking from the bottom of the string upwards (upstroke). The little finger in classical technique as it evolved in the 20th century is used only to ride along with the ring finger without striking the strings and to thus physiologically facilitate the ring finger's motion. In contrast, Flamenco technique, and classical compositions evoking Flamenco, employ the little finger semi-independently in the Flamenco four-finger rasgueado, that rapid strumming of the string by the fingers in reverse order employing the back of the fingernail—a familiar characteristic of Flamenco. Flamenco technique, in the performance of the rasgueado, also uses the upstroke of the four fingers and the downstroke of the thumb: the string is hit not only with the inner, fleshy side of the fingertip but also with the outer, fingernail side. This was also used in a technique of the vihuela called dedillo which has recently begun to be introduced on the classical guitar. Some modern guitarists, such as Štěpán Rak and Kazuhito Yamashita, use the little finger independently, compensating for the little finger's shortness by maintaining an extremely long fingernail.
Rak and Yamashita have also generalized the use of the upstroke of the four fingers and the downstroke of the thumb (the same technique as in the rasgueado of the Flamenco: as explained above the string is hit not only with the inner, fleshy side of the fingertip but also with the outer, fingernail side) both as a free stroke and as a rest stroke. Direct contact with strings. As with other plucked instruments (such as the lute), the musician directly touches the strings (usually plucking) to produce the sound. This has important consequences: Different tone/timbre (of a single note) can be produced by plucking the string in different manners (apoyando or tirando) and in different positions (such as closer and further away from the guitar bridge). For example, plucking an open string will sound brighter than playing the same note(s) on a fretted position (which would have a warmer tone). The instrument's versatility means it can create a variety of tones, but this finger-picking style also makes the instrument harder to learn than a standard acoustic guitar's strumming technique. Fingering notation. In guitar "scores", the five fingers of the right hand (which pluck the strings) are designated by the first letter of their Spanish names, namely p = thumb ("pulgar"), i = index finger ("índice"), m = middle finger ("mayor"), a = ring finger ("anular"), c = little finger or pinky ("meñique/chiquito"). The four fingers of the left hand (which fret the strings) are designated 1 = index, 2 = major, 3 = ring finger, 4 = little finger. 0 designates an open string—a string not stopped by a finger and whose full length thus vibrates when plucked. It is rare to use the left-hand thumb in performance, the neck of a classical guitar being too wide for comfort, and normal technique keeps the thumb behind the neck. However, Johann Kaspar Mertz, for example, is notable for specifying the thumb to fret bass notes on the sixth string, notated with an up arrowhead (⌃). Scores (contrary to "tablatures") do not systematically indicate the string to pluck (though the choice is usually obvious). When indicating the string is useful, the score uses the numbers 1 to 6 inside circles (highest-pitched string to lowest). Scores do not systematically indicate fretboard positions (where to put the first finger of the fretting hand), but when helpful (mostly with barré chords) the score indicates positions with Roman numerals from the first position I (index finger on the 1st fret: F-B flat-E flat-A flat-C-F) to the twelfth position XII (index finger on the 12th fret: E-A-D-G-B-E. The 12th fret is where the body begins) or even higher up to position XIX (the classical guitar most often having 19 frets, with the 19th fret being most often split and not being usable to fret the 3rd and 4th strings). Alternation. To achieve tremolo effects and rapid, fluent scale passages, the player must practice alternation, that is, never plucking a string with the same finger twice in a row. Using p to indicate the thumb, i the index finger, m the middle finger and a the ring finger, common alternation patterns include i-m and its reverse m-i, as well as the tremolo pattern p-a-m-i. Repertoire. Music written specifically for the classical guitar dates from the addition of the sixth string (the baroque guitar normally had five pairs of strings) in the late 18th century. A guitar recital may include a variety of works, e.g., works written originally for the lute or vihuela by composers such as John Dowland (b. England 1563) and Luis de Narváez (b. Spain c.
1500), and also music written for the harpsichord by Domenico Scarlatti (b. Italy 1685), for the baroque lute by Sylvius Leopold Weiss (b. Germany 1687), for the baroque guitar by Robert de Visée (b. France c. 1650) or even Spanish-flavored music written for the piano by Isaac Albéniz (b. Spain 1860) and Enrique Granados (b. Spain 1867). The most important composer who did not write for the guitar but whose music is often played on it is Johann Sebastian Bach (b. Germany 1685), whose baroque lute, violin, and cello works have proved highly adaptable to the instrument. Of music written originally for guitar, the earliest important composers are from the classical period and include Fernando Sor (b. Spain 1778) and Mauro Giuliani (b. Italy 1781), both of whom wrote in a style strongly influenced by Viennese classicism. In the 19th century, guitar composers such as Johann Kaspar Mertz (b. Slovakia, Austria 1806) were strongly influenced by the dominance of the piano. Not until the end of the nineteenth century did the guitar begin to establish its own unique identity. Francisco Tárrega (b. Spain 1852) was central to this, sometimes incorporating stylized aspects of flamenco's Moorish influences into his romantic miniatures. This was part of late 19th century mainstream European musical nationalism. Albéniz and Granados were central to this movement; their evocation of the guitar was so successful that their compositions have been absorbed into the standard guitar repertoire. The steel-string and electric guitars characteristic of the rise of rock and roll in the post-WWII era became more widely played in North America and the English-speaking world. Agustín Barrios Mangoré of Paraguay composed many works and brought into the mainstream the characteristics of Latin American music, as did the Brazilian composer Heitor Villa-Lobos. Andrés Segovia commissioned works from Spanish composers such as Federico Moreno Torroba and Joaquín Rodrigo, Italians such as Mario Castelnuovo-Tedesco and Latin American composers such as Manuel Ponce of Mexico. Other prominent Latin American composers are Leo Brouwer of Cuba, Antonio Lauro of Venezuela and Enrique Solares of Guatemala. Julian Bream of Britain managed to get nearly every British composer from William Walton and Benjamin Britten to Peter Maxwell Davies to write significant works for guitar. Bream's collaborations with tenor Peter Pears also resulted in song cycles by Britten, Lennox Berkeley and others. There are significant works by composers such as Hans Werner Henze of Germany, Gilbert Biberian of England and Roland Chadwick of Australia. The classical guitar also became widely used in popular music and rock & roll in the 1960s after guitarist Mason Williams popularized the instrument in his instrumental hit Classical Gas. Guitarist Christopher Parkening is quoted in the book "Classical Gas: The Music of Mason Williams" as saying that it is the most requested guitar piece besides Malagueña and perhaps the best-known instrumental guitar piece today. In the field of New Flamenco, the works and performances of Spanish composer and player Paco de Lucía are known worldwide. Relatively few classical guitar concertos have been written throughout history. Nevertheless, some guitar concertos are nowadays widely known and popular, especially Joaquín Rodrigo's "Concierto de Aranjuez" (with the famous theme from its 2nd movement) and "Fantasía para un gentilhombre".
Composers who also wrote famous guitar concertos include Antonio Vivaldi (originally for mandolin or lute), Mauro Giuliani, Heitor Villa-Lobos, Mario Castelnuovo-Tedesco, Manuel Ponce, Leo Brouwer, Lennox Berkeley and Malcolm Arnold. Nowadays, more and more contemporary composers decide to write a guitar concerto; among them, "Bosco Sacro" by Federico Biscione, for guitar and string orchestra, is one of the most inspired. Physical characteristics. The classical guitar is distinguished by a number of characteristics: Parts. Parts of typical classical guitars include: Fretboard. The fretboard (also called the fingerboard) is a piece of wood embedded with metal frets that constitutes the top of the neck. It is flat or slightly curved. The curvature of the fretboard is measured by the fretboard radius, which is the radius of a hypothetical circle of which the fretboard's surface constitutes a segment. The smaller the fretboard radius, the more noticeably curved the fretboard is. Fretboards are most commonly made of ebony, but may also be made of rosewood, some other hardwood, or of phenolic composite ("micarta"). Frets. Frets are the metal strips (usually nickel alloy or stainless steel) embedded along the fingerboard and placed at points that divide the length of string mathematically. The strings' vibrating length is determined when the strings are pressed down behind the frets. Each fret produces a different pitch, and each pitch is spaced a half-step apart on the 12-tone scale. The ratio of the widths of two consecutive frets is the twelfth root of two (formula_0), whose numeric value is about 1.059463; a worked numeric example of the resulting fret positions is given after the formula list below. The twelfth fret divides the string into two exact halves, and the 24th fret (if present) divides the remaining half in half again. Every twelve frets represents one octave. This arrangement of frets results in equal tempered tuning. Neck. A classical guitar's frets, fretboard, tuners, headstock, all attached to a long wooden extension, collectively constitute its neck. The wood for the fretboard usually differs from the wood in the rest of the neck. The bending stress on the neck is considerable, particularly when heavier gauge strings are used. The most common scale length for classical guitar is 650 mm (calculated by measuring the distance between the end of the nut and the center of the 12th fret, then doubling that measurement). However, scale lengths may vary from 635 to 664 mm or more. Neck joint or 'heel'. This is the point where the neck meets the body. In the traditional Spanish neck joint, the neck and block are one piece with the sides inserted into slots cut in the block. Other necks are built separately and joined to the body either with a dovetail joint, mortise or flush joint. These joints are usually glued and can be reinforced with mechanical fasteners. Recently, many manufacturers have been using bolt-on fasteners. Bolt-on neck joints were once associated only with less expensive instruments but now some top manufacturers and hand builders are using variations of this method. Some people believed that the Spanish-style one-piece neck/block and glued dovetail necks have better sustain, but testing has failed to confirm this. While most traditional Spanish style builders use the one-piece neck/heel block, Fleta, a prominent Spanish builder, used a dovetail joint due to the influence of his early training in violin making. One reason for the introduction of mechanical joints was to make it easier to repair necks.
This is more of a problem with steel string guitars than with nylon strings, which have about half the string tension. This is why nylon string guitars often do not include a truss rod either. Body. The body of the instrument is a major determinant of the overall sound variety for acoustic guitars. The guitar top, or soundboard, is a finely crafted and engineered element often made of spruce or red cedar. Considered the most prominent factor in determining the sound quality of a guitar, this thin (often 2 or 3 mm thick) piece of wood has a uniform thickness and is strengthened by different types of internal bracing. The back is made of rosewood, and Brazilian rosewood is especially coveted, but mahogany or other decorative woods are sometimes used. The majority of the sound is caused by the vibration of the guitar top as the energy of the vibrating strings is transferred to it. Different patterns of wood bracing have been used through the years by luthiers (Torres, Hauser, Ramírez, Fleta, and C.F. Martin being among the most influential designers of their times), not only to strengthen the top against collapsing under the tremendous stress exerted by the tensioned strings, but also to affect the resonance of the top. Some contemporary guitar makers have introduced new construction concepts such as the "double top", consisting of two extra-thin wooden plates separated by Nomex, or carbon-fiber-reinforced lattice-pattern bracing. The back and sides are made out of a variety of woods such as mahogany, maple, cypress, Indian rosewood and the highly regarded Brazilian rosewood ("Dalbergia nigra"). Each one is chosen for its aesthetic effect and structural strength, and such choice can also play a role in determining the instrument's timbre. These are also strengthened with internal bracing, and decorated with inlays and purfling. Antonio de Torres Jurado proved that it was the top, and not the back and sides of the guitar, that gave the instrument its sound: in 1862 he built a guitar with back and sides of papier-mâché. (This guitar resides in the Museu de la Musica in Barcelona, and before the year 2000 it was restored to playable condition by the brothers Yagüe, Barcelona). The body of a classical guitar is a resonating chamber that projects the vibrations of the body through a "sound hole", allowing the acoustic guitar to be heard without amplification. The sound hole is normally a single round hole in the top of the guitar (under the strings), though some have different placement, shapes, or numbers of holes. How much air an instrument can move determines its maximum volume. Binding, purfling and kerfing. The top, back and sides of a classical guitar body are very thin, so a flexible piece of wood called "kerfing" (because it is often scored, or "kerfed", so it bends with the shape of the rim) is glued into the corners where the rim meets the top and back. This interior reinforcement provides 5 to 20 mm of solid gluing area for these corner joints. During final construction, a small section of the outside corners is carved or routed out and filled with binding material on the outside corners and decorative strips of material next to the binding, which are called "purfling". This binding serves to seal off the endgrain of the top and back. Binding and purfling materials are generally made of either wood or high-quality plastic materials. Bridge.
The main purpose of the bridge on a classical guitar is to transfer the vibration from the strings to the soundboard, which vibrates the air inside of the guitar, thereby amplifying the sound produced by the strings. The bridge holds the strings in place on the body. Also, the position of the saddle, usually a strip of bone or plastic that supports the strings off the bridge, determines the distance to the nut (at the top of the fingerboard). Sizes. The modern full-size classical guitar has a scale length of around 650 mm, with an overall instrument length of roughly one metre. The scale length has remained quite consistent since it was chosen by the originator of the instrument, Antonio de Torres. This length may have been chosen because it is twice the length of a violin string. As the guitar is tuned to one octave below that of the violin, the same size gut could be used for the first strings of both instruments. Smaller-scale instruments are produced to assist children in learning the instrument as the smaller scale leads to the frets being closer together, making it easier for smaller hands. The scale-size for the smaller guitars is usually in the range , with an instrument length of . Full-size instruments are sometimes referred to as 4/4, while the smaller sizes are 3/4, 1/2, 1/4, and even as small as 1/8 for very small children. However, there is not a standardized set of dimensions for fractional guitars, and their size difference is not linear from a full size guitar. Tuning. A variety of different tunings are used. The most common by far, which one could call the "standard tuning", is e' – b – g – d – A – E. The above order is the tuning from the "1st string" (highest-pitched string e'—spatially the bottom string in playing position) to the "6th string" (lowest-pitched string E—spatially the upper string in playing position, and hence comfortable to pluck with the thumb). The explanation for this "asymmetrical" tuning (in the sense that the maj 3rd is not between the two middle strings, as in the tuning of the viola da gamba) is probably that the guitar originated as a 4-string instrument (actually an instrument with 4 double courses of strings, see above) with a maj 3rd between the 2nd and 3rd strings, and it only became a 6-string instrument by gradual addition of a 5th string and then a 6th string tuned a 4th apart: "The development of the modern tuning can be traced in stages. One of the tunings from the 16th century is C-F-A-D. This is equivalent to the top four strings of the modern guitar tuned a tone lower. However, the absolute pitch for these notes is not equivalent to modern "concert pitch". The tuning of the four-course guitar was moved up by a tone and toward the end of the 16th century, five-course instruments were in use with an added lower string tuned to A. This produced A-D-G-B-E, one of a wide number of variant tunings of the period. The low E string was added during the 18th century." This tuning is such that neighboring strings are at most 5 semitones apart. There are also a variety of commonly used alternate tunings. The most common is known as Drop D tuning, which has the 6th string tuned down from an E to a D.
[ { "math_id": 0, "text": "\\sqrt[12]{2}" } ]
https://en.wikipedia.org/wiki?curid=5810
581005
Implicit function theorem
On converting relations to functions of several real variables. In multivariable calculus, the implicit function theorem is a tool that allows relations to be converted to functions of several real variables. It does so by representing the relation as the graph of a function. There may not be a single function whose graph can represent the entire relation, but there may be such a function on a restriction of the domain of the relation. The implicit function theorem gives a sufficient condition to ensure that there is such a function. More precisely, given a system of m equations "fi" ("x"1, ..., "xn", "y"1, ..., "ym") = 0, "i" = 1, ..., "m" (often abbreviated into "F"(x, y) = 0), the theorem states that, under a mild condition on the partial derivatives (with respect to each "yi" ) at a point, the m variables "yi" are differentiable functions of the "xj" in some neighborhood of the point. As these functions can generally not be expressed in closed form, they are "implicitly" defined by the equations, and this motivated the name of the theorem. In other words, under a mild condition on the partial derivatives, the set of zeros of a system of equations is locally the graph of a function. History. Augustin-Louis Cauchy (1789–1857) is credited with the first rigorous form of the implicit function theorem. Ulisse Dini (1845–1918) generalized the real-variable version of the implicit function theorem to the context of functions of any number of real variables. First example. If we define the function "f"("x", "y") = "x"2 + "y"2, then the equation "f"("x", "y") = 1 cuts out the unit circle as the level set {("x", "y") | "f"("x", "y") = 1}. There is no way to represent the unit circle as the graph of a function of one variable "y" = "g"("x") because for each choice of "x" ∈ (−1, 1), there are two choices of "y", namely formula_0. However, it is possible to represent "part" of the circle as the graph of a function of one variable. If we let formula_1 for −1 ≤ "x" ≤ 1, then the graph of "y" = "g"1("x") provides the upper half of the circle. Similarly, if formula_2, then the graph of "y" = "g"2("x") gives the lower half of the circle. The purpose of the implicit function theorem is to tell us that functions like "g"1("x") and "g"2("x") almost always exist, even in situations where we cannot write down explicit formulas. It guarantees that "g"1("x") and "g"2("x") are differentiable, and it even works in situations where we do not have a formula for "f"("x", "y"). Definitions. Let formula_3 be a continuously differentiable function. We think of formula_4 as the Cartesian product formula_5 and we write a point of this product as formula_6 Starting from the given function formula_7, our goal is to construct a function formula_8 whose graph formula_9 is precisely the set of all formula_10 such that formula_11. As noted above, this may not always be possible. We will therefore fix a point formula_12 which satisfies formula_13, and we will ask for a formula_14 that works near the point formula_15. In other words, we want an open set formula_16 containing formula_17, an open set formula_18 containing formula_19, and a function formula_20 such that the graph of formula_14 satisfies the relation formula_21 on formula_22, and that no other points within formula_23 do so. In symbols, formula_24 To state the implicit function theorem, we need the Jacobian matrix of formula_7, which is the matrix of the partial derivatives of formula_7.
Abbreviating formula_25 to formula_15, the Jacobian matrix is formula_26 where formula_27 is the matrix of partial derivatives in the variables formula_28 and formula_29 is the matrix of partial derivatives in the variables formula_30. The implicit function theorem says that if formula_29 is an invertible matrix, then there are formula_31, formula_32, and formula_14 as desired. Writing all the hypotheses together gives the following statement. Statement of the theorem. Let formula_3 be a continuously differentiable function, and let formula_4 have coordinates formula_10. Fix a point formula_33 with formula_34, where formula_35 is the zero vector. If the Jacobian matrix (this is the right-hand panel of the Jacobian matrix shown in the previous section): formula_36 is invertible, then there exists an open set formula_16 containing formula_17 such that there exists a unique function formula_37 such that formula_38, and formula_39. Moreover, formula_14 is continuously differentiable and, denoting the left-hand panel of the Jacobian matrix shown in the previous section as: formula_40 the Jacobian matrix of partial derivatives of formula_14 in formula_31 is given by the matrix product: formula_41 Higher derivatives. If, moreover, formula_7 is analytic or continuously differentiable formula_42 times in a neighborhood of formula_15, then one may choose formula_31 in order that the same holds true for formula_14 inside formula_31. In the analytic case, this is called the analytic implicit function theorem. Proof for 2D case. Suppose formula_43 is a continuously differentiable function defining a curve formula_44. Let formula_45 be a point on the curve. The statement of the theorem above can be rewritten for this simple case as follows: Theorem — If formula_46 then in a neighbourhood of the point formula_45 we can write formula_47, where formula_7 is a real function. Proof. Since "F" is differentiable we write the differential of "F" through partial derivatives: formula_48 Since we are restricted to movement on the curve formula_49, the differential must vanish, and by assumption formula_50 holds around the point formula_45 (since formula_51 is continuous at formula_45 and formula_52). Therefore we have a first-order ordinary differential equation: formula_53 Now we are looking for a solution to this ODE in an open interval around the point formula_45 for which, at every point in it, formula_54. Since "F" is continuously differentiable and from the assumption we have formula_55 From this we know that formula_56 is continuous and bounded on both ends. From here we know that formula_57 is Lipschitz continuous in both "x" and "y". Therefore, by the Cauchy-Lipschitz theorem, there exists a unique "y"("x") that is the solution to the given ODE with the initial conditions. Q.E.D. The circle example. Let us go back to the example of the unit circle. In this case "n" = "m" = 1 and formula_58. The matrix of partial derivatives is just a 1 × 2 matrix, given by formula_59 Thus, here, the "Y" in the statement of the theorem is just the number 2"b"; the linear map defined by it is invertible if and only if "b" ≠ 0. By the implicit function theorem we see that we can locally write the circle in the form "y" = "g"("x") for all points where "y" ≠ 0. For (±1, 0) we run into trouble, as noted before.
The implicit function theorem may still be applied to these two points, by writing x as a function of y, that is, formula_60; now the graph of the function will be formula_61, since where "b" = 0 we have "a" = 1, and the conditions to locally express the function in this form are satisfied. The implicit derivative of "y" with respect to "x", and that of "x" with respect to "y", can be found by totally differentiating the implicit function formula_62 and equating to 0: formula_63 giving formula_64 and formula_65 Application: change of coordinates. Suppose we have an m-dimensional space, parametrised by a set of coordinates formula_66. We can introduce a new coordinate system formula_67 by supplying m functions formula_68 each being continuously differentiable. These functions allow us to calculate the new coordinates formula_67 of a point, given the point's old coordinates formula_66 using formula_69. One might want to verify if the opposite is possible: given coordinates formula_67, can we 'go back' and calculate the same point's original coordinates formula_66? The implicit function theorem will provide an answer to this question. The (new and old) coordinates formula_70 are related by "f" = 0, with formula_71 Now the Jacobian matrix of "f" at a certain point ("a", "b") [ where formula_72 ] is given by formula_73 where I"m" denotes the "m" × "m" identity matrix, and J is the "m" × "m" matrix of partial derivatives, evaluated at ("a", "b"). (In the above, these blocks were denoted by X and Y. As it happens, in this particular application of the theorem, neither matrix depends on "a".) The implicit function theorem now states that we can locally express formula_66 as a function of formula_67 if "J" is invertible. Demanding "J" is invertible is equivalent to det "J" ≠ 0, thus we see that we can go back from the primed to the unprimed coordinates if the determinant of the Jacobian "J" is non-zero. This statement is also known as the inverse function theorem. Example: polar coordinates. As a simple application of the above, consider the plane, parametrised by polar coordinates ("R", "θ"). We can go to a new coordinate system (cartesian coordinates) by defining functions "x"("R", "θ") = "R" cos("θ") and "y"("R", "θ") = "R" sin("θ"). This makes it possible given any point ("R", "θ") to find corresponding Cartesian coordinates ("x", "y"). When can we go back and convert Cartesian into polar coordinates? By the previous example, it is sufficient to have det "J" ≠ 0, with formula_74 Since det "J" = "R", conversion back to polar coordinates is possible if "R" ≠ 0. So it remains to check the case "R" = 0. It is easy to see that in case "R" = 0, our coordinate transformation is not invertible: at the origin, the value of θ is not well-defined. Generalizations. Banach space version. Based on the inverse function theorem in Banach spaces, it is possible to extend the implicit function theorem to Banach space valued mappings. Let "X", "Y", "Z" be Banach spaces. Let the mapping "f" : "X" × "Y" → "Z" be continuously Fréchet differentiable. If formula_75, formula_76, and formula_77 is a Banach space isomorphism from "Y" onto "Z", then there exist neighbourhoods "U" of "x"0 and "V" of "y"0 and a Fréchet differentiable function "g" : "U" → "V" such that "f"("x", "g"("x")) = 0 and "f"("x", "y") = 0 if and only if "y" = "g"("x"), for all formula_78. Implicit functions from non-differentiable functions. Various forms of the implicit function theorem exist for the case when the function "f" is not differentiable. 
It is standard that local strict monotonicity suffices in one dimension. The following more general form was proven by Kumagai based on an observation by Jittorntrum. Consider a continuous function formula_79 such that formula_80. If there exist open neighbourhoods formula_81 and formula_82 of "x"0 and "y"0, respectively, such that, for all "y" in "B", formula_83 is locally one-to-one, then there exist open neighbourhoods formula_84 and formula_85 of "x"0 and "y"0, such that, for all formula_86, the equation "f"("x", "y") = 0 has a unique solution formula_87 where "g" is a continuous function from "B"0 into "A"0. Collapsing manifolds. Perelman’s collapsing theorem for 3-manifolds, the capstone of his proof of Thurston's geometrization conjecture, can be understood as an extension of the implicit function theorem.
[ { "math_id": 0, "text": "\\pm\\sqrt{1-x^2}" }, { "math_id": 1, "text": "g_1(x) = \\sqrt{1-x^2}" }, { "math_id": 2, "text": "g_2(x) = -\\sqrt{1-x^2}" }, { "math_id": 3, "text": "f: \\R^{n+m} \\to \\R^m" }, { "math_id": 4, "text": "\\R^{n+m}" }, { "math_id": 5, "text": "\\R^n\\times\\R^m," }, { "math_id": 6, "text": "(\\mathbf{x}, \\mathbf{y}) = (x_1,\\ldots, x_n, y_1, \\ldots y_m)." }, { "math_id": 7, "text": "f" }, { "math_id": 8, "text": "g: \\R^n \\to \\R^m" }, { "math_id": 9, "text": "(\\textbf{x}, g(\\textbf{x}))" }, { "math_id": 10, "text": "(\\textbf{x}, \\textbf{y})" }, { "math_id": 11, "text": "f(\\textbf{x}, \\textbf{y}) = \\textbf{0}" }, { "math_id": 12, "text": "(\\textbf{a}, \\textbf{b}) = (a_1, \\dots, a_n, b_1, \\dots, b_m)" }, { "math_id": 13, "text": "f(\\textbf{a}, \\textbf{b}) = \\textbf{0}" }, { "math_id": 14, "text": "g" }, { "math_id": 15, "text": "(\\textbf{a}, \\textbf{b})" }, { "math_id": 16, "text": "U \\subset \\R^n" }, { "math_id": 17, "text": "\\textbf{a}" }, { "math_id": 18, "text": "V \\subset \\R^m" }, { "math_id": 19, "text": "\\textbf{b}" }, { "math_id": 20, "text": "g : U \\to V" }, { "math_id": 21, "text": "f = \\textbf{0}" }, { "math_id": 22, "text": "U\\times V" }, { "math_id": 23, "text": "U \\times V" }, { "math_id": 24, "text": "\\{ (\\mathbf{x}, g(\\mathbf{x})) \\mid \\mathbf x \\in U \\} = \\{ (\\mathbf{x}, \\mathbf{y})\\in U \\times V \\mid f(\\mathbf{x}, \\mathbf{y}) = \\mathbf{0} \\}." }, { "math_id": 25, "text": "(a_1, \\dots, a_n, b_1, \\dots, b_m)" }, { "math_id": 26, "text": "(Df)(\\mathbf{a},\\mathbf{b})\n= \\left[\\begin{array}{ccc|ccc}\n \\frac{\\partial f_1}{\\partial x_1}(\\mathbf{a},\\mathbf{b}) & \\cdots & \\frac{\\partial f_1}{\\partial x_n}(\\mathbf{a},\\mathbf{b}) &\n \\frac{\\partial f_1}{\\partial y_1}(\\mathbf{a},\\mathbf{b}) & \\cdots & \\frac{\\partial f_1}{\\partial y_m}(\\mathbf{a},\\mathbf{b}) \\\\\n \\vdots & \\ddots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n \\frac{\\partial f_m}{\\partial x_1}(\\mathbf{a},\\mathbf{b}) & \\cdots & \\frac{\\partial f_m}{\\partial x_n}(\\mathbf{a},\\mathbf{b}) &\n \\frac{\\partial f_m}{\\partial y_1}(\\mathbf{a},\\mathbf{b}) & \\cdots & \\frac{\\partial f_m}{\\partial y_m}(\\mathbf{a},\\mathbf{b})\n\\end{array}\\right]\n= \\left[\\begin{array}{c|c} X & Y \\end{array}\\right]" }, { "math_id": 27, "text": "X" }, { "math_id": 28, "text": "x_i" }, { "math_id": 29, "text": "Y" }, { "math_id": 30, "text": "y_j" }, { "math_id": 31, "text": "U" }, { "math_id": 32, "text": "V" }, { "math_id": 33, "text": "(\\textbf{a}, \\textbf{b}) = (a_1,\\dots,a_n, b_1,\\dots, b_m)" }, { "math_id": 34, "text": "f(\\textbf{a}, \\textbf{b}) = \\mathbf{0}" }, { "math_id": 35, "text": "\\mathbf{0} \\in \\R^m" }, { "math_id": 36, "text": "J_{f, \\mathbf{y}} (\\mathbf{a}, \\mathbf{b}) = \\left [ \\frac{\\partial f_i}{\\partial y_j} (\\mathbf{a}, \\mathbf{b}) \\right ]" }, { "math_id": 37, "text": "g: U \\to \\R^m" }, { "math_id": 38, "text": " g(\\mathbf{a}) = \\mathbf{b}" }, { "math_id": 39, "text": " f(\\mathbf{x}, g(\\mathbf{x})) = \\mathbf{0} ~ \\text{for all} ~ \\mathbf{x}\\in U" }, { "math_id": 40, "text": "\nJ_{f, \\mathbf{x}} (\\mathbf{a}, \\mathbf{b}) = \\left [ \\frac{\\partial f_i}{\\partial x_j} (\\mathbf{a}, \\mathbf{b}) \\right ],\n" }, { "math_id": 41, "text": "\n\\left[\\frac{\\partial g_i}{\\partial x_j} (\\mathbf{x})\\right]_{m\\times n} =- \\left [ J_{f, \\mathbf{y}}(\\mathbf{x}, g(\\mathbf{x})) \\right ]_{m \\times m} ^{-1} \\, \\left [ J_{f, \\mathbf{x}}(\\mathbf{x}, g(\\mathbf{x})) \\right 
]_{m \\times n}\n" }, { "math_id": 42, "text": "k" }, { "math_id": 43, "text": "F:\\R^2 \\to \\R" }, { "math_id": 44, "text": "F(\\mathbf{r}) = F(x,y) = 0 " }, { "math_id": 45, "text": "(x_0, y_0)" }, { "math_id": 46, "text": "\\left. \\frac{\\partial F}{ \\partial y} \\right|_{(x_0, y_0)} \\neq 0" }, { "math_id": 47, "text": "y = f(x)" }, { "math_id": 48, "text": "\\mathrm{d} F = \\operatorname{grad} F \\cdot \\mathrm{d}\\mathbf{r} = \\frac{\\partial F}{\\partial x} \\mathrm{d} x + \\frac{\\partial F}{\\partial y}\\mathrm{d}y." }, { "math_id": 49, "text": "F = 0" }, { "math_id": 50, "text": "\\tfrac{\\partial F}{\\partial y} \\neq 0" }, { "math_id": 51, "text": "\\tfrac{\\partial F}{\\partial y}" }, { "math_id": 52, "text": "\\left. \\tfrac{\\partial F}{ \\partial y} \\right|_{(x_0, y_0)} \\neq 0" }, { "math_id": 53, "text": "\\partial_x F \\mathrm{d} x + \\partial_y F \\mathrm{d} y = 0, \\quad y(x_0) = y_0" }, { "math_id": 54, "text": " \\partial_y F \\neq 0" }, { "math_id": 55, "text": "|\\partial_x F| < \\infty, |\\partial_y F| < \\infty, \\partial_y F \\neq 0." }, { "math_id": 56, "text": "\\tfrac{\\partial_x F}{\\partial_y F}" }, { "math_id": 57, "text": "-\\tfrac{\\partial_x F}{\\partial_y F}" }, { "math_id": 58, "text": "f(x,y) = x^2 + y^2 - 1" }, { "math_id": 59, "text": "(Df)(a,b) = \\begin{bmatrix} \\dfrac{\\partial f}{\\partial x}(a,b) & \\dfrac{\\partial f}{\\partial y}(a,b) \\end{bmatrix} = \\begin{bmatrix} 2a & 2b \\end{bmatrix}" }, { "math_id": 60, "text": "x = h(y)" }, { "math_id": 61, "text": "\\left(h(y), y\\right)" }, { "math_id": 62, "text": "x^2+y^2-1" }, { "math_id": 63, "text": "2x\\, dx+2y\\, dy = 0," }, { "math_id": 64, "text": "\\frac{dy}{dx}=-\\frac{x}{y}" }, { "math_id": 65, "text": "\\frac{dx}{dy} = -\\frac{y}{x}. " }, { "math_id": 66, "text": " (x_1,\\ldots,x_m) " }, { "math_id": 67, "text": " (x'_1,\\ldots,x'_m) " }, { "math_id": 68, "text": " h_1\\ldots h_m " }, { "math_id": 69, "text": " x'_1=h_1(x_1,\\ldots,x_m), \\ldots, x'_m=h_m(x_1,\\ldots,x_m) " }, { "math_id": 70, "text": "(x'_1,\\ldots,x'_m, x_1,\\ldots,x_m)" }, { "math_id": 71, "text": "f(x'_1,\\ldots,x'_m,x_1,\\ldots, x_m)=(h_1(x_1,\\ldots, x_m)-x'_1,\\ldots , h_m(x_1,\\ldots, x_m)-x'_m)." }, { "math_id": 72, "text": "a=(x'_1,\\ldots,x'_m), b=(x_1,\\ldots,x_m)" }, { "math_id": 73, "text": "(Df)(a,b) = \\left [\\begin{matrix}\n -1 & \\cdots & 0 \\\\\n \\vdots & \\ddots & \\vdots \\\\\n 0 & \\cdots & -1\n\\end{matrix}\\left|\n\\begin{matrix}\n\\frac{\\partial h_1}{\\partial x_1}(b) & \\cdots & \\frac{\\partial h_1}{\\partial x_m}(b)\\\\\n\\vdots & \\ddots & \\vdots\\\\\n\\frac{\\partial h_m}{\\partial x_1}(b) & \\cdots & \\frac{\\partial h_m}{\\partial x_m}(b)\\\\\n\\end{matrix} \\right.\\right] = [-I_m |J ]." }, { "math_id": 74, "text": "J =\\begin{bmatrix}\n \\frac{\\partial x(R,\\theta)}{\\partial R} & \\frac{\\partial x(R,\\theta)}{\\partial \\theta} \\\\\n \\frac{\\partial y(R,\\theta)}{\\partial R} & \\frac{\\partial y(R,\\theta)}{\\partial \\theta} \\\\\n\\end{bmatrix}=\n \\begin{bmatrix}\n \\cos \\theta & -R \\sin \\theta \\\\\n \\sin \\theta & R \\cos \\theta\n\\end{bmatrix}." 
}, { "math_id": 75, "text": "(x_0,y_0)\\in X\\times Y" }, { "math_id": 76, "text": "f(x_0,y_0)=0" }, { "math_id": 77, "text": "y\\mapsto Df(x_0,y_0)(0,y)" }, { "math_id": 78, "text": "(x,y)\\in U\\times V" }, { "math_id": 79, "text": "f : \\R^n \\times \\R^m \\to \\R^n" }, { "math_id": 80, "text": "f(x_0, y_0) = 0" }, { "math_id": 81, "text": "A \\subset \\R^n" }, { "math_id": 82, "text": "B \\subset \\R^m" }, { "math_id": 83, "text": "f(\\cdot, y) : A \\to \\R^n" }, { "math_id": 84, "text": "A_0 \\subset \\R^n" }, { "math_id": 85, "text": "B_0 \\subset \\R^m" }, { "math_id": 86, "text": "y \\in B_0" }, { "math_id": 87, "text": "x = g(y) \\in A_0," } ]
https://en.wikipedia.org/wiki?curid=581005
58103184
Raindrop size distribution
Measurement system to quantify intensity of rainfall. The raindrop size distribution ("DSD"), or granulometry of rain, is the distribution of the number of raindrops according to their diameter (D). Three processes account for the formation of drops: water vapor condensation, accumulation of small drops on large drops, and collisions between drops of different sizes. Depending on the time spent in the cloud, the vertical movement in it and the ambient temperature, the drops have a very varied history and a distribution of diameters from a few micrometers to a few millimeters. Definition. In general, the drop size distribution is represented as a truncated gamma function for diameter zero to the maximum possible size of rain droplets. The number of drops with diameter formula_0 is therefore: formula_1 with formula_2, formula_3 and formula_4 as constants. Marshall-Palmer distribution. The best-known study of raindrop size distribution is that of Marshall and Palmer, done at McGill University in Montréal in 1948. They used stratiform rain with formula_5 and concluded that the drop size distribution is exponential. This Marshall-Palmer distribution is expressed as: formula_6 where formula_2 and formula_4 are parameters determined from the observations. The units of N0 are sometimes simplified to cm−4, but this removes the information that this value is calculated per cubic meter of air. As the different types of precipitation (rain, snow, sleet, etc.) and the different types of clouds that produce them vary in time and space, the coefficients of the drop distribution function will vary with each situation. The Marshall-Palmer relationship is still the most quoted, but it must be remembered that it is an average of many stratiform rain events in mid-latitudes. The upper figure shows mean distributions of stratiform and convective rainfall. The linear part of the distributions can be adjusted with a particular value of formula_7 in the Marshall-Palmer distribution. The bottom one is a series of drop diameter distributions at several convective events in Florida with different precipitation rates. We can see that the experimental curves are more complex than the average ones, but the general appearance is the same. Many other forms of distribution functions are therefore found in the meteorological literature to more precisely adjust the particle size to particular events. Over time, researchers have realized that the distribution of drops is more a matter of the probability of producing drops of different diameters, depending on the type of precipitation, than a deterministic relationship. So there is a continuum of families of curves for stratiform rain, and another for convective rain. Ulbrich distribution. The Marshall and Palmer distribution uses an exponential function that does not properly simulate drops of very small diameters (the curve in the top figure). Several experiments have shown that the actual number of these droplets is less than the theoretical curve. Carlton W. Ulbrich developed a more general formula in 1983, taking into account that a drop is spherical if D < 1 mm and becomes an ellipsoid whose horizontal axis gets flattened as D gets larger. It is mechanically impossible to exceed D = 10 mm as the drop breaks at large diameters. In the general distribution, the diameter spectrum changes: μ = 0 inside the cloud, where the evaporation of small drops is negligible due to saturation conditions, and μ = 2 out of the cloud, where the small drops evaporate because they are in drier air.
With the same notation as before, we have for drizzle the distribution of Ulbrich: formula_8 and formula_9 where formula_10 is the liquid water content, formula_11 the water density, and formula_12 0.2 an average value of the diameter in drizzle. For rain, introducing the rain rate R (mm/h), the amount of rain per hour over a standard surface: formula_13 and formula_14 Measurement. The first measurements of this distribution were made with a rather rudimentary tool by Palmer, Marshall's student, who exposed a piece of cardboard covered with flour to the rain for a short time. Since the mark left by each drop is proportional to its diameter, he could determine the distribution by counting the number of marks corresponding to each droplet size. This was immediately after the Second World War. Different devices, such as the disdrometer, have since been developed to measure this distribution more accurately. Drop size versus radar reflectivity. Knowledge of the distribution of raindrops in a cloud can be used to relate what is recorded by a weather radar to what is obtained on the ground as the amount of precipitation. We can find the relation between the reflectivity of the radar echoes and what we measure with a device like a disdrometer. The rain rate (R) is the integral over all diameters of the number of particles (formula_15) times their volume (formula_16) times their falling speed (formula_17): formula_18 The radar reflectivity Z is: formula_19 where K is the permittivity of water. Z and R having similar formulations, one can solve the equations to obtain a Z-R relation of the type: formula_20 where a and b are related to the type of precipitation (rain, snow, convective as in thunderstorms, or stratiform as from nimbostratus clouds), which have different formula_4, K, N0 and formula_21. The best known of these relations is the Marshall-Palmer Z-R relationship, which gives a = 200 and b = 1.6. It is still one of the most used because it is valid for synoptic rain in mid-latitudes, a very common case. Other relationships have been found for snow, rainstorms, tropical rain, etc.
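To make the Z-R relation above concrete, here is a short Python sketch (not from the article) that inverts the Marshall-Palmer relationship Z = 200 R^1.6. It assumes the usual convention that the reflectivity factor Z is expressed in mm^6/m^3 and reported on a logarithmic dBZ scale; the function and variable names are my own.

```python
# Sketch: convert a radar reflectivity in dBZ to a rain rate with Z = a * R**b
# (Marshall-Palmer values a = 200, b = 1.6 quoted in the text above).
def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Invert Z = a * R**b, where Z = 10**(dBZ / 10) in mm^6 m^-3 (assumed convention)."""
    z_linear = 10.0 ** (dbz / 10.0)
    return (z_linear / a) ** (1.0 / b)

for dbz in (20, 30, 40, 50):
    print(dbz, "dBZ ->", round(rain_rate_from_dbz(dbz), 2), "mm/h")
# e.g. 20 dBZ gives roughly 0.6 mm/h and 50 dBZ roughly 49 mm/h
```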
[ { "math_id": 0, "text": "D" }, { "math_id": 1, "text": "N(D) = N_0 D^\\mu e^{-\\Lambda D} " }, { "math_id": 2, "text": "N_0" }, { "math_id": 3, "text": "\\mu" }, { "math_id": 4, "text": "\\Lambda" }, { "math_id": 5, "text": "\\mu = 0" }, { "math_id": 6, "text": "N(D)_{MP} = N_0 e^{-\\Lambda D} " }, { "math_id": 7, "text": "\\scriptstyle \\Lambda" }, { "math_id": 8, "text": "N_0\\mathrm{(cm^{-4-\\mu})} = \\left[\\frac{6}{\\pi(\\mu+3)!}\\right]\\left(\\frac{M_l}{10^{-3}\\rho_e}\\right)\\Lambda^{\\mu +4}" }, { "math_id": 9, "text": "\\Lambda\\mathrm{(cm^{-1})}=\\frac{3.67+\\mu}{D_0}" }, { "math_id": 10, "text": "M_l" }, { "math_id": 11, "text": "\\rho_e" }, { "math_id": 12, "text": "\\scriptstyle D_0\\approx " }, { "math_id": 13, "text": "D_0{(cm)}\\approx0.13R^{0.14}" }, { "math_id": 14, "text": "N_0\\mathrm{(cm^{-4-\\mu})}\\approx6\\times10^{-2}\\exp(3.2\\times\\mu)" }, { "math_id": 15, "text": "\\scriptstyle N (D)" }, { "math_id": 16, "text": "\\scriptstyle \\pi D^3/6" }, { "math_id": 17, "text": "\\scriptstyle v(D)" }, { "math_id": 18, "text": "R = \\int_{0}^{Dmax} N (D)(\\pi D^3/6) v(D)dD " }, { "math_id": 19, "text": "Z_{rain} = |K_{rain}|^2 \\int_{0}^{Dmax} N (D) D^6dD \\qquad" }, { "math_id": 20, "text": "\\,Z_{rain} = aR^b" }, { "math_id": 21, "text": "\\scriptstyle v" } ]
https://en.wikipedia.org/wiki?curid=58103184
58103878
Peano kernel theorem
Mathematical theorem used in numerical analysis In numerical analysis, the Peano kernel theorem is a general result on error bounds for a wide class of numerical approximations (such as numerical quadratures), defined in terms of linear functionals. It is attributed to Giuseppe Peano. Statement. Let formula_0 be the space of all functions formula_1 that are differentiable on formula_2 and of bounded variation on formula_3, and let formula_4 be a linear functional on formula_0. Assume that formula_4 "annihilates" all polynomials of degree formula_5, i.e. formula_6 Suppose further that for any bivariate function formula_7 with formula_8, the following is valid: formula_9 and define the Peano kernel of formula_4 as formula_10 using the notation formula_11 The "Peano kernel theorem" states that, if formula_12, then for every function formula_1 that is formula_13 times continuously differentiable, we have formula_14 Bounds. Several bounds on the value of formula_15 follow from this result: formula_16 where formula_17, formula_18 and formula_19 are the taxicab, Euclidean and maximum norms respectively. Application. In practice, the main application of the Peano kernel theorem is to bound the error of an approximation that is exact for all formula_20. The theorem above follows from the Taylor polynomial for formula_1 with integral remainder: formula_21 defining formula_22 as the error of the approximation, using the linearity of formula_4 together with exactness for formula_20 to annihilate all but the final term on the right-hand side, and using the formula_23 notation to remove the formula_24-dependence from the integral limits. References.
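As an illustration (my own sketch, not part of the theorem statement), the following Python code checks the result numerically for the error functional of the trapezoidal rule on [0, 1], which annihilates polynomials of degree at most 1. For this functional the Peano kernel works out to k(θ) = -θ(1 - θ)/2, and the code recovers it by applying the functional directly to the truncated power.

```python
# Numerical check of the Peano kernel theorem for L[f] = int_0^1 f - (f(0)+f(1))/2,
# which annihilates polynomials of degree <= 1 (nu = 1).  The theorem then states
# L[f] = (1/nu!) * int_0^1 k(theta) f''(theta) d theta with k(theta) = L[(x-theta)_+].
import numpy as np
from scipy.integrate import quad

def L(f, a=0.0, b=1.0):
    """Error functional of the single-interval trapezoidal rule on [a, b]."""
    exact, _ = quad(f, a, b)
    return exact - (b - a) * (f(a) + f(b)) / 2.0

def peano_kernel(theta, a=0.0, b=1.0):
    """k(theta) obtained by applying L to the truncated power (x - theta)_+."""
    return L(lambda x: max(x - theta, 0.0), a, b)

f = lambda x: np.sin(3.0 * x)       # test function
d2f = lambda x: -9.0 * np.sin(3.0 * x)  # its second derivative (nu + 1 = 2)

lhs = L(f)
rhs, _ = quad(lambda t: peano_kernel(t) * d2f(t), 0.0, 1.0)  # 1/nu! = 1 here
print(lhs, rhs)  # the two numbers agree to quadrature accuracy
```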
[ { "math_id": 0, "text": "\\mathcal{V}[a,b]" }, { "math_id": 1, "text": "f" }, { "math_id": 2, "text": "(a,b)" }, { "math_id": 3, "text": "[a,b]" }, { "math_id": 4, "text": "L" }, { "math_id": 5, "text": "\\leq \\nu" }, { "math_id": 6, "text": "Lp=0,\\qquad \\forall p\\in\\mathbb{P}_\\nu[x]." }, { "math_id": 7, "text": "g(x,\\theta)" }, { "math_id": 8, "text": "g(x,\\cdot),\\,g(\\cdot,\\theta)\\in C^{\\nu+1}[a,b]" }, { "math_id": 9, "text": "L\\int_a^bg(x,\\theta)\\,d\\theta=\\int_a^bLg(x,\\theta)\\,d\\theta," }, { "math_id": 10, "text": "k(\\theta)=L[(x-\\theta)^\\nu_+],\\qquad\\theta\\in[a,b]," }, { "math_id": 11, "text": "(x-\\theta)^\\nu_+ = \\begin{cases} (x-\\theta)^\\nu, & x\\geq\\theta, \\\\ 0, & x\\leq\\theta. \\end{cases}" }, { "math_id": 12, "text": "k\\in\\mathcal{V}[a,b]" }, { "math_id": 13, "text": "\\nu+1" }, { "math_id": 14, "text": "Lf=\\frac{1}{\\nu!}\\int_a^bk(\\theta)f^{(\\nu+1)}(\\theta)\\,d\\theta." }, { "math_id": 15, "text": "Lf" }, { "math_id": 16, "text": "\\begin{align}\n|Lf|&\\leq\\frac{1}{\\nu!}\\|k\\|_1\\|f^{(\\nu+1)}\\|_\\infty\\\\[5pt]\n|Lf|&\\leq\\frac{1}{\\nu!}\\|k\\|_\\infty\\|f^{(\\nu+1)}\\|_1\\\\[5pt]\n|Lf|&\\leq\\frac{1}{\\nu!}\\|k\\|_2\\|f^{(\\nu+1)}\\|_2\n\\end{align}" }, { "math_id": 17, "text": "\\|\\cdot\\|_1" }, { "math_id": 18, "text": "\\|\\cdot\\|_2" }, { "math_id": 19, "text": "\\|\\cdot\\|_\\infty" }, { "math_id": 20, "text": "f\\in\\mathbb{P}_\\nu" }, { "math_id": 21, "text": "\n\\begin{align}\nf(x)=f(a) + {} & (x-a)f'(a) + \\frac{(x-a)^2}{2}f''(a)+\\cdots \\\\[6pt]\n& \\cdots+\\frac{(x-a)^\\nu}{\\nu!}f^{(\\nu)}(a)+\n\\frac{1}{\\nu!}\\int_a^x(x-\\theta)^\\nu f^{(\\nu+1)}(\\theta)\\,d\\theta,\n\\end{align}\n" }, { "math_id": 22, "text": "L(f)" }, { "math_id": 23, "text": "(\\cdot)_+" }, { "math_id": 24, "text": "x" } ]
https://en.wikipedia.org/wiki?curid=58103878
58103934
Mapping theorem (point process)
The mapping theorem is a theorem in the theory of point processes, a sub-discipline of probability theory. It describes how a Poisson point process is altered under measurable transformations. This allows the construction of more complex Poisson point processes out of homogeneous Poisson point processes and can, for example, be used to simulate these more complex Poisson point processes in a similar manner to inverse transform sampling. Statement. Let formula_0 be locally compact Polish spaces and let formula_1 be a measurable function. Let formula_2 be a Radon measure on formula_3 and assume that the pushforward measure formula_4 of formula_2 under the function formula_5 is a Radon measure on formula_6. Then the following holds: If formula_7 is a Poisson point process on formula_3 with intensity measure formula_2, then formula_8 is a Poisson point process on formula_6 with intensity measure formula_4.
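A minimal simulation sketch (my own example, not from the article): to draw an inhomogeneous Poisson process on [0, T] with intensity 2t, simulate a homogeneous unit-rate Poisson process on [0, T^2] (whose intensity measure is Lebesgue measure) and map every point through f(s) = sqrt(s); the pushforward of Lebesgue measure under f has density 2t, so by the mapping theorem the image points form the desired process.

```python
# Simulate an inhomogeneous Poisson process with intensity lambda(t) = 2t on [0, T]
# by mapping a homogeneous rate-1 Poisson process on [0, T^2] through sqrt.
import numpy as np

rng = np.random.default_rng(0)

def inhomogeneous_poisson_2t(T):
    n = rng.poisson(T**2)            # number of points of the homogeneous process
    s = rng.uniform(0.0, T**2, n)    # their locations, uniform given the count
    return np.sqrt(s)                # mapped points (the mapping theorem step)

points = inhomogeneous_poisson_2t(T=3.0)
# Expected total count is 9; expected count in [0, 2] is 2**2 = 4.
print(len(points), int(np.sum(points <= 2.0)))
```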
[ { "math_id": 0, "text": " X,Y " }, { "math_id": 1, "text": " f \\colon X \\to Y " }, { "math_id": 2, "text": " \\mu " }, { "math_id": 3, "text": " X " }, { "math_id": 4, "text": " \\nu:= \\mu \\circ f^{-1} " }, { "math_id": 5, "text": " f " }, { "math_id": 6, "text": " Y " }, { "math_id": 7, "text": " \\xi " }, { "math_id": 8, "text": " \\xi \\circ f^{-1} " } ]
https://en.wikipedia.org/wiki?curid=58103934
58104681
Pseudo-polynomial transformation
Function used in computational complexity theory In computational complexity theory, a pseudo-polynomial transformation is a function which maps instances of one strongly NP-complete problem into another and is computable in pseudo-polynomial time. Definitions. Maximal numerical parameter. Some computational problems are parameterized by numbers whose magnitude exponentially exceeds the size of the input. For example, the problem of testing whether a number "n" is prime can be solved by naively checking candidate factors from 2 to formula_0 in formula_1 divisions, which is exponentially more than the input size formula_2. Suppose that formula_3 is an encoding of a computational problem formula_4 over alphabet formula_5; then formula_6 is a function that maps formula_7, the encoding of an instance formula_8 of the problem formula_4, to the maximal numerical parameter of formula_8. Pseudo-polynomial transformation. Suppose that formula_9 and formula_10 are decision problems, and formula_11 and formula_12 are their encodings over the alphabets formula_13 and formula_14 respectively. A pseudo-polynomial transformation from formula_9 to formula_10 is a function formula_15 such that (1) formula_16, (2) formula_17 is computable in time bounded by a polynomial in formula_18 and formula_19, (3) formula_20, and (4) formula_21. Intuitively, (1) allows one to reason about instances of formula_9 in terms of instances of formula_10 (and back), (2) ensures that deciding formula_11 using the transformation and a pseudo-polynomial decider for formula_12 is pseudo-polynomial itself, (3) enforces that formula_22 grows fast enough so that formula_12 must have a pseudo-polynomial decider, and (4) enforces that a subproblem of formula_11 that testifies its strong NP-completeness (i.e. all instances have numerical parameters bounded by a polynomial in input size and the subproblem is NP-complete itself) is mapped to a subproblem of formula_12 whose instances also have numerical parameters bounded by a polynomial in input size. Proving strong NP-completeness. The following lemma allows one to derive strong NP-completeness from the existence of a transformation: "If formula_9 is a strongly NP-complete decision problem, formula_10 is a decision problem in NP, and there exists a pseudo-polynomial transformation from formula_9 to formula_10, then formula_10 is strongly NP-complete" Proof of the lemma. Suppose that formula_9 is a strongly NP-complete decision problem encoded by formula_11 over the alphabet formula_13 and formula_10 is a decision problem in NP encoded by formula_12 over the alphabet formula_14. Let formula_23 be a pseudo-polynomial transformation from formula_9 to formula_10 with formula_24, formula_25 as specified in the definition. From the definition of strong NP-completeness there exists a polynomial formula_26 such that formula_27 is NP-complete. For formula_28 and any formula_29 we have formula_30 Therefore, formula_31 Since formula_32 is NP-complete and formula_33 is computable in polynomial time, formula_34 is NP-complete. From this and the definition of strong NP-completeness it follows that formula_12 is strongly NP-complete. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
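To make the opening primality example concrete, here is a short Python sketch (mine, not part of the article): trial division performs on the order of sqrt(n) divisions, which is polynomial in the numeric value n (the maximal numerical parameter) but exponential in the input size log2(n), i.e. it runs in pseudo-polynomial time.

```python
# Naive primality test by trial division: pseudo-polynomial, not polynomial,
# because the number of divisions grows like sqrt(n) while the input has only
# about log2(n) bits.
import math

def is_prime_trial_division(n: int) -> bool:
    """Check primality using at most about sqrt(n) candidate divisors."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

n = 1_000_003
print(is_prime_trial_division(n))             # expected: True
print("input size (bits):", n.bit_length())   # about 20 bits
print("divisions needed: ~", math.isqrt(n))   # about 1000, exponential in the bit length
```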
[ { "math_id": 0, "text": "\\sqrt{n}" }, { "math_id": 1, "text": "\\sqrt{n}-1" }, { "math_id": 2, "text": "O(\\log(n))" }, { "math_id": 3, "text": "L" }, { "math_id": 4, "text": "\\Pi" }, { "math_id": 5, "text": "\\Sigma" }, { "math_id": 6, "text": " \\operatorname{Num}_\\Pi: \\Sigma^* \\to \\mathbb{N}" }, { "math_id": 7, "text": "w_I \\in \\Sigma^*" }, { "math_id": 8, "text": "I" }, { "math_id": 9, "text": "\\Pi_1" }, { "math_id": 10, "text": "\\Pi_2" }, { "math_id": 11, "text": "L_1" }, { "math_id": 12, "text": "L_2" }, { "math_id": 13, "text": "\\Sigma_1" }, { "math_id": 14, "text": "\\Sigma_2" }, { "math_id": 15, "text": "f: \\Sigma_1 \\to \\Sigma_2" }, { "math_id": 16, "text": "\\forall w \\in \\Sigma_1 \\quad w \\in L_1 \\iff f(w) \\in L_2" }, { "math_id": 17, "text": "\\forall w \\in \\Sigma_1 \\quad f(w)" }, { "math_id": 18, "text": "\\operatorname{Num}_{\\Pi_1}(w)" }, { "math_id": 19, "text": "|w|" }, { "math_id": 20, "text": "\\exists Q_A \\in \\mathbb{N}[X] \\quad\\forall w \\in \\Sigma_1 \\quad |w| \\leq Q_A(|f(w)|)" }, { "math_id": 21, "text": "\\exists Q_B \\in \\mathbb{N}[X,Y] \\quad\\forall w \\in \\Sigma_1 \\quad \\operatorname{Num}_{\\Pi_2}(f(w)) \\leq Q_B(\\operatorname{Num}_{\\Pi_1}(w), |w|)" }, { "math_id": 22, "text": "f" }, { "math_id": 23, "text": "f: L_1 \\to L_2" }, { "math_id": 24, "text": "Q_A" }, { "math_id": 25, "text": "Q_B" }, { "math_id": 26, "text": "P \\in \\mathbb{N}[X]" }, { "math_id": 27, "text": "L_{1/P} = \\{w \\in L_1 : \\operatorname{Num}_{\\Pi_1}(w) \\leq P(|w|) \\}" }, { "math_id": 28, "text": "\\widehat{P}(n) = Q_B(P(Q_A(n)),Q_A(n))" }, { "math_id": 29, "text": "w \\in L_{1/P}" }, { "math_id": 30, "text": "\n\\begin{aligned}\n\\operatorname{Num}_{\\Pi_2}(f(w)) &\\leq Q_B(\\operatorname{Num}_{\\Pi_1}(w), |w|) && \\text{(definition of }f\\text{)} \\\\[4pt]\n &\\leq Q_B(P(w), |w|) && \\text{(property of } L_{1/P}\\text{)} \\\\[4pt]\n &\\leq Q_B(P(Q_A(|f(w)|)), Q_A(|f(w)|)) && \\text{(definition of }f\\text{)} \\\\[4pt]\n &\\leq \\widehat{P}(|f(w)|) && \\text{(definition of } \\widehat{P}\\text{)}\n\\end{aligned}\n" }, { "math_id": 31, "text": "f(L_{1/P}) = \\{w \\in L_2 : \\operatorname{Num}_{\\Pi_2}(w) \\leq \\widehat{P}(|w|) \\} = L_{2/\\widehat{P}}" }, { "math_id": 32, "text": "L_{1/P}" }, { "math_id": 33, "text": "f|L_{1/P}" }, { "math_id": 34, "text": "L_{2/\\widehat{P}}" } ]
https://en.wikipedia.org/wiki?curid=58104681
58105226
Cavg
Average concentration of a drug in circulation during a dosing interval Cavg is the average concentration of a drug in the central circulation during a dosing interval in steady state. It is calculated by formula_0 where formula_1 is the area under the curve and formula_2 the dosing interval. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
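A minimal sketch (my own, with invented sample data and variable names) of how Cavg is obtained in practice: approximate the AUC over one dosing interval with the trapezoidal rule and divide by the interval length.

```python
# Compute Cavg = AUC_tau / tau from sampled concentration-time data at steady state.
import numpy as np

t = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0, 12.0])   # hours after a dose (invented)
c = np.array([1.2, 3.5, 4.1, 3.0, 1.8, 0.9, 0.5])    # plasma concentration, mg/L (invented)
tau = 12.0                                            # dosing interval, hours

# Trapezoidal approximation of the AUC over the dosing interval (mg*h/L).
auc_tau = float(np.sum((c[1:] + c[:-1]) / 2.0 * np.diff(t)))
c_avg = auc_tau / tau
print(round(auc_tau, 2), round(c_avg, 3))
```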
[ { "math_id": 0, "text": "C_{\\text{avg}}=\\frac{AUC_{\\tau,\\text{ss}}}{\\tau}" }, { "math_id": 1, "text": "AUC_{\\tau,\\text{ss}}" }, { "math_id": 2, "text": "\\tau" } ]
https://en.wikipedia.org/wiki?curid=58105226
581071
Plus construction
In mathematics, the plus construction is a method for simplifying the fundamental group of a space without changing its homology and cohomology groups. Explicitly, if formula_0 is a based connected CW complex and formula_1 is a perfect normal subgroup of formula_2 then a map formula_3 is called a +-construction relative to formula_1 if formula_4 induces an isomorphism on homology, and formula_1 is the kernel of formula_5. The plus construction was introduced by Michel Kervaire (1969), and was used by Daniel Quillen to define algebraic K-theory. Given a perfect normal subgroup of the fundamental group of a connected CW complex formula_0, attach two-cells along loops in formula_0 whose images in the fundamental group generate the subgroup. This operation generally changes the homology of the space, but these changes can be reversed by the addition of three-cells. The most common application of the plus construction is in algebraic K-theory. If formula_6 is a unital ring, we denote by formula_7 the group of invertible formula_8-by-formula_8 matrices with elements in formula_6. formula_7 embeds in formula_9 by attaching a formula_10 along the diagonal and formula_11s elsewhere. The direct limit of these groups via these maps is denoted formula_12 and its classifying space is denoted formula_13. The plus construction may then be applied to the perfect normal subgroup formula_14 of formula_15, generated by matrices which only differ from the identity matrix in one off-diagonal entry. For formula_16, the formula_8-th homotopy group of the resulting space, formula_17, is isomorphic to the formula_8-th formula_18-group of formula_6, that is, formula_19 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "P" }, { "math_id": 2, "text": "\\pi_1(X)" }, { "math_id": 3, "text": "f\\colon X \\to Y" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "\\pi_1(X) \\to \\pi_1(Y)" }, { "math_id": 6, "text": "R" }, { "math_id": 7, "text": "\\operatorname{GL}_n(R)" }, { "math_id": 8, "text": "n" }, { "math_id": 9, "text": "\\operatorname{GL}_{n+1}(R)" }, { "math_id": 10, "text": "1" }, { "math_id": 11, "text": "0" }, { "math_id": 12, "text": "\\operatorname{GL}(R)" }, { "math_id": 13, "text": "B\\operatorname{GL}(R)" }, { "math_id": 14, "text": "E(R)" }, { "math_id": 15, "text": "\\operatorname{GL}(R) = \\pi_1(B\\operatorname{GL}(R))" }, { "math_id": 16, "text": "n>0" }, { "math_id": 17, "text": "B\\operatorname{GL}(R)^+" }, { "math_id": 18, "text": "K" }, { "math_id": 19, "text": "\\pi_n\\left( B\\operatorname{GL}(R)^+\\right) \\cong K_n(R)." } ]
https://en.wikipedia.org/wiki?curid=581071
58109106
Layered permutation
In the mathematics of permutations, a layered permutation is a permutation that reverses contiguous blocks of elements. Equivalently, it is the direct sum of decreasing permutations. One of the earlier works establishing the significance of layered permutations proved the Stanley–Wilf conjecture for classes of permutations forbidding a layered permutation, before the conjecture was proven more generally. Example. For instance, the layered permutations of length four, with the reversed blocks separated by spaces, are the eight permutations 1 2 3 4, 1 2 43, 1 32 4, 1 432, 21 3 4, 21 43, 321 4, and 4321. Characterization by forbidden patterns. The layered permutations can also be equivalently described as the permutations that do not contain the permutation patterns 231 or 312. That is, no three elements in the permutation (regardless of whether they are consecutive) have the same ordering as either of these forbidden triples. Enumeration. A layered permutation on the numbers from formula_0 to formula_1 can be uniquely described by the subset of the numbers from formula_0 to formula_2 that are the first element in a reversed block. (The number formula_1 is always the first element in its reversed block, so it is redundant for this description.) Because there are formula_3 subsets of the numbers from formula_0 to formula_2, there are also formula_3 layered permutations of length formula_1. The layered permutations are Wilf equivalent to other permutation classes, meaning that the numbers of permutations of each length are the same. For instance, the Gilbreath permutations are counted by the same function formula_3. Superpatterns. The shortest superpattern of the layered permutations of length formula_1 is itself a layered permutation. Its length is a sorting number, the number of comparisons needed for binary insertion sort to sort formula_4 elements. For formula_5 these numbers are 1, 3, 5, 8, 11, 14, 17, 21, 25, 29, 33, 37, ... (sequence in the OEIS) and in general they are given by the formula formula_6 Related permutation classes. Every layered permutation is an involution. They are exactly the 231-avoiding involutions, and they are also exactly the 312-avoiding involutions. The layered permutations are a subset of the stack-sortable permutations, which forbid the pattern 231 but not the pattern 312. Like the stack-sortable permutations, they are also a subset of the separable permutations, the permutations formed by recursive combinations of direct and skew sums. References.
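The following Python sketch (my own) makes the enumeration argument concrete: it builds one layered permutation per composition of n into block lengths, confirms that there are 2^(n-1) of them for n = 4, and checks that each one avoids the patterns 231 and 312.

```python
# Enumerate layered permutations of 1..n from the choice of block boundaries and
# verify the count 2**(n-1) and the 231/312 avoidance stated in the article.
from itertools import combinations

def layered_permutations(n):
    """Yield all layered permutations of 1..n, one per choice of block cut points."""
    for cut_set in (set(c) for r in range(n) for c in combinations(range(1, n), r)):
        cuts = [0] + sorted(cut_set) + [n]
        perm = []
        for lo, hi in zip(cuts, cuts[1:]):       # each block lo+1..hi is reversed
            perm.extend(range(hi, lo, -1))
        yield perm

def avoids(perm, pattern):
    """True if no three entries of perm are ordered like pattern (e.g. (2, 3, 1))."""
    n = len(perm)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                triple = (perm[i], perm[j], perm[k])
                order = tuple(sorted(triple).index(x) + 1 for x in triple)
                if order == pattern:
                    return False
    return True

perms = list(layered_permutations(4))
print(len(perms), perms)   # 8 permutations, matching 2**(4-1)
print(all(avoids(p, (2, 3, 1)) and avoids(p, (3, 1, 2)) for p in perms))  # True
```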
[ { "math_id": 0, "text": "1" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "n-1" }, { "math_id": 3, "text": "2^{n-1}" }, { "math_id": 4, "text": "n+1" }, { "math_id": 5, "text": "n=1,2,3,\\dots" }, { "math_id": 6, "text": "(n+1)\\bigl\\lceil\\log_2 (n+1)\\bigr\\rceil - 2^{\\left\\lceil\\log_2 (n+1)\\right\\rceil} + 1." } ]
https://en.wikipedia.org/wiki?curid=58109106
581124
Cramér–Rao bound
Lower bound on variance of an estimator In estimation theory and statistics, the Cramér–Rao bound (CRB) relates to estimation of a deterministic (fixed, though unknown) parameter. The result is named in honor of Harald Cramér and C. R. Rao, but has also been derived independently by Maurice Fréchet, Georges Darmois, and by Alexander Aitken and Harold Silverstone. It is also known as Fréchet-Cramér–Rao or Fréchet-Darmois-Cramér-Rao lower bound. It states that the precision of any unbiased estimator is at most the Fisher information; or (equivalently) the reciprocal of the Fisher information is a lower bound on its variance. An unbiased estimator that achieves this bound is said to be (fully) "efficient". Such a solution achieves the lowest possible mean squared error among all unbiased methods, and is, therefore, the minimum variance unbiased (MVU) estimator. However, in some cases, no unbiased technique exists which achieves the bound. This may occur either if for any unbiased estimator, there exists another with a strictly smaller variance, or if an MVU estimator exists, but its variance is strictly greater than the inverse of the Fisher information. The Cramér–Rao bound can also be used to bound the variance of biased estimators of given bias. In some cases, a biased approach can result in both a variance and a mean squared error that are below the unbiased Cramér–Rao lower bound; see estimator bias. Significant progress over the Cramér–Rao lower bound was proposed by A. Bhattacharyya through a series of works, called Bhattacharyya Bound. Statement. The Cramér–Rao bound is stated in this section for several increasingly general cases, beginning with the case in which the parameter is a scalar and its estimator is unbiased. All versions of the bound require certain regularity conditions, which hold for most well-behaved distributions. These conditions are listed later in this section. Scalar unbiased case. Suppose formula_0 is an unknown deterministic parameter that is to be estimated from formula_1 independent observations (measurements) of formula_2, each from a distribution according to some probability density function formula_3. The variance of any "unbiased" estimator formula_4 of formula_0 is then bounded by the reciprocal of the Fisher information formula_5: formula_6 where the Fisher information formula_5 is defined by formula_7 and formula_8 is the natural logarithm of the likelihood function for a single sample formula_2 and formula_9 denotes the expected value with respect to the density formula_3 of formula_10. If not indicated, in what follows, the expectation is taken with respect to formula_10. If formula_11 is twice differentiable and certain regularity conditions hold, then the Fisher information can also be defined as follows: formula_12 The efficiency of an unbiased estimator formula_4 measures how close this estimator's variance comes to this lower bound; estimator efficiency is defined as formula_13 or the minimum possible variance for an unbiased estimator divided by its actual variance. The Cramér–Rao lower bound thus gives formula_14. General scalar case. A more general form of the bound can be obtained by considering a biased estimator formula_15, whose expectation is not formula_0 but a function of this parameter, say, formula_16. Hence formula_17 is not generally equal to 0. In this case, the bound is given by formula_18 where formula_19 is the derivative of formula_16 (by formula_0), and formula_5 is the Fisher information defined above. 
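A Monte Carlo sketch (my own, with arbitrarily chosen parameter values) of the scalar unbiased bound discussed above: for n independent observations from a normal distribution with unknown mean and known variance sigma^2, the Fisher information is n / sigma^2, and the sample mean attains the resulting bound sigma^2 / n.

```python
# Empirical check that the sample mean is an efficient estimator of a normal mean:
# its variance matches the Cramer-Rao bound 1/I(theta) = sigma**2 / n.
import numpy as np

rng = np.random.default_rng(1)
theta, sigma, n, trials = 2.0, 3.0, 50, 100_000

samples = rng.normal(theta, sigma, size=(trials, n))
estimates = samples.mean(axis=1)        # unbiased estimator of theta

crb = sigma**2 / n                      # reciprocal of the Fisher information n / sigma**2
print(estimates.var(), crb)             # both close to 0.18
```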
Bound on the variance of biased estimators. Apart from being a bound on estimators of functions of the parameter, this approach can be used to derive a bound on the variance of biased estimators with a given bias, as follows. Consider an estimator formula_4 with bias formula_20, and let formula_21. By the result above, any unbiased estimator whose expectation is formula_16 has variance greater than or equal to formula_22. Thus, any estimator formula_4 whose bias is given by a function formula_23 satisfies formula_24 The unbiased version of the bound is a special case of this result, with formula_25. It is trivial to have a small variance: an "estimator" that is constant has a variance of zero. But from the above equation, we find that the mean squared error of a biased estimator is bounded by formula_26 using the standard decomposition of the MSE. Note, however, that if formula_27 this bound might be less than the unbiased Cramér–Rao bound formula_28. For instance, in the example of estimating variance below, formula_29. Multivariate case. Extending the Cramér–Rao bound to multiple parameters, define a parameter column vector formula_30 with probability density function formula_31 which satisfies the two regularity conditions below. The Fisher information matrix is a formula_32 matrix with element formula_33 defined as formula_34 Let formula_35 be an estimator of any vector function of parameters, formula_36, and denote its expectation vector formula_37 by formula_38. The Cramér–Rao bound then states that the covariance matrix of formula_35 satisfies formula_39, formula_40 where the matrix inequality formula_41 is understood to mean that formula_42 is positive semidefinite, and formula_43 is the Jacobian matrix whose formula_44 element is given by formula_45. If formula_35 is an unbiased estimator of formula_46 (i.e., formula_47), then the Cramér–Rao bound reduces to formula_48 If it is inconvenient to compute the inverse of the Fisher information matrix, then one can simply take the reciprocal of the corresponding diagonal element to find a (possibly loose) lower bound. formula_49 Regularity conditions. The bound relies on two weak regularity conditions on the probability density function, formula_50, and the estimator formula_15: first, the Fisher information is always defined; equivalently, for all formula_2 such that formula_51, the derivative formula_52 exists and is finite; second, the operations of integration with respect to formula_2 and differentiation with respect to formula_0 can be interchanged in the expectation of formula_53, that is, formula_54 whenever the right-hand side is finite. Proof. Proof for the general case based on the Chapman–Robbins bound. Proof based on. &lt;templatestyles src="Math_proof/styles.css" /&gt;Proof First equation: Let formula_55 be an infinitesimal, then for any formula_56, plugging formula_57 in, we have formula_58 Plugging this into the multivariate Chapman–Robbins bound gives formula_59. Second equation: It suffices to prove this for the scalar case, with formula_60 taking values in formula_61. Because, for a general formula_15, we can take any formula_62, then defining formula_63, the scalar case gives formula_64 This holds for all formula_62, so we can conclude formula_65 The scalar case states that formula_66 with formula_67. Let formula_55 be an infinitesimal, then for any formula_56, taking formula_57 in the single-variate Chapman–Robbins bound gives formula_68. By linear algebra, formula_69 for any positive-definite matrix formula_70, thus we obtain formula_71 A standalone proof for the general scalar case. For the general scalar case: Assume that formula_72 is an estimator with expectation formula_16 (based on the observations formula_10), i.e. that formula_73. The goal is to prove that, for all formula_0, formula_74 Let formula_10 be a random variable with probability density function formula_50. Here formula_75 is a statistic, which is used as an estimator for formula_76. Define formula_77 as the score: formula_78 where the chain rule is used in the final equality above.
Then the expectation of formula_77, written formula_79, is zero. This is because: formula_80 where the integral and partial derivative have been interchanged (justified by the second regularity condition). If we consider the covariance formula_81 of formula_77 and formula_53, we have formula_82, because formula_83. Expanding this expression we have formula_84 again because the integration and differentiation operations commute (second condition). The Cauchy–Schwarz inequality shows that formula_85 therefore formula_86 which proves the proposition. Examples. Multivariate normal distribution. For the case of a "d"-variate normal distribution formula_87 the Fisher information matrix has elements formula_88 where "tr" is the trace. For example, let formula_89 be a sample of formula_1 independent observations with unknown mean formula_0 and known variance formula_90 . formula_91 Then the Fisher information is a scalar given by formula_92 and so the Cramér–Rao bound is formula_93 Normal variance with known mean. Suppose "X" is a normally distributed random variable with known mean formula_94 and unknown variance formula_90. Consider the following statistic: formula_95 Then "T" is unbiased for formula_90, as formula_96. What is the variance of "T"? formula_97 (the second equality follows directly from the definition of variance). The first term is the fourth moment about the mean and has value formula_98; the second is the square of the variance, or formula_99. Thus formula_100 Now, what is the Fisher information in the sample? Recall that the score formula_77 is defined as formula_101 where formula_102 is the likelihood function. Thus in this case, formula_103 formula_104 where the second equality is from elementary calculus. Thus, the information in a single observation is just minus the expectation of the derivative of formula_77, or formula_105 Thus the information in a sample of formula_1 independent observations is just formula_1 times this, or formula_106 The Cramér–Rao bound states that formula_107 In this case, the inequality is saturated (equality is achieved), showing that the estimator is efficient. However, we can achieve a lower mean squared error using a biased estimator. The estimator formula_108 obviously has a smaller variance, which is in fact formula_109 Its bias is formula_110 so its mean squared error is formula_111 which is less than what unbiased estimators can achieve according to the Cramér–Rao bound. When the mean is not known, the minimum mean squared error estimate of the variance of a sample from Gaussian distribution is achieved by dividing by formula_112, rather than formula_113 or formula_114. References and notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
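The "normal variance with known mean" example can be checked numerically; the following Python sketch (mine) confirms by simulation that the unbiased estimator attains the bound 2*sigma^4/n, while the biased estimator that divides by n + 2 has the smaller mean squared error 2*sigma^4/(n + 2).

```python
# Monte Carlo check of the variance-estimation example: sum((x - mu)**2)/n attains
# the Cramer-Rao bound, and dividing by n + 2 trades bias for a lower MSE.
import numpy as np

rng = np.random.default_rng(2)
mu, sigma2, n, trials = 0.0, 2.0, 20, 200_000

x = rng.normal(mu, np.sqrt(sigma2), size=(trials, n))
ss = ((x - mu) ** 2).sum(axis=1)

t_unbiased = ss / n
t_biased = ss / (n + 2)

print(t_unbiased.var(), 2 * sigma2**2 / n)                          # both near 0.4
print(((t_biased - sigma2) ** 2).mean(), 2 * sigma2**2 / (n + 2))   # both near 0.36
```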
[ { "math_id": 0, "text": "\\theta" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "f(x;\\theta)" }, { "math_id": 4, "text": "\\hat{\\theta}" }, { "math_id": 5, "text": "I(\\theta)" }, { "math_id": 6, "text": "\\operatorname{var}(\\hat{\\theta})\n\\geq\n\\frac{1}{I(\\theta)}\n" }, { "math_id": 7, "text": "\nI(\\theta) = n \\operatorname{E}_{X;\\theta}\n \\left[\n \\left(\n \\frac{\\partial \\ell(X;\\theta)}{\\partial\\theta}\n \\right)^2\n \\right]\n" }, { "math_id": 8, "text": "\\ell(x;\\theta)=\\log (f(x;\\theta))" }, { "math_id": 9, "text": "\\operatorname{E}_{x;\\theta}" }, { "math_id": 10, "text": "X" }, { "math_id": 11, "text": "\\ell(x;\\theta)" }, { "math_id": 12, "text": "\nI(\\theta) = -n \\operatorname{E}_{X;\\theta}\\left[ \\frac{\\partial^2 \\ell(X;\\theta)}{\\partial\\theta^2} \\right]\n" }, { "math_id": 13, "text": "e(\\hat{\\theta}) = \\frac{I(\\theta)^{-1}}{\\operatorname{var}(\\hat{\\theta})}" }, { "math_id": 14, "text": "e(\\hat{\\theta}) \\le 1" }, { "math_id": 15, "text": "T(X)" }, { "math_id": 16, "text": "\\psi(\\theta)" }, { "math_id": 17, "text": " E\\{T(X)\\} - \\theta = \\psi(\\theta) - \\theta " }, { "math_id": 18, "text": "\n\\operatorname{var}(T)\n\\geq\n\\frac{[\\psi'(\\theta)]^2}{I(\\theta)}\n" }, { "math_id": 19, "text": "\\psi'(\\theta)" }, { "math_id": 20, "text": "b(\\theta) = E\\{\\hat{\\theta}\\} - \\theta" }, { "math_id": 21, "text": "\\psi(\\theta) = b(\\theta) + \\theta" }, { "math_id": 22, "text": "(\\psi'(\\theta))^2/I(\\theta)" }, { "math_id": 23, "text": "b(\\theta)" }, { "math_id": 24, "text": "\n\\operatorname{var} \\left(\\hat{\\theta}\\right)\n\\geq\n\\frac{[1+b'(\\theta)]^2}{I(\\theta)}.\n" }, { "math_id": 25, "text": "b(\\theta)=0" }, { "math_id": 26, "text": "\\operatorname{E}\\left((\\hat{\\theta}-\\theta)^2\\right)\\geq\\frac{[1+b'(\\theta)]^2}{I(\\theta)}+b(\\theta)^2," }, { "math_id": 27, "text": "1+b'(\\theta)<1" }, { "math_id": 28, "text": "1/I(\\theta)" }, { "math_id": 29, "text": "1+b'(\\theta)= \\frac{n}{n+2} <1" }, { "math_id": 30, "text": "\\boldsymbol{\\theta} = \\left[ \\theta_1, \\theta_2, \\dots, \\theta_d \\right]^T \\in \\mathbb{R}^d" }, { "math_id": 31, "text": "f(x; \\boldsymbol{\\theta})" }, { "math_id": 32, "text": "d \\times d" }, { "math_id": 33, "text": "I_{m, k}" }, { "math_id": 34, "text": "\nI_{m, k}\n= \\operatorname{E} \\left[\n \\frac{\\partial }{\\partial \\theta_m} \\log f\\left(x; \\boldsymbol{\\theta}\\right)\n \\frac{\\partial }{\\partial \\theta_k} \\log f\\left(x; \\boldsymbol{\\theta}\\right)\n\\right] = -\\operatorname{E} \\left[\n \\frac{\\partial ^2}{\\partial \\theta_m \\, \\partial \\theta_k} \\log f\\left(x; \\boldsymbol{\\theta}\\right)\n\\right].\n" }, { "math_id": 35, "text": "\\boldsymbol{T}(X)" }, { "math_id": 36, "text": "\\boldsymbol{T}(X) = (T_1(X), \\ldots, T_d(X))^T" }, { "math_id": 37, "text": "\\operatorname{E}[\\boldsymbol{T}(X)]" }, { "math_id": 38, "text": "\\boldsymbol{\\psi}(\\boldsymbol{\\theta})" }, { "math_id": 39, "text": "\nI\\left(\\boldsymbol{\\theta}\\right)\n\\geq\n\\phi(\\theta)^T\n\\operatorname{cov}_{\\boldsymbol{\\theta}}\\left(\\boldsymbol{T}(X)\\right)^{-1}\\phi(\\theta)\n" }, { "math_id": 40, "text": "\n\\operatorname{cov}_{\\boldsymbol{\\theta}}\\left(\\boldsymbol{T}(X)\\right)\n\\geq\n\\phi(\\theta)\nI\\left(\\boldsymbol{\\theta}\\right)^{-1}\n\\phi(\\theta)^T\n" }, { "math_id": 41, "text": "A \\ge B" }, { "math_id": 42, "text": "A-B" }, { "math_id": 43, "text": "\\phi(\\theta) := \\partial 
\\boldsymbol{\\psi}(\\boldsymbol{\\theta})/\\partial \\boldsymbol{\\theta}" }, { "math_id": 44, "text": "ij" }, { "math_id": 45, "text": "\\partial \\psi_i(\\boldsymbol{\\theta})/\\partial \\theta_j" }, { "math_id": 46, "text": "\\boldsymbol{\\theta}" }, { "math_id": 47, "text": "\\boldsymbol{\\psi}\\left(\\boldsymbol{\\theta}\\right) = \\boldsymbol{\\theta}" }, { "math_id": 48, "text": "\n\\operatorname{cov}_{\\boldsymbol{\\theta}}\\left(\\boldsymbol{T}(X)\\right)\n\\geq\nI\\left(\\boldsymbol{\\theta}\\right)^{-1}.\n" }, { "math_id": 49, "text": "\n\\operatorname{var}_{\\boldsymbol{\\theta}}(T_m(X))\n=\n\\left[\\operatorname{cov}_{\\boldsymbol{\\theta}}\\left(\\boldsymbol{T}(X)\\right)\\right]_{mm}\n\\geq\n\\left[I\\left(\\boldsymbol{\\theta}\\right)^{-1}\\right]_{mm}\n\\geq\n\\left(\\left[I\\left(\\boldsymbol{\\theta}\\right)\\right]_{mm}\\right)^{-1}.\n" }, { "math_id": 50, "text": "f(x; \\theta)" }, { "math_id": 51, "text": "f(x; \\theta) > 0" }, { "math_id": 52, "text": " \\frac{\\partial}{\\partial\\theta} \\log f(x;\\theta)" }, { "math_id": 53, "text": "T" }, { "math_id": 54, "text": "\n \\frac{\\partial}{\\partial\\theta}\n \\left[\n \\int T(x) f(x;\\theta) \\,dx\n \\right]\n =\n \\int T(x)\n \\left[\n \\frac{\\partial}{\\partial\\theta} f(x;\\theta)\n \\right]\n \\,dx\n" }, { "math_id": 55, "text": "\\delta" }, { "math_id": 56, "text": "v\\in \\R^n" }, { "math_id": 57, "text": "\\theta' = \\theta + \\delta v" }, { "math_id": 58, "text": "(E_{\\theta'}[T] - E_{\\theta}[T]) = v^T \\phi(\\theta)\\delta; \\quad \\chi^2(\\mu_{\\theta'} ; \\mu_\\theta) = v^T I(\\theta) v \\delta^2" }, { "math_id": 59, "text": "I(\\theta) \\geq \\phi(\\theta) \\operatorname{Cov}_\\theta[T]^{-1} \\phi(\\theta)^T" }, { "math_id": 60, "text": "h(X)" }, { "math_id": 61, "text": "\\R" }, { "math_id": 62, "text": "v \\in \\R^m" }, { "math_id": 63, "text": "h:= \\sum_j v_j T_j" }, { "math_id": 64, "text": "\\operatorname{Var}_\\theta[h] = v^T \\operatorname{Cov}_\\theta[T]v \\geq v^T \\phi(\\theta) I(\\theta)^{-1}\\phi(\\theta)^T v" }, { "math_id": 65, "text": "\\operatorname{Cov}_\\theta[T] \\geq \\phi(\\theta) I(\\theta)^{-1}\\phi(\\theta)^T" }, { "math_id": 66, "text": "\\operatorname{Var}_\\theta[h] \\geq \\phi(\\theta)^T I(\\theta)^{-1} \\phi(\\theta)" }, { "math_id": 67, "text": "\\phi(\\theta) := \\nabla_\\theta E_\\theta[h]" }, { "math_id": 68, "text": "\\operatorname{Var}_\\theta[h] \\geq \\frac{\\langle v, \\phi(\\theta)\\rangle^2}{v^T I(\\theta) v}" }, { "math_id": 69, "text": "\\sup_{v\\neq 0} \\frac{\\langle w, v\\rangle^2}{v^T M v} = w^T M^{-1}w" }, { "math_id": 70, "text": "M" }, { "math_id": 71, "text": "\\operatorname{Var}_\\theta[h] \\geq \\phi(\\theta)^T I(\\theta)^{-1} \\phi(\\theta)." }, { "math_id": 72, "text": "T=t(X)" }, { "math_id": 73, "text": "\\operatorname{E}(T) = \\psi (\\theta)" }, { "math_id": 74, "text": "\\operatorname{var}(t(X)) \\geq \\frac{[\\psi^\\prime(\\theta)]^2}{I(\\theta)}." 
}, { "math_id": 75, "text": "T = t(X)" }, { "math_id": 76, "text": "\\psi (\\theta)" }, { "math_id": 77, "text": "V" }, { "math_id": 78, "text": "V = \\frac{\\partial}{\\partial\\theta} \\ln f(X;\\theta) = \\frac{1}{f(X;\\theta)}\\frac{\\partial}{\\partial\\theta}f(X;\\theta)" }, { "math_id": 79, "text": "\\operatorname{E}(V)" }, { "math_id": 80, "text": "\n\\operatorname{E}(V) = \\int f(x;\\theta)\\left[\\frac{1}{f(x;\\theta)}\\frac{\\partial }{\\partial \\theta} f(x;\\theta)\\right] \\, dx = \\frac{\\partial}{\\partial\\theta}\\int f(x;\\theta) \\, dx = 0\n" }, { "math_id": 81, "text": "\\operatorname{cov}(V, T)" }, { "math_id": 82, "text": "\\operatorname{cov}(V, T) = \\operatorname{E}(V T)" }, { "math_id": 83, "text": "\\operatorname{E}(V) = 0" }, { "math_id": 84, "text": "\n\\begin{align}\n\\operatorname{cov}(V,T)\n& = \\operatorname{E}\n\\left(\n T \\cdot\\left[\\frac{1}{f(X;\\theta)}\\frac{\\partial}{\\partial\\theta}f(X;\\theta) \\right]\n\\right) \\\\[6pt]\n& = \\int t(x) \\left[\\frac{1}{f(x;\\theta)} \\frac{\\partial}{\\partial\\theta} f(x;\\theta) \\right] f(x;\\theta)\\, dx \\\\[6pt]\n& = \\frac{\\partial}{\\partial\\theta}\n\\left[ \\int t(x) f(x;\\theta)\\,dx \\right]\n= \\frac{\\partial}{\\partial\\theta} E(T) = \\psi^\\prime(\\theta)\n\\end{align}\n" }, { "math_id": 85, "text": "\n\\sqrt{ \\operatorname{var} (T) \\operatorname{var} (V)} \\geq \\left| \\operatorname{cov}(V,T) \\right| = \\left | \\psi^\\prime (\\theta)\n\\right |" }, { "math_id": 86, "text": "\n\\operatorname{var} (T) \\geq \\frac{[\\psi^\\prime(\\theta)]^2}{\\operatorname{var} (V)}\n= \\frac{[\\psi^\\prime(\\theta)]^2}{I(\\theta)}\n" }, { "math_id": 87, "text": "\n\\boldsymbol{x}\n\\sim\n\\mathcal{N}_d\n\\left(\n \\boldsymbol{\\mu}( \\boldsymbol{\\theta})\n ,\n {\\boldsymbol C} ( \\boldsymbol{\\theta})\n\\right)\n" }, { "math_id": 88, "text": "\nI_{m, k}\n= \\frac{\\partial \\boldsymbol{\\mu}^T}{\\partial \\theta_m}\n{\\boldsymbol C}^{-1}\n\\frac{\\partial \\boldsymbol{\\mu}}{\\partial \\theta_k}\n+ \\frac{1}{2}\n\\operatorname{tr}\n\\left(\n {\\boldsymbol C}^{-1}\n \\frac{\\partial {\\boldsymbol C}}{\\partial \\theta_m}\n {\\boldsymbol C}^{-1}\n \\frac{\\partial {\\boldsymbol C}}{\\partial \\theta_k}\n\\right)\n" }, { "math_id": 89, "text": "w[j]" }, { "math_id": 90, "text": "\\sigma^2" }, { "math_id": 91, "text": "w[j] \\sim \\mathcal{N}_{d, n} \\left(\\theta {\\boldsymbol 1}, \\sigma^2 {\\boldsymbol I} \\right)." }, { "math_id": 92, "text": "\nI(\\theta)\n=\n\\left(\\frac{\\partial\\boldsymbol{\\mu}(\\theta)}{\\partial\\theta}\\right)^T{\\boldsymbol C}^{-1} \\left(\\frac{\\partial\\boldsymbol{\\mu}(\\theta)}{\\partial\\theta}\\right)\n= \\sum^{n}_{i=1} \\frac{1}{\\sigma^2} = \\frac{n}{\\sigma^2},\n" }, { "math_id": 93, "text": "\n\\operatorname{var}\\left(\\hat \\theta\\right)\n\\geq\n\\frac{\\sigma^2}{n}.\n" }, { "math_id": 94, "text": "\\mu" }, { "math_id": 95, "text": "\nT=\\frac{\\sum_{i=1}^n (X_i-\\mu)^2}{n}.\n" }, { "math_id": 96, "text": "E(T)=\\sigma^2" }, { "math_id": 97, "text": "\n\\operatorname{var}(T) = \\operatorname{var}\\left(\\frac{\\sum_{i=1}^n(X_i-\\mu)^2}{n}\\right) = \\frac{\\sum_{i=1}^n\\operatorname{var}(X_i-\\mu)^2}{n^2} = \\frac{n\\operatorname{var}(X-\\mu)^2}{n^2}=\\frac{1}{n}\n\\left[\n\\operatorname{E}\\left\\{(X-\\mu)^4\\right\\}-\\left(\\operatorname{E}\\{(X-\\mu)^2\\}\\right)^2\n\\right]\n" }, { "math_id": 98, "text": "3(\\sigma^2)^2" }, { "math_id": 99, "text": "(\\sigma^2)^2" }, { "math_id": 100, "text": "\\operatorname{var}(T)=\\frac{2(\\sigma^2)^2}{n}." 
}, { "math_id": 101, "text": "\nV=\\frac{\\partial}{\\partial\\sigma^2}\\log\\left[ L(\\sigma^2,X)\\right]\n" }, { "math_id": 102, "text": "L" }, { "math_id": 103, "text": "\n\\log\\left[L(\\sigma^2,X)\\right]=\\log\\left[\\frac{1}{\\sqrt{2\\pi\\sigma^2}}e^{-(X-\\mu)^2 /{2\\sigma^2}}\\right] =-\\log(\\sqrt{2\\pi\\sigma^2})-\\frac{(X-\\mu)^2}{2\\sigma^2}\n" }, { "math_id": 104, "text": "\nV=\\frac{\\partial}{\\partial\\sigma^2}\\log \\left[ L(\\sigma^2,X) \\right]=\\frac{\\partial}{\\partial\\sigma^2}\\left[-\\log(\\sqrt{2\\pi\\sigma^2})-\\frac{(X-\\mu)^2}{2\\sigma^2}\\right] =-\\frac{1}{2\\sigma^2}+\\frac{(X-\\mu)^2}{2(\\sigma^2)^2}\n" }, { "math_id": 105, "text": "\nI\n=-\\operatorname{E}\\left(\\frac{\\partial V}{\\partial\\sigma^2}\\right)\n=-\\operatorname{E}\\left(-\\frac{(X-\\mu)^2}{(\\sigma^2)^3}+\\frac{1}{2(\\sigma^2)^2}\\right)\n=\\frac{\\sigma^2}{(\\sigma^2)^3}-\\frac{1}{2(\\sigma^2)^2}\n=\\frac{1}{2(\\sigma^2)^2}." }, { "math_id": 106, "text": "\\frac{n}{2(\\sigma^2)^2}." }, { "math_id": 107, "text": "\n\\operatorname{var}(T)\\geq\\frac{1}{I}." }, { "math_id": 108, "text": "\nT=\\frac{\\sum_{i=1}^n (X_i-\\mu)^2}{n+2}.\n" }, { "math_id": 109, "text": "\\operatorname{var}(T)=\\frac{2n(\\sigma^2)^2}{(n+2)^2}." }, { "math_id": 110, "text": "\\left(1-\\frac{n}{n+2}\\right)\\sigma^2=\\frac{2\\sigma^2}{n+2}" }, { "math_id": 111, "text": "\\operatorname{MSE}(T)=\\left(\\frac{2n}{(n+2)^2}+\\frac{4}{(n+2)^2}\\right)(\\sigma^2)^2\n=\\frac{2(\\sigma^2)^2}{n+2}" }, { "math_id": 112, "text": "n+1" }, { "math_id": 113, "text": "n-1" }, { "math_id": 114, "text": "n+2" } ]
https://en.wikipedia.org/wiki?curid=581124
58117270
Distribution function (measure theory)
In mathematics, in particular in measure theory, there are different notions of distribution function and it is important to understand the context in which they are used (properties of functions, or properties of measures). Distribution functions (in the sense of measure theory) are a generalization of distribution functions (in the sense of probability theory). Definitions. The first definition presented here is typically used in analysis (harmonic analysis, Fourier analysis, and integration theory in general) to analyze properties of functions. &lt;templatestyles src="Block indent/styles.css"/&gt; Definition 1: Suppose formula_0 is a measure space, and let formula_1 be a real-valued measurable function. The distribution function associated with formula_1 is the function formula_2 given by formula_3 It is convenient also to define formula_4. The function formula_5 provides information about the size of a measurable function formula_1. The next definitions of distribution function are straightforward generalizations of the notion of distribution functions (in the sense of probability theory). &lt;templatestyles src="Block indent/styles.css"/&gt; Definition 2. Let formula_6 be a finite measure on the space formula_7 of real numbers, equipped with the Borel formula_8-algebra. The distribution function associated to formula_6 is the function formula_9 defined by formula_10 It is a well-known result in measure theory that if formula_11 is a nondecreasing right-continuous function, then the function formula_6 defined on the collection of finite intervals of the form formula_12 by formula_13 extends uniquely to a measure formula_14 on a formula_8-algebra formula_15 that includes the Borel sets. Furthermore, if two such functions formula_16 and formula_17 induce the same measure, i.e. formula_18, then formula_19 is constant. Conversely, if formula_6 is a measure on Borel subsets of the real line that is finite on compact sets, then the function formula_20 defined by formula_21 is a nondecreasing right-continuous function with formula_22 such that formula_23. This particular "distribution function" is well defined whether formula_6 is finite or infinite; for this reason, a few authors also refer to formula_24 as a distribution function of the measure formula_25. That is: &lt;templatestyles src="Block indent/styles.css"/&gt;Definition 3: Given the measure space formula_7, if formula_6 is finite on compact sets, then the nondecreasing right-continuous function formula_26 with formula_27 such that formula_28 is called the "canonical" distribution function associated to formula_6. Example. As the measure, choose the Lebesgue measure formula_29. Then, by the definition of formula_29, formula_30 Therefore, the distribution function of the Lebesgue measure is formula_31 for all formula_32.
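As a small numerical illustration of Definition 1 (my own example), take the Lebesgue measure on [0, 1] and f(x) = x^2; then d_f(s) = 1 - sqrt(s) for 0 <= s < 1 and 0 for s >= 1, which the grid approximation below reproduces.

```python
# Approximate d_f(s) = lambda({x in [0,1] : |f(x)| > s}) for f(x) = x**2 on a fine grid
# and compare with the closed form 1 - sqrt(s) (for 0 <= s < 1).
import numpy as np

x = np.linspace(0.0, 1.0, 1_000_001)
f = x**2

def d_f(s):
    return np.mean(np.abs(f) > s)    # fraction of the grid in the super-level set

for s in (0.0, 0.25, 0.5, 1.0):
    print(s, round(float(d_f(s)), 4), round(max(1.0 - float(np.sqrt(s)), 0.0), 4))
```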
[ { "math_id": 0, "text": "(X,\\mathcal{B},\\mu)" }, { "math_id": 1, "text": "f" }, { "math_id": 2, "text": "d_f:[0,\\infty)\\rightarrow\\mathbb{R}\\cup\\{\\infty\\}" }, { "math_id": 3, "text": " d_f(s)=\\mu\\Big(\\{x\\in X: |f(x)|>s\\}\\Big)" }, { "math_id": 4, "text": "d_f(\\infty)=0" }, { "math_id": 5, "text": "d_f" }, { "math_id": 6, "text": "\\mu" }, { "math_id": 7, "text": "(\\mathbb{R},\\mathcal{B}(\\mathbb{R}),\\mu)" }, { "math_id": 8, "text": "\\sigma" }, { "math_id": 9, "text": " F_\\mu \\colon \\R \\to \\R " }, { "math_id": 10, "text": " F_\\mu(t)=\\mu\\big((-\\infty,t]\\big)" }, { "math_id": 11, "text": "F:\\mathbb{R}\\to\\mathbb{R}" }, { "math_id": 12, "text": "(a,b]" }, { "math_id": 13, "text": " \\mu\\big((a,b]\\big)=F(b)-F(a)" }, { "math_id": 14, "text": "\\mu_F" }, { "math_id": 15, "text": "\\mathcal{M}" }, { "math_id": 16, "text": "F" }, { "math_id": 17, "text": "G" }, { "math_id": 18, "text": "\\mu_F = \\mu_G" }, { "math_id": 19, "text": "F-G" }, { "math_id": 20, "text": "F_\\mu:\\mathbb{R}\\to\\mathbb{R}" }, { "math_id": 21, "text": " F_\\mu(t)= \\begin{cases} \\mu((0,t]) & \\text{if } t\\geq 0 \\\\ -\\mu((t,0]) & \\text{if } t < 0\\end{cases}" }, { "math_id": 22, "text": "F(0)=0" }, { "math_id": 23, "text": "\\mu_{F_\\mu}=\\mu" }, { "math_id": 24, "text": "F_{\\mu}" }, { "math_id": 25, "text": " \\mu " }, { "math_id": 26, "text": "F_\\mu" }, { "math_id": 27, "text": "F_\\mu(0)=0" }, { "math_id": 28, "text": "\\mu\\big((a,b]) = F_\\mu(b)-F_\\mu(a)" }, { "math_id": 29, "text": " \\lambda " }, { "math_id": 30, "text": " \\lambda((0,t])=t-0=t \\text{ and } -\\lambda((t,0])=-(0-t)=t" }, { "math_id": 31, "text": " F_\\lambda(t)=t" }, { "math_id": 32, "text": " t \\in \\R " }, { "math_id": 33, "text": "[0,\\mu(X)]" }, { "math_id": 34, "text": "d_f(s_0)<\\infty" }, { "math_id": 35, "text": "s_0\\geq0" }, { "math_id": 36, "text": "\\lim_{s\\to\\infty} d_f(s) = 0." }, { "math_id": 37, "text": "(\\mathbb{R},\\mathcal{B}(\\mathbb{R}))" }, { "math_id": 38, "text": " \\lim_{t \\to - \\infty} F_\\mu(t)=0 \\text{ and } \\lim_{t \\to \\infty} F_\\mu(t)=\\mu(\\mathbb{R}). " }, { "math_id": 39, "text": "\\mu((- \\infty, t])=\\infty " }, { "math_id": 40, "text": "t\\in\\mathbb{R}" } ]
https://en.wikipedia.org/wiki?curid=58117270
5811728
Volterra series
Model for approximating non-linear effects, similar to a Taylor series The Volterra series is a model for non-linear behavior similar to the Taylor series. It differs from the Taylor series in its ability to capture "memory" effects. The Taylor series can be used for approximating the response of a nonlinear system to a given input if the output of the system depends strictly on the input at that particular time. In the Volterra series, the output of the nonlinear system depends on the input to the system at "all" other times. This provides the ability to capture the "memory" effect of devices like capacitors and inductors. It has been applied in the fields of medicine (biomedical engineering) and biology, especially neuroscience. It is also used in electrical engineering to model intermodulation distortion in many devices, including power amplifiers and frequency mixers. Its main advantage lies in its generalizability: it can represent a wide range of systems. Thus, it is sometimes considered a non-parametric model. In mathematics, a Volterra series denotes a functional expansion of a dynamic, nonlinear, time-invariant functional. The Volterra series is frequently used in system identification. The Volterra series, which is used to prove the Volterra theorem, is an infinite sum of multidimensional convolutional integrals. History. The Volterra series is a modernized version of the theory of analytic functionals from the Italian mathematician Vito Volterra, in his work dating from 1887. Norbert Wiener became interested in this theory in the 1920s due to his contact with Volterra's student Paul Lévy. Wiener applied his theory of Brownian motion for the integration of Volterra analytic functionals. The use of the Volterra series for system analysis originated from a restricted 1942 wartime report of Wiener's, who was then a professor of mathematics at MIT. He used the series to make an approximate analysis of the effect of radar noise in a nonlinear receiver circuit. The report became public after the war. As a general method of analysis of nonlinear systems, the Volterra series came into use after about 1957 as the result of a series of reports, at first privately circulated, from MIT and elsewhere. The name itself, "Volterra series", came into use a few years later. Mathematical theory. The theory of the Volterra series can be viewed from two different perspectives: either one considers an operator mapping between two function spaces, or a functional mapping from a function space into the real (or complex) numbers. The latter, functional mapping, perspective is more frequently used due to the assumed time-invariance of the system. Continuous time. A continuous time-invariant system with "x"("t") as input and "y"("t") as output can be expanded in the Volterra series as formula_0 Here the constant term formula_1 on the right side is usually taken to be zero by suitable choice of output level formula_2. The function formula_3 is called the "n"-th-order Volterra kernel. It can be regarded as a higher-order impulse response of the system. For the representation to be unique, the kernels must be symmetrical in the "n" variables formula_4. If it is not symmetrical, it can be replaced by a symmetrized kernel, which is the average over the "n"! permutations of these "n" variables formula_4. If "N" is finite, the series is said to be "truncated". If "a", "b", and "N" are finite, the series is called "doubly finite". Sometimes the "n"-th-order term is divided by "n"!, a convention which is convenient when taking the output of one Volterra system as the input of another ("cascading").
"The causality condition": Since in any physically realizable system the output can only depend on previous values of the input, the kernels formula_5 will be zero if any of the variables formula_6 are negative. The integrals may then be written over the half range from zero to infinity. So if the operator is causal, formula_7. "Fréchet's approximation theorem": The use of the Volterra series to represent a time-invariant functional relation is often justified by appealing to a theorem due to Fréchet. This theorem states that a time-invariant functional relation (satisfying certain very general conditions) can be approximated uniformly and to an arbitrary degree of precision by a sufficiently high finite-order Volterra series. Among other conditions, the set of admissible input functions formula_8 for which the approximation will hold is required to be compact. It is usually taken to be an equicontinuous, uniformly bounded set of functions, which is compact by the Arzelà–Ascoli theorem. In many physical situations, this assumption about the input set is a reasonable one. The theorem, however, gives no indication as to how many terms are needed for a good approximation, which is an essential question in applications. Discrete time. The discrete-time case is similar to the continuous-time case, except that the integrals are replaced by summations: formula_9 where formula_10 Each function formula_11 is called a "discrete-time Volterra kernel". If "P" is finite, the series operator is said to be "truncated". If "a", "b" and "P" are finite, the series operator is called a "doubly finite Volterra series". If formula_7, the operator is said to be "causal". We can always consider, without loss of generality, the kernel formula_12 as symmetrical. In fact, by the commutativity of multiplication, it is always possible to symmetrize it by forming a new kernel taken as the average of the kernels for all permutations of the variables formula_13. For a causal system with symmetrical kernels we can rewrite the "n"-th term approximately in triangular form formula_14 Methods to estimate the kernel coefficients. Estimating the Volterra coefficients individually is complicated, since the basis functionals of the Volterra series are correlated. This leads to the problem of simultaneously solving a set of integral equations for the coefficients. Hence, estimation of Volterra coefficients is generally performed by estimating the coefficients of an orthogonalized series, e.g. the Wiener series, and then recomputing the coefficients of the original Volterra series. The Volterra series' main appeal over the orthogonalized series lies in its intuitive, canonical structure, i.e. all interactions of the input have one fixed degree. The orthogonalized basis functionals will generally be quite complicated. An important aspect, with respect to which the following methods differ, is whether the orthogonalization of the basis functionals is to be performed over the idealized specification of the input signal (e.g. Gaussian white noise) or over the actual realization of the input (i.e. the pseudo-random, bounded, almost-white version of Gaussian white noise, or any other stimulus). The latter methods, despite their lack of mathematical elegance, have been shown to be more flexible (as arbitrary inputs can be easily accommodated) and precise (because the idealized version of the input signal is not always realizable). Crosscorrelation method.
This method, developed by Lee and Schetzen, orthogonalizes with respect to the actual mathematical description of the signal, i.e. the projection onto the new basis functionals is based on the knowledge of the moments of the random signal. We can write the Volterra series in terms of homogeneous operators, as formula_15 where formula_16 To allow identification by orthogonalization, the Volterra series must be rearranged in terms of orthogonal non-homogeneous "G" operators (Wiener series): formula_17 The "G" operators can be defined by the following: formula_18 formula_19 whenever formula_20 is an arbitrary homogeneous Volterra operator and "x"("n") is some stationary white noise (SWN) with zero mean and variance "A". Recalling that every Volterra functional is orthogonal to all Wiener functionals of greater order, and considering the following Volterra functional: formula_21 we can write formula_22 If "x" is SWN, formula_23 and by letting formula_24, we have formula_25 So if we exclude the diagonal elements, formula_26, we have formula_27 If we want to consider the diagonal elements, the solution proposed by Lee and Schetzen is formula_28 The main drawback of this technique is that the estimation errors, made on all elements of lower-order kernels, will affect each diagonal element of order "p" by means of the summation formula_29, conceived as the solution for the estimation of the diagonal elements themselves. Efficient formulas to avoid this drawback and references for diagonal kernel element estimation exist. Once the Wiener kernels have been identified, the Volterra kernels can be obtained by using Wiener-to-Volterra formulas, reported in the following for a fifth-order Volterra series: formula_30 formula_31 formula_32 formula_33 formula_34 formula_35 Multiple-variance method. In the traditional orthogonal algorithm, using inputs with high formula_36 has the advantage of stimulating high-order nonlinearity, so as to achieve more accurate high-order kernel identification. As a drawback, the use of high formula_36 values causes high identification error in lower-order kernels, mainly due to nonideality of the input and truncation errors. On the contrary, the use of lower formula_36 in the identification process can lead to a better estimation of lower-order kernels, but can be insufficient to stimulate high-order nonlinearity. This phenomenon, which can be called "locality" of truncated Volterra series, can be revealed by calculating the output error of a series as a function of different variances of input. This test can be repeated with series identified with different input variances, obtaining different curves, each with a minimum at the variance used in the identification. To overcome this limitation, a low formula_36 value should be used for the lower-order kernel and gradually increased for higher-order kernels. This is not a theoretical problem in Wiener kernel identification, since the Wiener functionals are orthogonal to each other, but an appropriate normalization is needed in the Wiener-to-Volterra conversion formulas to take into account the use of different variances. Furthermore, new Wiener-to-Volterra conversion formulas are needed. The traditional Wiener kernel identification should be changed as follows: formula_37 formula_38 formula_39 formula_40 In the above formulas the impulse functions are introduced for the identification of diagonal kernel points.
If the Wiener kernels are extracted with the new formulas, the following Wiener-to-Volterra formulas (given explicitly up to the fifth order) are needed: formula_41 formula_42 formula_43 formula_44 formula_45 formula_46 As can be seen, the drawback with respect to the previous formulas is that for the identification of the "n"-th-order kernel, all lower kernels must be identified again with the higher variance. However, an outstanding improvement in the output MSE will be obtained if the Wiener and Volterra kernels are obtained with the new formulas. Feedforward network. This method was developed by Wray and Green (1994) and utilizes the fact that a simple two-layer fully connected neural network (i.e., a multilayer perceptron) is computationally equivalent to the Volterra series and therefore contains the kernels hidden in its architecture. After such a network has been trained to successfully predict the output based on the current state and memory of the system, the kernels can then be computed from the weights and biases of that network. The general notation for the "n"-th-order Volterra kernel is given by formula_47 where formula_48 is the order, formula_49 the weights to the linear output node, formula_50 the coefficients of the polynomial expansion of the output function of the hidden nodes, and formula_51 are the weights from the input layer to the non-linear hidden layer. It is important to note that this method allows kernel extraction only up to the number of input delays in the architecture of the network. Furthermore, it is vital to carefully construct the size of the network input layer so that it represents the effective memory of the system. Exact orthogonal algorithm. This method and its more efficient version (fast orthogonal algorithm) were invented by Korenberg. In this method the orthogonalization is performed empirically over the actual input. It has been shown to perform more precisely than the crosscorrelation method. Another advantage is that arbitrary inputs can be used for the orthogonalization and that fewer data points suffice to reach a desired level of accuracy. Also, estimation can be performed incrementally until some criterion is fulfilled. Linear regression. Linear regression is a standard tool from linear analysis. Hence, one of its main advantages is the widespread existence of standard tools for solving linear regressions efficiently. It has some educational value, since it highlights the basic property of the Volterra series: it is a linear combination of non-linear basis functionals. For estimation, the order of the original series should be known, since the Volterra basis functionals are not orthogonal, and thus estimation cannot be performed incrementally. Kernel method. This method was invented by Franz and Schölkopf and is based on statistical learning theory. Consequently, this approach is also based on minimizing the empirical error (often called empirical risk minimization). Franz and Schölkopf proposed that the kernel method could essentially replace the Volterra series representation, although noting that the latter is more intuitive. Differential sampling. This method was developed by van Hemmen and coworkers and utilizes Dirac delta functions to sample the Volterra coefficients. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n y(t) = h_0 + \\sum_{n=1}^N \\int_a^b \\cdots \\int_a^b\n h_n(\\tau_1, \\dots, \\tau_n) \\prod^n_{j=1} x(t - \\tau_j) \\,d\\tau_j.\n" }, { "math_id": 1, "text": "h_0" }, { "math_id": 2, "text": "y" }, { "math_id": 3, "text": "h_n(\\tau_1, \\dots, \\tau_n)" }, { "math_id": 4, "text": "\\tau" }, { "math_id": 5, "text": "h_n(t_1, t_2, \\ldots, t_n)" }, { "math_id": 6, "text": "t_1, t_2, \\ldots, t_n" }, { "math_id": 7, "text": "a \\geq 0" }, { "math_id": 8, "text": "x(t)" }, { "math_id": 9, "text": "\ny(n) = h_0 + \\sum_{p=1}^P \\sum_{\\tau_1=a}^b \\cdots \\sum_{\\tau_p=a}^b\n h_p(\\tau_1, \\dots, \\tau_p) \\prod^p_{j=1} x(n - \\tau_j),\n" }, { "math_id": 10, "text": "P \\in \\{1, 2, \\dots\\} \\cup \\{\\infty\\}." }, { "math_id": 11, "text": "h_p" }, { "math_id": 12, "text": "h_p(\\tau_1, \\dots, \\tau_p)" }, { "math_id": 13, "text": "\\tau_1, \\dots, \\tau_p" }, { "math_id": 14, "text": "\n \\sum_{\\tau_1=0}^M \\sum_{\\tau_2=\\tau_1}^M \\cdots \\sum_{\\tau_p=\\tau_{p-1}}^M\n h_p(\\tau_1, \\dots, \\tau_p) \\prod^p_{j=1} x(n - \\tau_j).\n" }, { "math_id": 15, "text": "\n y(n) = h_0 + \\sum_{p=1}^P H_p x(n),\n" }, { "math_id": 16, "text": "\n H_p x(n) = \\sum_{\\tau_1=a}^b \\cdots \\sum_{\\tau_p=a}^b h_p(\\tau_1, \\dots, \\tau_p) \\prod^p_{j=1} x(n - \\tau_j).\n" }, { "math_id": 17, "text": "\n y(n) = \\sum_p H_p x(n) \\equiv \\sum_p G_p x(n).\n" }, { "math_id": 18, "text": "\n E\\{H_i x(n) G_j x(n)\\} = 0; \\quad i < j,\n" }, { "math_id": 19, "text": "\n E\\{G_i x(n) G_j x(n)\\} = 0; \\quad i \\neq j,\n" }, { "math_id": 20, "text": "H_i x(n)" }, { "math_id": 21, "text": "\n H^*_{\\overline{p}} x(n) = \\prod^{\\overline{p}}_{j=1} x(n - \\tau_j),\n" }, { "math_id": 22, "text": "\n E\\left\\{y(n) H^*_{\\overline{p}} x(n)\\right\\} = E\\left\\{\\sum_{p=0}^\\infty G_p x(n) H^*_{\\overline{p}} x(n)\\right\\}.\n" }, { "math_id": 23, "text": "\\tau_1 \\neq \\tau_2 \\neq \\ldots \\neq \\tau_P" }, { "math_id": 24, "text": "A = \\sigma^2_x" }, { "math_id": 25, "text": "\n E\\left\\{y(n) \\prod^{\\overline{p}}_{j=1} x(n - \\tau_j)\\right\\} = E\\left\\{G_{\\overline{p}} x(n) \\prod^{\\overline{p}}_{j=1} x(n - \\tau_j)\\right\\} = \\overline{p}! A^{\\overline{p}} k_{\\overline{p}}(\\tau_1, \\dots, \\tau_{\\overline{p}}).\n" }, { "math_id": 26, "text": "{\\tau_i \\neq \\tau_j,\\ \\forall i, j}" }, { "math_id": 27, "text": "\n k_p(\\tau_1, \\dots, \\tau_p) = \\frac{E\\left\\{y(n) x(n - \\tau_1) \\cdots x(n - \\tau_p)\\right\\}}{p! A^p}.\n" }, { "math_id": 28, "text": "\n k_p(\\tau_1, \\dots, \\tau_p) = \\frac{E\\left\\{\\left(y(n) - \\sum\\limits_{m=0}^{p-1} G_m x(n)\\right) x(n - \\tau_1) \\cdots x(n - \\tau_p)\\right\\}}{p! 
A^p}.\n" }, { "math_id": 29, "text": "\\sum\\limits_{m=0}^{p-1} G_m x(n)" }, { "math_id": 30, "text": "\n h_5 = k_5,\n" }, { "math_id": 31, "text": "\n h_4 = k_4,\n" }, { "math_id": 32, "text": "\n h_3 = k_3 - 10 A \\sum_{\\tau_4} k_5(\\tau_1, \\tau_2, \\tau_3, \\tau_4, \\tau_4),\n" }, { "math_id": 33, "text": "\n h_2 = k_2 - 6 A \\sum_{\\tau_3} k_4(\\tau_1, \\tau_2, \\tau_3, \\tau_3),\n" }, { "math_id": 34, "text": "\n h_1 = k_1 - 3 A \\sum_{\\tau_2} k_3(\\tau_1, \\tau_2, \\tau_2) + 15 A^2 \\sum_{\\tau2} \\sum_{\\tau_3} k_5(\\tau_1, \\tau_2, \\tau_2, \\tau_3, \\tau_3),\n" }, { "math_id": 35, "text": "\n h_0 = k_0 - A \\sum_{\\tau_1} k_2(\\tau_1, \\tau_1) + 3 A^2 \\sum_{\\tau_1} \\sum_{\\tau_2} k_4(\\tau_1, \\tau_1, \\tau_2, \\tau_2).\n" }, { "math_id": 36, "text": "\\sigma_x" }, { "math_id": 37, "text": "\n k_0^{(0)} = E\\{y^{(0)}(n)\\},\n" }, { "math_id": 38, "text": "\n k_1^{(1)}(\\tau_1) = \\frac{1}{A_1} E\\left\\{y^{(1)}(n) x^{(1)}(n - \\tau_1)\\right\\},\n" }, { "math_id": 39, "text": "\n k_2^{(2)}(\\tau_1, \\tau_2) = \\frac{1}{2! A_2^2} \\left\\{E\\left\\{y^{(2)}(n) \\prod_{i=1}^2 x^{(2)}(n - \\tau_i)\\right\\} - A_2 k_0^{(2)} \\delta_{\\tau_1 \\tau_2}\\right\\},\n" }, { "math_id": 40, "text": "\n k_3^{(3)}(\\tau_1, \\tau_2, \\tau_3) = \\frac{1}{3! A_3^3} \\left\\{E\\left\\{y^{(3)}(n) \\prod_{i=1}^3 x^{(3)}(n - \\tau_i)\\right\\} - A_3^2 \\left[k_1^{(3)}(\\tau_1) \\delta_{\\tau_2 \\tau_3} + k_1^{(3)}(\\tau_2) \\delta_{\\tau_1 \\tau_3} + k_1^{(3)}(\\tau_3) \\delta_{\\tau_1 \\tau_2}\\right]\\right\\}.\n" }, { "math_id": 41, "text": "\n h_5 = k_5^{(5)},\n" }, { "math_id": 42, "text": "\n h_4 = k_4^{(4)},\n" }, { "math_id": 43, "text": "\n h_3 = k_3^{(3)} - 10 A_3 \\sum_{\\tau_4} k_5^{(5)}(\\tau_1, \\tau_2, \\tau_3, \\tau_4, \\tau_4),\n" }, { "math_id": 44, "text": "\n h_2 = k_2^{(2)} - 6 A_2 \\sum_{\\tau_3} k_4^{(4)}(\\tau_1, \\tau_2, \\tau_3, \\tau_3),\n" }, { "math_id": 45, "text": "\n h_1 = k_1^{(1)} - 3 A_1 \\sum_{\\tau_2} k_3^{(3)}(\\tau_1, \\tau_2, \\tau_2) + 15 A_1^2 \\sum_{\\tau2} \\sum_{\\tau_3} k_5^{(5)}(\\tau_1, \\tau_2, \\tau_2, \\tau_3, \\tau_3),\n" }, { "math_id": 46, "text": "\n h_0 = k_0^{(0)} - A_0 \\sum_{\\tau_1} k_2^{(2)}(\\tau_1, \\tau_1) + 3 A_0^2 \\sum_{\\tau_1} \\sum_{\\tau_2} k_4^{(4)}(\\tau_1, \\tau_1, \\tau_2, \\tau_2).\n" }, { "math_id": 47, "text": "\n h_n(\\tau_1, \\dots, \\tau_n) = \\sum_{i=1}^M (c_i a_{ni} \\omega_{\\tau_1 i} \\dots \\omega_{\\tau_n i}),\n" }, { "math_id": 48, "text": "n" }, { "math_id": 49, "text": "c_i" }, { "math_id": 50, "text": "a_{ji}" }, { "math_id": 51, "text": "\\omega_{ji}" } ]
https://en.wikipedia.org/wiki?curid=5811728
581175
Vertex-transitive graph
Graph where all pairs of vertices are automorphic In the mathematical field of graph theory, a vertex-transitive graph is a graph G in which, given any two vertices "v"1 and "v"2 of G, there is some automorphism formula_0 such that formula_1 In other words, a graph is vertex-transitive if its automorphism group acts transitively on its vertices. A graph is vertex-transitive if and only if its graph complement is, since the group actions are identical. Every symmetric graph without isolated vertices is vertex-transitive, and every vertex-transitive graph is regular. However, not all vertex-transitive graphs are symmetric (for example, the edges of the truncated tetrahedron), and not all regular graphs are vertex-transitive (for example, the Frucht graph and Tietze's graph). Finite examples. Finite vertex-transitive graphs include the symmetric graphs (such as the Petersen graph, the Heawood graph and the vertices and edges of the Platonic solids). The finite Cayley graphs (such as cube-connected cycles) are also vertex-transitive, as are the vertices and edges of the Archimedean solids (though only two of these are symmetric). Potočnik, Spiga and Verret have constructed a census of all connected cubic vertex-transitive graphs on at most 1280 vertices. Although every Cayley graph is vertex-transitive, there exist other vertex-transitive graphs that are not Cayley graphs. The most famous example is the Petersen graph, but others can be constructed including the line graphs of edge-transitive non-bipartite graphs with odd vertex degrees. Properties. The edge-connectivity of a connected vertex-transitive graph is equal to the degree "d", while the vertex-connectivity will be at least 2("d" + 1)/3. If the degree is 4 or less, or the graph is also edge-transitive, or the graph is a minimal Cayley graph, then the vertex-connectivity will also be equal to "d". Infinite examples. Infinite vertex-transitive graphs include infinite paths (infinite in both directions), infinite regular trees such as the Cayley graphs of free groups, graphs of uniform tessellations of the plane (including all tilings by regular polygons), infinite Cayley graphs, and the Rado graph. Two countable vertex-transitive graphs are called quasi-isometric if the ratio of their distance functions is bounded from below and from above. A well-known conjecture stated that every infinite vertex-transitive graph is quasi-isometric to a Cayley graph. A counterexample was proposed by Diestel and Leader in 2001. In 2005, Eskin, Fisher, and Whyte confirmed the counterexample.
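For small graphs, vertex-transitivity can be checked directly from the definition by enumerating automorphisms. A minimal sketch (assuming the networkx library is available; the brute-force enumeration is only feasible for graphs whose automorphism group can be listed quickly):

import networkx as nx
from networkx.algorithms import isomorphism

def is_vertex_transitive(G):
    # The automorphism group acts transitively iff the orbit of one vertex is the whole vertex set.
    nodes = list(G.nodes())
    orbit = set()
    for auto in isomorphism.GraphMatcher(G, G).isomorphisms_iter():  # self-isomorphisms are the automorphisms
        orbit.add(auto[nodes[0]])
        if len(orbit) == len(nodes):
            return True
    return False

print(is_vertex_transitive(nx.petersen_graph()))  # True: vertex-transitive although not a Cayley graph
print(is_vertex_transitive(nx.frucht_graph()))    # False: 3-regular, but only the trivial automorphism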
[ { "math_id": 0, "text": "f : G \\to G\\ " }, { "math_id": 1, "text": "f(v_1) = v_2.\\ " } ]
https://en.wikipedia.org/wiki?curid=581175
5811855
Riesz mean
In mathematics, the Riesz mean is a certain mean of the terms in a series. It was introduced by Marcel Riesz in 1911 as an improvement over the Cesàro mean. The Riesz mean should not be confused with the Bochner–Riesz mean or the Strong–Riesz mean. Definition. Given a series formula_0, the Riesz mean of the series is defined by formula_1 Sometimes, a generalized Riesz mean is defined as formula_2 Here, the formula_3 are a sequence with formula_4 and with formula_5 as formula_6. Other than this, the formula_3 are taken as arbitrary. Riesz means are often used to explore the summability of sequences; typical summability theorems discuss the case of formula_7 for some sequence formula_8. Typically, a sequence is summable when the limit formula_9 exists, or the limit formula_10 exists, although the precise summability theorems in question often impose additional conditions. Special cases. Let formula_11 for all formula_12. Then formula_13 Here, one must take formula_14; formula_15 is the Gamma function and formula_16 is the Riemann zeta function. The power series formula_17 can be shown to be convergent for formula_18. Note that the integral is of the form of an inverse Mellin transform. Another interesting case connected with number theory arises by taking formula_19 where formula_20 is the von Mangoldt function. Then formula_21 Again, one must take "c" &gt; 1. The sum over "ρ" is the sum over the zeroes of the Riemann zeta function, and formula_22 is convergent for "λ" &gt; 1. The integrals that occur here are similar to the Nörlund–Rice integral; very roughly, they can be connected to that integral via Perron's formula.
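The leading term of the first special case above is easy to check numerically. A minimal sketch (plain Python; the values of λ and δ are illustrative choices): for formula_11 the Riesz mean should behave like λ/(1 + δ) up to lower-order terms.

def riesz_mean(a, lam, delta):
    # sum of (1 - n/lam)**delta * a(n) over 1 <= n <= lam
    return sum((1 - n / lam) ** delta * a(n) for n in range(1, int(lam) + 1))

delta = 1.0
for lam in (10.0, 100.0, 1000.0):
    print(lam, riesz_mean(lambda n: 1.0, lam, delta), lam / (1 + delta))
# the last two columns agree to leading order as lam grows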
[ { "math_id": 0, "text": "\\{s_n\\}" }, { "math_id": 1, "text": "s^\\delta(\\lambda) = \n\\sum_{n\\le \\lambda} \\left(1-\\frac{n}{\\lambda}\\right)^\\delta s_n " }, { "math_id": 2, "text": "R_n = \\frac{1}{\\lambda_n} \\sum_{k=0}^n (\\lambda_k-\\lambda_{k-1})^\\delta s_k" }, { "math_id": 3, "text": "\\lambda_n" }, { "math_id": 4, "text": "\\lambda_n\\to\\infty" }, { "math_id": 5, "text": "\\lambda_{n+1}/\\lambda_n\\to 1" }, { "math_id": 6, "text": "n\\to\\infty" }, { "math_id": 7, "text": "s_n = \\sum_{k=0}^n a_k" }, { "math_id": 8, "text": "\\{a_k\\}" }, { "math_id": 9, "text": "\\lim_{n\\to\\infty} R_n" }, { "math_id": 10, "text": "\\lim_{\\delta\\to 1,\\lambda\\to\\infty}s^\\delta(\\lambda)" }, { "math_id": 11, "text": "a_n=1" }, { "math_id": 12, "text": "n" }, { "math_id": 13, "text": " \n\\sum_{n\\le \\lambda} \\left(1-\\frac{n}{\\lambda}\\right)^\\delta\n= \\frac{1}{2\\pi i} \\int_{c-i\\infty}^{c+i\\infty} \n\\frac{\\Gamma(1+\\delta)\\Gamma(s)}{\\Gamma(1+\\delta+s)} \\zeta(s) \\lambda^s \\, ds\n= \\frac{\\lambda}{1+\\delta} + \\sum_n b_n \\lambda^{-n}.\n" }, { "math_id": 14, "text": "c>1" }, { "math_id": 15, "text": "\\Gamma(s)" }, { "math_id": 16, "text": "\\zeta(s)" }, { "math_id": 17, "text": "\\sum_n b_n \\lambda^{-n}" }, { "math_id": 18, "text": "\\lambda > 1" }, { "math_id": 19, "text": "a_n=\\Lambda(n)" }, { "math_id": 20, "text": "\\Lambda(n)" }, { "math_id": 21, "text": " \n\\sum_{n\\le \\lambda} \\left(1-\\frac{n}{\\lambda}\\right)^\\delta \\Lambda(n)\n= - \\frac{1}{2\\pi i} \\int_{c-i\\infty}^{c+i\\infty} \n\\frac{\\Gamma(1+\\delta)\\Gamma(s)}{\\Gamma(1+\\delta+s)} \n\\frac{\\zeta^\\prime(s)}{\\zeta(s)} \\lambda^s \\, ds\n= \\frac{\\lambda}{1+\\delta} + \n\\sum_\\rho \\frac {\\Gamma(1+\\delta)\\Gamma(\\rho)}{\\Gamma(1+\\delta+\\rho)}\n+\\sum_n c_n \\lambda^{-n}.\n" }, { "math_id": 22, "text": "\\sum_n c_n \\lambda^{-n} \\, " } ]
https://en.wikipedia.org/wiki?curid=5811855
58123003
Sorting number
In mathematics and computer science, the sorting numbers are a sequence of numbers introduced in 1950 by Hugo Steinhaus for the analysis of comparison sort algorithms. These numbers give the worst-case number of comparisons used by both binary insertion sort and merge sort. However, there are other algorithms that use fewer comparisons. Formula and examples. The formula_0th sorting number is given by the formula &lt;templatestyles src="Block indent/styles.css"/&gt;formula_1 The sequence of numbers given by this formula (starting with formula_2) is &lt;templatestyles src="Block indent/styles.css"/&gt;0, 1, 3, 5, 8, 11, 14, 17, 21, 25, 29, 33, 37, ... The same sequence of numbers can also be obtained from the recurrence relation, formula_3. It is an example of a 2-regular sequence. Asymptotically, the value of the formula_0th sorting number fluctuates between approximately formula_4 and formula_5 depending on the ratio between formula_0 and the nearest power of two. Application to sorting. In 1950, Hugo Steinhaus observed that these numbers count the number of comparisons used by binary insertion sort, and conjectured (incorrectly) that they give the minimum number of comparisons needed to sort formula_0 items using any comparison sort. The conjecture was disproved in 1959 by L. R. Ford Jr. and Selmer M. Johnson, who found a different sorting algorithm, the Ford–Johnson merge-insert sort, using fewer comparisons. The same sequence of sorting numbers also gives the worst-case number of comparisons used by merge sort to sort formula_0 items. Other applications. The sorting numbers (shifted by one position) also give the sizes of the shortest possible superpatterns for the layered permutations. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
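The closed-form formula and the recurrence relation are easy to cross-check. A minimal sketch (plain Python; the range of values tested is an arbitrary choice):

from math import ceil, log2
from functools import lru_cache

def sorting_number(n):
    # n * ceil(log2 n) - 2**ceil(log2 n) + 1
    k = ceil(log2(n))
    return n * k - 2 ** k + 1

@lru_cache(maxsize=None)
def A(n):
    # recurrence A(n) = A(floor(n/2)) + A(ceil(n/2)) + n - 1, with A(1) = 0
    if n <= 1:
        return 0
    return A(n // 2) + A(n - n // 2) + n - 1

print([sorting_number(n) for n in range(1, 14)])              # 0, 1, 3, 5, 8, 11, 14, 17, 21, 25, 29, 33, 37
print(all(sorting_number(n) == A(n) for n in range(1, 1000)))  # True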
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "\\displaystyle n\\lceil\\log_2 n\\rceil - 2^{\\lceil\\log_2 n\\rceil} + 1." }, { "math_id": 2, "text": "n = 1" }, { "math_id": 3, "text": "A(n) = A\\bigl(\\lfloor n/2\\rfloor\\bigr) + A\\bigl(\\lceil n/2\\rceil\\bigr) + n - 1" }, { "math_id": 4, "text": "n\\log_2 n - n" }, { "math_id": 5, "text": "n\\log_2 n - 0.915n," } ]
https://en.wikipedia.org/wiki?curid=58123003
5812333
Closed-loop pole
Positions of a closed-loop transfer function's poles in the s-plane In systems theory, closed-loop poles are the positions of the poles (or eigenvalues) of a closed-loop transfer function in the s-plane. The open-loop transfer function is equal to the product of all transfer function blocks in the forward path in the block diagram. The closed-loop transfer function is obtained by dividing the open-loop transfer function by the sum of one and the product of all transfer function blocks throughout the negative feedback loop. The closed-loop transfer function may also be obtained by algebraic or block diagram manipulation. Once the closed-loop transfer function is obtained for the system, the closed-loop poles are obtained by solving the characteristic equation. The characteristic equation is nothing more than setting the denominator of the closed-loop transfer function to zero. In control theory there are two main methods of analyzing feedback systems: the transfer function (or frequency domain) method and the state space method. When the transfer function method is used, attention is focused on the locations in the s-plane where the transfer function is undefined (the "poles") or zero (the "zeroes"; see Zeroes and poles). Two different transfer functions are of interest to the designer. If the feedback loops in the system are opened (that is prevented from operating) one speaks of the "open-loop transfer function", while if the feedback loops are operating normally one speaks of the "closed-loop transfer function". For more on the relationship between the two, see root-locus. Closed-loop poles in control theory. The response of a linear time-invariant system to any input can be derived from its impulse response and step response. The eigenvalues of the system determine completely the natural response (unforced response). In control theory, the response to any input is a combination of a transient response and steady-state response. Therefore, a crucial design parameter is the location of the eigenvalues, or closed-loop poles. In root-locus design, the gain "K" is usually parameterized. Each point on the locus satisfies the angle condition and magnitude condition and corresponds to a different value of "K". For negative feedback systems, the closed-loop poles move along the root-locus from the open-loop poles to the open-loop zeroes as the gain is increased. For this reason, the root-locus is often used for design of proportional control, i.e. those for which formula_0. Finding closed-loop poles. Consider a simple feedback system with controller formula_0, plant formula_1 and transfer function formula_2 in the feedback path. Note that a unity feedback system has formula_3 and the block is omitted. For this system, the open-loop transfer function is the product of the blocks in the forward path, formula_4. The product of the blocks around the entire closed loop is formula_5. Therefore, the closed-loop transfer function is formula_6 The closed-loop poles, or eigenvalues, are obtained by solving the characteristic equation formula_7. In general, the solution will be n complex numbers where n is the order of the characteristic polynomial. The preceding is valid for single-input-single-output systems (SISO). An extension is possible for multiple input multiple output systems, that is for systems where formula_1 and formula_8 are matrices whose elements are made of transfer functions. In this case the poles are the solution of the equation formula_9 References. 
&lt;templatestyles src="Reflist/styles.css" /&gt;
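As a worked example of the procedure described above, the following minimal sketch (assuming NumPy; the plant G("s") = 1/("s"("s" + 2)), unity feedback and gain "K" = 5 are illustrative choices, not taken from the article) finds the closed-loop poles as the roots of the characteristic equation:

import numpy as np

# transfer functions as polynomial coefficient lists, highest power of s first
numG, denG = [1.0], [1.0, 2.0, 0.0]   # G(s) = 1 / (s^2 + 2 s)
numH, denH = [1.0], [1.0]             # H(s) = 1 (unity feedback)
K = 5.0

# 1 + K*G*H = 0  is equivalent to  denG*denH + K*numG*numH = 0
char_poly = np.polyadd(np.polymul(denG, denH), K * np.polymul(numG, numH))
print(np.roots(char_poly))            # closed-loop poles, approximately -1 + 2j and -1 - 2j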
[ { "math_id": 0, "text": "\\textbf{G}_c = K" }, { "math_id": 1, "text": "\\textbf{G}(s)" }, { "math_id": 2, "text": "\\textbf{H}(s)" }, { "math_id": 3, "text": "\\textbf{H}(s)=1" }, { "math_id": 4, "text": "\\textbf{G}_c\\textbf{G} = K\\textbf{G}" }, { "math_id": 5, "text": "\\textbf{G}_c\\textbf{G}\\textbf{H} = K\\textbf{G}\\textbf{H}" }, { "math_id": 6, "text": "\\textbf{T}(s)=\\frac{K\\textbf{G}}{1+K\\textbf{G}\\textbf{H}}." }, { "math_id": 7, "text": "{1+K\\textbf{G}\\textbf{H}}=0" }, { "math_id": 8, "text": "\\textbf{K}(s)" }, { "math_id": 9, "text": "\\det(\\textbf{I}+\\textbf{G}(s)\\textbf{K}(s))=0. \\, " } ]
https://en.wikipedia.org/wiki?curid=5812333
58128086
Supergolden ratio
Algebraic integer, approximately 1.46557 In mathematics, the supergolden ratio is a geometrical proportion close to 85/58. Its true value is the real solution of the equation "x"³ = "x"² + 1. The name "supergolden ratio" results from analogy with the golden ratio, the positive solution of the equation "x"² = "x" + 1. Definition. Two quantities a &gt; b &gt; 0 are in the supergolden ratio-squared if formula_0. The ratio formula_1 is commonly denoted ψ. Based on this definition, one has formula_2 It follows that the supergolden ratio is found as the unique real solution of the cubic equation formula_3 The decimal expansion of the root begins as formula_4 (sequence in the OEIS). The minimal polynomial for the reciprocal root is the depressed cubic formula_5, thus the simplest solution with Cardano's formula is formula_6 formula_7 or, using the hyperbolic sine, formula_8 ψ is the superstable fixed point of the iteration formula_9. The iteration formula_10 results in the continued radical formula_11 Dividing the defining trinomial formula_12 by x − ψ one obtains formula_13, and the conjugate elements of ψ are formula_14 with formula_15 and formula_16 Properties. Many properties of ψ are related to the golden ratio φ. For example, the supergolden ratio can be expressed in terms of itself as the infinite geometric series formula_17 and formula_18 in comparison to the golden ratio identity formula_19 and "vice versa". Additionally, formula_20, while formula_21 For every integer formula_22 one has formula_23 Argument formula_24 satisfies the identity formula_25 Continued fraction pattern of a few low powers: formula_26 (13/19), formula_27, formula_28 (22/15), formula_29 (15/7), formula_30 (22/7), formula_31 (60/13), formula_32 (115/17). Notably, the continued fraction of ψ² begins as a permutation of the first six natural numbers; the next term is equal to their sum + 1. The supergolden ratio is the fourth smallest Pisot number. Because the absolute value formula_33 of the algebraic conjugates is smaller than 1, powers of ψ generate almost integers. For example: formula_34. After eleven rotation steps the phases of the inward spiraling conjugate pair nearly align with the imaginary axis. The minimal polynomial of the supergolden ratio formula_35 has discriminant formula_36. The Hilbert class field of the imaginary quadratic field formula_37 can be formed by adjoining ψ. With argument formula_38 a generator for the ring of integers of K, one has the special value of the Dedekind eta quotient formula_39. Expressed in terms of the Weber–Ramanujan class invariant Gn: formula_40. Properties of the related Klein j-invariant result in the near identity formula_41. The difference is &lt; 1/143092. The elliptic integral singular value formula_42 for r = 31 has the closed form expression formula_43 (which is less than 1/10 the eccentricity of the orbit of Venus). Narayana sequence. Narayana's cows is a recurrence sequence originating from a problem proposed by the 14th century Indian mathematician Narayana Pandita. He asked for the number of cows and calves in a herd after 20 years, beginning with one cow in the first year, where each cow gives birth to one calf each year from the age of three onwards.
The Narayana sequence has a close connection to the Fibonacci and Padovan sequences and plays an important role in data coding, cryptography and combinatorics. The number of compositions of n into parts 1 and 3 is counted by the "n"th Narayana number. The Narayana sequence is defined by the third-order recurrence relation formula_44 for "n" &gt; 2, with initial values formula_45. The first few terms are 1, 1, 1, 2, 3, 4, 6, 9, 13, 19, 28, 41, 60, 88... (sequence in the OEIS). The limit ratio between consecutive terms is the supergolden ratio. The first 11 indices n for which formula_46 is prime are n = 3, 4, 8, 9, 11, 16, 21, 25, 81, 6241, 25747 (sequence in the OEIS). The last number has 4274 decimal digits. The sequence can be extended to negative indices using formula_47. The generating function of the Narayana sequence is given by formula_48 for formula_49 The Narayana numbers are related to sums of binomial coefficients by formula_50. The characteristic equation of the recurrence is formula_51. If the three solutions are the real root α and the conjugate pair β and γ, the Narayana numbers can be computed with the Binet formula formula_52, with real a and conjugates b and c the roots of formula_53. Since formula_54 and formula_55, the "n"th Narayana number is the nearest integer to formula_56, with "n" ≥ 0 and formula_57 Coefficients formula_58 result in the Binet formula for the related sequence formula_59. The first few terms are 3, 1, 1, 4, 5, 6, 10, 15, 21, 31, 46, 67, 98, 144... (sequence in the OEIS). This anonymous sequence has the Fermat property: if p is prime, formula_60. The converse does not hold, but the small number of odd pseudoprimes formula_61 makes the sequence special. The 8 odd composite numbers below 10^8 to pass the test are n = 1155, 552599, 2722611, 4822081, 10479787, 10620331, 16910355, 66342673. The Narayana numbers are obtained as integral powers "n" &gt; 3 of a matrix with real eigenvalue ψ: formula_62 formula_63 The trace of the "n"th power of this matrix gives the above related sequence. Alternatively, the matrix can be interpreted as the incidence matrix for a D0L Lindenmayer system on the alphabet {a, b, c} with corresponding substitution rule formula_64. The series of words produced by iterating the substitution has the property that the numbers of c's, b's and a's are equal to successive Narayana numbers. The lengths of these words are formula_65 Associated with this string rewriting process is a compact set composed of self-similar tiles called the Rauzy fractal, which visualizes the combinatorial information contained in a multiple-generation three-letter sequence. Supergolden rectangle. A supergolden rectangle is a rectangle whose side lengths are in a ψ:1 ratio. Compared to the golden rectangle, containing linear ratios formula_66, the supergolden rectangle has one more degree of self-similarity. Given a rectangle of height 1, length ψ and diagonal length formula_67 (according to formula_68). On the left-hand side, cut off a square of side length 1 and mark the intersection with the falling diagonal. The remaining rectangle now has aspect ratio formula_69 (according to formula_70).
The upper right triangle has altitude formula_71 and the perpendicular foot coincides with the intersection point. Divide the original rectangle into four parts by a second, horizontal cut passing through the intersection point. Numbering counter-clockwise starting from the upper right, the resulting first, second and fourth parts are all supergolden rectangles; while the third has aspect ratio formula_72. The original rectangle and successively the second, first and fourth parts have diagonal lengths in the ratios formula_73 or, equivalently, areas formula_74. The areas of the diagonally opposite first and third parts are equal. In the first part supergolden rectangle perpendicular to the original one, the process can be repeated at a scale of formula_75. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
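A short numerical check of the quantities discussed above (assuming NumPy; the number of Narayana terms computed is an arbitrary choice):

import numpy as np

# real root of x^3 - x^2 - 1 = 0
psi = max(r.real for r in np.roots([1, -1, 0, -1]) if abs(r.imag) < 1e-12)
print(psi)                  # 1.4655712318767682...
print(psi**3 - psi**2 - 1)  # ~0, the defining cubic
print(psi**11)              # ~67.0002..., an almost integer

# Narayana sequence N_n = N_{n-1} + N_{n-3}
N = [1, 1, 1]
for n in range(3, 40):
    N.append(N[n - 1] + N[n - 3])
print(N[:14])               # 1, 1, 1, 2, 3, 4, 6, 9, 13, 19, 28, 41, 60, 88
print(N[-1] / N[-2])        # the ratio of consecutive terms approaches psi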
[ { "math_id": 0, "text": " \\left( \\frac{a+b}{a} \\right)^{2} = \\frac{a}{b} " }, { "math_id": 1, "text": " \\frac{a+b}{a} " }, { "math_id": 2, "text": " \\begin{align}\n1&=\\left( \\frac{a+b}{a} \\right)^{2} \\frac{b}{a}\\\\\n&=\\left( \\frac{a+b}{a} \\right)^{2} \\left( \\frac{a+b}{a} - 1 \\right)\\\\\n&\\implies \\psi^{2} \\left( \\psi - 1 \\right) = 1 \\end{align} " }, { "math_id": 3, "text": "\\psi^{3} - \\psi^{2} - 1 = 0." }, { "math_id": 4, "text": "1.465\\,571\\,231\\,876\\,768..." }, { "math_id": 5, "text": "x^{3} +x -1" }, { "math_id": 6, "text": " w_{1,2} = \\left( 1 \\pm \\frac{1}{3} \\sqrt{ \\frac{31}{3}} \\right) /2 " }, { "math_id": 7, "text": " 1 /\\psi =\\sqrt[3]{w_1} +\\sqrt[3]{w_2} " }, { "math_id": 8, "text": " 1 /\\psi =\\frac{2}{ \\sqrt{3}} \\sinh \\left( \\frac{1}{3} \\operatorname{arsinh} \\left( \\frac{3 \\sqrt{3}}{2} \\right) \\right)." }, { "math_id": 9, "text": " x \\gets (2x^{3}+1) /(3x^{2}+1) " }, { "math_id": 10, "text": " x \\gets \\sqrt[3]{1 +x^{2}} " }, { "math_id": 11, "text": " \\psi =\\sqrt[3]{1 +\\sqrt[3/2]{1 +\\sqrt[3/2]{1 +\\cdots}}} " }, { "math_id": 12, "text": "x^{3} -x^{2} -1" }, { "math_id": 13, "text": " x^{2} +x /\\psi^{2} +1 /\\psi " }, { "math_id": 14, "text": " x_{1,2} = \\left( -1 \\pm i \\sqrt{4 \\psi^2 + 3} \\right) /2 \\psi^{2}," }, { "math_id": 15, "text": "x_1 +x_2 = 1 -\\psi \\;" }, { "math_id": 16, "text": "\\; x_1x_2 =1 /\\psi." }, { "math_id": 17, "text": " \\psi = \\sum_{n=0}^{\\infty} \\psi^{-3n}" }, { "math_id": 18, "text": " \\,\\psi^2 = 2\\sum_{n=0}^{\\infty} \\psi^{-7n}," }, { "math_id": 19, "text": " \\varphi = \\sum_{n=0}^{\\infty} \\varphi^{-2n} " }, { "math_id": 20, "text": " 1 +\\varphi^{-1} +\\varphi^{-2} =2 " }, { "math_id": 21, "text": " \\sum_{n=0}^{7} \\psi^{-n} = 3." }, { "math_id": 22, "text": "n" }, { "math_id": 23, "text": "\\begin{align}\n\\psi^{n} &=\\psi^{n-1} +\\psi^{n-3}\\\\\n&=\\psi^{n-2} +\\psi^{n-3} +\\psi^{n-4}\\\\\n&=\\psi^{n-2} +2\\psi^{n-4} +\\psi^{n-6}.\\end{align}" }, { "math_id": 24, "text": "\\;\\theta =\\arcsec(2\\psi^{4})\\;" }, { "math_id": 25, "text": "\\;\\tan(\\theta) -4\\sin(\\theta) =3\\sqrt{3}." }, { "math_id": 26, "text": " \\psi^{-1} = [0;1,2,6,1,3,5,4,22,...] \\approx 0.6823 " }, { "math_id": 27, "text": "\\ \\psi^{0} = [1] " }, { "math_id": 28, "text": "\\ \\psi^{1} = [1;2,6,1,3,5,4,22,1,...] \\approx 1.4656 " }, { "math_id": 29, "text": "\\ \\psi^{2} = [2;6,1,3,5,4,22,1,1,...] \\approx 2.1479 " }, { "math_id": 30, "text": "\\ \\psi^{3} = [3;6,1,3,5,4,22,1,1,...] \\approx 3.1479 " }, { "math_id": 31, "text": "\\ \\psi^{4} = [4;1,1,1,1,2,2,1,2,2,...] \\approx 4.6135 " }, { "math_id": 32, "text": "\\ \\psi^{5} = [6;1,3,5,4,22,1,1,4,...] \\approx 6.7614 " }, { "math_id": 33, "text": "1 /\\sqrt{\\psi}" }, { "math_id": 34, "text": "\\psi^{11} = 67.000222765... 
\\approx 67 + 1/4489" }, { "math_id": 35, "text": " m(x) = x^{3}-x^{2}-1 " }, { "math_id": 36, "text": "\\Delta=-31" }, { "math_id": 37, "text": " K = \\mathbb{Q}( \\sqrt{\\Delta}) " }, { "math_id": 38, "text": " \\tau=(1 +\\sqrt{\\Delta})/2\\, " }, { "math_id": 39, "text": " \\psi = \\frac{ e^{\\pi i/24}\\,\\eta(\\tau)}{ \\sqrt{2}\\,\\eta(2\\tau)} " }, { "math_id": 40, "text": " \\psi = \\frac{ \\mathfrak{f} ( \\sqrt{ \\Delta} )}{ \\sqrt{2} } = \\frac{ G_{31} }{ \\sqrt[4]{2} } " }, { "math_id": 41, "text": " e^{\\pi \\sqrt{- \\Delta}} \\approx \\left( \\sqrt{2}\\,\\psi \\right)^{24} - 24 " }, { "math_id": 42, "text": " k_{r} =\\lambda^{*}(r) " }, { "math_id": 43, "text": " \\lambda^{*}(31) =\\sin ( \\arcsin \\left( ( \\sqrt[4]{2}\\,\\psi)^{-12} \\right) /2) " }, { "math_id": 44, "text": " N_{n} =N_{n-1} +N_{n-3} " }, { "math_id": 45, "text": " N_{0} =N_{1} =N_{2} =1 " }, { "math_id": 46, "text": "N_{n}" }, { "math_id": 47, "text": " N_{n} =N_{n+3} -N_{n+2}" }, { "math_id": 48, "text": " \\frac{1}{1 - x - x^{3}} = \\sum_{n=0}^{\\infty} N_{n}x^{n} " }, { "math_id": 49, "text": "x < 1 /\\psi" }, { "math_id": 50, "text": " N_{n} = \\sum_{k=0}^{\\lfloor n / 3 \\rfloor}{n-2k \\choose k}" }, { "math_id": 51, "text": "x^{3}-x^{2}-1=0" }, { "math_id": 52, "text": " N_{n-2} =a \\alpha^{n} +b \\beta^{n} +c \\gamma^{n} " }, { "math_id": 53, "text": " 31x^{3} +x -1 = 0 " }, { "math_id": 54, "text": " \\left\\vert b \\beta^{n} +c \\gamma^{n} \\right\\vert < 1 /\\sqrt{ \\alpha^{n}} " }, { "math_id": 55, "text": " \\alpha = \\psi " }, { "math_id": 56, "text": " a\\,\\psi^{n+2} " }, { "math_id": 57, "text": " a =\\psi /( \\psi^{2} +3) =" }, { "math_id": 58, "text": " a =b =c =1 " }, { "math_id": 59, "text": " A_{n} =N_{n} +2N_{n-3} " }, { "math_id": 60, "text": " A_{p} \\equiv A_{1} \\bmod p " }, { "math_id": 61, "text": "\\,n \\mid (A_{n} -1) " }, { "math_id": 62, "text": " Q = \\begin{pmatrix} 1 & 0 & 1 \\\\ 1 & 0 & 0 \\\\ 0 & 1 & 0 \\end{pmatrix} ," }, { "math_id": 63, "text": " Q^{n} = \\begin{pmatrix} N_{n} & N_{n-2} & N_{n-1} \\\\ N_{n-1} & N_{n-3} & N_{n-2} \\\\ N_{n-2} & N_{n-4} & N_{n-3} \\end{pmatrix} " }, { "math_id": 64, "text": "\\begin{cases}\na \\;\\mapsto \\;ab\\\\\nb \\;\\mapsto \\;c\\\\\nc \\;\\mapsto \\;a\n\\end{cases}" }, { "math_id": 65, "text": "l(w_n) =N_n." }, { "math_id": 66, "text": "\\varphi^{2}:\\varphi:1" }, { "math_id": 67, "text": " \\sqrt{\\psi^{3}}" }, { "math_id": 68, "text": "1+\\psi^{2}=\\psi^{3}" }, { "math_id": 69, "text": "\\psi^{2}:1" }, { "math_id": 70, "text": "\\psi-1=\\psi^{-2}" }, { "math_id": 71, "text": "1 /\\sqrt{\\psi}\\," }, { "math_id": 72, "text": "\\psi^{3}:1" }, { "math_id": 73, "text": "\\psi^{3}:\\psi^{2}:\\psi:1" }, { "math_id": 74, "text": "\\psi^{6}:\\psi^{4}:\\psi^{2}:1" }, { "math_id": 75, "text": "1:\\psi^{2}" }, { "math_id": 76, "text": "x^{3}=x^{2}+1" }, { "math_id": 77, "text": "x^{2}=x+1" }, { "math_id": 78, "text": "x^{3}=x+1" }, { "math_id": 79, "text": "x^{3}=2x^{2}+1" } ]
https://en.wikipedia.org/wiki?curid=58128086
58128725
Merge-insertion sort
Type of comparison sorting algorithm In computer science, merge-insertion sort or the Ford–Johnson algorithm is a comparison sorting algorithm published in 1959 by L. R. Ford Jr. and Selmer M. Johnson. It uses fewer comparisons in the worst case than the best previously known algorithms, binary insertion sort and merge sort, and for 20 years it was the sorting algorithm with the fewest known comparisons. Although not of practical significance, it remains of theoretical interest in connection with the problem of sorting with a minimum number of comparisons. The same algorithm may have also been independently discovered by Stanisław Trybuła and Czen Ping. Algorithm. Merge-insertion sort performs the following steps, on an input formula_0 of formula_1 elements: (1) group the elements of formula_0 into formula_2 pairs, arbitrarily, leaving one element unpaired if formula_1 is odd; (2) perform formula_2 comparisons, one per pair, to determine the larger element of each pair; (3) recursively sort the formula_2 larger elements, creating a sorted sequence formula_3 of formula_2 of the input elements in ascending order; (4) insert at the start of formula_3 the element that was paired with the first and smallest element of formula_3; (5) insert the remaining formula_4 elements of formula_5 into formula_3, one at a time, in a specially chosen insertion ordering described below, using binary search in subsequences of formula_3 (as described below) to determine the position at which each element should be inserted. The algorithm is designed to take advantage of the fact that the binary searches used to insert elements into formula_3 are most efficient (from the point of view of worst case analysis) when the length of the subsequence that is searched is one less than a power of two. This is because, for those lengths, all outcomes of the search use the same number of comparisons as each other. To choose an insertion ordering that produces these lengths, consider the sorted sequence formula_3 after step 4 of the outline above (before inserting the remaining elements), and let formula_6 denote the formula_7th element of this sorted sequence. Thus, formula_8 where each element formula_6 with formula_9 is paired with an element formula_10 that has not yet been inserted. (There are no elements formula_11 or formula_12 because formula_13 and formula_14 were paired with each other.) If formula_1 is odd, the remaining unpaired element should also be numbered as formula_15 with formula_7 larger than the indexes of the paired elements. Then, the final step of the outline above can be expanded as follows: partition the uninserted elements formula_15 into groups of contiguous indexes, with two elements in the first group and with the sums of the sizes of every two adjacent groups forming a sequence of powers of two; process the groups in order of increasing index, but within each group insert the elements in decreasing order of index, each by a binary search restricted to the portion of formula_3 that precedes its paired element. This produces the insertion ordering formula_18 Analysis. Let formula_19 denote the number of comparisons that merge-insertion sort makes, in the worst case, when sorting formula_1 elements. This number of comparisons can be broken down as the sum of three terms: the formula_2 comparisons among the pairs, the formula_20 comparisons made by the recursive call, and the comparisons made in the binary insertions of the remaining elements. In the third term, the worst-case number of comparisons for the elements in the first group is two, because each is inserted into a subsequence of formula_3 of length at most three. First, formula_17 is inserted into the three-element subsequence formula_21. Then, formula_16 is inserted into some permutation of the three-element subsequence formula_22, or in some cases into the two-element subsequence formula_23. Similarly, the elements formula_24 and formula_25 of the second group are each inserted into a subsequence of length at most seven, using three comparisons. More generally, the worst-case number of comparisons for the elements in the formula_7th group is formula_26, because each is inserted into a subsequence of length at most formula_27. By summing the number of comparisons used for all the elements and solving the resulting recurrence relation, this analysis can be used to compute the values of formula_19, giving the formula formula_28 or, in closed form, formula_29 For formula_30 the numbers of comparisons are 0, 1, 3, 5, 7, 10, 13, 16, 19, 22, 26, 30, 34, ... (sequence in the OEIS). Relation to other comparison sorts. 
The algorithm is called merge-insertion sort because the initial comparisons that it performs before its recursive call (pairing up arbitrary items and comparing each pair) are the same as the initial comparisons of merge sort, while the comparisons that it performs after the recursive call (using binary search to insert elements one by one into a sorted list) follow the same principle as insertion sort. In this sense, it is a hybrid algorithm that combines both merge sort and insertion sort. For small inputs (up to formula_31) its numbers of comparisons equal the lower bound on comparison sorting of formula_32. However, for larger inputs the number of comparisons made by the merge-insertion algorithm is bigger than this lower bound. Merge-insertion sort also performs fewer comparisons than the sorting numbers, which count the comparisons made by binary insertion sort or merge sort in the worst case. The sorting numbers fluctuate between formula_33 and formula_34, with the same leading term but a worse constant factor in the lower-order linear term. Merge-insertion sort is the sorting algorithm with the minimum possible comparisons for formula_1 items whenever formula_35, and it has the fewest comparisons known for formula_36. For 20 years, merge-insertion sort was the sorting algorithm with the fewest comparisons known for all input lengths. However, in 1979 Glenn Manacher published another sorting algorithm that used even fewer comparisons, for large enough inputs. It remains unknown exactly how many comparisons are needed for sorting, for all formula_1, but Manacher's algorithm and later record-breaking sorting algorithms have all used modifications of the merge-insertion sort ideas. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
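The comparison counts discussed above are straightforward to tabulate. A minimal sketch (plain Python; the sample values of formula_1 are arbitrary choices) comparing the merge-insertion count, the information-theoretic lower bound, and the sorting numbers:

from math import ceil, log2, factorial

def merge_insertion_comparisons(n):
    # C(n) = sum over i of ceil(log2(3*i/4))
    return sum(ceil(log2(3 * i / 4)) for i in range(1, n + 1))

def lower_bound(n):
    return ceil(log2(factorial(n)))

def sorting_number(n):
    k = ceil(log2(n))
    return n * k - 2 ** k + 1

for n in (5, 11, 12, 21, 22):
    print(n, lower_bound(n), merge_insertion_comparisons(n), sorting_number(n))
# up to n = 11 the lower bound and the merge-insertion count coincide;
# the merge-insertion count never exceeds the sorting number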
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "\\lfloor n/2\\rfloor" }, { "math_id": 3, "text": "S" }, { "math_id": 4, "text": "\\lceil n/2\\rceil-1" }, { "math_id": 5, "text": "X\\setminus S" }, { "math_id": 6, "text": "x_i" }, { "math_id": 7, "text": "i" }, { "math_id": 8, "text": "S=(x_1,x_2,x_3,\\dots)," }, { "math_id": 9, "text": "i\\ge 3" }, { "math_id": 10, "text": "y_i < x_i" }, { "math_id": 11, "text": "y_1" }, { "math_id": 12, "text": "y_2" }, { "math_id": 13, "text": "x_1" }, { "math_id": 14, "text": "x_2" }, { "math_id": 15, "text": "y_i" }, { "math_id": 16, "text": "y_3" }, { "math_id": 17, "text": "y_4" }, { "math_id": 18, "text": "y_4,y_3,y_6,y_5,y_{12},y_{11},y_{10},y_9,y_8,y_7,y_{22},y_{21}\\dots" }, { "math_id": 19, "text": "C(n)" }, { "math_id": 20, "text": "C(\\lfloor n/2\\rfloor)" }, { "math_id": 21, "text": "(x_1,x_2,x_3)" }, { "math_id": 22, "text": "(x_1,x_2,y_4)" }, { "math_id": 23, "text": "(x_1,x_2)" }, { "math_id": 24, "text": "y_6" }, { "math_id": 25, "text": "y_5" }, { "math_id": 26, "text": "i+1" }, { "math_id": 27, "text": "2^{i+1}-1" }, { "math_id": 28, "text": "C(n)=\\sum_{i=1}^n \\left\\lceil \\log_2 \\frac{3i}{4} \\right\\rceil \\approx n\\log_2 n - 1.415n" }, { "math_id": 29, "text": "C(n)=n\\biggl\\lceil\\log_2\\frac{3n}{4}\\biggr\\rceil-\\biggl\\lfloor\\frac{2^{\\lfloor \\log_2 6n\\rfloor}}{3}\\biggr\\rfloor+\\biggl\\lfloor\\frac{\\log_2 6n}{2}\\biggr\\rfloor." }, { "math_id": 30, "text": "n=1,2,\\dots" }, { "math_id": 31, "text": "n=11" }, { "math_id": 32, "text": "\\lceil\\log_2 n!\\rceil\\approx n\\log_2 n - 1.443n" }, { "math_id": 33, "text": "n\\log_2 n - 0.915n" }, { "math_id": 34, "text": "n\\log_2 n - n" }, { "math_id": 35, "text": "n\\le 22" }, { "math_id": 36, "text": "n\\le 46" } ]
https://en.wikipedia.org/wiki?curid=58128725
58129622
Band model
In geometry, the band model is a conformal model of the hyperbolic plane. The band model employs a portion of the Euclidean plane between two parallel lines. Distance is preserved along one line through the middle of the band. Assuming the band is given by formula_0, the metric is given by formula_1. Geodesics include the line along the middle of the band, and any open line segment perpendicular to boundaries of the band connecting the sides of the band. Every end of a geodesic either meets a boundary of the band at a right angle or is asymptotic to the midline; the midline itself is the only geodesic that does not meet a boundary. Lines parallel to the boundaries of the band within the band are hypercycles whose centers are the line through the middle of the band. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
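A small numerical illustration of the metric (plain Python; the endpoints and the polygonal detour are arbitrary choices): between two points far apart near the boundary, a path that drops to the midline and travels along it is shorter than the straight Euclidean segment, consistent with distance being preserved only along the middle of the band.

from math import cos

def path_length(points, steps_per_edge=2000):
    # approximate length of a polygonal path under the metric |dz| * sec(Im z)
    total = 0.0
    for a, b in zip(points, points[1:]):
        for k in range(steps_per_edge):
            z0 = a + (b - a) * k / steps_per_edge
            z1 = a + (b - a) * (k + 1) / steps_per_edge
            total += abs(z1 - z0) / cos(((z0 + z1) / 2).imag)
    return total

print(path_length([-5 + 1j, 5 + 1j]))                   # straight segment, roughly 18.5
print(path_length([-5 + 1j, -4 + 0j, 4 + 0j, 5 + 1j]))  # detour via the midline, roughly 11.5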
[ { "math_id": 0, "text": "\\{z \\in \\mathbb C: \\left|\\operatorname {Im} z\\right| < \\pi / 2\\}" }, { "math_id": 1, "text": "|dz| \\sec (\\operatorname{Im} z)" } ]
https://en.wikipedia.org/wiki?curid=58129622
5813188
Ricker model
The Ricker model, named after Bill Ricker, is a classic discrete population model which gives the expected number "N""t"+1 (or density) of individuals in generation "t" + 1 as a function of the number of individuals in the previous generation, formula_0 Here "r" is interpreted as an intrinsic growth rate and "k" as the carrying capacity of the environment. Unlike some other models such as the logistic map, the carrying capacity in the Ricker model is not a hard barrier that cannot be exceeded by the population; it only determines the overall scale of the population. The Ricker model was introduced in 1954 by Ricker in the context of stock and recruitment in fisheries. The model can be used to predict the number of fish that will be present in a fishery. Subsequent work has derived the model under other assumptions such as scramble competition, within-year resource-limited competition or even as the outcome of source-sink Malthusian patches linked by density-dependent dispersal. The Ricker model is a limiting case of the Hassell model, which takes the form formula_1 When "c" = 1, the Hassell model is simply the Beverton–Holt model. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
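A minimal sketch of iterating the map (plain Python; the values of "r", "k" and the initial density are illustrative choices):

from math import exp

def ricker(n0, r, k, steps):
    traj = [n0]
    for _ in range(steps):
        n = traj[-1]
        traj.append(n * exp(r * (1 - n / k)))   # N_{t+1} = N_t * exp(r * (1 - N_t / k))
    return traj

print(ricker(5.0, 0.5, 100.0, 50)[-1])   # small r: the population settles near the carrying capacity k
print(ricker(5.0, 3.0, 100.0, 10))       # large r: oscillatory or chaotic dynamics; values may exceed k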
[ { "math_id": 0, "text": "N_{t+1} = N_t e^{r\\left(1-\\frac{N_t}{k}\\right)}.\\," }, { "math_id": 1, "text": "N_{t+1} = k_1 \\frac{N_t}{ \\left(1+k_2 N_t\\right)^c}. " } ]
https://en.wikipedia.org/wiki?curid=5813188
581417
Table of standard reduction potentials for half-reactions important in biochemistry
Standard apparent reduction potentials (E°') in biochemistry at pH 7 The values below are standard apparent reduction potentials (E°') for electro-biochemical half-reactions measured at 25 °C, 1 atmosphere and a pH of 7 in aqueous solution. The actual physiological potential depends on the ratio of the reduced (Red) and oxidized (Ox) forms according to the Nernst equation and the thermal voltage. When an oxidizer (Ox) accepts a number "z" of electrons ( "e"−) to be converted into its reduced form (Red), the half-reaction is expressed as: Ox + "z"  "e"− → Red The reaction quotient (Qr) is the ratio of the chemical activity ("a"i) of the reduced form (the reductant, "a"Red) to the activity of the oxidized form (the oxidant, "a"Ox). It is equal to the ratio of their concentrations ("C"i) only if the system is sufficiently dilute and the activity coefficients ("γ"i) are close to unity ("a"i = "γ"i "C"i): formula_0 The Nernst equation is a function of Qr and can be written as follows: formula_1 At chemical equilibrium, the reaction quotient Qr of the product activity ("a"Red) to the reagent activity ("a"Ox) is equal to the equilibrium constant (K) of the half-reaction, and in the absence of a driving force (ΔG = 0) the potential (Ered) also becomes zero. The numerically simplified form of the Nernst equation is expressed as: formula_2 where formula_3 is the standard reduction potential of the half-reaction expressed versus the standard reduction potential of hydrogen. For standard conditions in electrochemistry (T = 25 °C, P = 1 atm and all concentrations fixed at 1 mol/L, or 1 M) the standard reduction potential of hydrogen formula_4 is fixed at zero by convention, as it serves as the reference. The standard hydrogen electrode (SHE), with [ H+] = 1 M, thus works at pH = 0. At pH = 7, when [ H+] = 10^−7 M, the reduction potential formula_5 of  H+ differs from zero because it depends on pH. Solving the Nernst equation for the half-reaction of reduction of two protons into hydrogen gas gives: formula_6 formula_7 In biochemistry and in biological fluids, at pH = 7, it is thus important to note that the reduction potential of the protons ( H+) into hydrogen gas H2 is no longer zero as with the standard hydrogen electrode (SHE) at 1 M  H+ (pH = 0) in classical electrochemistry, but that formula_8 versus the standard hydrogen electrode (SHE). The same also applies for the reduction potential of oxygen: for the reduction of O2 into H2O, formula_9 = 1.229 V, so applying the Nernst equation for pH = 7 gives: formula_6 formula_10 To obtain the values of the reduction potential at pH = 7 for the redox reactions relevant for biological systems, the same kind of conversion exercise is done using the corresponding Nernst equation expressed as a function of pH. The conversion is simple, but care must be taken not to inadvertently mix reduction potentials converted at pH = 7 with other data directly taken from tables referring to SHE (pH = 0). Expression of the Nernst equation as a function of pH. The formula_11 and pH of a solution are related by the Nernst equation, as commonly represented by a Pourbaix diagram (formula_11 – pH plot). For a half-cell equation, conventionally written as a reduction reaction ("i.e.", electrons accepted by an oxidant on the left side): formula_12 The half-cell standard reduction potential formula_9 is given by formula_13 where formula_14 is the standard Gibbs free energy change, z is the number of electrons involved, and F is Faraday's constant.
The Nernst equation relates pH and formula_11: formula_15 where curly braces { } indicate activities, and exponents are shown in the conventional manner. This equation is the equation of a straight line for formula_11 as a function of pH with a slope of formula_16 volt (pH has no units). This equation predicts lower formula_11 at higher pH values. This is observed for the reduction of O2 into H2O, or OH−, and for the reduction of H+ into H2. Formal standard reduction potential combined with the pH dependency. To obtain the reduction potential as a function of the measured concentrations of the redox-active species in solution, it is necessary to express the activities as a function of the concentrations. formula_15 Given that the chemical activity denoted here by { } is the product of the activity coefficient "γ" and the concentration denoted by [ ]: "a"i = "γ"i·"C"i, here expressed as {X} = "γ"X [X] and {X}^"x" = ("γ"X)^"x" [X]^"x", and replacing the logarithm of a product by the sum of the logarithms ("i.e.", log (a·b) = log a + log b), the log of the reaction quotient (formula_17) expressed here above with activities { } (with the {H+} term already isolated separately in the last term as "h" pH) becomes: formula_18 This allows the Nernst equation to be reorganized as: formula_19 formula_20 where formula_21 is the formal standard potential, independent of pH and including the activity coefficients. Combining formula_21 directly with the last term depending on pH gives: formula_22 For pH = 7: formula_23 So, formula_24 It is therefore important to know to what exact definition the value of a reduction potential for a given biochemical redox process reported at pH = 7 refers, and to correctly understand the relationship used. Is it simply formula_25 as directly given by the Nernst equation, or the apparent formal potential formula_26, whose pH correction involves the ratio formula_27? This requires a clear definition of the considered reduction potential, and a sufficiently detailed description of the conditions in which it is valid, along with a complete expression of the corresponding Nernst equation. Were the reported values derived only from thermodynamic calculations, or determined from experimental measurements, and under what specific conditions? Without being able to correctly answer these questions, mixing data from different sources without appropriate conversion can lead to errors and confusion. Determination of the formal standard reduction potential when "C"red/"C"ox = 1. The formal standard reduction potential formula_21 can be defined as the measured reduction potential formula_5 of the half-reaction at unity concentration ratio of the oxidized and reduced species ("i.e.", when "C"red/"C"ox = 1) under given conditions. Indeed, formula_28 when formula_29, and formula_30 when formula_31, because formula_32 and the term formula_33 is included in formula_21. The formal reduction potential makes it possible to work more simply with molar or molal concentrations in place of activities. Because molar and molal concentrations were once referred to as formal concentrations, this could explain the origin of the adjective "formal" in the expression "formal" potential. The formal potential is thus the reversible potential of an electrode at equilibrium immersed in a solution where reactants and products are at unit concentration. If any small incremental change of potential causes a change in the direction of the reaction, "i.e." from reduction to oxidation or "vice versa", the system is close to equilibrium, reversible and is at its formal potential. When the formal potential is measured under standard conditions ("i.e."
the activity of each dissolved species is 1 mol/L, T = 298.15 K = 25 °C = 77 °F, Pgas = 1 bar) it becomes "de facto" a standard potential. According to Brown and Swift (1949), "A formal potential is defined as the potential of a half-cell, measured against the standard hydrogen electrode, when the total concentration of each oxidation state is one formal". The activity coefficients formula_34 and formula_35 are included in the formal potential formula_21, and because they depend on experimental conditions such as temperature, ionic strength, and pH, formula_21 cannot be regarded as an immutable standard potential but needs to be systematically determined for each specific set of experimental conditions. Formal reduction potentials are used to simplify the interpretation of results and the calculations for a considered system. Their relationship with the standard reduction potentials must be clearly expressed to avoid any confusion. Main factors affecting the formal (or apparent) standard reduction potentials. The main factor affecting the formal (or apparent) reduction potentials formula_21 in biochemical or biological processes is the pH. To determine approximate values of formal reduction potentials, neglecting in a first approach changes in activity coefficients due to ionic strength, the Nernst equation has to be applied, taking care to first express the relationship as a function of pH. The second factor to be considered is the set of concentration values taken into account in the Nernst equation. To define a formal reduction potential for a biochemical reaction, the pH value, the concentration values and the hypotheses made on the activity coefficients must always be clearly indicated. When using, or comparing, several formal (or apparent) reduction potentials, they must also be internally consistent. Problems may occur when mixing different sources of data using different conventions or approximations ("i.e.", with different underlying hypotheses). When working at the frontier between inorganic and biological processes (e.g., when comparing abiotic and biotic processes in geochemistry, when microbial activity could also be at work in the system), care must be taken not to inadvertently mix standard reduction potentials (formula_9 versus SHE, pH = 0) with formal (or apparent) reduction potentials (formula_36 at pH = 7). Definitions must be clearly expressed and carefully controlled, especially if the sources of data are different and arise from different fields (e.g., picking and directly mixing data from classical electrochemistry textbooks (formula_9 versus SHE, pH = 0) and microbiology textbooks (formula_36 at pH = 7) without paying attention to the conventions on which they are based). Example in biochemistry. For example, in a two-electron couple such as NAD+:NADH the reduction potential becomes ~ 30 mV (or more exactly, 59.16 mV/2 = 29.6 mV) more positive for every power of ten increase in the ratio of the oxidised to the reduced form. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
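The pH-7 conversions quoted above are easy to reproduce. A minimal sketch (plain Python), using the relation E = E° − 0.05916·("h"/"z")·pH derived above, with unit activities assumed for the other species:

def e_at_ph(e_standard, h, z, ph=7.0):
    # E_red = E°_red - 0.05916 * (h/z) * pH
    return e_standard - 0.05916 * (h / z) * ph

# reduction of two protons into hydrogen gas:   E° = 0 V,      h = z = 2
print(round(e_at_ph(0.0, 2, 2), 3))     # -0.414 V
# reduction of O2 into H2O:                     E° = 1.229 V,  h = z = 4
print(round(e_at_ph(1.229, 4, 4), 3))   #  0.815 V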
[ { "math_id": 0, "text": "Q_r = \\frac{a_\\text{Red}}{a_\\text{Ox}} = \\frac{C_\\text{Red}}{C_\\text{Ox}}" }, { "math_id": 1, "text": "E_\\text{red} = E^\\ominus_\\text{red} - \\frac{RT}{zF} \\ln Q_r=E^\\ominus_\\text{red} - \\frac{RT}{zF} \\ln\\frac{a_\\text{Red}}{a_\\text{Ox}}." }, { "math_id": 2, "text": "E_\\text{red} = E^\\ominus_\\text{red} - \\frac{0.059\\ V}{z} \\log_{10}\\frac{a_\\text{Red}}{a_\\text{Ox}}" }, { "math_id": 3, "text": "E^\\ominus_\\text{red}" }, { "math_id": 4, "text": "E^{\\ominus}_\\text{red H+}" }, { "math_id": 5, "text": "E_\\text{red}" }, { "math_id": 6, "text": "E_\\text{red} = E^{\\ominus}_\\text{red} - 0.05916 \\ pH" }, { "math_id": 7, "text": "E_\\text{red} = 0 - \\left(0.05916 \\ \\text{×} \\ 7\\right) = -0.414 \\ V" }, { "math_id": 8, "text": "E_\\text{red} = -0.414\\mathrm V" }, { "math_id": 9, "text": "E^{\\ominus}_\\text{red}" }, { "math_id": 10, "text": "E_\\text{red} = 1.229 - \\left(0.05916 \\ \\text{×} \\ 7\\right) = 0.815 \\ V" }, { "math_id": 11, "text": "E_h" }, { "math_id": 12, "text": "a \\, A + b \\, B + h \\, \\ce{H+} + z \\, e^{-} \\quad \\ce{<=>} \\quad c \\, C + d \\, D" }, { "math_id": 13, "text": "E^{\\ominus}_\\text{red} (\\text{volt}) = -\\frac{\\Delta G^\\ominus}{zF}" }, { "math_id": 14, "text": "\\Delta G^\\ominus" }, { "math_id": 15, "text": "E_h = E_\\text{red} = E^{\\ominus}_\\text{red} - \\frac{0.05916}{z} \\log\\left(\\frac{\\{C\\}^c\\{D\\}^d}{\\{A\\}^a\\{B\\}^b}\\right) - \\frac{0.05916\\,h}{z} \\text{pH}" }, { "math_id": 16, "text": "-0.05916\\,\\left(\\frac{h}{z}\\right)" }, { "math_id": 17, "text": "Q_r" }, { "math_id": 18, "text": "\\log\\left(\\frac{\\{C\\}^c\\{D\\}^d}{\\{A\\}^a\\{B\\}^b}\\right) = \\log\\left(\\frac{\\left({\\gamma_\\text{C}}\\right)^c \\left({\\gamma_\\text{D}}\\right)^d}{\\left({\\gamma_\\text{A}}\\right)^a \\left({\\gamma_\\text{B}}\\right)^b}\\right)+ \\log\\left(\\frac{\\left[C\\right]^c\\left[D\\right]^d}{\\left[A\\right]^a\\left[B\\right]^b}\\right)" }, { "math_id": 19, "text": "E_h = E_\\text{red} = \\underbrace{\\left(E^{\\ominus}_\\text{red} - \\frac{0.05916}{z} \\log\\left(\\frac{\\left({\\gamma_\\text{C}}\\right)^c \\left({\\gamma_\\text{D}}\\right)^d}{\\left({\\gamma_\\text{A}}\\right)^a \\left({\\gamma_\\text{B}}\\right)^b}\\right)\\right)}_{E^{\\ominus '}_\\text{red}} - \\frac{0.05916}{z} \\log\\left(\\frac{\\left[C\\right]^c\\left[D\\right]^d}{\\left[A\\right]^a\\left[B\\right]^b}\\right) - \\frac{0.05916\\,h}{z} \\text{pH}" }, { "math_id": 20, "text": "E_h = E_\\text{red} = E^{\\ominus '}_\\text{red} - \\frac{0.05916}{z} \\log\\left(\\frac{\\left[C\\right]^c\\left[D\\right]^d}{\\left[A\\right]^a\\left[B\\right]^b}\\right) - \\frac{0.05916\\,h}{z} \\text{pH}" }, { "math_id": 21, "text": "E^{\\ominus '}_\\text{red}" }, { "math_id": 22, "text": "E_h = E_\\text{red} = \\left(E^{\\ominus '}_\\text{red} - \\frac{0.05916\\,h}{z} \\text{pH} \\right)- \\frac{0.05916}{z} \\log\\left(\\frac{\\left[C\\right]^c\\left[D\\right]^d}{\\left[A\\right]^a\\left[B\\right]^b}\\right)" }, { "math_id": 23, "text": "E_h = E_\\text{red} = \\underbrace{\\left(E^{\\ominus '}_\\text{red} - \\frac{0.05916\\,h}{z} \\text{× 7} \\right)}_{E^{\\ominus '}_\\text{red apparent at pH 7}} - \\frac{0.05916}{z} \\log\\left(\\frac{\\left[C\\right]^c\\left[D\\right]^d}{\\left[A\\right]^a\\left[B\\right]^b}\\right)" }, { "math_id": 24, "text": "E_h = E_\\text{red} = E^{\\ominus '}_\\text{red apparent at pH 7} - \\frac{0.05916}{z} 
\\log\\left(\\frac{\\left[C\\right]^c\\left[D\\right]^d}{\\left[A\\right]^a\\left[B\\right]^b}\\right)" }, { "math_id": 25, "text": "E_h = E_\\text{red}" }, { "math_id": 26, "text": "E^{\\ominus '}_\\text{red apparent at pH 7}" }, { "math_id": 27, "text": "\\frac{h} {z} = \\frac{\\text{(number of involved protons)}} {\\text{(number of exchanged electrons)}}" }, { "math_id": 28, "text": "E_\\text{red} = E^{\\ominus}_\\text{red}" }, { "math_id": 29, "text": "\\frac{a_\\text{red}} {a_\\text{ox}} = 1" }, { "math_id": 30, "text": "E_\\text{red} = E^{\\ominus'}_\\text{red}" }, { "math_id": 31, "text": "\\frac{C_\\text{red}} {C_\\text{ox}} = 1" }, { "math_id": 32, "text": "\\ln{1} = 0" }, { "math_id": 33, "text": "\\frac{\\gamma_\\text{red}} {\\gamma_\\text{ox}}" }, { "math_id": 34, "text": "\\gamma_{red}" }, { "math_id": 35, "text": "\\gamma_{ox}" }, { "math_id": 36, "text": "E^{\\ominus'}_\\text{red}" } ]
https://en.wikipedia.org/wiki?curid=581417
58151408
Transseries
Mathematical field In mathematics, the field formula_0 of logarithmic-exponential transseries is a non-Archimedean ordered differential field which extends comparability of asymptotic growth rates of elementary nontrigonometric functions to a much broader class of objects. Each log-exp transseries represents a formal asymptotic behavior, and it can be manipulated formally, and when it converges (or in every case if using special semantics such as through infinite surreal numbers), corresponds to actual behavior. Transseries can also be convenient for representing functions. Through their inclusion of exponentiation and logarithms, transseries are a strong generalization of the power series at infinity (formula_1) and other similar asymptotic expansions. The field formula_0 was introduced independently by Dahn-Göring and Ecalle in the respective contexts of model theory or exponential fields and of the study of analytic singularity and proof by Ecalle of the Dulac conjectures. It constitutes a formal object, extending the field of exp-log functions of Hardy and the field of accelerando-summable series of Ecalle. The field formula_0 enjoys a rich structure: an ordered field with a notion of generalized series and sums, with a compatible derivation with distinguished antiderivation, compatible exponential and logarithm functions and a notion of formal composition of series. Examples and counter-examples. Informally speaking, exp-log transseries are "well-based" (i.e. reverse well-ordered) formal Hahn series of real powers of the positive infinite indeterminate formula_2, exponentials, logarithms and their compositions, with real coefficients. Two important additional conditions are that the exponential and logarithmic depth of an exp-log transseries formula_3 that is the maximal numbers of iterations of exp and log occurring in formula_3 must be finite. The following formal series are log-exp transseries: formula_4 formula_5 The following formal series are "not" log-exp transseries: formula_6 — this series is not well-based. formula_7 — the logarithmic depth of this series is infinite formula_8 — the exponential and logarithmic depths of this series are infinite It is possible to define differential fields of transseries containing the two last series; they belong respectively to formula_9 and formula_10 (see the paragraph "Using surreal numbers" below). Introduction. A remarkable fact is that asymptotic growth rates of elementary nontrigonometric functions and even all functions definable in the model theoretic structure formula_11 of the ordered exponential field of real numbers are all comparable: For all such formula_12 and formula_13, we have formula_14 or formula_15, where formula_16 means formula_17. The equivalence class of formula_12 under the relation formula_18 is the asymptotic behavior of formula_12, also called the "germ" of formula_12 (or the germ of formula_12 at infinity). The field of transseries can be intuitively viewed as a formal generalization of these growth rates: In addition to the elementary operations, transseries are closed under "limits" for appropriate sequences with bounded exponential and logarithmic depth. However, a complication is that growth rates are non-Archimedean and hence do not have the least upper bound property. We can address this by associating a sequence with the least upper bound of minimal complexity, analogously to construction of surreal numbers. 
For example, formula_19 is associated with formula_20 rather than formula_21 because formula_22 decays too quickly, and if we identify fast decay with complexity, it has greater complexity than necessary (also, because we care only about asymptotic behavior, pointwise convergence is not dispositive). Because of the comparability, transseries do not include oscillatory growth rates (such as formula_23). On the other hand, there are transseries such as formula_24 that do not directly correspond to convergent series or real valued functions. Another limitation of transseries is that each of them is bounded by a tower of exponentials, i.e. a finite iteration formula_25 of formula_26, thereby excluding tetration and other transexponential functions, i.e. functions which grow faster than any tower of exponentials. There are ways to construct fields of generalized transseries including formal transexponential terms, for instance formal solutions formula_27 of the Abel equation formula_28. Formal construction. Transseries can be defined as formal (potentially infinite) expressions, with rules defining which expressions are valid, comparison of transseries, arithmetic operations, and even differentiation. Appropriate transseries can then be assigned to corresponding functions or germs, but there are subtleties involving convergence. Even transseries that diverge can often be meaningfully (and uniquely) assigned actual growth rates (that agree with the formal operations on transseries) using accelero-summation, which is a generalization of Borel summation. Transseries can be formalized in several equivalent ways; we use one of the simplest ones here. A "transseries" is a well-based sum, formula_29 with finite exponential depth, where each formula_30 is a nonzero real number and formula_31 is a monic transmonomial (formula_32 is a transmonomial but is not monic unless the "coefficient" formula_33; each formula_31 is different; the order of the summands is irrelevant). The sum might be infinite or transfinite; it is usually written in the order of decreasing formula_31. Here, "well-based" means that there is no infinite ascending sequence formula_34 (see well-ordering). A "monic transmonomial" is one of 1, "x", log "x", log log "x", ..., "e"purely_large_transseries. "Note:" Because formula_35, we do not include it as a primitive, but many authors do; "log-free" transseries do not include formula_36 but formula_37 is permitted. Also, circularity in the definition is avoided because the purely_large_transseries (above) will have lower exponential depth; the definition works by recursion on the exponential depth. See "Log-exp transseries as iterated Hahn series" (below) for a construction that uses formula_38 and explicitly separates different stages. A "purely large transseries" is a nonempty transseries formula_39 with every formula_40. Transseries have "finite exponential depth", where each level of nesting of "e" or log increases depth by 1 (so we cannot have "x" + log "x" + log log "x" + ...). Addition of transseries is termwise: formula_41 (absence of a term is equated with a zero coefficient). "Comparison:" The most significant term of formula_39 is formula_32 for the largest formula_31 (because the sum is well-based, this exists for nonzero transseries). formula_39 is positive iff the coefficient of the most significant term is positive (this is why we used 'purely large' above). "X" &gt; "Y" iff "X" − "Y" is positive. 
"Comparison of monic transmonomials:" formula_42 – these are the only equalities in our construction. formula_43 formula_44 iff formula_45 (also formula_46). "Multiplication:" formula_47 formula_48 This essentially applies the distributive law to the product; because the series is well-based, the inner sum is always finite. "Differentiation:" formula_49 formula_50 formula_51 formula_52 (division is defined using multiplication). With these definitions, transseries is an ordered differential field. Transseries is also a valued field, with the valuation formula_53 given by the leading monic transmonomial, and the corresponding asymptotic relation defined for formula_54 by formula_55 if formula_56 (where formula_57 is the absolute value). Other constructions. Log-exp transseries as iterated Hahn series. Log-free transseries. We first define the subfield formula_58 of formula_0 of so-called "log-free transseries". Those are transseries which exclude any logarithmic term. "Inductive definition:" For formula_59 we will define a linearly ordered multiplicative group of "monomials" formula_60. We then let formula_61 denote the field of "well-based series" formula_62. This is the set of maps formula_63 with well-based (i.e. reverse well-ordered) support, equipped with pointwise sum and Cauchy product (see Hahn series). In formula_61, we distinguish the (non-unital) subring formula_64 of "purely large transseries", which are series whose support contains only monomials lying strictly above formula_65. We start with formula_66 equipped with the product formula_67 and the order formula_68. If formula_69 is such that formula_60, and thus formula_61 and formula_64 are defined, we let formula_70 denote the set of formal expressions formula_71 where formula_72 and formula_73. This forms a linearly ordered commutative group under the product formula_74 and the lexicographic order formula_75 if and only if formula_76 or (formula_77 and formula_78). The natural inclusion of formula_79 into formula_80 given by identifying formula_81 and formula_82 inductively provides a natural embedding of formula_60 into formula_70, and thus a natural embedding of formula_61 into formula_83. We may then define the linearly ordered commutative group formula_84 and the ordered field formula_85 which is the field of log-free transseries. The field formula_86 is a proper subfield of the field formula_87 of well-based series with real coefficients and monomials in formula_88. Indeed, every series formula_12 in formula_86 has a bounded exponential depth, i.e. the least positive integer formula_89 such that formula_90, whereas the series formula_91 has no such bound. "Exponentiation on formula_86:" The field of log-free transseries is equipped with an exponential function which is a specific morphism formula_92. Let formula_12 be a log-free transseries and let formula_93 be the exponential depth of formula_12, so formula_90. Write formula_12 as the sum formula_94 in formula_95 where formula_73, formula_96 is a real number and formula_97 is infinitesimal (any of them could be zero). Then the formal Hahn sum formula_98 converges in formula_61, and we define formula_99 where formula_100 is the value of the real exponential function at formula_96. "Right-composition with formula_26:" A right composition formula_101 with the series formula_26 can be defined by induction on the exponential depth by formula_102 with formula_103. 
It follows inductively that monomials are preserved by formula_104 so at each inductive step the sums are well-based and thus well defined. Log-exp transseries. "Definition:" The function formula_105 defined above is not onto formula_106 so the logarithm is only partially defined on formula_107: for instance the series formula_2 has no logarithm. Moreover, every positive infinite log-free transseries is greater than some positive power of formula_2. In order to move from formula_86 to formula_0, one can simply "plug" into the variable formula_2 of series formal iterated logarithms formula_108 which will behave like the formal reciprocal of the formula_89-fold iterated exponential term denoted formula_109. For formula_110 let formula_111 denote the set of formal expressions formula_112 where formula_113. We turn this into an ordered group by defining formula_114, and defining formula_115 when formula_116. We define formula_117. If formula_118 and formula_119 we embed formula_111 into formula_120 by identifying an element formula_112 with the term formula_121 We then obtain formula_0 as the directed union formula_122 On formula_123 the right-composition formula_124 with formula_125 is naturally defined by formula_126 "Exponential and logarithm:" Exponentiation can be defined on formula_0 in a similar way as for log-free transseries, but here also formula_105 has a reciprocal formula_36 on formula_127. Indeed, for a strictly positive series formula_128, write formula_129 where formula_130 is the dominant monomial of formula_12 (largest element of its support), formula_96 is the corresponding positive real coefficient, and formula_131 is infinitesimal. The formal Hahn sum formula_132 converges in formula_133. Write formula_134 where formula_113 itself has the form formula_135 where formula_136 and formula_72. We define formula_137. We finally set formula_138 Using surreal numbers. Direct construction of log-exp transseries. One may also define the field of log-exp transseries as a subfield of the ordered field formula_139 of surreal numbers. The field formula_139 is equipped with Gonshor-Kruskal's exponential and logarithm functions and with its natural structure of field of well-based series under Conway normal form. Define formula_140, the subfield of formula_139 generated by formula_141 and the simplest positive infinite surreal number formula_142 (which corresponds naturally to the ordinal formula_142, and as a transseries to the series formula_2). Then, for formula_93, define formula_143 as the field generated by formula_144, exponentials of elements of formula_144 and logarithms of strictly positive elements of formula_144, as well as (Hahn) sums of summable families in formula_144. The union formula_145 is naturally isomorphic to formula_0. In fact, there is a unique such isomorphism which sends formula_142 to formula_2 and commutes with exponentiation and sums of summable families in formula_146 lying in formula_147. Other fields of transseries. The Berarducci-Mantova derivation on formula_139 coincides on formula_0 with its natural derivation, and is unique to satisfy compatibility relations with the exponential ordered field structure and generalized series field structure of formula_9 and formula_154 Contrary to formula_123 the derivation in formula_9 and formula_155 is not surjective: for instance the series formula_156 doesn't have an antiderivative in formula_9 or formula_157 (this is linked to the fact that those fields contain no transexponential function). 
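As a very rough illustration of the formal rules above, the following toy sketch keeps only exponential-depth-one monomials of the form x^a·e^(b·x), a drastic simplification rather than the full field of transseries, and implements their multiplication, lexicographic comparison and term-by-term differentiation; the class name Mono and the helper diff_term are invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Mono:
    a: float  # exponent of x
    b: float  # coefficient of x inside the exponential part e^(b*x)

    def __mul__(self, other):
        # x^a e^(bx) * x^a' e^(b'x) = x^(a+a') e^((b+b')x)
        return Mono(self.a + other.a, self.b + other.b)

    def __lt__(self, other):
        # lexicographic order: the exponential part dominates, then the power of x
        return (self.b, self.a) < (other.b, other.a)

def diff_term(coeff, m):
    """Differentiate coeff * x^a * e^(b*x) into a list of (coeff, Mono) terms,
    using (x^a)' = a x^(a-1) and (e^(bx))' = b e^(bx)."""
    out = []
    if m.a != 0:
        out.append((coeff * m.a, Mono(m.a - 1, m.b)))
    if m.b != 0:
        out.append((coeff * m.b, Mono(m.a, m.b)))
    return out

x_squared = Mono(2, 0)             # x^2
exp_x = Mono(0, 1)                 # e^x
print(x_squared < exp_x)           # True: every power of x lies below e^x
print(x_squared * exp_x)           # Mono(a=2, b=1), i.e. x^2 e^x
print(diff_term(3.0, Mono(2, 1)))  # (3 x^2 e^x)' = 6 x e^x + 3 x^2 e^x
```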
Additional properties. Operations on transseries. Operations on the differential exponential ordered field. Transseries have very strong closure properties, and many operations can be defined on transseries: formula_158 Note 1. The last two properties mean that formula_0 is "Liouville closed". Note 2. Just like an elementary nontrigonometric function, each positive infinite transseries formula_12 has integral exponentiality, even in this strong sense: formula_165 The number formula_166 is unique, it is called the "exponentiality" of formula_12. Composition of transseries. An original property of formula_0 is that it admits a composition formula_167 (where formula_168 is the set of positive infinite log-exp transseries) which enables us to see each log-exp transseries formula_12 as a function on formula_168. Informally speaking, for formula_169 and formula_162, the series formula_170 is obtained by replacing each occurrence of the variable formula_2 in formula_12 by formula_13. formula_188 where the sum is a formal Hahn sum of a summable family. Decidability and model theory. Theory of differential ordered valued differential field. The formula_193 theory of formula_0 is decidable and can be axiomatized as follows (this is Theorem 2.2 of Aschenbrenner et al.): formula_198 where "P" is a differential polynomial, i.e. a polynomial in formula_199 In this theory, exponentiation is essentially defined for functions (using differentiation) but not constants; in fact, every definable subset of formula_200 is semialgebraic. Theory of ordered exponential field. The formula_201 theory of formula_0 is that of the exponential real ordered exponential field formula_202, which is model complete by Wilkie's theorem. Hardy fields. formula_203 is the field of accelero-summable transseries, and using accelero-summation, we have the corresponding Hardy field, which is conjectured to be the maximal Hardy field corresponding to a subfield of formula_204. (This conjecture is informal since we have not defined which isomorphisms of Hardy fields into differential subfields of formula_204 are permitted.) formula_203 is conjectured to satisfy the above axioms of formula_204. Without defining accelero-summation, we note that when operations on convergent transseries produce a divergent one while the same operations on the corresponding germs produce a valid germ, we can then associate the divergent transseries with that germ. A Hardy field is said "maximal" if it is properly contained in no Hardy field. By an application of Zorn's lemma, every Hardy field is contained in a maximal Hardy field. It is conjectured that all maximal Hardy fields are elementary equivalent as differential fields, and indeed have the same first order theory as formula_0. Logarithmic-transseries do not themselves correspond to a maximal Hardy field for not every transseries corresponds to a real function, and maximal Hardy fields always contain transsexponential functions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
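As noted in the introduction, the asymptotic growth rates of exp-log functions are pairwise comparable, which is also the intuition behind Hardy fields. For explicit functions this comparability can be checked with symbolic limits; the short sympy session below is an illustrative aside, not tied to any particular statement above.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.exp(x) / x**100
g = sp.exp(x / 2) * sp.log(x)
h = x * sp.log(x)

print(sp.limit(f / g, x, sp.oo))                 # oo : f grows faster than g
print(sp.limit(h / f, x, sp.oo))                 # 0  : h grows slower than f
print(sp.limit((x + sp.log(x)) / x, x, sp.oo))   # 1  : same growth rate
```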
[ { "math_id": 0, "text": "\\mathbb{T}^{LE}" }, { "math_id": 1, "text": "\\sum_{n=0}^\\infty \\frac{a_n}{x^n}" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "f," }, { "math_id": 4, "text": "\\sum_{n=1}^\\infty \\frac{e^{x^{\\frac{1}{n}}}}{n!} + x^3 + \\log x + \\log\\log x +\\sum_{n=0}^\\infty x^{-n} + \\sum _{i=1}^\\infty e^{-\\sum_{j=1}^\\infty e^{ix^2-jx}}." }, { "math_id": 5, "text": "\\sum_{m,n \\in \\N} x^{\\frac{1}{m+1}}e^{-(\\log x)^n}." }, { "math_id": 6, "text": "\\sum_{n \\in \\N} x^n" }, { "math_id": 7, "text": "\\log x + \\log \\log x+ \\log \\log \\log x+ \\cdots" }, { "math_id": 8, "text": "\\frac{1}{2}x+e^{\\frac{1}{2}\\log x}+e^{e^{\\frac{1}{2}\\log \\log x}}+\\cdots" }, { "math_id": 9, "text": "\\mathbb{T}^{EL}" }, { "math_id": 10, "text": "\\R\\langle\\langle \\omega \\rangle\\rangle" }, { "math_id": 11, "text": "(\\mathbb{R},+,\\times,<,\\exp)" }, { "math_id": 12, "text": "f" }, { "math_id": 13, "text": "g" }, { "math_id": 14, "text": "f \\leq_{\\infty} g" }, { "math_id": 15, "text": "g \\leq_{\\infty} f" }, { "math_id": 16, "text": "f\\leq_{\\infty}g" }, { "math_id": 17, "text": "\\exists x. \\forall y>x. f(y)\\leq g(y)" }, { "math_id": 18, "text": "f \\leq_{\\infty} g \\wedge g \\leq_{\\infty} f" }, { "math_id": 19, "text": "(\\sum_{k=0}^n x^{-k})_{n \\in \\mathbb{N}}" }, { "math_id": 20, "text": "\\sum_{k=0}^\\infty x^{-k}" }, { "math_id": 21, "text": "\\sum_{k=0}^\\infty x^{-k}-e^{-x}" }, { "math_id": 22, "text": "e^{-x}" }, { "math_id": 23, "text": "\\sin x" }, { "math_id": 24, "text": "\\sum _{k \\in \\mathbb{N}} k!e^{x^{-\\frac{k}{k+1}}}" }, { "math_id": 25, "text": "e^{e^{.^{.^{.^{e^x}}}}}" }, { "math_id": 26, "text": "e^x" }, { "math_id": 27, "text": "e_{\\omega}" }, { "math_id": 28, "text": "e^{e_{\\omega}(x)}=e_{\\omega}(x+1)" }, { "math_id": 29, "text": "\\sum a_i m_i," }, { "math_id": 30, "text": "a_i" }, { "math_id": 31, "text": "m_i" }, { "math_id": 32, "text": "a_i m_i" }, { "math_id": 33, "text": "a_i = 1" }, { "math_id": 34, "text": "m_{i_1} < m_{i_2} < m_{i_3} < \\cdots" }, { "math_id": 35, "text": "x^n = e^{n \\log x}" }, { "math_id": 36, "text": "\\log" }, { "math_id": 37, "text": "x^n e^\\cdots" }, { "math_id": 38, "text": "x^a e^\\cdots" }, { "math_id": 39, "text": "\\sum a_i m_i" }, { "math_id": 40, "text": "m_i>1" }, { "math_id": 41, "text": "\\sum a_i m_i + \\sum b_i m_i = \\sum(a_i + b_i) m_i" }, { "math_id": 42, "text": "x = e^{\\log x}, \\log x = e^{\\log \\log x}, \\ldots" }, { "math_id": 43, "text": "x > \\log x > \\log \\log x > \\cdots >1 >0." }, { "math_id": 44, "text": "e^a < e^b" }, { "math_id": 45, "text": "a < b" }, { "math_id": 46, "text": "e^0 = 1" }, { "math_id": 47, "text": "e^a e^b = e^{a+b}" }, { "math_id": 48, "text": "\\left(\\sum a_i x_i\\right) \\left(\\sum b_j y_j\\right) = \\sum_k \\left( \\sum_{i,j\\,:\\,z_k=x_i y_j} a_i b_j\\right) z_k." 
}, { "math_id": 49, "text": "\\left(\\sum a_i x_i\\right)' = \\sum a_i x_i'" }, { "math_id": 50, "text": "1' = 0, x' = 1" }, { "math_id": 51, "text": "(e^y)' = y' e^y" }, { "math_id": 52, "text": "(\\log y)' = y'/y" }, { "math_id": 53, "text": "\\nu" }, { "math_id": 54, "text": "0\\neq f,g \\in \\mathbb{T}^{LE}" }, { "math_id": 55, "text": "f \\prec g" }, { "math_id": 56, "text": " \\forall 0<r \\in \\R, |f| < r |g|" }, { "math_id": 57, "text": "|f|=\\max(f,-f)" }, { "math_id": 58, "text": "\\mathbb{T}^{E}" }, { "math_id": 59, "text": "n \\in \\N," }, { "math_id": 60, "text": "\\mathfrak{M}_n" }, { "math_id": 61, "text": "\\mathbb{T}^E_n" }, { "math_id": 62, "text": "\\R[[\\mathfrak{M}_n]]" }, { "math_id": 63, "text": "\\R\\to \\mathfrak{M}_n" }, { "math_id": 64, "text": "\\mathbb{T}^E_{n,\\succ}" }, { "math_id": 65, "text": "1" }, { "math_id": 66, "text": "\\mathfrak{M}_0=x^{\\R}" }, { "math_id": 67, "text": "x^a x^b:=x^{a+b}" }, { "math_id": 68, "text": "x^a \\prec x^b \\leftrightarrow a<b" }, { "math_id": 69, "text": "n\\in \\N" }, { "math_id": 70, "text": "\\mathfrak{M}_{n+1}" }, { "math_id": 71, "text": "x^a e^{\\theta}" }, { "math_id": 72, "text": "a \\in \\R" }, { "math_id": 73, "text": "\\theta \\in \\mathbb{T}^E_{n,\\succ}" }, { "math_id": 74, "text": "(x^a e^{\\theta})(x^{a'} e^{\\theta'})=(x^{a+a'}) e^{\\theta+\\theta'}" }, { "math_id": 75, "text": "x^a e^{\\theta} \\prec x^{a'} e^{\\theta'}" }, { "math_id": 76, "text": "\\theta<\\theta'" }, { "math_id": 77, "text": "\\theta=\\theta'" }, { "math_id": 78, "text": "a<a'" }, { "math_id": 79, "text": "\\mathfrak{M}_0" }, { "math_id": 80, "text": "\\mathfrak{M}_1" }, { "math_id": 81, "text": "x^a" }, { "math_id": 82, "text": "x^a e^0" }, { "math_id": 83, "text": "\\mathbb{T}^E_{n+1}" }, { "math_id": 84, "text": "\\mathfrak{M}=\\bigcup_{n \\in \\N} \\mathfrak{M}_n" }, { "math_id": 85, "text": "\\mathbb{T}^E=\\bigcup_{n \\in \\N} \\mathbb{T}^E_n" }, { "math_id": 86, "text": "\\mathbb{T}^E" }, { "math_id": 87, "text": "\\R[[\\mathfrak{M}]]" }, { "math_id": 88, "text": "\\mathfrak{M}" }, { "math_id": 89, "text": "n" }, { "math_id": 90, "text": "f \\in \\mathbb{T}^E_n" }, { "math_id": 91, "text": "e^{-x}+e^{-e^x}+e^{-e^{e^x}}+ \\cdots \\in \\R[[\\mathfrak{M}]]" }, { "math_id": 92, "text": "\\exp:(\\mathbb{T}^E,+)\\to(\\mathbb{T}^{E,>}, \\times)" }, { "math_id": 93, "text": "n \\in \\N" }, { "math_id": 94, "text": "f=\\theta+r+\\varepsilon" }, { "math_id": 95, "text": "\\mathbb{T}^E_n," }, { "math_id": 96, "text": "r" }, { "math_id": 97, "text": "\\varepsilon" }, { "math_id": 98, "text": "E(\\varepsilon):=\\sum_{k \\in \\N} \\frac{\\varepsilon^k}{k!}" }, { "math_id": 99, "text": "\\exp(f)=e^{\\theta}\\exp(r) E(\\varepsilon) \\in \\mathbb{T}^E_{n+1}" }, { "math_id": 100, "text": "\\exp(r)" }, { "math_id": 101, "text": "\\circ_{e^x}" }, { "math_id": 102, "text": "\\left (\\sum f_{\\mathfrak{m}} \\mathfrak{m} \\right ) \\circ e^x:=\\sum f_{\\mathfrak{m}} (\\mathfrak{m} \\circ e^x)," }, { "math_id": 103, "text": "x^r \\circ e^x:=e^{rx}" }, { "math_id": 104, "text": "\\circ_{e^x}," }, { "math_id": 105, "text": "\\exp" }, { "math_id": 106, "text": "\\mathbb{T}^{E,>}" }, { "math_id": 107, "text": " \\mathbb{T}^E " }, { "math_id": 108, "text": "\\ell_n,n \\in \\N" }, { "math_id": 109, "text": "e_n" }, { "math_id": 110, "text": "m,n \\in \\N," }, { "math_id": 111, "text": "\\mathfrak{M}_{m,n}" }, { "math_id": 112, "text": "\\mathfrak{u} \\circ \\ell_n" }, { "math_id": 113, "text": "\\mathfrak{u} \\in \\mathfrak{M}_m" }, { "math_id": 114, "text": 
"(\\mathfrak{u} \\circ \\ell_n)(\\mathfrak{v} \\circ \\ell_n(x)):=(\\mathfrak{u}\\mathfrak{v}) \\circ \\ell_n" }, { "math_id": 115, "text": "\\mathfrak{u} \\circ \\ell_n\\prec \\mathfrak{v}\\circ \\ell_n" }, { "math_id": 116, "text": "\\mathfrak{u}\\prec \\mathfrak{v}" }, { "math_id": 117, "text": "\\mathbb{T}^{LE}_{m,n}:=\\R[[\\mathfrak{M}_{m,n}]]" }, { "math_id": 118, "text": "n'> n" }, { "math_id": 119, "text": "m' \\geq m+(n'-n)," }, { "math_id": 120, "text": "\\mathfrak{M}_{m',n'}" }, { "math_id": 121, "text": "\\left (\\mathfrak{u} \\circ \\overbrace{e^x \\circ \\cdots \\circ e^x}^{n'-n} \\right ) \\circ \\ell_{n'}." }, { "math_id": 122, "text": "\\mathbb{T}^{LE}=\\bigcup_{m,n \\in \\N} \\mathbb{T}^{LE}_{m,n}." }, { "math_id": 123, "text": "\\mathbb{T}^{LE}," }, { "math_id": 124, "text": "\\circ_{\\ell}" }, { "math_id": 125, "text": "\\ell" }, { "math_id": 126, "text": "\\mathbb{T}^{LE}_{m,n} \\ni \\left (\\sum f_{\\mathfrak{m} \\circ \\ell_n} \\mathfrak{m} \\circ \\ell_n \\right ) \\circ \\ell:= \\sum f_{\\mathfrak{m} \\circ \\ell_n} \\mathfrak{m} \\circ \\ell_{n+1}\\in \\mathbb{T}^{LE}_{m,n+1}." }, { "math_id": 127, "text": "\\mathbb{T}^{LE,>}" }, { "math_id": 128, "text": "f \\in \\mathbb{T}^{LE,>}_{m,n}" }, { "math_id": 129, "text": "f=\\mathfrak{m} r(1+\\varepsilon)" }, { "math_id": 130, "text": "\\mathfrak{m}" }, { "math_id": 131, "text": "\\varepsilon:=\\frac{f}{\\mathfrak{m} r}-1" }, { "math_id": 132, "text": "L(1+\\varepsilon):=\\sum_{k \\in \\N}\\frac{(-\\varepsilon)^k}{k+1}" }, { "math_id": 133, "text": "\\mathbb{T}^{LE}_{m,n}" }, { "math_id": 134, "text": "\\mathfrak{m}=\\mathfrak{u}\\circ \\ell_n" }, { "math_id": 135, "text": "\\mathfrak{u}=x^ae^{\\theta}" }, { "math_id": 136, "text": "\\theta \\in \\mathbb{T}^E_{m,\\succ}" }, { "math_id": 137, "text": "\\ell(\\mathfrak{m}):=a \\ell_{n+1} +\\theta \\circ \\ell_n " }, { "math_id": 138, "text": "\\log(f):=\\ell(\\mathfrak{m})+\\log(c)+L(1+\\varepsilon) \\in \\mathbb{T}^{LE}_{m,n+1}." }, { "math_id": 139, "text": "\\mathbf{No}" }, { "math_id": 140, "text": "F^{LE}_0=\\R(\\omega)" }, { "math_id": 141, "text": "\\R" }, { "math_id": 142, "text": "\\omega" }, { "math_id": 143, "text": "F^{LE}_{n+1}" }, { "math_id": 144, "text": "F^{LE}_n" }, { "math_id": 145, "text": "F^{LE}_{\\omega}=\\bigcup_{n \\in \\N} F^{LE}_n" }, { "math_id": 146, "text": "F^{LE}_{\\omega}" }, { "math_id": 147, "text": "F_{\\omega}" }, { "math_id": 148, "text": "\\mathbf{Ord}" }, { "math_id": 149, "text": "\\R\\langle\\langle\\omega\\rangle\\rangle" }, { "math_id": 150, "text": "F^{LE}_0" }, { "math_id": 151, "text": "F^{EL}_0:=\\R(\\omega,\\log \\omega, \\log \\log \\omega, \\ldots)" }, { "math_id": 152, "text": "n\\in \\N, F^{EL}_{n+1}" }, { "math_id": 153, "text": "F^{EL}_n" }, { "math_id": 154, "text": "\\R\\langle\\langle\\omega\\rangle\\rangle." }, { "math_id": 155, "text": "\\R \\langle\\langle\\omega\\rangle\\rangle" }, { "math_id": 156, "text": "\\frac{1}{\\omega \\log \\omega \\log \\log \\omega \\cdots}:=\\exp(-(\\log \\omega+\\log \\log\\omega+\\log \\log \\log \\omega+ \\cdots)) \\in \\mathbb{T}^{EL}" }, { "math_id": 157, "text": "\\R \\langle \\langle\\omega \\rangle\\rangle" }, { "math_id": 158, "text": "\\exp(x^{-1}) = \\sum_{n=0}^\\infty \\frac{1}{n!}x^{-n} \\quad \\text{and} \\quad \\log(x+\\ell)=\\ell+\\sum_{n=0}^{\\infty} \\frac{(x^{-1}\\ell)^n}{n+1}." 
}, { "math_id": 159, "text": "F \\in \\mathbb{T}^{LE}" }, { "math_id": 160, "text": "F'=f" }, { "math_id": 161, "text": "F_1=0" }, { "math_id": 162, "text": "f\\in \\mathbb{T}^{LE}" }, { "math_id": 163, "text": "h\\in \\mathbb{T}^{LE}" }, { "math_id": 164, "text": "f'=f h'" }, { "math_id": 165, "text": "\\exists k,n \\in \\N: \\quad \\ell_{n-k} -1\\leq \\ell_n \\circ f \\leq \\ell_{n-k}+1." }, { "math_id": 166, "text": "k" }, { "math_id": 167, "text": "\\circ :\\mathbb{T}^{LE} \\times \\mathbb{T}^{LE,>,\\succ} \\to \\mathbb{T}^{LE}" }, { "math_id": 168, "text": "\\mathbb{T}^{LE,>,\\succ}" }, { "math_id": 169, "text": "g\\in\\mathbb{T}^{LE,>,\\succ}" }, { "math_id": 170, "text": "f \\circ g" }, { "math_id": 171, "text": "f \\in \\mathbb{T}^{LE} " }, { "math_id": 172, "text": "g,h \\in \\mathbb{T}^{LE,>,\\succ}" }, { "math_id": 173, "text": "g\\circ h \\in \\mathbb{T}^{LE,>,\\succ}" }, { "math_id": 174, "text": "f \\circ (g\\circ h)=(f \\circ g) \\circ h" }, { "math_id": 175, "text": "g\\in \\mathbb{T}^{LE,>,\\succ}" }, { "math_id": 176, "text": "\\circ_g:f\\mapsto f \\circ g" }, { "math_id": 177, "text": "\\exp(g)" }, { "math_id": 178, "text": "\\log(g)" }, { "math_id": 179, "text": "\\circ_x=\\operatorname{id}_{\\mathbb{T}^{LE}}" }, { "math_id": 180, "text": "g\\mapsto f \\circ g" }, { "math_id": 181, "text": "f'" }, { "math_id": 182, "text": "f \\in \\mathbb{T}^{LE}\\times " }, { "math_id": 183, "text": "g \\in \\mathbb{T}^{LE,>,\\succ}" }, { "math_id": 184, "text": "(f \\circ g)'=g'f' \\circ g" }, { "math_id": 185, "text": "h \\in \\mathbb{T}^{LE,>,\\succ}" }, { "math_id": 186, "text": "g \\circ h= h \\circ g= x" }, { "math_id": 187, "text": "\\varepsilon \\in \\mathbb{T}^{LE}" }, { "math_id": 188, "text": "f\\circ (g+\\varepsilon)=\\sum_{k \\in \\N} \\frac{f^{(k)}\\circ g}{k!}\\varepsilon^k" }, { "math_id": 189, "text": "f \\in \\mathbb{T}^{LE,>,\\succ}" }, { "math_id": 190, "text": "0" }, { "math_id": 191, "text": "a" }, { "math_id": 192, "text": "f^a" }, { "math_id": 193, "text": "\\left\\langle+,\\times,\\partial,<,\\prec\\right\\rangle" }, { "math_id": 194, "text": "f > 0 \\wedge f \\succ 1 \\Longrightarrow f' > 0" }, { "math_id": 195, "text": "f \\prec 1 \\Longrightarrow f' \\prec 1" }, { "math_id": 196, "text": "\\forall f \\exists g: \\quad g' = f" }, { "math_id": 197, "text": "\\forall f \\exists h: \\quad h' = fh" }, { "math_id": 198, "text": "P(f) < 0 \\wedge P(g) > 0 \\Longrightarrow \\exists h: \\quad P(h) = 0," }, { "math_id": 199, "text": "f, f', f'', \\ldots, f^{(k)}." }, { "math_id": 200, "text": "\\R^n" }, { "math_id": 201, "text": "\\langle+,\\times,\\exp,< \\rangle" }, { "math_id": 202, "text": "(\\R,+,\\times,\\exp,<)" }, { "math_id": 203, "text": "\\mathbb{T}_\\mathrm{as}" }, { "math_id": 204, "text": "\\mathbb{T}" } ]
https://en.wikipedia.org/wiki?curid=58151408
581610
Kummer theory
Theory in abstract algebra In abstract algebra and number theory, Kummer theory provides a description of certain types of field extensions involving the adjunction of "n"th roots of elements of the base field. The theory was originally developed by Ernst Eduard Kummer around the 1840s in his pioneering work on Fermat's Last Theorem. The main statements do not depend on the nature of the field – apart from its characteristic, which should not divide the integer "n" – and therefore belong to abstract algebra. The theory of cyclic extensions of the field "K" when the characteristic of "K" does divide "n" is called Artin–Schreier theory. Kummer theory is basic, for example, in class field theory and in general in understanding abelian extensions; it says that in the presence of enough roots of unity, cyclic extensions can be understood in terms of extracting roots. The main burden in class field theory is to dispense with extra roots of unity ('descending' back to smaller fields), which is something much more serious. Kummer extensions. A Kummer extension is a field extension "L"/"K", where for some given integer "n" > 1 we have the following: "K" contains "n" distinct "n"th roots of unity (i.e., roots of "X"^"n" − 1), and "L"/"K" has abelian Galois group of exponent "n". For example, when "n" = 2, the first condition is always true if "K" has characteristic ≠ 2. The Kummer extensions in this case include quadratic extensions formula_0 where "a" in "K" is a non-square element. By the usual solution of quadratic equations, any extension of degree 2 of "K" has this form. The Kummer extensions in this case also include biquadratic extensions and more general multiquadratic extensions. When "K" has characteristic 2, there are no such Kummer extensions. Taking "n" = 3, there are no degree 3 Kummer extensions of the rational number field Q, since three cube roots of 1 require complex numbers. If one takes "L" to be the splitting field of "X"^3 − "a" over Q, where "a" is not a cube in the rational numbers, then "L" contains a subfield "K" with three cube roots of 1; that is because if α and β are roots of the cubic polynomial, we shall have (α/β)^3 = 1 and the cubic is a separable polynomial. Then "L"/"K" is a Kummer extension. More generally, it is true that when "K" contains "n" distinct "n"th roots of unity, which implies that the characteristic of "K" doesn't divide "n", then adjoining to "K" the "n"th root of any element "a" of "K" creates a Kummer extension (of degree "m", for some "m" dividing "n"). As the splitting field of the polynomial "X"^"n" − "a", the Kummer extension is necessarily Galois, with Galois group that is cyclic of order "m". It is easy to track the Galois action via the root of unity in front of formula_1
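A quick numerical sketch, illustrative only and using floating-point complex numbers rather than exact algebra, of how the Galois action on a Kummer extension is tracked by roots of unity: for L = K(α) with K = Q(ζ₃) and α a cube root of 2, each automorphism σ is pinned down by the cube root of unity σ(α)/α.

```python
import cmath

zeta = cmath.exp(2j * cmath.pi / 3)       # primitive 3rd root of unity
alpha = 2 ** (1 / 3)                      # real cube root of 2
roots = [alpha, zeta * alpha, zeta**2 * alpha]   # all roots of x^3 - 2

# Each automorphism sigma sends alpha to one of the roots; the quotient
# sigma(alpha)/alpha is a cube root of unity, i.e. the value of the character.
for sigma_alpha in roots:
    chi = sigma_alpha / alpha
    print(round(chi.real, 6), round(chi.imag, 6), abs(chi**3 - 1) < 1e-9)
```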
Here formula_9 denotes the multiplicative group of "n"th roots of unity (which belong to "K") and formula_10 is the group of continuous homomorphisms from formula_11 equipped with Krull topology to formula_9 with discrete topology (with group operation given by pointwise multiplication). This group (with discrete topology) can also be viewed as Pontryagin dual of formula_11, assuming we regard formula_9 as a subgroup of circle group. If the extension "L"/"K" is finite, then formula_11 is a finite discrete group and we have formula_12 however the last isomorphism isn't natural. Recovering "a"1/"n" from a primitive element. For formula_13 prime, let formula_14 be a field containing formula_15 and formula_16 a degree formula_13 Galois extension. Note the Galois group is cyclic, generated by formula_17. Let formula_18 Then formula_19 Since formula_20 and formula_21, where the formula_22 sign is formula_23 if formula_13 is odd and formula_24 if formula_25. When formula_26 is an abelian extension of degree formula_27 square-free such that formula_28, apply the same argument to the subfields formula_29 Galois of degree formula_30 to obtain formula_31 where formula_32. The Kummer Map. One of the main tools in Kummer theory is the Kummer map. Let formula_33 be a positive integer and let formula_14 be a field, not necessarily containing the formula_33th roots of unity. Letting formula_34 denote the algebraic closure of formula_14, there is a short exact sequence formula_35 Choosing an extension formula_26 and taking formula_36-cohomology one obtains the sequence formula_37 By Hilbert's Theorem 90 formula_38, and hence we get an isomorphism formula_39. This is the Kummer map. A version of this map also exists when all formula_33 are considered simultaneously. Namely, since formula_40, taking the direct limit over formula_33 yields an isomorphism formula_41, where "tors" denotes the torsion subgroup of roots of unity. For Elliptic Curves. Kummer theory is often used in the context of elliptic curves. Let formula_42 be an elliptic curve. There is a short exact sequence formula_43, where the multiplication by formula_33 map is surjective since formula_44 is divisible. Choosing an algebraic extension formula_26 and taking cohomology, we obtain the Kummer sequence for formula_44: formula_45. The computation of the weak Mordell-Weil group formula_46 is a key part of the proof of the Mordell-Weil theorem. The failure of formula_47 to vanish adds a key complexity to the theory. Generalizations. Suppose that "G" is a profinite group acting on a module "A" with a surjective homomorphism π from the "G"-module "A" to itself. Suppose also that "G" acts trivially on the kernel "C" of π and that the first cohomology group H1("G","A") is trivial. Then the exact sequence of group cohomology shows that there is an isomorphism between "A""G"/π("A""G") and Hom("G","C"). Kummer theory is the special case of this when "A" is the multiplicative group of the separable closure of a field "k", "G" is the Galois group, π is the "n"th power map, and "C" the group of "n"th roots of unity. Artin–Schreier theory is the special case when "A" is the additive group of the separable closure of a field "k" of positive characteristic "p", "G" is the Galois group, π is the Frobenius map minus the identity, and "C" the finite field of order "p". Taking "A" to be a ring of truncated Witt vectors gives Witt's generalization of Artin–Schreier theory to extensions of exponent dividing "pn".
[ { "math_id": 0, "text": "L= K(\\sqrt{a})" }, { "math_id": 1, "text": "\\sqrt[n]{a}." }, { "math_id": 2, "text": "K^{\\times}/(K^{\\times})^n," }, { "math_id": 3, "text": "\\Delta \\subseteq K^{\\times}/(K^{\\times})^n," }, { "math_id": 4, "text": "K \\left (\\Delta^{\\frac{1}{n}} \\right)," }, { "math_id": 5, "text": "\\Delta^{\\frac{1}{n}} = \\left \\{ \\sqrt[n]{a}:a\\in K^{\\times}, a \\cdot \\left (K^{\\times} \\right )^n \\in \\Delta \\right \\}." }, { "math_id": 6, "text": "\\Delta = \\left (K^\\times \\cap (L^\\times)^n \\right )/(K^{\\times})^n." }, { "math_id": 7, "text": "\\Delta \\cong \\operatorname{Hom}_{\\text{c}}(\\operatorname{Gal}(L/K), \\mu_n)" }, { "math_id": 8, "text": "a \\mapsto \\left(\\sigma \\mapsto \\frac{\\sigma(\\alpha)}{\\alpha}\\right)," }, { "math_id": 9, "text": "\\mu_n" }, { "math_id": 10, "text": "\\operatorname{Hom}_{\\text{c}}(\\operatorname{Gal}(L/K), \\mu_n)" }, { "math_id": 11, "text": "\\operatorname{Gal}(L/K)" }, { "math_id": 12, "text": "\\Delta \\cong \\operatorname{Hom}(\\operatorname{Gal}(L/K), \\mu_n) \\cong \\operatorname{Gal}(L/K)," }, { "math_id": 13, "text": "p" }, { "math_id": 14, "text": "K" }, { "math_id": 15, "text": "\\zeta_p" }, { "math_id": 16, "text": "K(\\beta)/K" }, { "math_id": 17, "text": "\\sigma" }, { "math_id": 18, "text": "\\alpha= \\sum_{l=0}^{p-1} \\zeta_p^{l} \\sigma^l(\\beta) \\in K(\\beta)" }, { "math_id": 19, "text": "\\zeta_p \\sigma(\\alpha) = \\sum_{l=0}^{p-1} \\zeta_p^{l+1} \\sigma^{l+1}(\\beta) = \\alpha." }, { "math_id": 20, "text": "\\alpha\\ne \\sigma(\\alpha), K(\\alpha) = K(\\beta)" }, { "math_id": 21, "text": "\\alpha^p = \\pm \\prod_{l=0}^{p-1} \\zeta_p^{-l} \\alpha = \\pm \\prod_{l=0}^{p-1} \\sigma^l(\\alpha) = \\pm N_{K(\\beta)/K}(\\alpha) \\in K" }, { "math_id": 22, "text": "\\pm" }, { "math_id": 23, "text": "+" }, { "math_id": 24, "text": "-" }, { "math_id": 25, "text": "p=2" }, { "math_id": 26, "text": "L/K" }, { "math_id": 27, "text": "n= \\prod_{j=1}^m p_j" }, { "math_id": 28, "text": "\\zeta_n \\in K" }, { "math_id": 29, "text": "K(\\beta_j)/K" }, { "math_id": 30, "text": "p_j" }, { "math_id": 31, "text": "L = K \\left (a_1^{1/p_1},\\ldots,a_m^{1/p_m} \\right ) = K \\left (A^{1/p_1},\\ldots,A^{1/p_m} \\right )= K \\left (A^{1/n} \\right )" }, { "math_id": 32, "text": "A = \\prod_{j=1}^m a_j^{n/p_j} \\in K" }, { "math_id": 33, "text": "m" }, { "math_id": 34, "text": "\\overline{K}" }, { "math_id": 35, "text": "0\\xrightarrow{} \\overline{K}^{\\times}[m] \\xrightarrow{} \\overline{K}^{\\times} \\xrightarrow{z\\mapsto z^m} \\overline{K}^{\\times}\\xrightarrow{} 0" }, { "math_id": 36, "text": "\\mathrm{Gal}(\\overline{K}/L)" }, { "math_id": 37, "text": "0\\xrightarrow{} L^{\\times}/(L^{\\times})^{m} \\xrightarrow{} H^1\\left(L, \\overline{K}^{\\times}[m]\\right) \\xrightarrow{} H^1\\left(L,\\overline{K}^{\\times}\\right)[m]\\xrightarrow{}0" }, { "math_id": 38, "text": "H^1\\left(L,\\overline{K}^{\\times}\\right)=0" }, { "math_id": 39, "text": "\\delta: L^{\\times}/\\left(L^{\\times}\\right)^m\\xrightarrow{\\sim}H^1\\left(L,\\overline{K}^{\\times}[m]\\right)" }, { "math_id": 40, "text": "L^{\\times}/(L^{\\times})^m=L^{\\times}\\otimes m^{-1} \\mathbb{Z}/\\mathbb{Z}" }, { "math_id": 41, "text": "\\delta: L^{\\times} \\otimes \\mathbb{Q}/\\mathbb{Z} \\xrightarrow{\\sim} H^1\\left(L, \\overline{K}_{tors}\\right) " }, { "math_id": 42, "text": "E/K" }, { "math_id": 43, "text": "0\\xrightarrow{} E[m]\\xrightarrow{} E \\xrightarrow{P\\mapsto m\\cdot P} E \\xrightarrow{} 0" }, { "math_id": 44, "text": "E" }, { 
"math_id": 45, "text": "0\\xrightarrow{} E(L)/mE(L)\\xrightarrow{} H^1(L, E[m])\\xrightarrow{} H^1(L,E)[m]\\xrightarrow{}0" }, { "math_id": 46, "text": "E(L)/mE(L)" }, { "math_id": 47, "text": "H^1(L,E)" } ]
https://en.wikipedia.org/wiki?curid=581610
5817043
Holonomic function
Type of functions, in mathematical analysis In mathematics, and more specifically in analysis, a holonomic function is a smooth function of several variables that is a solution of a system of linear homogeneous differential equations with polynomial coefficients and satisfies a suitable dimension condition in terms of D-modules theory. More precisely, a holonomic function is an element of a holonomic module of smooth functions. Holonomic functions can also be described as differentiably finite functions, also known as D-finite functions. When a power series in the variables is the Taylor expansion of a holonomic function, the sequence of its coefficients, in one or several indices, is also called "holonomic". Holonomic sequences are also called P-recursive sequences: they are defined recursively by multivariate recurrences satisfied by the whole sequence and by suitable specializations of it. The situation simplifies in the univariate case: any univariate sequence that satisfies a linear homogeneous recurrence relation with polynomial coefficients, or equivalently a linear homogeneous difference equation with polynomial coefficients, is holonomic. Holonomic functions and sequences in one variable. Definitions. Let formula_0 be a field of characteristic 0 (for example, formula_1 or formula_2). A function formula_3 is called "D-finite" (or "holonomic") if there exist polynomials formula_4 such that formula_5 holds for all "x". This can also be written as formula_6 where formula_7 and formula_8 is the differential operator that maps formula_9 to formula_10. formula_11 is called an "annihilating operator" of "f" (the annihilating operators of formula_12 form an ideal in the ring formula_13, called the "annihilator" of formula_12). The quantity "r" is called the "order" of the annihilating operator. By extension, the holonomic function "f" is said to be of order "r" when an annihilating operator of such order exists. A sequence formula_14 is called "P-recursive" (or "holonomic") if there exist polynomials formula_15 such that formula_16 holds for all "n". This can also be written as formula_17 where formula_18 and formula_19 the shift operator that maps formula_20 to formula_21. formula_11 is called an "annihilating operator" of "c" (the annihilating operators of formula_22 form an ideal in the ring formula_23, called the "annihilator" of formula_22). The quantity "r" is called the "order" of the annihilating operator. By extension, the holonomic sequence "c" is said to be of order "r" when an annihilating operator of such order exists. Holonomic functions are precisely the generating functions of holonomic sequences: if formula_9 is holonomic, then the coefficients formula_24 in the power series expansion formula_25 form a holonomic sequence. Conversely, for a given holonomic sequence formula_24, the function defined by the above sum is holonomic (this is true in the sense of formal power series, even if the sum has a zero radius of convergence). Closure properties. Holonomic functions (or sequences) satisfy several closure properties. In particular, holonomic functions (or sequences) form a ring. They are not closed under division, however, and therefore do not form a field. 
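Before turning to the closure properties, the definition of a P-recursive sequence can be tried out directly: finitely many initial values plus a linear recurrence with polynomial coefficients determine the whole sequence. The helper name p_recursive below is invented for this sketch; the two sequences used (factorials and harmonic numbers) appear among the examples later in this article.

```python
from fractions import Fraction

def p_recursive(initial, next_term, count):
    """Extend `initial` to `count` terms; next_term(k, c) returns term k
    (0-based) from the already-computed list c."""
    c = list(initial)
    for k in range(len(c), count):
        c.append(next_term(k, c))
    return c

# factorials: c_{n+1} - (n+1) c_n = 0 (order 1), c_0 = 1
fact = p_recursive([1], lambda k, c: k * c[k - 1], 8)
print(fact)  # [1, 1, 2, 6, 24, 120, 720, 5040]

# harmonic numbers: (n+2) H_{n+2} - (2n+3) H_{n+1} + (n+1) H_n = 0 (order 2);
# with c[k] = H_{k+1}, the recurrence at n = k-1 gives term k.
harm = p_recursive(
    [Fraction(1), Fraction(3, 2)],
    lambda k, c: ((2 * k + 1) * c[k - 1] - k * c[k - 2]) / Fraction(k + 1),
    6,
)
print([str(v) for v in harm])  # ['1', '3/2', '11/6', '25/12', '137/60', '49/20']
```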
If formula_26 and formula_27 are holonomic functions, then the following functions are also holonomic: the linear combination formula_28, where formula_29 and formula_30 are constants; the product formula_31; the Hadamard product formula_32; the integral formula_33; the series of partial sums formula_34; and the composition formula_35, where formula_36 is an algebraic function (the reverse composition formula_37, however, is in general not holonomic). A crucial property of holonomic functions is that the closure properties are effective: given annihilating operators for formula_12 and formula_38, an annihilating operator for formula_39 as defined using any of the above operations can be computed explicitly. Examples of holonomic functions and sequences. Examples of holonomic functions include: the generalized hypergeometric function formula_40, considered as a function of formula_41 with all the parameters formula_42, formula_43 held fixed; the error function formula_44; the Bessel functions formula_45, formula_46, formula_47, formula_48; and the Airy functions formula_49, formula_50. The class of holonomic functions is a strict superset of the class of hypergeometric functions. Examples of special functions that are holonomic but not hypergeometric include the Heun functions. Examples of holonomic sequences include: the sequence of Fibonacci numbers formula_51; the sequence of factorials formula_52; the binomial coefficients formula_53; the harmonic numbers formula_54; and, more generally, the generalized harmonic numbers formula_55 for any fixed positive integer "m". Hypergeometric functions, Bessel functions, and classical orthogonal polynomials, in addition to being holonomic functions of their variable, are also holonomic sequences with respect to their parameters. For example, the Bessel functions formula_56 and formula_57 satisfy the second-order linear recurrence formula_58. Examples of nonholonomic functions and sequences. Examples of nonholonomic functions include: the function formula_59, which has infinitely many singularities and therefore cannot be holonomic. Examples of nonholonomic sequences include: the sequence formula_60 and the sequence formula_61 with formula_62. Algorithms and software. Holonomic functions are a powerful tool in computer algebra. A holonomic function or sequence can be represented by a finite amount of data, namely an annihilating operator and a finite set of initial values, and the closure properties allow carrying out operations such as equality testing, summation and integration in an algorithmic fashion. In recent years, these techniques have made it possible to give automated proofs of a large number of special function and combinatorial identities. Moreover, there exist fast algorithms for evaluating holonomic functions to arbitrary precision at any point in the complex plane, and for numerically computing any entry in a holonomic sequence. Software for working with holonomic functions is available in several computer algebra systems, for example the gfun package for Maple and the HolonomicFunctions package for Mathematica. See also. The Dynamic Dictionary of Mathematical Functions, an online service based on holonomic functions for automatically studying many classical and special functions (evaluation at a point, Taylor series and asymptotic expansion to any user-given precision, differential equation, recurrence for the coefficients of the Taylor series, derivative, indefinite integral, plotting, ...)
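The representation by an annihilating operator plus initial values can be evaluated directly. As a hedged sketch (not one of the packages mentioned above), the error function formula_44 satisfies f'' + 2x f' = 0 with f(0) = 0 and f'(0) = 2/√π; translating the operator into the Taylor-coefficient recurrence (n+2)(n+1) a_{n+2} + 2n a_n = 0 lets us compute the function to good accuracy.

```python
import math

def erf_via_recurrence(x, terms=60):
    # Taylor coefficients of erf from its annihilating operator D^2 + 2x D
    # and the initial values a_0 = 0, a_1 = 2/sqrt(pi).
    a = [0.0, 2.0 / math.sqrt(math.pi)]
    for n in range(terms - 2):
        a.append(-2.0 * n * a[n] / ((n + 2) * (n + 1)))
    return sum(c * x**k for k, c in enumerate(a))

for x in (0.3, 1.0, 2.0):
    print(x, erf_via_recurrence(x), math.erf(x))   # the two columns agree
```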
[ { "math_id": 0, "text": "\\mathbb{K}" }, { "math_id": 1, "text": "\\mathbb{K} = \\mathbb{Q}" }, { "math_id": 2, "text": "\\mathbb{K} = \\mathbb{C}" }, { "math_id": 3, "text": "f = f(x)" }, { "math_id": 4, "text": "0 \\neq a_r(x), a_{r-1}(x), \\ldots, a_0(x) \\in \\mathbb{K}[x]" }, { "math_id": 5, "text": "a_r(x) f^{(r)}(x) + a_{r-1}(x) f^{(r-1)}(x) + \\cdots + a_1(x) f'(x) + a_0(x) f(x) = 0" }, { "math_id": 6, "text": "A f = 0" }, { "math_id": 7, "text": "A = \\sum_{k=0}^r a_k D_x^k" }, { "math_id": 8, "text": "D_x" }, { "math_id": 9, "text": "f(x)" }, { "math_id": 10, "text": "f'(x)" }, { "math_id": 11, "text": "A" }, { "math_id": 12, "text": "f" }, { "math_id": 13, "text": "\\mathbb{K}[x][D_x]" }, { "math_id": 14, "text": "c = c_0, c_1, \\ldots" }, { "math_id": 15, "text": "a_r(n), a_{r-1}(n), \\ldots, a_0(n) \\in \\mathbb{K}[n]" }, { "math_id": 16, "text": "a_r(n) c_{n+r} + a_{r-1}(n) c_{n+r-1} + \\cdots + a_0(n) c_n = 0" }, { "math_id": 17, "text": "A c = 0" }, { "math_id": 18, "text": "A = \\sum_{k=0}^r a_k S_n" }, { "math_id": 19, "text": "S_n" }, { "math_id": 20, "text": "c_0, c_1, \\ldots" }, { "math_id": 21, "text": "c_1, c_2, \\ldots" }, { "math_id": 22, "text": "c" }, { "math_id": 23, "text": "\\mathbb{K}[n][S_n]" }, { "math_id": 24, "text": "c_n" }, { "math_id": 25, "text": "f(x) = \\sum_{n=0}^{\\infty} c_n x^n" }, { "math_id": 26, "text": "f(x) = \\sum_{n=0}^{\\infty} f_n x^n" }, { "math_id": 27, "text": "g(x) = \\sum_{n=0}^{\\infty} g_n x^n" }, { "math_id": 28, "text": "h(x) = \\alpha f(x) + \\beta g(x)" }, { "math_id": 29, "text": "\\alpha" }, { "math_id": 30, "text": "\\beta" }, { "math_id": 31, "text": "h(x) = f(x) g(x)" }, { "math_id": 32, "text": "h(x) = \\sum_{n=0}^{\\infty} f_n g_n x^n" }, { "math_id": 33, "text": "h(x) = \\int_0^x f(t) dt" }, { "math_id": 34, "text": "h(x) = \\sum_{n=0}^{\\infty} (\\sum_{k=0}^n f_k) x^n" }, { "math_id": 35, "text": "h(x) = f(a(x))" }, { "math_id": 36, "text": "a(x)" }, { "math_id": 37, "text": "a(f(x))" }, { "math_id": 38, "text": "g" }, { "math_id": 39, "text": "h" }, { "math_id": 40, "text": "{}_pF_q(a_1,\\ldots,a_p, b_1, \\ldots, b_q, x)" }, { "math_id": 41, "text": "x" }, { "math_id": 42, "text": "a_i" }, { "math_id": 43, "text": "b_i" }, { "math_id": 44, "text": "\\operatorname{erf}(x)" }, { "math_id": 45, "text": "J_n(x)" }, { "math_id": 46, "text": "Y_n(x)" }, { "math_id": 47, "text": "I_n(x)" }, { "math_id": 48, "text": "K_n(x)" }, { "math_id": 49, "text": "\\operatorname{Ai}(x)" }, { "math_id": 50, "text": "\\operatorname{Bi}(x)" }, { "math_id": 51, "text": "F_n" }, { "math_id": 52, "text": "n!" }, { "math_id": 53, "text": "{n \\choose k}" }, { "math_id": 54, "text": "H_n = \\sum_{k=1}^n \\frac{1}{k}" }, { "math_id": 55, "text": "H_{n,m} = \\sum_{k=1}^n \\frac{1}{k^m}" }, { "math_id": 56, "text": "J_n" }, { "math_id": 57, "text": "Y_n" }, { "math_id": 58, "text": "x (f_{n+1} + f_{n-1}) = 2 n f_n" }, { "math_id": 59, "text": "\\frac{x}{e^x-1}" }, { "math_id": 60, "text": "\\log(n)" }, { "math_id": 61, "text": "n^{\\alpha}" }, { "math_id": 62, "text": "\\alpha \\not\\in \\mathbb{Z}" } ]
https://en.wikipedia.org/wiki?curid=5817043
5817304
Angular eccentricity
Angular eccentricity is one of many parameters which arise in the study of the ellipse or ellipsoid. It is denoted here by α (alpha). It may be defined in terms of the eccentricity, "e", or the aspect ratio, "b/a" (the ratio of the semi-minor axis and the semi-major axis): formula_0 Angular eccentricity is not currently used in English language publications on mathematics, geodesy or map projections but it does appear in older literature. Any non-dimensional parameter of the ellipse, conventionally defined in terms of the semi-axes "a" and "b", may be expressed in terms of the angular eccentricity. The notation for these parameters varies; here we follow Rapp. In terms of the angular eccentricity, the (first) eccentricity is "e" = sin α, the second eccentricity is "e"′ = tan α, the (first) flattening is "f" = ("a" − "b")/"a" = 1 − cos α = 2 sin²(α/2), the second flattening is "f"′ = ("a" − "b")/"b" = sec α − 1, and the third flattening is "n" = ("a" − "b")/("a" + "b") = tan²(α/2). The alternative half-angle expressions for the flattenings guard against large cancellations in numerical work when the ellipse is nearly circular.
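A short numerical example, using the WGS84 reference ellipsoid (semi-axis values assumed from its standard definition, not taken from the text above), showing how the angular eccentricity and the related parameters are computed in practice:

```python
import math

a = 6378137.0          # WGS84 semi-major axis in metres (assumed value)
b = 6356752.314245     # WGS84 semi-minor axis in metres (assumed value)

alpha = math.acos(b / a)                 # angular eccentricity, alpha = arccos(b/a)
e = math.sin(alpha)                      # (first) eccentricity e = sin(alpha)
f = 2.0 * math.sin(alpha / 2.0) ** 2     # flattening, cancellation-safe half-angle form

print(math.degrees(alpha))   # ~ 4.69 degrees
print(e)                     # ~ 0.0818
print(1.0 / f)               # ~ 298.257 (inverse flattening)
```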
[ { "math_id": 0, "text": "\\alpha=\\sin^{-1}\\!e=\\cos^{-1}\\left(\\frac{b}{a}\\right).\n \\,\\!" } ]
https://en.wikipedia.org/wiki?curid=5817304
5817627
Antiparallelogram
Polygon with four crossed edges of two lengths In geometry, an antiparallelogram is a type of self-crossing quadrilateral. Like a parallelogram, an antiparallelogram has two opposite pairs of equal-length sides, but these pairs of sides are not in general parallel. Instead, each pair of sides is antiparallel with respect to the other, with sides in the longer pair crossing each other as in a scissors mechanism. Whereas a parallelogram's opposite angles are equal and oriented the same way, an antiparallelogram's are equal but oppositely oriented. Antiparallelograms are also called contraparallelograms or crossed parallelograms. Antiparallelograms occur as the vertex figures of certain nonconvex uniform polyhedra. In the theory of four-bar linkages, the linkages with the form of an antiparallelogram are also called butterfly linkages or bow-tie linkages, and are used in the design of non-circular gears. In celestial mechanics, they occur in certain families of solutions to the 4-body problem. Every antiparallelogram has an axis of symmetry, with all four vertices on a circle. It can be formed from an isosceles trapezoid by adding the two diagonals and removing two parallel sides. The signed area of every antiparallelogram is zero. Geometric properties. An antiparallelogram is a special case of a crossed quadrilateral, with two pairs of equal-length edges. In general, crossed quadrilaterals can have unequal edges. A special form of the antiparallelogram is a crossed rectangle, in which two opposite edges are parallel. Every antiparallelogram is a cyclic quadrilateral, meaning that its four vertices all lie on a single circle. Additionally, the four extended sides of any antiparallelogram are the bitangents of two circles, making antiparallelograms closely related to the tangential quadrilaterals, ex-tangential quadrilaterals, and kites (which are both tangential and ex-tangential). Every antiparallelogram has an axis of symmetry through its crossing point. Because of this symmetry, it has two pairs of equal angles and two pairs of equal sides. The four midpoints of its sides lie on a line perpendicular to the axis of symmetry; that is, for this kind of quadrilateral, the Varignon parallelogram is a degenerate quadrilateral of area zero, consisting of four collinear points. The convex hull of an antiparallelogram is an isosceles trapezoid, and every antiparallelogram may be formed from an isosceles trapezoid (or its special cases, the rectangles and squares) by replacing two parallel sides by the two diagonals of the trapezoid. Because an antiparallelogram forms two congruent triangular regions of the plane, but loops around those two regions in opposite directions, its signed area is the difference between the regions' areas and is therefore zero. The polygon's unsigned area (the total area it surrounds) is the sum, rather than the difference, of these areas. For an antiparallelogram with two parallel diagonals of lengths formula_0 and formula_1, separated by height formula_2, this sum is formula_3. It follows from applying the triangle inequality to these two triangular regions that the crossing pair of edges in an antiparallelogram must always be longer than the two uncrossed edges. Applications. In polyhedra. 
Several nonconvex uniform polyhedra, including the tetrahemihexahedron, cubohemioctahedron, octahemioctahedron, small rhombihexahedron, small icosihemidodecahedron, and small dodecahemidodecahedron, have antiparallelograms as their vertex figures, the cross-sections formed by slicing the polyhedron by a plane that passes near a vertex, perpendicularly to the axis between the vertex and the center. One form of a non-uniform but flexible polyhedron, the Bricard octahedron, can be constructed as a bipyramid over an antiparallelogram. Four-bar linkages. The antiparallelogram has been used as a form of four-bar linkage, in which four rigid beams of fixed length (the four sides of the antiparallelogram) may rotate with respect to each other at joints placed at the four vertices of the antiparallelogram. In this context it is also called a "butterfly" or "bow-tie linkage". As a linkage, it has a point of instability in which it can be converted into a parallelogram and vice versa, but either of these linkages can be braced to prevent this instability. For both the parallelogram and antiparallelogram linkages, if one of the long (crossed) edges of the linkage is fixed as a base, the free joints move on equal circles, but in a parallelogram they move in the same direction with equal velocities while in the antiparallelogram they move in opposite directions with unequal velocities. As James Watt discovered, if an antiparallelogram has its long side fixed in this way, the midpoint of the unfixed long edge will trace out a lemniscate or figure eight curve. For the antiparallelogram formed by the sides and diagonals of a square, it is the lemniscate of Bernoulli. The antiparallelogram with its long side fixed is a variant of Watt's linkage. An antiparallelogram is an important feature in the design of Hart's inversor, a linkage that (like the Peaucellier–Lipkin linkage) can convert rotary motion to straight-line motion. An antiparallelogram-shaped linkage can also be used to connect the two axles of a four-wheeled vehicle, decreasing the turning radius of the vehicle relative to a suspension that only allows one axle to turn. A pair of nested antiparallelograms was used in a linkage defined by Alfred Kempe as part of Kempe's universality theorem, stating that any algebraic curve may be traced out by the joints of a suitably defined linkage. Kempe called the nested-antiparallelogram linkage a "multiplicator", as it could be used to multiply an angle by an integer. Used in the other direction, to divide angles, it can be used for angle trisection (although not as a straightedge and compass construction). Kempe's original constructions using this linkage overlooked the parallelogram-antiparallelogram instability, but bracing the linkages fixes his proof of the universality theorem. Gear design. Suppose that one of the uncrossed edges of an antiparallelogram linkage is fixed in place, and the remaining linkage moves freely. As the linkage moves, each antiparallelogram formed can be divided into two congruent triangles meeting at the crossing point. In the triangle based on the fixed edge, the lengths of the two moving sides sum to the constant length of one of the antiparallelogram's crossed edges, and therefore the moving crossing point traces out an ellipse with the fixed points as its foci. Symmetrically, the second (moving) uncrossed edge of the antiparallelogram has as its endpoints the foci of a second ellipse, formed from the first one by reflection across a tangent line through the crossing point. 
Because the second ellipse rolls around the first, this construction of ellipses from the motion of an antiparallelogram can be used in the design of elliptical gears that convert uniform rotation into non-uniform rotation or vice versa. Celestial mechanics. In the n-body problem, the study of the motions of point masses under Newton's law of universal gravitation, an important role is played by central configurations, solutions to the "n"-body problem in which all of the bodies rotate around some central point as if they were rigidly connected to each other. For instance, for three bodies, there are five solutions of this type, given by the five Lagrangian points. For four bodies, with two pairs of the bodies having equal masses (but with the ratio between the masses of the two pairs varying continuously), numerical evidence indicates that there exists a continuous family of central configurations, related to each other by the motion of an antiparallelogram linkage. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
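The focal-sum property described under Gear design above lends itself to a quick numerical check. The following Python sketch is illustrative only and not part of the article: the edge lengths a = 1 and b = 2, the fixed joints P and Q, and all helper names are assumptions chosen for the example. It flexes an antiparallelogram linkage with one short (uncrossed) edge held fixed, locates the crossing point of the two long edges at each position, and confirms that its distances to the two fixed joints always sum to the length b of a crossed edge, so the crossing point stays on an ellipse with the fixed joints as foci.

import numpy as np

a, b = 1.0, 2.0                                    # short (uncrossed) and long (crossed) edge lengths, b > a
P, Q = np.array([0.0, 0.0]), np.array([a, 0.0])    # the fixed short edge PQ

def reflect(point, A, B):
    # Reflect `point` across the line through A and B.
    d = (B - A) / np.linalg.norm(B - A)
    v = point - A
    return A + 2.0 * np.dot(v, d) * d - v

def crossing(P, R, Q, S):
    # Intersection of line P-R with line Q-S (the two crossed long edges).
    M = np.column_stack((R - P, -(S - Q)))
    t = np.linalg.solve(M, Q - P)[0]
    return P + t * (R - P)

for theta in np.linspace(0.3, np.pi - 0.3, 25):    # sweep the free long edge PR
    R = P + b * np.array([np.cos(theta), np.sin(theta)])
    S_parallelogram = R + (Q - P)                  # parallelogram branch of the linkage
    S = reflect(S_parallelogram, Q, R)             # antiparallelogram (crossed) branch
    assert np.isclose(np.linalg.norm(S - Q), b)    # edge QS keeps length b
    assert np.isclose(np.linalg.norm(S - R), a)    # edge RS keeps length a
    X = crossing(P, R, Q, S)
    focal_sum = np.linalg.norm(X - P) + np.linalg.norm(X - Q)
    assert np.isclose(focal_sum, b)                # X stays on an ellipse with foci P and Q

print("crossing point satisfies |XP| + |XQ| =", b, "at every sampled position")

The antiparallelogram branch is obtained from the parallelogram branch by reflecting the free joint across the line through the other two moving joints, which is what keeps the two long edges crossed as the linkage moves.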
[ { "math_id": 0, "text": "p" }, { "math_id": 1, "text": "q" }, { "math_id": 2, "text": "h" }, { "math_id": 3, "text": "hpq/(p+q)" } ]
https://en.wikipedia.org/wiki?curid=5817627
581763
Singularity theory
Mathematical theory In mathematics, singularity theory studies spaces that are almost manifolds, but not quite. A string can serve as an example of a one-dimensional manifold, if one neglects its thickness. A singularity can be made by balling it up, dropping it on the floor, and flattening it. In some places the flat string will cross itself in an approximate "X" shape. The points on the floor where it does this are one kind of singularity, the double point: one bit of the floor corresponds to more than one bit of string. Perhaps the string will also touch itself without crossing, like an underlined "U". This is another kind of singularity. Unlike the double point, it is not "stable", in the sense that a small push will lift the bottom of the "U" away from the "underline". Vladimir Arnold defines the main goal of singularity theory as describing how objects depend on parameters, particularly in cases where the properties undergo sudden change under a small variation of the parameters. These situations are called perestroika (), bifurcations or catastrophes. Classifying the types of changes and characterizing sets of parameters which give rise to these changes are some of the main mathematical goals. Singularities can occur in a wide range of mathematical objects, from matrices depending on parameters to wavefronts. How singularities may arise. In singularity theory the general phenomenon of points and sets of singularities is studied, as part of the concept that manifolds (spaces without singularities) may acquire special, singular points by a number of routes. Projection is one way, very obvious in visual terms when three-dimensional objects are projected into two dimensions (for example in one of our eyes); in looking at classical statuary the folds of drapery are amongst the most obvious features. Singularities of this kind include caustics, very familiar as the light patterns at the bottom of a swimming pool. Other ways in which singularities occur is by degeneration of manifold structure. The presence of symmetry can be good cause to consider orbifolds, which are manifolds that have acquired "corners" in a process of folding up, resembling the creasing of a table napkin. Singularities in algebraic geometry. Algebraic curve singularities. Historically, singularities were first noticed in the study of algebraic curves. The "double point" at (0, 0) of the curve formula_0 and the cusp there of formula_1 are qualitatively different, as is seen just by sketching. Isaac Newton carried out a detailed study of all cubic curves, the general family to which these examples belong. It was noticed in the formulation of Bézout's theorem that such "singular points" must be counted with multiplicity (2 for a double point, 3 for a cusp), in accounting for intersections of curves. It was then a short step to define the general notion of a singular point of an algebraic variety; that is, to allow higher dimensions. The general position of singularities in algebraic geometry. Such singularities in algebraic geometry are the easiest in principle to study, since they are defined by polynomial equations and therefore in terms of a coordinate system. One can say that the "extrinsic" meaning of a singular point isn't in question; it is just that in "intrinsic" terms the coordinates in the ambient space don't straightforwardly translate the geometry of the algebraic variety at the point. 
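As a concrete check of the two curves above, the singular points of a plane curve f(x, y) = 0 are the points where f and both of its partial derivatives vanish simultaneously, and the lowest-degree part of f at such a point distinguishes the double point (two distinct tangent lines) from the cusp (a repeated tangent line). The short SymPy sketch below is illustrative only and not part of the article; the dictionary keys and variable names are arbitrary choices.

import sympy as sp

x, y = sp.symbols('x y')

curves = {
    "double point: y^2 = x^2 + x^3": y**2 - x**2 - x**3,
    "cusp:         y^2 = x^3":       y**2 - x**3,
}

for name, f in curves.items():
    # A singular point satisfies f = df/dx = df/dy = 0 simultaneously.
    singular = sp.solve([f, sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
    # Lowest-degree homogeneous part of f at the origin (the tangent cone):
    # two distinct linear factors for the double point, a squared factor for the cusp.
    poly = sp.Poly(f, x, y)
    low = min(sum(m) for m in poly.monoms())
    tangent_cone = sum(c * x**i * y**j
                       for (i, j), c in zip(poly.monoms(), poly.coeffs())
                       if i + j == low)
    print(name)
    print("  singular points:", singular)
    print("  tangent cone at the origin:", sp.factor(tangent_cone))

For the first curve the tangent cone factors as (y - x)(y + x), reflecting the two branches that cross at the origin, while for the cusp it is y**2, a doubled line.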
Intensive studies of such singularities led in the end to Heisuke Hironaka's fundamental theorem on resolution of singularities (in birational geometry in characteristic 0). This means that the simple process of "lifting" a piece of string off itself, by the "obvious" use of the cross-over at a double point, is not essentially misleading: all the singularities of algebraic geometry can be recovered as some sort of very general "collapse" (through multiple processes). This result is often implicitly used to extend affine geometry to projective geometry: it is entirely typical for an affine variety to acquire singular points on the hyperplane at infinity, when its closure in projective space is taken. Resolution says that such singularities can be handled rather as a (complicated) sort of compactification, ending up with a "compact" manifold (for the strong topology, rather than the Zariski topology, that is). The smooth theory and catastrophes. At about the same time as Hironaka's work, the catastrophe theory of René Thom was receiving a great deal of attention. This is another branch of singularity theory, based on earlier work of Hassler Whitney on critical points. Roughly speaking, a "critical point" of a smooth function is where the level set develops a singular point in the geometric sense. This theory deals with differentiable functions in general, rather than just polynomials. To compensate, only the "stable" phenomena are considered. One can argue that in nature, anything destroyed by tiny changes is not going to be observed; the visible "is" the stable. Whitney had shown that in low numbers of variables the stable structure of critical points is very restricted, in local terms. Thom built on this, and his own earlier work, to create a "catastrophe theory" supposed to account for discontinuous change in nature. Arnold's view. While Thom was an eminent mathematician, the subsequent fashionable nature of elementary catastrophe theory as propagated by Christopher Zeeman caused a reaction, in particular on the part of Vladimir Arnold. He may have been largely responsible for applying the term singularity theory to the area including the input from algebraic geometry, as well as that flowing from the work of Whitney, Thom and other authors. He wrote in terms making clear his distaste for the too-publicised emphasis on a small part of the territory. The foundational work on smooth singularities is formulated as the construction of equivalence relations on singular points, and germs. Technically this involves group actions of Lie groups on spaces of jets; in less abstract terms Taylor series are examined up to change of variable, pinning down singularities with enough derivatives. Applications, according to Arnold, are to be seen in symplectic geometry, as the geometric form of classical mechanics. Duality. An important reason why singularities cause problems in mathematics is that, with a failure of manifold structure, the invocation of Poincaré duality is also disallowed. A major advance was the introduction of intersection cohomology, which arose initially from attempts to restore duality by use of strata. Numerous connections and applications stemmed from the original idea, for example the concept of perverse sheaf in homological algebra. Other possible meanings. The theory mentioned above does not directly relate to the concept of mathematical singularity as a value at which a function is not defined. 
For that, see for example isolated singularity, essential singularity, removable singularity. The monodromy theory of differential equations, in the complex domain, around singularities, does however come into relation with the geometric theory. Roughly speaking, "monodromy" studies the way a covering map can degenerate, while "singularity theory" studies the way a "manifold" can degenerate; and these fields are linked. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "y^2 = x^2 + x^3 " }, { "math_id": 1, "text": "y^2 = x^3\\ " } ]
https://en.wikipedia.org/wiki?curid=581763
581797
Matching (graph theory)
Set of edges without common vertices In the mathematical discipline of graph theory, a matching or independent edge set in an undirected graph is a set of edges without common vertices. In other words, a subset of the edges is a matching if each vertex appears in at most one edge of that matching. Finding a matching in a bipartite graph can be treated as a network flow problem. Definitions. Given a graph "G" = ("V", "E"), a matching "M" in "G" is a set of pairwise non-adjacent edges, none of which are loops; that is, no two edges share common vertices. A vertex is matched (or saturated) if it is an endpoint of one of the edges in the matching. Otherwise the vertex is unmatched (or unsaturated). A maximal matching is a matching "M" of a graph "G" that is not a subset of any other matching. A matching "M" of a graph "G" is maximal if every edge in "G" has a non-empty intersection with at least one edge in "M". The following figure shows examples of maximal matchings (red) in three graphs. A maximum matching (also known as maximum-cardinality matching) is a matching that contains the largest possible number of edges. There may be many maximum matchings. The matching number formula_0 of a graph G is the size of a maximum matching. Every maximum matching is maximal, but not every maximal matching is a maximum matching. The following figure shows examples of maximum matchings in the same three graphs. A perfect matching is a matching that matches all vertices of the graph. That is, a matching is perfect if every vertex of the graph is incident to an edge of the matching. A matching is perfect if formula_1. Every perfect matching is maximum and hence maximal. In some literature, the term complete matching is used. In the above figure, only part (b) shows a perfect matching. A perfect matching is also a minimum-size edge cover. Thus, the size of a maximum matching is no larger than the size of a minimum edge cover: &amp;NoBreak;&amp;NoBreak;. A graph can only contain a perfect matching when the graph has an even number of vertices. A near-perfect matching is one in which exactly one vertex is unmatched. Clearly, a graph can only contain a near-perfect matching when the graph has an odd number of vertices, and near-perfect matchings are maximum matchings. In the above figure, part (c) shows a near-perfect matching. If every vertex is unmatched by some near-perfect matching, then the graph is called factor-critical. Given a matching "M", an alternating path is a path that begins with an unmatched vertex and whose edges belong alternately to the matching and not to the matching. An augmenting path is an alternating path that starts from and ends on free (unmatched) vertices. Berge's lemma states that a matching "M" is maximum if and only if there is no augmenting path with respect to "M". An induced matching is a matching that is the edge set of an induced subgraph. Properties. In any graph without isolated vertices, the sum of the matching number and the edge covering number equals the number of vertices. If there is a perfect matching, then both the matching number and the edge cover number are |"V" | / 2. If "A" and "B" are two maximal matchings, then |"A"| ≤ 2|"B"| and |"B"| ≤ 2|"A"|. 
To see this, observe that each edge in "B" \ "A" can be adjacent to at most two edges in "A" \ "B" because "A" is a matching; moreover each edge in "A" \ "B" is adjacent to an edge in "B" \ "A" by maximality of "B", hence formula_2 Further we deduce that formula_3 In particular, this shows that any maximal matching is a 2-approximation of a maximum matching and also a 2-approximation of a minimum maximal matching. This inequality is tight: for example, if "G" is a path with 3 edges and 4 vertices, the size of a minimum maximal matching is 1 and the size of a maximum matching is 2. A spectral characterization of the matching number of a graph is given by Hassani Monfared and Mallik as follows: Let formula_4 be a graph on formula_5 vertices, and formula_6 be formula_7 distinct nonzero purely imaginary numbers where formula_8. Then the matching number of formula_4 is formula_7 if and only if (a) there is a real skew-symmetric matrix formula_9 with graph formula_4 and eigenvalues formula_10 and formula_11 zeros, and (b) all real skew-symmetric matrices with graph formula_4 have at most formula_12 nonzero eigenvalues. Note that the (simple) graph of a real symmetric or skew-symmetric matrix formula_9 of order formula_5 has formula_5 vertices and edges given by the nonzero off-diagonal entries of formula_9. Matching polynomials. A generating function of the number of "k"-edge matchings in a graph is called a matching polynomial. Let "G" be a graph and "mk" be the number of "k"-edge matchings. One matching polynomial of "G" is formula_13 Another definition gives the matching polynomial as formula_14 where "n" is the number of vertices in the graph. Each type has its uses; for more information see the article on matching polynomials. Algorithms and computational complexity. Maximum-cardinality matching. A fundamental problem in combinatorial optimization is finding a "maximum matching". This problem has various algorithms for different classes of graphs. In an "unweighted bipartite graph", the optimization problem is to find a maximum cardinality matching. The problem is solved by the Hopcroft-Karp algorithm in O(√V E) time, and there are more efficient randomized algorithms, approximation algorithms, and algorithms for special classes of graphs such as bipartite planar graphs, as described in the main article. Maximum-weight matching. In a "weighted" "bipartite graph," the optimization problem is to find a maximum-weight matching; a dual problem is to find a minimum-weight matching. This problem is often called maximum weighted bipartite matching, or the assignment problem. The Hungarian algorithm solves the assignment problem and it was one of the beginnings of combinatorial optimization algorithms. It uses a modified shortest path search in the augmenting path algorithm. If the Bellman–Ford algorithm is used for this step, the running time of the Hungarian algorithm becomes formula_15, or the edge cost can be shifted with a potential to achieve formula_16 running time with the Dijkstra algorithm and Fibonacci heap. In a "non-bipartite weighted graph", the problem of maximum weight matching can be solved in time formula_17 using Edmonds' blossom algorithm. Maximal matchings. A maximal matching can be found with a simple greedy algorithm. A maximum matching is also a maximal matching, and hence it is possible to find a "largest" maximal matching in polynomial time.
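To make the last point concrete, the sketch below is illustrative only (the example graph and the scan order are arbitrary choices, not from the article): it builds a maximal matching with the simple greedy procedure just mentioned and compares it with a maximum matching computed by NetworkX on the 3-edge path, reproducing the factor-2 gap discussed under Properties.

import networkx as nx

def greedy_maximal_matching(edges):
    # Scan the edges once, keeping every edge whose two endpoints are still unmatched.
    # The result is maximal (no further edge can be added) but not necessarily maximum.
    matched, matching = set(), set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matching.add((u, v))
            matched.update((u, v))
    return matching

# Path with 3 edges and 4 vertices, scanned middle edge first: the greedy
# matching has size 1, while a maximum matching has size 2.
path_edges = [(2, 3), (1, 2), (3, 4)]
G = nx.Graph(path_edges)

maximal = greedy_maximal_matching(path_edges)
maximum = nx.max_weight_matching(G, maxcardinality=True)

print("greedy maximal matching:", maximal)    # {(2, 3)}
print("a maximum matching     :", maximum)    # {(1, 2), (3, 4)}, up to the orientation of the pairs
print("greedy result is maximal:", nx.is_maximal_matching(G, maximal))

For bipartite graphs, NetworkX also ships an implementation of the Hopcroft-Karp algorithm mentioned above (hopcroft_karp_matching in its bipartite module).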
However, no polynomial-time algorithm is known for finding a minimum maximal matching, that is, a maximal matching that contains the "smallest" possible number of edges. A maximal matching with "k" edges is an edge dominating set with "k" edges. Conversely, if we are given a minimum edge dominating set with "k" edges, we can construct a maximal matching with "k" edges in polynomial time. Therefore, the problem of finding a minimum maximal matching is essentially equal to the problem of finding a minimum edge dominating set. Both of these two optimization problems are known to be NP-hard; the decision versions of these problems are classical examples of NP-complete problems. Both problems can be approximated within factor 2 in polynomial time: simply find an arbitrary maximal matching "M". Counting problems. The number of matchings in a graph is known as the Hosoya index of the graph. It is #P-complete to compute this quantity, even for bipartite graphs. It is also #P-complete to count perfect matchings, even in bipartite graphs, because computing the permanent of an arbitrary 0–1 matrix (another #P-complete problem) is the same as computing the number of perfect matchings in the bipartite graph having the given matrix as its biadjacency matrix. However, there exists a fully polynomial time randomized approximation scheme for counting the number of bipartite matchings. A remarkable theorem of Kasteleyn states that the number of perfect matchings in a planar graph can be computed exactly in polynomial time via the FKT algorithm. The number of perfect matchings in a complete graph "K""n" (with "n" even) is given by the double factorial ("n" − 1)!!. The numbers of matchings in complete graphs, without constraining the matchings to be perfect, are given by the telephone numbers. The number of perfect matchings in a graph is also known as the hafnian of its adjacency matrix. Finding all maximally matchable edges. One of the basic problems in matching theory is to find in a given graph all edges that may be extended to a maximum matching in the graph (such edges are called maximally matchable edges, or allowed edges). Algorithms for this problem include: Online bipartite matching. The problem of developing an online algorithm for matching was first considered by Richard M. Karp, Umesh Vazirani, and Vijay Vazirani in 1990. In the online setting, nodes on one side of the bipartite graph arrive one at a time and must either be immediately matched to the other side of the graph or discarded. This is a natural generalization of the secretary problem and has applications to online ad auctions. The best online algorithm, for the unweighted maximization case with a random arrival model, attains a competitive ratio of 0.696. Characterizations. Kőnig's theorem states that, in bipartite graphs, the maximum matching is equal in size to the minimum vertex cover. Via this result, the minimum vertex cover, maximum independent set, and maximum vertex biclique problems may be solved in polynomial time for bipartite graphs. Hall's marriage theorem provides a characterization of bipartite graphs which have a perfect matching and the Tutte theorem provides a characterization for arbitrary graphs. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
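The link between counting perfect matchings and the permanent, mentioned under Counting problems above, can be made concrete with a brute-force computation; the sketch below is illustrative only (it enumerates all permutations, so it is usable only for very small biadjacency matrices, and the example matrices are arbitrary).

from itertools import permutations
import numpy as np

def permanent(M):
    # Permanent of a square 0-1 matrix by brute force over permutations (O(n!)).
    # For a bipartite graph this equals its number of perfect matchings.
    n = len(M)
    return sum(np.prod([M[i][p[i]] for i in range(n)]) for p in permutations(range(n)))

# Complete bipartite graph K_{3,3}: all-ones biadjacency matrix, 3! = 6 perfect matchings.
print(permanent(np.ones((3, 3), dtype=int)))    # 6

# A sparser bipartite graph on 3 + 3 vertices; entry (i, j) = 1 when left
# vertex i is adjacent to right vertex j. Only the identity pairing and one
# cyclic pairing survive, so there are exactly 2 perfect matchings.
B = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]])
print(permanent(B))                              # 2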
[ { "math_id": 0, "text": "\\nu(G)" }, { "math_id": 1, "text": "|M|=|V|/2" }, { "math_id": 2, "text": "|A \\setminus B| \\le 2|B \\setminus A |." }, { "math_id": 3, "text": "|A| = |A \\cap B| + |A \\setminus B| \\le 2|B \\cap A| + 2|B \\setminus A| = 2|B|." }, { "math_id": 4, "text": "G" }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "\\lambda_1 > \\lambda_2 > \\ldots > \\lambda_k>0" }, { "math_id": 7, "text": "k" }, { "math_id": 8, "text": "2k \\leq n" }, { "math_id": 9, "text": "A" }, { "math_id": 10, "text": "\\pm \\lambda_1, \\pm\\lambda_2,\\ldots,\\pm\\lambda_k" }, { "math_id": 11, "text": "n-2k" }, { "math_id": 12, "text": "2k" }, { "math_id": 13, "text": "\\sum_{k\\geq0} m_k x^k." }, { "math_id": 14, "text": "\\sum_{k\\geq0} (-1)^k m_k x^{n-2k}," }, { "math_id": 15, "text": "O(V^2 E)" }, { "math_id": 16, "text": "O(V^2 \\log{V} + V E)" }, { "math_id": 17, "text": "O(V^{2}E)" }, { "math_id": 18, "text": "O(VE)" }, { "math_id": 19, "text": "\\tilde{O}(V^{2.376}) " }, { "math_id": 20, "text": "O(V+E)" } ]
https://en.wikipedia.org/wiki?curid=581797
5818194
Acyclic object
In mathematics, in the field of homological algebra, given an abelian category formula_0 having enough injectives and an additive (covariant) functor formula_1, an acyclic object with respect to formula_2, or simply an formula_2-acyclic object, is an object formula_3 in formula_0 such that formula_4 for all formula_5, where formula_6 are the right derived functors of formula_2. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathcal{C}" }, { "math_id": 1, "text": "F :\\mathcal{C}\\to\\mathcal{D}" }, { "math_id": 2, "text": "F" }, { "math_id": 3, "text": "A" }, { "math_id": 4, "text": " {\\rm R}^i F (A) = 0 \\,\\!" }, { "math_id": 5, "text": " i>0 \\,\\!" }, { "math_id": 6, "text": "{\\rm R}^i F" } ]
https://en.wikipedia.org/wiki?curid=5818194
581859
Invertible sheaf
Type of sheaf In mathematics, an invertible sheaf is a sheaf on a ringed space which has an inverse with respect to tensor product of sheaves of modules. It is the equivalent in algebraic geometry of the topological notion of a line bundle. Due to their interactions with Cartier divisors, they play a central role in the study of algebraic varieties. Definition. Let ("X", "O""X") be a ringed space. Isomorphism classes of sheaves of "O""X"-modules form a monoid under the operation of tensor product of "O""X"-modules. The identity element for this operation is "O""X" itself. Invertible sheaves are the invertible elements of this monoid. Specifically, if "L" is a sheaf of "O""X"-modules, then "L" is called invertible if it satisfies any of the following equivalent conditions: Every locally free sheaf of rank one is invertible. If "X" is a locally ringed space, then "L" is invertible if and only if it is locally free of rank one. Because of this fact, invertible sheaves are closely related to line bundles, to the point where the two are sometimes conflated. Examples. Let "X" be an affine scheme Spec "R". Then an invertible sheaf on "X" is the sheaf associated to a rank one projective module over "R". For example, this includes fractional ideals of algebraic number fields, since these are rank one projective modules over the rings of integers of the number field. The Picard group. Quite generally, the isomorphism classes of invertible sheaves on "X" themselves form an abelian group under tensor product. This group generalises the ideal class group. In general it is written formula_5 with "Pic" the Picard functor. Since it also includes the theory of the Jacobian variety of an algebraic curve, the study of this functor is a major issue in algebraic geometry. The direct construction of invertible sheaves by means of data on "X" leads to the concept of Cartier divisor. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "L \\otimes_{\\mathcal{O}_X} M \\cong \\mathcal{O}_X" }, { "math_id": 1, "text": "L \\otimes_{\\mathcal{O}_X} L^\\vee \\to \\mathcal{O}_X" }, { "math_id": 2, "text": "L^\\vee" }, { "math_id": 3, "text": "\\underline{\\operatorname{Hom}}(L, \\mathcal{O}_X)" }, { "math_id": 4, "text": "F \\mapsto F \\otimes_{\\mathcal{O}_X} L" }, { "math_id": 5, "text": "\\mathrm{Pic}(X)\\ " } ]
https://en.wikipedia.org/wiki?curid=581859
581888
Luminous flux
Perceived luminous power In photometry, luminous flux or luminous power is the measure of the perceived power of light. It differs from radiant flux, the measure of the total power of electromagnetic radiation (including infrared, ultraviolet, and visible light), in that luminous flux is adjusted to reflect the varying sensitivity of the human eye to different wavelengths of light. Units. The SI unit of luminous flux is the lumen (lm). One lumen is defined as the luminous flux of light produced by a light source that emits one candela of luminous intensity over a solid angle of one steradian. formula_0 In other systems of units, luminous flux may have units of power. Weighting. The luminous flux accounts for the sensitivity of the eye by weighting the power at each wavelength with the luminosity function, which represents the eye's response to different wavelengths. The luminous flux is a weighted sum of the power at all wavelengths in the visible band. Light outside the visible band does not contribute. The ratio of the total luminous flux to the radiant flux is called the luminous efficacy. This model of human visual brightness perception is standardized by the CIE and ISO. Context. Luminous flux is often used as an objective measure of the useful light emitted by a light source, and is typically reported on the packaging for light bulbs, although it is not always prominent. Consumers commonly compare the luminous flux of different light bulbs since it provides an estimate of the apparent amount of light the bulb will produce, and a lightbulb with a higher ratio of luminous flux to consumed power is more efficient. Luminous flux is not used to compare brightness, as this is a subjective perception which varies according to the distance from the light source and the angular spread of the light from the source. Measurement. Luminous flux of artificial light sources is typically measured using an integrating sphere, or a goniophotometer outfitted with a photometer or a spectroradiometer. Relationship to luminous intensity. Luminous flux (in lumens) is a measure of the total amount of light a lamp puts out. The luminous intensity (in candelas) is a measure of how bright the beam in a particular direction is. If a lamp has a 1 lumen bulb and the optics of the lamp are set up to focus the light evenly into a 1 steradian beam, then the beam would have a luminous intensity of 1 candela. If the optics were changed to concentrate the beam into 1/2 steradian then the source would have a luminous intensity of 2 candela. The resulting beam is narrower and brighter; however, the luminous flux remains the same. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
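The arithmetic in the Relationship to luminous intensity section is simple enough to express directly; the sketch below is illustrative only, and the function names are arbitrary rather than part of any standard photometry library. It assumes an idealized source that spreads its flux evenly over the beam's solid angle.

import math

def average_intensity_cd(flux_lm, solid_angle_sr):
    # Average luminous intensity (cd) of a beam of the given flux (lm)
    # spread evenly over the given solid angle (sr): 1 lm = 1 cd * 1 sr.
    return flux_lm / solid_angle_sr

def cone_solid_angle_sr(half_angle_deg):
    # Solid angle of a circular cone with the given half-angle.
    return 2.0 * math.pi * (1.0 - math.cos(math.radians(half_angle_deg)))

print(average_intensity_cd(1.0, 1.0))   # 1.0 cd: 1 lm focused into a 1 sr beam
print(average_intensity_cd(1.0, 0.5))   # 2.0 cd: the same flux in half the solid angle

# A beam with a 10 degree half-angle subtends about 0.095 sr, so 1 lm focused
# into it gives an average intensity of roughly 10.5 cd.
omega = cone_solid_angle_sr(10.0)
print(omega, average_intensity_cd(1.0, omega))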
[ { "math_id": 0, "text": " 1\\ \\text{lm} = 1\\ \\text{cd} \\times 1\\ \\text{sr}" } ]
https://en.wikipedia.org/wiki?curid=581888
58192917
Schlichting jet
A Schlichting jet is a steady, laminar, round jet emerging into a stationary fluid of the same kind, at very high Reynolds number. The problem was formulated and solved by Hermann Schlichting in 1933, who also formulated the corresponding planar Bickley jet problem in the same paper. The Landau-Squire jet from a point source is an exact solution of the Navier-Stokes equations, valid for all Reynolds numbers, which reduces to the Schlichting jet solution at high Reynolds numbers, for distances far away from the jet origin. Flow description. Consider an axisymmetric jet emerging from an orifice located at the origin of a cylindrical polar coordinate system formula_0, with formula_1 being the jet axis and formula_2 being the radial distance from the axis of symmetry. Since the jet is at constant pressure, the momentum flux in the formula_1 direction is constant and equal to the momentum flux at the origin, formula_3 where formula_4 is the constant density, formula_5 are the velocity components in the formula_2 and formula_1 directions, respectively, and formula_6 is the known momentum flux at the origin. The quantity formula_7 is called the "kinematic momentum flux". The boundary layer equations are formula_8 where formula_9 is the kinematic viscosity. The boundary conditions are formula_10 The Reynolds number of the jet, formula_11 is a large number for the Schlichting jet. Self-similar solution. A self-similar solution exists for the problem posed. The self-similar variables are formula_12 Then the boundary layer equation reduces to formula_13 with boundary conditions formula_14. If formula_15 is a solution, then formula_16 is also a solution. A particular solution which satisfies the condition at formula_17 is given by formula_18 The constant formula_19 can be evaluated from the momentum condition, formula_20 Thus the solution is formula_21 Unlike the momentum flux, the volume flow rate in the formula_1 direction is not constant, but increases due to slow entrainment of the outer fluid by the jet, formula_22 which increases linearly with distance along the axis. Schneider flow describes the flow induced by the jet due to the entrainment. Other variations. The Schlichting jet for a compressible fluid has been solved by M.Z. Krzywoblocki and D.C. Pack. Similarly, the Schlichting jet with swirling motion was studied by H. Görtler. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
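The closed-form similarity solution above can be checked numerically. The sketch below is illustrative only: the viscosity, Reynolds number, and axial position are arbitrary choices, and the helper names are not from any library. It evaluates the axial velocity u = (nu/x) F'(eta)/eta using the F(eta) given above (with the derivative taken in closed form), and verifies that the kinematic momentum flux is recovered at the chosen station while the volume flow rate approaches 8*pi*nu*x, the linear growth noted above.

import numpy as np

# Illustrative parameters (assumptions, not from the article): air-like viscosity, Re = 100.
nu = 1.5e-5                        # kinematic viscosity, m^2/s
Re = 100.0                         # jet Reynolds number, Re = sqrt(K/(2*pi))/nu
K = 2.0 * np.pi * (Re * nu) ** 2   # kinematic momentum flux K = J/rho implied by Re
x = 0.5                            # axial distance from the orifice, m

def u_axial(r, x):
    # u = (nu/x) * F'(eta)/eta with F(eta) = 4 (Re eta)^2 / (32/3 + (Re eta)^2),
    # the derivative having been taken analytically.
    eta = r / x
    return (nu / x) * (256.0 / 3.0) * Re ** 2 / (32.0 / 3.0 + (Re * eta) ** 2) ** 2

def trapezoid(f, r):
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r)))

r = np.linspace(0.0, 100.0 * x / Re, 400_001)   # the jet width scales like x/Re
u = u_axial(r, x)

K_num = 2.0 * np.pi * trapezoid(r * u ** 2, r)  # momentum flux, should recover K
Q_num = 2.0 * np.pi * trapezoid(r * u, r)       # volume flow rate, should approach 8*pi*nu*x

print("centerline velocity u(0, x) =", round(u_axial(0.0, x), 4), "m/s")
print("recovered momentum flux     =", K_num, " vs  K =", K)
print("Q / (8 pi nu x)             =", round(Q_num / (8.0 * np.pi * nu * x), 4))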
[ { "math_id": 0, "text": "(r,x)" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "r" }, { "math_id": 3, "text": "J=2\\pi\\rho \\int_0^\\infty ru^2 d r = \\text{constant}," }, { "math_id": 4, "text": "\\rho" }, { "math_id": 5, "text": "(v,u)" }, { "math_id": 6, "text": "J" }, { "math_id": 7, "text": "K=J/\\rho" }, { "math_id": 8, "text": "\\begin{align}\n\\frac{\\partial u}{\\partial x} + \\frac{1}{r}\\frac{\\partial (rv)}{\\partial r} &=0,\\\\\nu \\frac{\\partial u}{\\partial x} + v \\frac{\\partial u}{\\partial r} &= \\frac{\\nu}{r}\\frac{\\partial }{\\partial r}\\left(r\\frac{\\partial u}{\\partial r}\\right),\n\\end{align}" }, { "math_id": 9, "text": "\\nu" }, { "math_id": 10, "text": "\\begin{align}\nr=0: &\\quad v=0,\\quad \\frac{\\partial u}{\\partial r} =0, \\\\\nr\\rightarrow\\infty: &\\quad u=0.\n\\end{align}" }, { "math_id": 11, "text": "Re = \\frac{1}{\\nu}\\left(\\frac{J}{2\\pi\\rho}\\right)^{1/2}=\\frac{1}{\\nu}\\left(\\frac{K}{2\\pi}\\right)^{1/2}\\gg 1" }, { "math_id": 12, "text": "\\eta = \\frac{r}{x}, \\quad u = \\frac{\\nu}{x}\\frac{F'(\\eta)}{\\eta}, \\quad v = \\frac{\\nu}{x}\\left[F'(\\eta)-\\frac{F(\\eta)}{\\eta}\\right]." }, { "math_id": 13, "text": "\\eta F'' + F F' - F' =0" }, { "math_id": 14, "text": "F(0)=F'(0)=0" }, { "math_id": 15, "text": "F(\\eta)" }, { "math_id": 16, "text": "F(\\gamma\\eta)=F(\\xi)" }, { "math_id": 17, "text": "\\eta=0" }, { "math_id": 18, "text": "F=\\frac{4\\xi^2}{4+\\xi^2} = \\frac{4\\gamma^2\\eta^2}{4+\\gamma^2\\eta^2}." }, { "math_id": 19, "text": "\\gamma" }, { "math_id": 20, "text": "\\gamma^2 = \\frac{3J}{16\\pi\\rho\\nu^2}=\\frac{3 {\\rm Re}^2}{8}." }, { "math_id": 21, "text": "F(\\eta)=\\frac{4({\\rm Re}\\,\\eta)^2}{32/3+({\\rm Re}\\,\\eta)^2}." }, { "math_id": 22, "text": "Q = 2\\pi\\int_0^\\infty r u dr = 8 \\pi \\nu x," } ]
https://en.wikipedia.org/wiki?curid=58192917
581974
Coq (software)
Proof assistant Coq is an interactive theorem prover first released in 1989. It allows for expressing mathematical assertions, mechanically checks proofs of these assertions, helps find formal proofs, and extracts a certified program from the constructive proof of its formal specification. Coq works within the theory of the calculus of inductive constructions, a derivative of the calculus of constructions. Coq is not an automated theorem prover but includes automatic theorem proving tactics (procedures) and various decision procedures. The Association for Computing Machinery awarded Thierry Coquand, Gérard Huet, Christine Paulin-Mohring, Bruno Barras, Jean-Christophe Filliâtre, Hugo Herbelin, Chetan Murthy, Yves Bertot, and Pierre Castéran the 2013 ACM Software System Award for Coq. The name "Coq" is a wordplay on the name of Thierry Coquand, Calculus of Constructions or "CoC" and follows the French computer science tradition of naming software after animals ("coq" in French meaning rooster). On October 11th, 2023, the development team announced that Coq will be renamed "The Rocq Prover" in the coming months, and has started updating the code base, website and associated tools. Overview. When viewed as a programming language, Coq implements a dependently typed functional programming language; when viewed as a logical system, it implements a higher-order type theory. The development of Coq has been supported since 1984 by INRIA, now in collaboration with École Polytechnique, University of Paris-Sud, Paris Diderot University, and CNRS. In the 1990s, ENS Lyon was also part of the project. The development of Coq was initiated by Gérard Huet and Thierry Coquand, and more than 40 people, mainly researchers, have contributed features to the core system since its inception. The implementation team has successively been coordinated by Gérard Huet, Christine Paulin-Mohring, Hugo Herbelin, and Matthieu Sozeau. Coq is mainly implemented in OCaml with a bit of C. The core system can be extended by way of a plug-in mechanism. The name means 'rooster' in French and stems from a French tradition of naming research development tools after animals. Up until 1991, Coquand was implementing a language called the Calculus of Constructions and it was simply called CoC at this time. In 1991, a new implementation based on the extended Calculus of Inductive Constructions was started and the name was changed from CoC to Coq in an indirect reference to Coquand, who developed the Calculus of Constructions along with Gérard Huet and contributed to the Calculus of Inductive Constructions with Christine Paulin-Mohring. Coq provides a specification language called Gallina ("hen" in Latin, Spanish, Italian and Catalan). Programs written in Gallina have the weak normalization property, implying that they always terminate. This is a distinctive property of the language, since infinite loops (non-terminating programs) are common in other programming languages, and is one way to avoid the halting problem. As an example, a proof of commutativity of addition on natural numbers in Coq: plus_comm = fun n m : nat => nat_ind (fun n0 : nat => n0 + m = m + n0) (plus_n_0 m) (fun (y : nat) (H : y + m = m + y) => eq_ind (S (m + y)) (fun n0 : nat => S (y + m) = n0) (f_equal S H) (m + S y) (plus_n_Sm m y)) n : forall n m : nat, n + m = m + n Here nat_ind stands for mathematical induction, eq_ind for substitution of equals, and f_equal for taking the same function on both sides of the equality.
Earlier theorems are referenced showing formula_0 and formula_1. Notable uses. Four color theorem and SSReflect extension. Georges Gonthier of Microsoft Research in Cambridge, England and Benjamin Werner of INRIA used Coq to create a surveyable proof of the four color theorem, which was completed in 2002. Their work led to the development of the SSReflect ("Small Scale Reflection") package, which was a significant extension to Coq. Despite its name, most of the features added to Coq by SSReflect are general-purpose features and are not limited to the computational reflection style of proof. These features include: SSReflect 1.11 is freely available, dual-licensed under the open source CeCILL-B or CeCILL-2.0 license, and compatible with Coq 8.11. Tactic language. In addition to constructing Gallina terms explicitly, Coq supports the use of "tactics" written in the built-in language Ltac or in OCaml. These tactics automate the construction of proofs, carrying out trivial or obvious steps in proofs. Several tactics implement decision procedures for various theories. For example, the "ring" tactic decides the theory of equality modulo ring or semiring axioms via associative-commutative rewriting. For example, the following proof establishes a complex equality in the ring of integers in just one line of proof: Require Import ZArith. Open Scope Z_scope. Goal forall a b c:Z, (a + b + c) ^ 2 = a * a + b ^ 2 + c * c + 2 * a * b + 2 * a * c + 2 * b * c. intros; ring. Qed. Built-in decision procedures are also available for the empty theory ("congruence"), propositional logic ("tauto"), quantifier-free linear integer arithmetic ("lia"), and linear rational/real arithmetic ("lra"). Further decision procedures have been developed as libraries, including one for Kleene algebras and another for certain geometric goals. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "m = m + 0" }, { "math_id": 1, "text": "S (m + y) = m + S y" } ]
https://en.wikipedia.org/wiki?curid=581974
582024
Pushout (category theory)
Most general completion of a commutative square given two morphisms with same domain In category theory, a branch of mathematics, a pushout (also called a fibered coproduct or fibered sum or cocartesian square or amalgamated sum) is the colimit of a diagram consisting of two morphisms "f" : "Z" → "X" and "g" : "Z" → "Y" with a common domain. The pushout consists of an object "P" along with two morphisms "X" → "P" and "Y" → "P" that complete a commutative square with the two given morphisms "f" and "g". In fact, the defining universal property of the pushout (given below) essentially says that the pushout is the "most general" way to complete this commutative square. Common notations for the pushout are formula_0 and formula_1. The pushout is the categorical dual of the pullback. Universal property. Explicitly, the pushout of the morphisms "f" and "g" consists of an object "P" and two morphisms "i"1 : "X" → "P" and "i"2 : "Y" → "P" such that the diagram commutes and such that ("P", "i"1, "i"2) is universal with respect to this diagram. That is, for any other such triple ("Q", "j"1, "j"2) for which the following diagram commutes, there must exist a unique "u" : "P" → "Q" also making the diagram commute: As with all universal constructions, the pushout, if it exists, is unique up to a unique isomorphism. Examples of pushouts. Here are some examples of pushouts in familiar categories. Note that in each case, we are only providing a construction of an object in the isomorphism class of pushouts; as mentioned above, though there may be other ways to construct it, they are all equivalent. formula_16 Graphically this means that two pushout squares, placed side by side and sharing one morphism, form a larger pushout square when ignoring the inner shared morphism. Construction via coproducts and coequalizers. Pushouts are equivalent to coproducts and coequalizers (if there is an initial object) in the sense that: All of the above examples may be regarded as special cases of the following very general construction, which works in any category "C" satisfying: In this setup, we obtain the pushout of morphisms "f" : "Z" → "X" and "g" : "Z" → "Y" by first forming the coproduct of the targets "X" and "Y". We then have two morphisms from "Z" to this coproduct. We can either go from "Z" to "X" via "f", then include into the coproduct, or we can go from "Z" to "Y" via "g", then include into the coproduct. The pushout of "f" and "g" is the coequalizer of these new maps. Application: the Seifert–van Kampen theorem. The Seifert–van Kampen theorem answers the following question. Suppose we have a path-connected space "X", covered by path-connected open subspaces "A" and "B" whose intersection "D" is also path-connected. (Assume also that the basepoint * lies in the intersection of "A" and "B".) If we know the fundamental groups of "A", "B", and their intersection "D", can we recover the fundamental group of "X"? The answer is yes, provided we also know the induced homomorphisms formula_19 and formula_20 The theorem then says that the fundamental group of "X" is the pushout of these two induced maps. Of course, "X" is the pushout of the two inclusion maps of "D" into "A" and "B". Thus we may interpret the theorem as confirming that the fundamental group functor preserves pushouts of inclusions. We might expect this to be simplest when "D" is simply connected, since then both homomorphisms above have trivial domain. 
Indeed, this is the case, since then the pushout (of groups) reduces to the free product, which is the coproduct in the category of groups. In a most general case we will be speaking of a free product with amalgamation. There is a detailed exposition of this, in a slightly more general setting (covering groupoids) in the book by J. P. May listed in the references. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
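In the category of sets, the recipe of the section "Construction via coproducts and coequalizers" above can be carried out directly: form the disjoint union of X and Y and then identify f(z) with g(z) for every z in Z. The sketch below is illustrative only; the example data and all function names are arbitrary choices rather than part of the article. It uses a small union-find structure to compute the quotient and prints the equivalence classes, which are the elements of the pushout.

def pushout_of_sets(Z, X, Y, f, g):
    # Pushout of f: Z -> X and g: Z -> Y in the category of sets:
    # the disjoint union of X and Y modulo the equivalence generated by f(z) ~ g(z).
    nodes = [('X', el) for el in X] + [('Y', el) for el in Y]   # tagged disjoint union
    parent = {n: n for n in nodes}

    def find(n):                       # union-find with path halving
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    def union(a, b):                   # coequalize a and b
        parent[find(a)] = find(b)

    for z in Z:                        # glue the two images of every element of Z
        union(('X', f(z)), ('Y', g(z)))

    classes = {}
    for n in nodes:
        classes.setdefault(find(n), set()).add(n)
    return list(classes.values())

# Example: Z = {0}, X = {'a', 'b'}, Y = {'c', 'd'}, f(0) = 'a', g(0) = 'c'.
# The pushout identifies 'a' with 'c' and leaves 'b' and 'd' alone, giving three elements.
for block in pushout_of_sets({0}, {'a', 'b'}, {'c', 'd'}, lambda z: 'a', lambda z: 'c'):
    print(block)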
[ { "math_id": 0, "text": "P = X \\sqcup_Z Y" }, { "math_id": 1, "text": "P = X +_Z Y" }, { "math_id": 2, "text": "P = (X \\sqcup Y)/\\!\\sim" }, { "math_id": 3, "text": " X \\cup Y \\subseteq W" }, { "math_id": 4, "text": "f \\colon X \\to Y" }, { "math_id": 5, "text": "X \\sqcup Y" }, { "math_id": 6, "text": "x \\in X \\subseteq X \\sqcup Y" }, { "math_id": 7, "text": "f(x) \\in Y \\subseteq X \\sqcup Y" }, { "math_id": 8, "text": "X \\cup_{f} Y" }, { "math_id": 9, "text": "X \\vee Y" }, { "math_id": 10, "text": "f : 0 \\to A" }, { "math_id": 11, "text": "g : 0 \\to B" }, { "math_id": 12, "text": "A \\otimes_{C} B" }, { "math_id": 13, "text": "g': A \\rightarrow A \\otimes_{C} B" }, { "math_id": 14, "text": "f': B \\rightarrow A \\otimes_{C} B" }, { "math_id": 15, "text": " f' \\circ g = g' \\circ f " }, { "math_id": 16, "text": "A \\otimes_{C} B = \\left\\{\\sum_{i \\in I} (a_i,b_i) \\; \\big| \\; a_i \\in A, b_i \\in B \\right\\} \\Bigg/ \\bigg\\langle (f(c)a,b) - (a,g(c)b) \\; \\big| \\; a \\in A, b \\in B, c \\in C \\bigg\\rangle " }, { "math_id": 17, "text": "\\mathbf{Z}_+" }, { "math_id": 18, "text": "\\left(\\frac{\\operatorname{lcm}(m,n)}{m}, \\frac{\\operatorname{lcm}(m,n)}{n}\\right)" }, { "math_id": 19, "text": "\\pi_1(D,*) \\to \\pi_1(A,*)" }, { "math_id": 20, "text": "\\pi_1(D,*) \\to \\pi_1(B,*)." } ]
https://en.wikipedia.org/wiki?curid=582024
58203281
Introduction to Electrodynamics
Undergraduate textbook by David J. Griffiths Introduction to Electrodynamics is a textbook by physicist David J. Griffiths. Generally regarded as a standard undergraduate text on the subject, it began as lecture notes that have been perfected over time. Its most recent edition, the fifth, was published in 2023 by Cambridge University Press. This book uses SI units (the mks convention) exclusively. A table for converting between SI and Gaussian units is given in Appendix C. Griffiths said he was able to reduce the price of his textbook on quantum mechanics simply by changing the publisher, from Pearson to Cambridge University Press. He has done the same with this one. Reception. Paul D. Scholten, a professor at Miami University (Ohio), opined that the first edition of this book offers a streamlined, though not always in-depth, coverage of the fundamental physics of electrodynamics. Special topics such as superconductivity or plasma physics are not mentioned. Breaking with tradition, Griffiths did not give solutions to all the odd-numbered questions in the book. Another unique feature of the first edition is the informal, even emotional, tone. The author sometimes referred to the reader directly. Physics received the primary focus. Equations are derived and explained, and common misconceptions are addressed. According to Robert W. Scharstein from the Department of Electrical Engineering at the University of Alabama, the mathematics used in the third edition is just enough to convey the subject and the problems are valuable teaching tools that do not involve the "plug and chug disease." Although students of electrical engineering are not expected to encounter complicated boundary-value problems in their career, this book is useful to them as well, because of its emphasis on conceptual rather than mathematical issues. He argued that with this book, it is possible to skip the more mathematically involved sections and move on to the more conceptually interesting topics, such as antennas. Moreover, the tone is clear and entertaining. Using this book "rejuvenated" his enthusiasm for teaching the subject. Colin Inglefield, an associate professor of physics at Weber State University (Utah), commented that the third edition is notable for its informal and conversational style that may appeal to a large class of students. The ordering of its chapters and its contents are fairly standard and are similar to texts at the same level. The first chapter offers a valuable review of vector calculus, which is essential for understanding this subject. While most other authors, including those aimed at a more advanced audience, denote the distance from the source point to the field point by formula_0, Griffiths uses a script formula_1. Unlike some comparable books, the level of mathematical sophistication is not particularly high. For example, Green's functions are not anywhere mentioned. Instead, physical intuition and conceptual understanding are emphasized. In fact, care is taken to address common misconceptions and pitfalls. It contains no computer exercises. Nevertheless, it is perfectly adequate for undergraduate instruction in physics. As of June 2005, Inglefield had taught three semesters using this book.
Physicists Yoni Kahn of Princeton University and Adam Anderson of the Fermi National Accelerator Laboratory indicated that Griffiths' "Electrodynamics" offers a dependable treatment of all materials in the electromagnetism section of the Physics Graduate Record Examinations (Physics GRE) except circuit analysis. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "|\\mathbf{x} - \\mathbf{x}'|" }, { "math_id": 1, "text": "r" } ]
https://en.wikipedia.org/wiki?curid=58203281
58203815
Generation Z in the United States
American generation born between the mid-to-late 1990s and early 2010s Generation Z (or Gen Z for short), colloquially known as Zoomers, is the demographic cohort succeeding Millennials and preceding Generation Alpha. Members of Generation Z, were born between the mid-to-late 1990s and the early 2010s, with the generation typically being defined as those born from 1997 to 2012. In other words, the first wave came of age during the second decade of the twenty-first century, a time of significant demographic change due to declining birthrates, population aging, and immigration. Girls of the early twenty-first century reach puberty earlier than their counterparts from the previous generations. They have higher incidents of eye problems, allergies, awareness and reporting of mental health issues, suicide, and sleep deprivation, but lower rates of adolescent pregnancy. They drink alcohol and smoke traditional tobacco cigarettes less often, but are more likely to consume marijuana and electronic cigarettes. Americans who grew up in the 2000s and 2010s saw gains in IQ points, but loss in creativity. During the 2000s and 2010s, while Western educators in general and American schoolteachers in particular concentrated on helping struggling rather than gifted students, American students of the 2010s were trailing behind their counterparts from other countries, especially East Asia, in reading and in STEM. They ranked above the OECD average in science and computer literacy, but below average in mathematics. Mathematical literacy and reading proficiency among American schoolchildren both fell in the 2010s. They read books less often than their predecessors and spend more time in front of a screen. They tend to become familiar with the Internet and portable digital devices at a young age, but are not necessarily digitally literate. Spending so much time on social media can distort their view of the world, hamper their social development, and harm their mental health. Although they trust traditional news media more than what they see online, they tend to be more skeptical of the news than their parents. Politically, young Americans of the late 2010s and early 2020s tend to hold similar views to the Millennials, but are generally more interested in advancing their careers than pursuing idealistic political causes. They are also more likely to be irreligious than older cohorts. On the whole, they are financially cautious, and are increasingly interested in alternatives to attending institutions of higher education, with young men being primarily responsible for the trend. Among those who choose to go to college, grades and standards have fallen because of disruptions in learning due to COVID-19. As consumers, Generation Z's actual purchases do not reflect the widely held notion that they hold liberal or progressive social and environmental views. Although American youth culture has become highly fragmented by the start of the early twenty-first century, a product of growing individualism, nostalgia is a major feature of youth culture in the 2010s and 2020s. Nomenclature and date range. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; While there is no scientific process for deciding when a name has stuck, the momentum is clearly behind Gen Z. Michael Dimmock, Pew Research Center (2019) The name "Generation Z" is a reference to the fact that it is the second generation after Generation X, continuing the alphabetical sequence from Generation Y (Millennials). 
Other proposed names for the generation include "iGeneration", "Homeland Generation", "Net Gen", "Digital Natives", "Neo-Digital Natives", "Pluralist Generation", "Centennials", and "Post-Millennials". The term "Internet Generation" is in reference to the fact that the generation is the first to have been born after the mass-adoption of the Internet. The Pew Research Center surveyed the various names for this cohort on Google Trends in 2019 and found that in the U.S., the term "Generation Z" was overwhelmingly the most popular. The Merriam-Webster Dictionary has an official entry for "Generation Z". "Zoomer" is an informal term used to refer to members of Generation Z, often in an ironic, humorous, or mocking tone. It combines the term "boomer," referring to baby boomers, with the "Z" from "Generation Z". Prior to this, "Zoomer" was used in the 2000s to describe particularly active baby boomers. "Zoomer" in its current incarnation skyrocketed in popularity in 2018, when it was used in a 4chan Internet meme mocking Gen Z adolescents via a Wojak caricature dubbed a "Zoomer". Merriam-Webster's records suggest the use of the term "zoomer" in the sense of Generation Z dates back at least as far as 2016. It was added to the dictionary in October 2021. The American consulting firm McKinsey &amp; Company describes members of Generation Z, born from about 1995 to 2010, as the true "digital natives". Jean Twenge, a professor of psychology at San Diego State University, calls this generation the iGeneration, born between 1995 and 2012. In January 2019, Pew Research Center defined "Post-Millennials" as people born from 1997 onward, choosing this date for "different formative experiences" for the purposes of demographic analysis. Common generational experiences include technological developments and socio-economic trends, including the widespread availability of wireless internet access and high-bandwidth cellular service, and key world events, such as growing up in a world after the September 11th terrorist attacks. During a 2019 analysis, Pew stated that they have yet to set the endpoint of Generation Z, but did use the year 2012 to complete their analysis. In a 2022 article, U.S. Census economists Neil Bennett and Briana Sullivan described Generation Z as those born 1997 to 2013. Psychologists Jean Twenge and Jonathan Haidt argue that even though the concept of a social generation remains debated, there is evidence for significant differences between the different demographic cohorts. Those born between (on the cusp of) the Millennial generation and Generation Z are commonly known as Zillennials. Arts and culture. Museum attendance and reading. According to the 2009 Museum Financial Survey conducted by the American Alliance of Museums, museums in the U.S. spend 75% of their education budgets on activities for K-12 students and receive 55 million visits from students traveling in field trips with their schools each year. Data collected by the Institute of Museum and Library Services reveal that the total number of programs per 1,000 people increased between 2007 and 2016. Attendance per 1,000 people also rose. In particular, the number of children's programs offered per 1,000 people and program attendance per 1,000 went up noticeably. During the same time period, the number of children's material in circulation per person did not change.
In the fiscal year of 2014, there were more museums in the United States (35,000) than the total numbers of Starbucks locations (11,000) and McDonald's restaurants (14,000) combined. A research team headed by psychologist Jean Twenge analyzed data sets from Monitoring the Future, an ongoing survey of a nationally representative sample of 50,000 teenagers each year from grades eight, ten, and twelve, from 1976 to 2016, for a grand total of formula_0, with 51% being female. Originally, there were only twelfth graders; eighth- and tenth graders were added in 1991. They concluded that "compared with previous generations, teens in the 2010s spent more time online and less time with traditional media, such as books, magazines and television. Time on digital media has displaced time once spent enjoying a book or watching TV." Between 2006 and 2016, usage of digital media increased 100% among twelfth-graders, 75% among tenth-graders, and 68% among eighth-graders. Twelfth-graders spent a grand total of six hours each day texting, social networking, or gaming in the mid-2010s. In 2016, only two out of a hundred tenth-graders read a newspaper every day, down from one in three in the early 1990s. That same year, only 16% of twelfth-graders read a book or a magazine daily, down from 60% in the 1970s. Twelfth-graders also read on average two fewer books per year in the mid-2016 than the mid-1970s, and a third did not read books at all (including e-books) compared to one-ninth in the 1970s. Gaps along sexual, racial, or socioeconomic lines were statistically insignificant. This secular decline in leisure reading came as a surprise for the researchers because "It's so convenient to read books and magazines on electronic devices like tablets. There's no more going to the mailbox or the bookstore—you just download the magazine issue or book and start reading." Twenge further noted that the analyses of the Pew Research Center on reading did not distinguish between reading for school or work and reading for pleasure. News media. For its 2019 Digital News Report, the Reuters Institute for the Study of Journalism at Oxford University asked people in the U.S. and U.K. what the first sources of news for them were. 45% of people aged 18 to 24 gave the smartphone as their answer. Of those, 57% said social media, including messaging apps, were the source of their first news. Among the social networks, Facebook was the top source for news for 48% of Generation Z, while YouTube was in second place at 32%. Only 23% said their first contact with the first news they read or watch each day came directly from the original sources. Meanwhile, 19% answered TV, 11% the radio, 5% desktop computers, and 4% newspapers as their first sources of news for the day. For Generation Z, the news brands are not as important as they are for those aged 35 or over. Most, however, do have a go-to source when a story breaks; CNN and "The New York Times" are the most common ones for American youths thanks to parental influence. Similarly, a 2019 survey by Barnes and Nobles Education found that "The New York Times", "The Washington Post", "The Wall Street Journal", CNN and "USA Today" are deemed the most trustworthy news sources by Generation Z. They also found that Generation Z consider traditional print media to be the most trustworthy while words of mouth and what they see on social media to be the least trustworthy. 
Nevertheless, according to the Reuters Institute, while Generation Z understands the importance of traditional news agencies, they tend to be less loyal than their parents. Young Americans (and Britons) are concerned about the perceived bias, lack of context, negativity, and sensationalism in the news media. American youths today want news stories that are not only fun and meaningful but also accurate and fair. A 2016 poll by Gallup found that barely one in three Americans had a "great deal" or "fair" amount of trust in the news media, whose public image has been in decline since the 1990s. This trend holds across different age groups, though people aged 18 to 49 are less likely than those 50 years of age or older to trust the media. Globally, trust in the news media is falling, too. While visual story-telling has proven to be popular, 58% of Generation Z still prefer text to videos. This number goes up for people who are older. In the U.S., despite a bump due to the 2016 Presidential election, the number of people paying a subscription fee for online news has stabilized at about 16%. When asked what they would choose if they could have only one subscription, only 7% picked the news while 37% chose a video service and 15% selected music. Entertainment. In its 2018 Gen Z Music Consumption &amp; Spending Report, digital media company Sweety High found that Generation Z was listening to more diverse music than generations past. That they tended to switch seamlessly between different genres is significant because music preferences tend to solidify when people are 13 to 14 years old. Spotify was the most popular source of music Generation Z (61%) and terrestrial radio ranked second (55%). YouTube was the preferred platform for music discovery (75%). However, only one in four teens uses it for regular listening, making it less popular than even CDs (38%). A 2019 poll by Ypulse found that among teenagers (13 to 18) the top musicians were Billie Eilish and Ariana Grande whereas among young adults (19 to 26), the most liked were Taylor Swift and Ariana Grande. Using artificial intelligence, Joan Serra and his team at the Spanish National Research Council studied the massive Million Song Dataset and found that between 1955 and 2010, popular music has gotten louder, while the chords, melodies, and types of sounds used have become increasingly homogenized. While the music industry has long been accused of producing songs that are louder and blander, this is the first time the quality of songs is comprehensively studied and measured. Another study from the Lawrence Technological University found that between 1950 and 2016, the lyrics of most popular songs (found on the Billboard Hot 100 each year) have gotten less joyful and sadder and angrier. According to the researchers, this does not necessarily imply that music has changed or that musicians have been expressing what they want to express, but only that consumers' tastes have evolved. China-based music video app musical.ly was highly popular among American teens. This app was acquired by ByteDance, sometimes called the "Buzzfeed of China" because it is a hub for stories that go viral on the Internet. Unlike Buzzfeed, however, ByteDance does not employ young staff writers to create its contents; rather, it relies on algorithms equipped with artificial intelligence to collect and modify contents in order to optimize viewership. In 2018, musical.ly was shut down and its users were transferred to TikTok, a competing app developed by ByteDance. 
Both musical.ly and TikTok enable users to create short videos with popular songs as background music and with numerous special effects, including lip-synchronizing. According to Asha Choksi, vice president of global research and insights for the educational publisher Pearson, about one in every three members of Generation Z spends at least four hours per day watching videos online. Data from the Bureau of Labor Statistics show that while people aged 15 to 24 spent more time playing video games in 2018, the time spent on computers and televisions remained virtually unchanged. 2020 Nielsen figures revealed that the viewership of children's cable television channels such as Cartoon Network or Nickelodeon continued its steady decline, merely slowed by the COVID-19 pandemic, which forced many parents and their children to stay at home. On the other hand, streaming services saw healthy growth. A 2021 survey by Deloitte found that among people born between 1997 and 2007, 26% called playing video games their favorite entertainment activity, 14% listening to music, 12% surfing the Internet, 11% browsing social media, and 10% watching a movie or TV show at home. Dystopian fiction, such as "The Hunger Games" and "Divergent", has been popular among teenagers. On the other hand, Generation Z continues to enjoy comfort television shows, including those from before their time, such as "The Office" (2005–2013) and "Friends" (1994–2004), as well as shows featuring characters roughly their age, like "Young Sheldon" (2017–2024), and long-running television dramas, for example, "Grey's Anatomy" (2005–present). Other aspects. The 9/11 terrorist attacks continue to haunt Americans, including those born after 2001. Since members of Generation Z were not yet cognizant when the 9/11 attacks occurred or had not yet been born at that time, there is no generational memory of a time the United States has not been at war with the loosely defined forces of global terrorism. Even twenty years after the attacks took place, concerns over national security as well as personal safety remain. The 2007–2008 financial crisis is another particularly important historical event that has shaped Generation Z, due to the ways in which their childhoods may have been affected by the recession's financial stresses felt by their parents. A 2013 survey found that 47% of Generation Z in the United States (considered here to be those between the ages of 14 and 23) were concerned about student debt, while 36% were worried about being able to afford a college education at all. This generation is faced with a growing income gap and a shrinking middle class, all of which have led to increasing stress levels in families. According to the Public Relations Society of America, the Great Recession has taught Generation Z to be independent and has led to an entrepreneurial desire, after seeing their parents and older siblings struggle in the workforce. Psychologist Jean Twenge argued that as the typical American family has fewer children and as parents pay more attention to each of their children (for example, by not allowing them to walk home from school) and to their education, the average American teenager in the mid- to late 2010s tended to be a 'slow life-history strategist', delaying adult activities such as drinking alcohol, having sexual intercourse, or driving.
A 2014 study, "Generation Z Goes to College", found that Generation Z students self-identify as being loyal, compassionate, thoughtful, open-minded, responsible, and determined. How they see their Generation Z peers is quite different from their own self-identity. They view their peers as competitive, spontaneous, adventuresome, and curious, all characteristics that they do not see readily in themselves. Generation Z today is more likely than young people in the past to try out new cuisines. There is also a growing interest in vegetarian foods. Demographics. When Congress, as urged by President Lyndon B. Johnson, passed the Immigration and Nationality Act of 1965, which abolished national quotas for immigrants and replaced them with a system that admits a fixed number of persons per year based on qualities such as skills and the need for refuge, immigration surged from elsewhere in North America (especially Canada and Mexico), Asia, Central America, and the West Indies. By the late 1990s and early 2000s, Asia and Latin America became the top sources of immigrants to the U.S. A report by demographer William Frey of the Brookings Institution stated that in the United States, the Millennials are a bridge between the largely Caucasian pre-Millennials (Generation X and their predecessors) and the more diverse post-Millennials (Generation Z and their successors). Frey's analysis of U.S. Census data suggests that as of 2019, 50.9% of Generation Z is white, 13.8% is black, 25.0% Hispanic, and 5.3% Asian. 29% of Generation Z are children of immigrants or immigrants themselves, compared to 23% of Millennials when they were at the same age. As of 2019, 13.7% of the U.S. population is foreign-born, compared to 9.7% in 1997, when the first members of Generation Z were born. Indeed, according to the Pew Research Center, in spite of the diminished flow of immigrants to the United States following the Great Recession, Generation Z is the most ethnically diverse yet seen. 52% of this generation is white, 25% is Hispanic, 14% is black, and 4% is Asian. Approximately 4% is multiracial, and this number rose rapidly between 2000 and 2010. More specifically, the number of Americans who identify as mixed white and black has grown by 134% and those of both white and Asian extraction by 87%. For comparison, 44% of Millennials, 40% of Generation X, and 28% of the Baby Boomers identify as non-white. Research by the demographer William Frey suggests that at the national level, Hispanics and Asians are the fastest-growing racial minority groups in the United States, while the number of Caucasians under the age of 18 has been declining since 2000. Overall, the number of births to Caucasian women in the United States dropped 7% between 2000 and 2018. Among foreign-born Caucasian women, however, the number of births increased by 1% in the same period. Although the share of births to foreign-born Hispanic women fell from 58% in 2000 to 50% in 2018, the share of births to U.S.-born Hispanic women increased from 20% in 2000 to 24% in 2018. The share of births to foreign-born Asian women rose from 19% in 2000 to 24% in 2018, while that to U.S.-born Asian women went from 1% in 2000 to 2% in 2018. In all, between 2000 and 2017, more births were to foreign-born than U.S.-born women. Members of Generation Z are slightly less likely to be foreign-born than Millennials; the fact that more American Latinos are born in the U.S.
rather than abroad plays a role in making the first wave of Generation Z appear better educated than their predecessors. However, researchers note that this trend could be altered by changing immigration patterns and by younger members of Generation Z choosing alternative educational paths. Not only are Americans becoming more and more racially diverse, but racial minorities are also becoming more geographically dispersed than ever before, as new immigrants settle in places other than the large metropolitan areas historically populated by migrants, such as New York City, Los Angeles, and San Francisco. A majority of Generation Z live in urban areas and are less inclined to change address than their predecessors. Similar to the Millennials, roughly two thirds of Generation Z come from households with married parents. By contrast, this living arrangement was essentially the norm for Generation X and the Baby Boomers, at 73% and 85%, respectively. As a demographic cohort, Generation Z is smaller than the Baby Boomers or their children, the Millennials. According to the U.S. Census Bureau, Generation Z makes up about one quarter of the U.S. population. This demographic change could have social, cultural, and political implications for the decades ahead. Members of Generation Z are usually the children of Generation X, and sometimes of Millennials. Jason Dorsey, who works for the Center for Generational Kinetics, observed that Generation Z is not an extreme version of the Millennials but is rather different, and the differences can largely be attributed to parenting. Like their parents from Generation X, members of Generation Z tend to be autonomous and pessimistic. They need validation less than the Millennials and typically become financially literate at an earlier age, as many of their parents bore the full brunt of the Great Recession. Between 2009 and 2016, the number of grandparents raising their grandchildren went up 7%. This is due to a variety of factors, such as military deployment, the growing incarceration of women, drug addictions, and mental health issues. After declining for many years, the number of children in foster care increased 1% in 2013 and 3.4% in 2014. In response, states are sending more and more children who have been taken from their parents to live with their relatives, drawing on research showing that children tend to be better off when cared for by their own families rather than by strangers, and to save taxpayers' money. Economic trends and outlooks. Spending and savings habits. Consumer behavior. According to an analysis by Goldman Sachs, people from Generation Z tend to be more pragmatic about money and more entrepreneurial than the Millennials. A survey of a thousand members of Generation Z conducted by the Center for Generational Kinetics found that 77% of them were spending the money they made themselves and 38% planned to work while attending university. The development of technology has also given mobility and immediacy to Generation Z's consumption habits. They can take advantage of the on-demand economy, defined as "the economic activity created by technology companies that fulfill consumer demand via the immediate provisioning of goods and services."
Generation Z tends to value utility and quality over brand name. Authenticity is critical. Having been raised by Generation X and grown up in a recession, members of Generation Z are quick to verify claims. Being heavy users of the Internet in general and social media in particular, they frequently employ these tools to learn more about a certain product or service they are interested in. Product specifications, vendor ratings, and peer reviews are all important. They tend to be skeptical and will shun firms whose actions and values are contradictory. A survey conducted by Morning Consult in May 2019 found that among adults aged 18 to 21, 55% of females and 40% of males preferred to do their shopping in person rather than using their computers or Amazon's Alexa. For all other adults, that number was 53%. Two out of three young adults said they went shopping for pleasure at least once per month. An A.T. Kearney survey sampling 1,500 consumers from four demographic cohorts found that people aged 14 to 24 tended to purchase more products in the health and wellness category than any other group and were more willing than their elders to halt a purchase if they disliked the customer service. 81% said they preferred brick-and-mortar stores and 73% said they would like to discover new products in such a store. 67% said they preferred products whose ingredients they could understand and 65% preferred simple packaging. 58% told the pollsters that manually browsing products helps them disconnect from the digital world, a sort of "retail therapy." 22% said social media is a source of stress for them, a number that is higher than that for the Millennials (18%), Generation X (13%), and the Baby Boomers (8%). In addition, while more than half said they sought environmentally friendly products, only 38% were willing to pay extra for them. Young American consumers of this cohort are less likely to pay a premium for what they want compared to their counterparts from emerging economies. While majorities might signal their support for certain ideals such as "environmental consciousness" to pollsters, actual purchases do not reflect their stated views, as can be seen from their high demand for cheap but not durable clothing ("fast fashion"), or their preference for rapid delivery. Nor are they willing to shun firms whose owners have "conservative" values, such as Chick-fil-A, which remains one of the most popular fast-food restaurant chains in the United States among teenagers. While majorities of older cohorts prefer American products, Generation Z is mixed on whether or not to buy goods made in China. Milk consumption has declined among young people, and the growing rate of lactose intolerance among the ethnically diverse Generation Z is part of the reason why. Nostalgia is a major theme of the behavior of consumers from Generation Z. For example, 2000s vintage electronics and fashion were back in vogue by the early 2020s. Youths who came of age during the 2010s and 2020s feel nostalgic about (simpler) eras of history they have never lived in due to the uncertainties and stress of modern life, with student loan debts and the threat of terrorism being the top concerns. For Millennials and older generations, nostalgia consumption is a way of reliving and cherishing the past they had experienced; for Generation Z, it is not only a source of comfort but also of joy and excitement.
Because of the era into which they were born, members of Generation Z likely have never used items that were popular in the decades before them, such as floppy disks, cassettes, VHS tapes, typewriters, television sets with antennas, television guides, answering machines, phone books, address books, pagers, fax machines, payphones, rotary and corded phones, print encyclopedias, paper maps, and non-digital projectors. Nevertheless, despite having the reputation for "killing" many things valued by older generations, Millennials and Generation Z are nostalgically preserving Polaroid cameras, vinyl records, needlepoint, and home gardening, to name just some. In fact, while "dumb phones" (or feature phones rather than smartphones) are on the decline around the world, in the U.S., their sales are growing among Generation Z. According to a 2019 YouGov poll, 31% of the U.S. population is willing to pay for music on vinyl, including 26% of Generation Z. As a matter of fact, Millennials and Generation Z have given life to nostalgia as an industry in the late 2010s and early 2020s, a trend that coincides with the resurgence of some cultural phenomena of the 2000s, such as the television series "Friends" (1994–2004), something that is well received among young people despite its age. Nostalgia has also contributed to the anticipation and subsequent commercial success of the summer film "Barbie" (2023), based on a well-known doll of the same name, even though it was intended for adults rather than children. Financial security. According to "USA Today", 69% of Generation Z turn to their parents for financial advice, compared to 52% of younger Millennials. Friends followed at a distant second place, with 24% and 19%, respectively. Unlike their predecessors, members of Generation Z are highly averse to debt and many are already saving for retirement. Their money-saving habits are reminiscent of those of people who came of age during the Great Depression. According to Morning Consult, four in ten of those aged 18 to 22 incur no debt at all. However, because they are spending so much time online playing video games, they make many in-game purchases that add up over time without realizing how much money they are actually spending. In any case, in the second quarter of 2019, the number of people from Generation Z carrying a credit card balance increased by 41% compared with the same quarter of 2018 (from 5,483,000 to 7,746,000), as the first wave of this demographic cohort became old enough to take out a mortgage or a loan, or to carry credit-card debt, according to TransUnion. In 2019, credit cards became the most common form of debt for Generation Z, overtaking auto loans. This is despite the fact that they grew up during the Great Recession. The financial industry expects continued growth in credit activity by Generation Z, whose rate of credit delinquency is comparable to that of the Millennials and Generation X. According to a 2019 report from the financial firm Northwestern Mutual, student loans were the top source of debt for Generation Z, at 25%. For comparison, mortgages were the top source of debt for the Baby Boomers (28%) and Generation X (30%); for the Millennials, it was credit card bills (25%). In a study conducted in 2015, the Center for Generational Kinetics found that American members of Generation Z, defined here as those born in 1996 and onward, are less optimistic about the state of the US economy than their immediate predecessors, the Millennials.
However, Generation Z (58%) is more likely than younger Millennials (52%) to say they expect to be more successful than their parents were. Whereas 13% of younger Millennials said they expected to be less successful than their parents, only 10% of Generation Z said the same. A total of 58% of Generation Z said they had not experienced a quarter-life crisis, compared with 46% of younger Millennials. Americans aged 15 to 21 expect to be financially independent in their early twenties, while their parents generally expect them to become so by their mid-twenties. By contrast, about one out of five Millennials expect to still be dependent on their parents beyond the age of 30. While the Millennials tend to prefer flexibility, Generation Z is more interested in certainty and stability. Whereas 23% of Millennials would leave a job if they thought they were not appreciated, only 15% of Generation Z would do the same, according to a Deloitte survey. According to the World Economic Forum (WEF), 77% of Generation Z expects to work harder than previous generations. Tourism and housing markets. Research by the online travel booking company Booking.com reveals that 54% of Generation Z considered the environmental impact of their travels to be an important factor, 56% said they would like to stay in environmentally friendly lodging, and 60% were interested in greener modes of transport once they reached their destinations. Meanwhile, the online booking firm Expedia Group found that cost was crucial for 82% of tourists from Generation Z. Therefore, the desire of Generation Z to travel and see the world comes into conflict with what they can afford and their wish to limit their environmental impact. With regard to the housing market, they typically look for properties with amenities that are comparable to what they experienced as university students. A 2019 Bank of America survey found that over half of people aged 18 to 23 were already saving for a home, with 59% saying they planned to do so within five years. More than one in two members of Generation Z said the top reason why they wanted to own a home was to start a family. For comparison, this number was 40% for the Millennials, 17% for Generation X, and 10% for the Baby Boomers. The survey also found that if they were given $5,000, most members of Generation Z would rather save that money for a down payment than spend it on a dream wedding, shopping, or a vacation. A majority of Generation Z was willing to take a second job, attend a less costly university, or move back to their parents' place in order to save money. They are also quite willing to live with people they did not know previously in order to save on rent in large metropolitan areas. According to the personal finance company Credit Karma, 43% of Generation Z said they had had strangers as roommates and 30% said they were willing to move in with roommates they did not know. In such arrangements, house cleaning, a potential point of friction, is often handled by maids. Nationwide, about one in four Americans have lived with someone they had no prior relationship with. Data from TransUnion reveal that as the Millennials enter the housing market in large numbers, taking out more mortgages in 2018 than any other living generation, Generation Z's number of new mortgages is also increasing dramatically, from 150,000 in the second quarter of 2018 to 319,000 in the second quarter of 2019, an increase of 112%.
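The year-over-year increases quoted for these credit and mortgage figures, and for the auto-loan figures cited later under transportation choices, follow from simple percentage arithmetic. The sketch below is illustrative only and restates the numbers quoted in the text; the helper function name is an assumption introduced here, not something taken from the cited reports.

```python
def percent_increase(old: int, new: int) -> float:
    """Percentage change from old to new: (new - old) / old * 100."""
    return (new - old) / old * 100

# Figures quoted in the text (TransUnion, Q2 2018 vs. Q2 2019).
print(f"{percent_increase(5_483_000, 7_746_000):.1f}%")  # credit card balances: 41.3% (cited as 41%)
print(f"{percent_increase(150_000, 319_000):.1f}%")      # new mortgages: 112.7% (cited as 112%)
print(f"{percent_increase(3_072_000, 4_376_000):.1f}%")  # auto loans: 42.4% (cited as 42%)
```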
Generation Z consumers took out 41% more auto loans in the second quarter of 2019 than in the same period the previous year. While one out of five Millennial renters said they expected to continue renting indefinitely, according to a survey by Apartment List, a Freddie Mac study found that 86% of Generation Z desired to own a home by the age of 30. Freddie Mac did note, however, that Generation Z understood the challenges of home prices, making down payments, and student loan debts. According to the real-estate company realtor.com, the first wave of Generation Z was buying homes at about the same rate as their grandparents, the Silent Generation, in late 2019, when they owned about 2% of the housing market. Judging by the number of mortgages taken out, the top housing markets tend to have strong local economies and a low cost of living. They also tend to be university towns; many young people prefer to live where they studied. In general, Generation Z appears most interested in owning a home in the Midwest and the South. Broadly speaking, while the Millennials are migrating North, Generation Z is moving South. The median price of a home purchased by Generation Z in 2019 was $160,600 and increasing, but remained lower than that of the Millennials, $256,500. Employment expectations and prospects. Quantitative historian Peter Turchin observed that demand for labor in the United States had been stagnant since 2000 and would likely remain so until 2020 as the nation went through the negative part of the Kondratiev wave. Moreover, the share of people in their 20s continued to grow until the end of the 2010s according to projections by the U.S. Census Bureau, meaning the youth bulge would likely not fade away before the 2020s. As such, the gap between supply and demand in the labor market would likely not narrow before then, and falling or stagnant wages generate sociopolitical stress. Current trends suggest that developments in artificial intelligence and robotics will not result in mass unemployment but can actually create high-skilled jobs. However, in order to take advantage of this situation, one needs a culture and an education system that promote lifelong learning. Honing skills that machines have not yet mastered, such as teamwork, will be crucial. Generation Z's top career choices (becoming business people, doctors, engineers, artists, and IT workers) are not that different from those of generations past. For them, the most important qualities in a job are income, fulfillment, work-life balance, and job security. Most prefer to work for a medium or large company rather than a startup or a government agency. A "Harvard Business Review" article from 2015 stated that about 70% of Generation Z was self-employed, e.g. selling things online, and only 12% had "traditional" teen jobs, such as waiting tables. Heavy use of the Internet has made self-employment much easier than it was in the past. A Morgan Stanley report, called the Blue Paper, projected that the Millennials and Generation Z have been responsible for a surge in labor force participation in the U.S., and that while the U.S. labor force expands, that of other G10 countries will contract. This development alleviates concerns over America's aging population, which jeopardizes the solvency of various welfare programs. As of 2019, Millennials and Generation Z account for 38% of the U.S. workforce; that number will rise to 58% in the coming decade. Whenever they find themselves short on skills, Generation Z will pick up what they need using the Internet.
While there is agreement across generations that it is very important for employees to learn new skills, Millennials and Generation Z are overwhelmingly more likely than Baby Boomers to think that it is the job of employees to train themselves. Baby Boomers tend to think it is the employer's responsibility. Moreover, Millennials and Generation Z (74%) tend to have more colleagues working remotely for a significant portion of their time compared to the Baby Boomers (58%). According to the Bureau of Labor Statistics, the occupations with the fastest projected growth rate between 2018 and 2028 are solar cell and wind turbine technicians, healthcare and medical aides, cybersecurity experts, statisticians, speech-language pathologists, genetic counselors, mathematicians, operations research analysts, software engineers, forest fire inspectors and prevention specialists, post-secondary health instructors, and phlebotomists. Their projected growth rates are between 23% (medical assistants) and 63% (solar cell installers); their annual median pay ranges from roughly US$24,000 (personal care aides) to over US$108,000 (physician assistants). Occupations with the highest projected numbers of jobs added between 2018 and 2028 are healthcare and personal aides, nurses, restaurant workers (including cooks and waiters), software developers, janitors and cleaners, medical assistants, construction workers, freight laborers, marketing researchers and analysts, management analysts, landscapers and groundskeepers, financial managers, tractor and truck drivers, and medical secretaries. The total number of jobs added ranges from 881,000 (personal care aides) down to 96,400 (medical secretaries). Annual median pay ranges from over US$24,000 (fast-food workers) to about US$128,000 (financial managers). 2019 data from the U.S. government reveal that there are over half a million vacant manufacturing jobs in the country, a record high, thanks to an increasing number of Baby Boomers entering retirement. However, in order to attract new workers to overcome this "Silver Tsunami," manufacturers need to debunk a number of misconceptions about their industries. For example, the American public tends to underestimate the salaries of manufacturing workers. Nevertheless, almost one in three members of Generation Z said a career in manufacturing had been recommended to them, higher than the 18% of the general public and 13% of Millennials who said the same, and over one quarter said they would consider a job in that sector. In addition, the number of people doubting the viability of American manufacturing declined to 54% in 2019 from 70% in 2018, according to the L2L Manufacturing Index. After the Great Recession, the number of U.S. manufacturing jobs reached a minimum of 11.5 million in February 2010. It rose to 12.8 million in September 2019. It was 14 million in March 2007. As of 2019, manufacturing industries made up 12% of the U.S. economy, which is increasingly reliant on service industries, as is the case for other advanced economies around the world. Nevertheless, twenty-first-century manufacturing is increasingly sophisticated, using advanced robotics, 3D printing, and cloud computing, among other modern technologies, and technologically savvy employees are precisely what employers need. Four-year university degrees are unnecessary; technical or vocational training, or perhaps an apprenticeship, would do.
America's shortage of skilled tradespeople, such as plumbers, whose incomes can reach six figures, continued in early 2021 despite the COVID-19 pandemic and despite rising salaries and potential employers offering to pay their recruits during training. Due to declining interest in higher education and a tight labor market, in the early 2020s, young Americans could expect to be hired right after graduating from high school. In May 2023, the unemployment rate of Americans aged 16 to 24 was 7.5%, the lowest in 70 years. Among teenagers 16 to 19, employment numbers have gone up, though not to the level seen among the Baby Boomers and Generation X when they were teenagers, due to a variety of factors, including jobs being automated, outsourced, or given to immigrants, and state governments regulating the job market more tightly. Yet despite a strong labor market and falling inflation, economist Karen Dynan observed that young Americans tend to be pessimistic about their economic prospects, worrying about a possible recession, expensive housing, and the possibility of being laid off. Transportation choices. According to the Pew Research Center, young people are more likely to ride public transit. In 2016, 21% of adults aged 18 to 21 took public transit on a daily, almost daily, or weekly basis. By contrast, this number for all U.S. adults was 11%. Nationwide, about three quarters of American commuters drive their own cars. Also according to Pew, 51% of U.S. adults aged 18 to 29 used a ride-hailing service such as Lyft or Uber in 2018, compared to 28% in 2015. That number for all U.S. adults was 15% in 2015 and 36% in 2018. In general, ride-hailing service users tend to be urban residents, young (18–29), university graduates, and high-income earners ($75,000 a year or more). Although Generation Z by and large no longer views car ownership as a status symbol, a life milestone, or a ticket to freedom, the automotive industry hopes that, like the Millennials, members of Generation Z will later purchase cars in great numbers. Indeed, the number of Generation Z consumers taking out auto loans is rising drastically, from 3,072,000 in the second quarter of 2018 to 4,376,000 in the second quarter of 2019, an increase of 42%. However, 27% of Generation Z consider environmental friendliness to be an important factor, which is higher than among their predecessors at the same age. Much more important, though, is the price (77%). Generation Z is not particularly concerned with style or brand; they are more interested in safety. They tend to be more receptive towards self-driving cars. They are also interested in the interior electronic technology of the cars they might purchase. More specifically, they would like to be able to hook up their smartphones to the Bluetooth-capable audio systems and backup cameras. Education. A 2022 poll by YPulse found that the top five sets of skills Millennials and Generation Z wished they had learned at school were managing mental health, self-defense, survival skills and basic first aid, cooking, and personal finance. The COVID-19 pandemic has badly disrupted the American education system. In the early 2020s, school districts across the country reported significant shortages of teachers, many of whom have left their positions or the profession itself due to low pay, stressful work environments, the lack of respect for them, and the hostility towards them from some politicians and parents. This problem is not new, but it mostly affects students in economically deprived areas.
Well-endowed suburban schools do not have this problem. Public schools across the United States also presently face falling enrollment due to population decline and defections to private schools and home schooling. As a result, their funding has also fallen. Many students are finding themselves in the midst of an escalating cultural conflict in which political activists are demanding that books dealing with sensitive topics relating to race and sexuality, and those that include coarse language and explicit violence, be removed from school libraries. The teaching of race and sexuality has also proven to be politically controversial in recent years, so much so that following the COVID-19 pandemic, support for parents' rights in deciding their children's educational content and for school choice, or the redirecting of tax money via vouchers to fund private schools chosen by the parents, has grown considerably. Many parents have also used school vouchers to send children to religious or parochial schools, taking advantage of a 2022 Supreme Court ruling. To pacify angry parents and to comply with new state laws, many schoolteachers have opted to remove certain items from their lessons altogether. Despite the general consensus that mathematics education in the United States is mediocre, as indicated by international test scores in the late 2010s, there is strong partisan disagreement over how to address this issue because people are divided between the more traditional teacher-led approach and the student-led or inquiry-based method. An emphasis on rote memorization and speed gives as many as one in three students aged five and up mathematical anxiety. Meanwhile, an increasing number of parents opted to send their children to enrichment and accelerated-learning after-school or summer programs in the subject. However, many school officials turned their backs on these programs, believing that their primary beneficiaries are affluent white and Asian families, prompting parents to pick private institutions or math circles. Some public schools serving low-income neighborhoods even denied the existence of mathematically gifted students. By the mid-2010s, however, some public schools had begun offering enrichment programs to their students. Nevertheless, American students ranked above the OECD average in science and computer literacy, as of 2021. Pre-school and kindergarten. In 2018, the American Academy of Pediatrics released a policy statement summarizing progress on developmental and neurological research on unstructured time spent by children, colloquially 'play', and noting the importance of playtime for social, cognitive, and language skills development. This is because, to many educators and parents, play has come to be seen as outdated and irrelevant. In fact, between 1981 and 1997, time spent by children on unstructured activities dropped by 25% due to increased amounts of time spent on structured activities. Unstructured time tended to be spent on screens at the expense of active play. The statement encourages parents and children to spend more time on "playful learning," which reinforces the intrinsic motivation to learn and discover and strengthens the bond between children and their parents and other caregivers. It also helps children handle stress and prevents "toxic stress," something that hampers development. Dr. Michael Yogman, lead author of the statement, noted that play does not necessarily have to involve fancy toys; common household items would do as well.
Moreover, parents reading to children also counts as play, because it encourages children to use their imaginations. Primary. Although the Common Core standards eliminated the requirement that public elementary schools teach cursive writing in 2010, lawmakers from many states, including Illinois, Ohio, and Texas, introduced legislation in 2019 to teach it in their states. Some studies point to the benefits of handwriting, whether print or cursive, for the development of cognitive and motor skills as well as memory and comprehension. For example, one 2012 neuroscience study suggests that handwriting "may facilitate reading acquisition in young children." Cursive writing has been used to help students with learning disabilities, such as dyslexia, a disorder that makes it difficult to interpret words, letters, and other symbols. However, lawmakers often cite these studies out of context, conflating handwriting in general with cursive handwriting. In any case, some 80% of historical records and documents of the United States, such as the correspondence of Abraham Lincoln, were written by hand in cursive, and students today tend to be unable to read them. Indeed, historian and former Harvard University president Drew Gilpin Faust noted that Generation Z never learned to read and write cursive. Historically, cursive writing was regarded as a mandatory, almost military, exercise. But today, it is thought of as an art form by those who pursue it, both adults and children. The percentage of American fourth-graders proficient in reading declined during the late 2010s, according to the National Assessment of Educational Progress. Secondary. While courses on home economics, also known as family and consumer sciences (FCS), were commonplace in the United States during the twentieth century, they were on the decline in the early twenty-first for a variety of reasons, ranging from a shortage of qualified teachers to funding cuts. This is despite attempts to revise them for life in the contemporary era. FCS courses in the past taught the basics of cooking and housework but now also teach nutrition, community gardening, composting, and personal finance, among other topics; they are intended to fill gaps in the knowledge that parents once passed on to their children but in many cases can no longer do so because both parents are working. In 2012, there were only 3.5 million students enrolled in FCS courses in secondary schools, a drop of 38% from the previous decade. A survey by the Pew Research Center found that one in three girls aged 13 to 17 felt excited every day or almost every day about something they learned at school. For boys of the same age group, this number was just above one in five. Since the early 2010s, a number of U.S. states have taken steps to strengthen teacher education. Ohio, Tennessee, and Texas had the top programs in 2014. Meanwhile, Rhode Island, which previously had the nation's lowest bar on who can train to become a schoolteacher, has been admitting education students with higher and higher average SAT, ACT, and GRE scores. The state aims to accept only those with standardized test scores in the top third of the national distribution by 2020, which would put it in the ranks of education superpowers such as Finland and Singapore. In Finland, studying to become a teacher is as tough and prestigious as studying to become a medical doctor or a lawyer.
During the 2000s and 2010s, whereas the Asian polities (especially China, Hong Kong, South Korea, and Singapore) actively sought out gifted students and steered them towards competitive programs, Europe and the United States emphasized inclusion and focused on helping struggling students. Developmental cognitive psychologist David Geary observed that Western educators remained "resistant" to the possibility that even the most talented of schoolchildren needed encouragement and support. In addition, even though it is commonly believed that past a certain IQ benchmark (typically 120), practice becomes much more important than cognitive abilities in mastering new knowledge, recently published research papers based on longitudinal studies, such as the Study of Mathematically Precocious Youth (SMPY) and the Duke University Talent Identification Program, suggest otherwise. According to the 2018 National Assessment of Educational Progress, 73% of American eighth- and twelfth-graders had deficient writing skills. There have been numerous reports in the 2010s on how U.S. students were falling behind their international counterparts, especially those from (East) Asia, in the STEM subjects. For example, American schoolchildren put up a mediocre performance on the OECD-sponsored Program for International Student Assessment (PISA), administered every three years to fifteen-year-old students around the world on reading comprehension, mathematics, and science, falling in the middle of the pack of some 71 countries and territories that participated in 2015. In fact, reading scores dropped for all ethnic groups except Asians in the late 2010s, according to the National Assessment of Educational Progress. This is a source of concern for some because academically gifted students in STEM can have an inordinately positive impact on the national economy. In addition, while American students are less focused on STEM, students from China and India are not only outperforming them but are also coming to the United States in large numbers for higher education. Although passing a high school physics course is linked to graduating from college with a STEM degree, something that is increasingly popular among Generation Z, just under two-fifths of high school graduates had passed such a course as of 2013, according to the American Institute of Physics. With few high school students taking physics, even fewer will study the subject in college and be able to teach it, a vicious cycle. The shortage of high school physics teachers is even more acute than that of mathematics or chemistry teachers. Many American public schools suffer from inadequate or dated facilities. Some schools even leak when it rains. In its 2017 report on American infrastructure, the American Society of Civil Engineers (ASCE) gave public schools a score of D+. In 2013, less than a third of American public schools had access to broadband Internet service, according to the non-profit EducationSuperHighway. By 2019, however, that number reached 99%. This has increased the frequency of digital learning. Sex education has been reformed. While traditional lessons involving bananas and condoms remain common, newer approaches that emphasize financial responsibility and character development have been implemented. These reforms play a role in the significant drop in teenage birthrates. By the mid-2010s, over four-fifths of American high school students graduated on time and over 70% enrolled in college right after graduation.
However, nationally, only one-quarter of American high school seniors are able to do grade-level math and only 37% are proficient in reading, yet about half graduate from high school as A students, prompting concerns about grade inflation. In addition, while 93% of middle school students said they want to attend college, only 26% go on to do so and graduate within six years. Critics argue that American high schools are not giving students what they need for their future lives and careers. On top of the high costs of collegiate education, the vacancy of potentially millions of skilled jobs that do not require a university degree is making lawmakers reconsider their stance on tertiary education. Post-secondary. Generation Z is revolutionizing the educational system in many ways. Thanks in part to a rise in the popularity of entrepreneurship and advancements in technology, high schools and colleges across the globe are including entrepreneurship in their curricula. Members of Generation Z are more likely to search for the information they need on the Internet than to go through a book, and they are accustomed to learning by watching videos. Technical, trades, and vocational schools. According to the WEF, over one in five members of Generation Z are interested in attending a trade or technical school instead of a college or university. In the United States today, high school students are generally encouraged to attend college or university after graduation, while the options of technical school and vocational training are often neglected. Historically, high schools separated students into career tracks, with programs aimed at students bound for higher education and those bound for the workforce. Students with learning disabilities or behavioral issues were often directed towards vocational or technical schools. All this changed in the late 1980s and early 1990s thanks to a major effort in the large cities to provide more abstract academic education to everybody. The mission of high schools became preparing students for college, referred to as "high school to Harvard." However, this program faltered in the 2010s, as institutions of higher education came under heightened skepticism due to high costs and disappointing results. People became increasingly concerned about debts and deficits. Promises of educating "citizens of the world" and estimates of economic impact derived from abstruse calculations were no longer sufficient. Colleges and universities found it necessary to prove their worth by clarifying how much money came from which industry- and company-funded research, and how much it would cost to attend. According to the Department of Education, people with technical or vocational training are slightly more likely to be employed than those with a bachelor's degree and significantly more likely to be employed in their fields of specialty. The United States currently suffers from a shortage of skilled tradespeople. If nothing is done, this problem will get worse as aging workers retire and the market tightens due to falling unemployment rates. Economists argue that raising wages could incentivize more young people to pursue these careers. Many manufacturers are partnering with community colleges to create apprenticeship and training programs. However, they still have an image problem, as people perceive manufacturing jobs as unstable, given the mass layoffs during the Great Recession of 2007–2008.
Career counselors are in extremely high demand, being called for not just appointments but also career fairs and orientation sessions for new students. Whereas 75% of Americans have never had someone point out the option of a trade or vocational school to them, this number is only 59% for Generation Z, according to the L2L Manufacturing Index. Members of Generation Z have witnessed their elders, the Millennials, accumulating large amounts of debt in order to pay for expensive university tuition fees and are consequently more open to alternative educational routes and career options. According to the 2018 CNBC All-American Economic Survey, only 40% of Americans believed that the financial cost of a four-year university degree is justified, down from 44% five years before. Moreover, only 50% believed a four-year program is the best kind of training, down from 60%, and the number of people who saw value in a two-year program jumped to 26% from 18%. These findings are consistent with other reports. College preparation. High school students bound for university in the United States often take standardized exams such as the SAT. Between 2005 and 2015, when the test was revised, there was a clear decline in test scores, despite various attempts at educational reforms. Improvements in elementary-school grades amounted to nothing in high school. Reasons for this include poverty, urban decay, low parental educational attainment, and weak linguistic skills. Direct score comparisons between different U.S. states and the District of Columbia are difficult, however, because the lower the participation, the higher the scores, and different states had different numbers of test takers. Cyndie Schmeiser of the College Board, which administers the SAT, told "The Washington Post" that between 2010 and 2015, only 42% of students scored at least 1550, considered the benchmark for college readiness. In 2016, a new grading scheme was introduced. While the old test had a maximum score of 2400 and had three sections, mathematics, reading, and writing, the new one has a maximum score of 1600 and only two sections, mathematics and "evidence-based reading and writing." Final scores were re-scaled, such that the combined mathematics and verbal score on the old SAT does "not" correspond to the total score from the new one. Thus, for example, a new 1400 corresponds to an old 1340. In general, new scores are inflated between 60 and 80 points. This does not mean the test has become easier or that the students have become better prepared, however. In general, scores are positively correlated with family income, and privately educated students tend to do better. The College Board announced a partnership with the non-profit organization Khan Academy to offer free test-preparation materials to help level the playing field for students from low-income families. As students' scores fell for all subjects, especially in mathematics, among students of all backgrounds, the entire cohort of college students in the 2022–23 academic year has lower average grades and lower mathematical standards. Colleges and universities. The Pew Research Center found that 59% of Generation Z aged 20 to 22 were enrolled in college in 2019, compared to 53% of Millennials in 2002. The share of young people attending university was 44% in 1986. Nevertheless, undergraduate enrollment has been in decline for some time.
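The rescaling described in the college preparation discussion above implies a rough way to relate a new 1600-scale total to an old combined mathematics and verbal score. The sketch below is a minimal illustration under the stated 60-to-80-point assumption; the function name and fixed offsets are assumptions introduced here, and the College Board's official concordance tables, which are not linear, should be used for any real conversion.

```python
def approx_old_sat(new_score: int, low: int = 60, high: int = 80) -> tuple[int, int]:
    """Rough old-SAT (mathematics + verbal) range for a new 1600-scale score,
    subtracting the 60-80 point inflation described in the text."""
    return new_score - high, new_score - low

# Example from the text: a new 1400 corresponds to roughly an old 1340.
print(approx_old_sat(1400))  # -> (1320, 1340)
```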
Due to population aging, the number of college-aged people in the United States will fall after 2025, making it easier for people born in the late 2000s and after to get admitted. Demand for the top 100 American institutions, however, will likely remain largely unchanged. Overall, the 2010s proved to be a turbulent period for higher education in the United States, as small private colleges from across the country faced deep financial trouble, offering steeper tuition discounts to attract students at a time of expensive higher education, tougher regulation, and a declining college-aged population. Institutions have addressed these challenges by dropping programs with low student interest, including many in the liberal arts and the humanities, like gender studies and critical race theory; by creating majors for emerging fields, such as artificial intelligence, or professional programs, such as law enforcement; and by investing in online learning programs. As of 2019, some 800 schools faced the prospect of either closure or consolidation with existing ones. Public colleges and university systems are also consolidating in order to cut costs. Meanwhile, the for-profit sector has been nose-diving due to increased regulatory oversight and poor outcomes. Scholars Clayton Christensen and Michael Horn independently predicted in 2019 that large numbers of the over 4,000 colleges and universities in the U.S. would permanently close within 15 to 20 years. Rising administrative costs, sluggish middle-class wages, demographic decline (especially in the Northeast and Midwest), new forms of learning, stronger competition from better-endowed universities, and higher demand for technical training undermine the financial viability of many schools. "It's going to be brutal across American higher education," Horn told CBS News. A 2019 analysis by Moody's Investors Service estimated that about 20% of all small private liberal arts colleges in the United States were in serious financial trouble. Such were the trends before 2020, and the arrival of SARS-CoV-2 in the United States in 2020 merely accelerated the process. The novel coronavirus not only wrought havoc on the nation but also caused a severe economic downturn. Consequently, families chose to either delay or avoid sending their children to institutions of higher education altogether. Worse still, colleges and universities have become dependent on foreign students for revenue because they pay full tuition fees, and the international travel restrictions imposed to slow the spread of the pandemic meant that this stream of revenue would shrink substantially. On top of that, many schools faced lawsuits from students who believed they had received substandard online services in the wake of the pandemic. Numerous institutions, including elite ones, have suspended graduate programs in the humanities and liberal arts due to low student interest and dim employment prospects. Historically, university students were more likely to be male than female. This trend continued into the very early twenty-first century, but by the late 2010s, the situation had reversed. Women are now more likely to enroll in university than men. By the end of the 2020–21 academic year, 59.5% of university students were women. Compared to five years earlier, the number of students enrolled in American institutions of higher education had declined by about 1.5 million, with men being responsible for 71% of the decline.
This sex gap has been growing for four decades in the United States, in parallel with other countries of middle to high income. In addition, among those who attend college or university, women are more likely than men to graduate with a degree within six years. There is little support for initiatives to encourage and assist men to attend college due to identity politics, which paints men, especially white men, as a privileged group. The COVID-19 pandemic accelerated the trend of declining interest in higher education among young men. On the other hand, the number of women's colleges continues to fall, following a decades-long trend. As members of Generation Z enter high school and start preparing for college, a primary concern is paying for a college education without acquiring debt. Students report working hard in high school in hopes of earning scholarships, and they hope that their parents will pay the college costs not covered by scholarships. As of 2019, total college debt has exceeded $1.5 trillion, and two out of three college graduates are saddled with debt. The average borrower owes $37,000, up $10,000 from ten years before. A 2019 survey found that over 30% of Generation Z and 18% of Millennials said they had considered taking a gap year between high school and college. In order to address the challenges of expensive tuition and student debt, many colleges have diversified their revenue, especially by changing enrollment, recruitment, and retention, and have introduced further tuition discounts. Between the academic years 2007–08 and 2018–19, tuition discounts increased significantly. Almost nine in every ten first-time full-time freshmen received some kind of financial aid in the academic year 2017–18. Students also report interest in Reserve Officers' Training Corps (ROTC) programs as a means of covering college costs. Indeed, college subsidies are one of the most attractive things about signing up for military service. Another enticement is signing bonuses, whose amounts vary according to specialty. However, in 2010, quantitative historian Peter Turchin noted that the United States had been overproducing university graduates in the 2000s, a phenomenon he termed elite overproduction, and predicted, using historical trends, that this would be one of the causes of political instability in the 2020s, alongside income inequality, stagnating or declining real wages, and growing public debt. According to Turchin, intensifying competition among graduates, whose numbers were larger than what the economy could absorb, leads to political polarization, social fragmentation, and even violence, as many become disgruntled with their dim prospects despite having attained a high level of education. He warned that the turbulent 1960s and 1970s could return, as having a massive young population with university degrees was one of the key reasons for the instability of the past. Members of Generation Z are anxious to pick majors that teach them marketable skills. According to the World Economic Forum, some 88% consider job preparation to be the point of college. 39% are aiming for a career in medicine or healthcare, 20% in the natural sciences, 18% in biology or biotechnology, and 17% in business. A 2018 Gallup poll of over 32,000 university students randomly selected from 43 schools across the United States found that just over half (53%) of them thought their chosen major would lead to gainful employment.
STEM students expressed the highest confidence (62%) while those in the liberal arts were the least confident (40%). Just over one in three thought they would learn the skills and knowledge needed to become successful in the workplace. Because jobs (that matched what one studied) were so difficult to find in the few years following the Great Recession, the value of getting a liberal arts degree and studying the humanities at university came into question, their ability to develop a well-rounded and broad-minded individual notwithstanding. While the number of students majoring in the humanities has fallen significantly, the numbers in science, technology, engineering, and mathematics, or STEM, have risen sharply. Furthermore, those who majored in the humanities and the liberal arts in the 2010s were the most likely to regret having done so, whereas those in STEM, especially computer science and engineering, were the least likely. Indeed, STEM workers tend to earn more than their non-STEM counterparts; the difference widens after the bachelor's degree. 54% of people in the life sciences have an advanced degree, making this group the most educated overall among STEM workers. While about half of STEM graduates work in non-STEM jobs, people with collegiate STEM training still tend to earn more, regardless of whether or not their job is STEM-related. About a quarter of American university students failed to graduate within six years in the late 2010s, and those who did faced diminishing wage premiums. Ever since it was introduced in the 1960s, affirmative action has been a controversial topic in the United States. In late June 2023, the Supreme Court of the United States ruled against race-based admissions. Subsequently, schools faced pressure to end legacy admissions as well. Health issues. General. A 2020 study of data from 1999 to 2015 suggests that children living with married parents tended to have lower rates of early-life mortality than those living with unmarried or single parents and non-parents. Puberty. Among girls from both Europe and the United States, the average age of the onset of puberty was around 13 in the early twenty-first century, down from about 16 a hundred years earlier. Early puberty is associated with a variety of mental health issues (such as anxiety and depression), early sexual activity, and substance abuse, among other problems. Furthermore, factors known for prompting mental health problems (early childhood stress, absent fathers, domestic conflict, and low socioeconomic status) are themselves linked to early pubertal onset. According to Dr. Shruthi Mahalingaiah, body-mass index is a strong predictor of precocious puberty. Possible causes of early puberty could be positive, namely improved nutrition, or negative, such as obesity, stress, trauma, and exposure to hormone-disrupting chemicals, air pollutants, and heavy metals. In the United States, African-American girls on average enter puberty first, followed by those of Hispanic, European, and Asian extraction, in that order. But African-American girls are less likely to face the negative effects of puberty than their counterparts of European descent. A 2019 meta-analysis and review of the research literature from all inhabited continents found that between 1977 and 2013, the age of pubertal onset among girls fell by an average of almost three months per decade, but with significant regional variations, ranging from 10.1 to 13.2 years in Africa to 8.8 to 10.3 years in the United States.
This investigation relies on measurements of thelarche (initiation of breast tissue development) using the Tanner scale rather than self-reported menarche (first menstruation) and MRI brain scans for signs of the hypothalamic-pituitary-gonadal axis being reactivated. Furthermore, there is evidence that sexual maturity and psychosocial maturity no longer coincide; twenty-first-century youths appear to be reaching the former before the latter. Neither adolescents nor societies are prepared for this mismatch. Physical. Data from the NCES showed that in the academic year 2018–19, 15% of students receiving special education under the Individuals with Disabilities Education Act were suffering from "other health impairments"—such as asthma, diabetes, epilepsy, heart problems, hemophilia, lead poisoning, leukemia, nephritis, rheumatic fever, sickle cell anemia, and tuberculosis. Vision. A 2015 study found that the frequency of nearsightedness has doubled in the United Kingdom within the last 50 years. Ophthalmologist Steve Schallhorn, chairman of the Optical Express International Medical Advisory Board, noted that research has pointed to a link between the regular use of handheld electronic devices and eyestrain. The American Optometric Association sounded the alarm in a similar vein. According to a spokeswoman, digital eyestrain, or computer vision syndrome, is "rampant, especially as we move toward smaller devices and the prominence of devices increase in our everyday lives." Symptoms include dry and irritated eyes, fatigue, eye strain, blurry vision, difficulty focusing, and headaches. However, the syndrome does not cause vision loss or any other permanent damage. In order to alleviate or prevent eyestrain, the Vision Council recommends that people limit screen time, take frequent breaks, adjust screen brightness, change the background from bright colors to gray, increase text sizes, and blink more often. Parents should not only limit their children's screen time but should also lead by example. Allergies. While food allergies have been observed by doctors since ancient times and virtually all foods can be allergens, research by the Mayo Clinic in Minnesota found that they have become increasingly common since the early 2000s. Today, one in twelve American children has a food allergy, with peanut allergy being the most prevalent type. Nut allergies, in general, have quadrupled and shellfish allergies have increased 40% between 2004 and 2019. In all, about 36% of American children have some kind of allergy. By comparison, this number among the Amish in Indiana is 7%. Allergies have also risen ominously in other Western countries. In general, the better developed the country, the higher the rates of allergies. Reasons for this remain poorly understood. One possible explanation, supported by the National Institute of Allergy and Infectious Diseases, is that parents keep their children "too clean for their own good." They recommend exposing newborn babies to a variety of potentially allergenic foods, such as peanut butter, before they reach the age of six months. According to this "hygiene hypothesis," such exposures give the infant's immune system some exercise, making it less likely to overreact. 
Evidence for this includes the fact that children living on a farm are consistently less likely to be allergic than their counterparts who are raised in the city, and that children born in a developed country to parents who immigrated from developing nations are more likely to be allergic than their parents are. Mental. A survey conducted in the fall of 2018 by the American Psychological Association revealed that Generation Z had the weakest mental health of any living generation; some 91% of this demographic cohort reported physical or emotional symptoms associated with stress. Some 54% of workers under the age of 23 said they felt stressed within the last month, compared to 40% for Millennials. The national average was 34%. Experts have not reached a consensus on what might be the cause of this spike in mental health issues. Some suggest it is because of the current state of the world while others argue it is due to increased willingness to discuss such topics. Perhaps both are at play. There is a growing body of evidence that there is a direct link between having access to social media at a young age and weak mental health. Across the United States, university students are besieging the offices of health service workers seeking mental health support. Indeed, Generation Z is more likely to report having mental health issues than any other living generation. According to "The Economist", while teenagers from wealthier households are less likely to have behavioral problems, mental health is an issue that affects every teen regardless of family background. Moreover, the number of university students reporting mental health issues has been rising since the 1950s, if not earlier. Today, one in five American adults suffers from a mental condition, according to the National Institute of Mental Health. Anxiety, depression, and suicidal ideation. A research paper published in the Journal of the American Medical Association (JAMA) in June 2019 found a marked increase in suicide rates among adolescents. In 2017, the suicide rate for people aged 15 to 19 was 11.8 per 100,000, the highest point since 2000, when it was 8 per 100,000. In 2017, 6,241 Americans aged 15 to 19 committed suicide, of whom 5,016 were male and 1,225 were female. A flaw in this study is that cause-of-death reports may occasionally be inaccurate. The Centers for Disease Control and Prevention (CDC) reported a 30% increase in suicide across all age groups in the United States between 2000 and 2016. There could be a variety of reasons for this. The authors of the JAMA paper suggest an increased willingness by families and coroners to label a death as a result of suicide, depression or opioid usage. Nadine Kaslow, a psychiatrist and behavioral scientist at the Emory University School of Medicine, who was not involved in the paper, pointed to the weakening of familial and other social bonds and the heavy use of modern communications technology, which exposes people to the risk of cyber-bullying. She noted that other studies have shown higher suicide rates, too, especially among adolescents and young adults. The number of American teenagers who suffered from the classic symptoms of depression rose 33% between 2010 and 2015. During the same period, the number of those aged 13 to 21 who committed suicide jumped 31%. Psychologist Jean Twenge and her colleagues found that this growth of mental health issues was not divided along the lines of socioeconomic class, race/ethnicity, or geographical location. 
Rather, it was associated with spending more time in front of a screen. In general, suicide risk factors—depression, contemplating, planning, and attempting suicide—increase significantly if the subject spends more than two to three hours online. In particular, those who spent five or more hours had their suicide risk factors increase by 71%. It is not clear, however, whether depression causes a teenager to spend more time online or the other way around. At the same time, teens who spent more time online were more likely to not have enough sleep, a major predictor of depression. Many teenagers told researchers they used a smartphone or a tablet right before bed, kept the device close, and used it as an alarm clock. But the blue light emitted by these devices, texting, and social networking are known for perturbing sleep. Besides mental problems like depression and anxiety, sleep deprivation is also linked to reduced performance in school and obesity. Parents can address the problem of sleep deprivation simply by imposing limits on screen time and buying simple alarm clocks. Sleep deprivation. Research from the American Academy of Pediatrics analyzing responses from the parents or caregivers of 49,050 children aged six to seventeen in the combined 2016–2017 National Survey of Children's Health revealed that only 47.6% of American children slept for nine hours on most days, meaning a significant number were sleep deprived. Compared with children who did not get enough sleep most nights, those who did were 44% more likely to be curious about new things, 33% more likely to finish their homework, 28% more likely to care about their academic performance, and 14% more likely to finish the tasks they started. The researchers identified the risk factors associated with sleep deprivation among children to be the low educational attainment of parents or caregivers, being from families living below the federal poverty line, higher digital media usage, more negative childhood experiences, and mental illnesses. American teenagers share a common habit of having their smartphones on at night, at the expense of the quality of their own sleep. Cognitive abilities. According to the National Center for Education Statistics (NCES), between the academic years 2011–12 and 2018–19, the number of students aged three to twenty-one receiving special education under the Individuals with Disabilities Education Act (IDEA) increased from 6.4 million to 7.1 million. Of these, one in three suffered from a specific learning disability, such as having more difficulty than usual with reading or understanding mathematics. Among students enrolled in public schools in that age group, the share receiving special education rose from 13% to 14% during the same time period. Amerindians (18%) and blacks (16%) were the most likely to receive special education while Pacific Islanders (11%) and Asians (7%) were the least likely. After specific learning disabilities, the most common types of learning disorders included speech and language impairment (19%), autism (11%), and developmental delay (7%). In a 2018 paper, cognitive scientists James R. Flynn and Michael Shayer presented evidence that from the 1990s until the 2010s, the observed gains in IQ during the twentieth century—commonly known as the Flynn effect—had either stagnated (as in the case of Australia, France, and the Netherlands), become mixed (in the German-speaking nations), or reversed (in the Nordic countries and the United Kingdom). 
This, however, was not the case in South Korea or the United States, as the U.S. continued its historic march towards higher IQ, a rate of 0.38 per decade, at least up until 2014 while South Korea saw its IQ scores growing at twice the average U.S. rate. While U.S. IQ scores continued to increase, creativity scores, as measured by the Torrance Test of Creative Thinking, were in decline between the 1990s and the late 2000s. Educational psychologist Kyung Hee Kim reached this conclusion after analyzing data samples of kindergartners to high-school students and adults in 1974, 1984, 1990, and 2008, a grand total of 272,599 individuals. Previously, U.S. educational success was attributed to the encouragement of creative thinking, something education reformers in China, Taiwan, South Korea, and Japan sought to replicate. But U.S. educators decided to go in the opposite direction, emphasizing standardization and test scores at the expense of creativity. On the parenting side, giving children little play time and letting them spend large amounts of time in front of a screen likely contributed to the trend. Creativity has real-life consequences, not just in the arts but also in academia and in life outcomes. Political views and participation. General trends. In 2018, Gallup conducted a survey of almost 14,000 Americans from all 50 states and the District of Columbia aged 18 and over on their political sympathies. They found that overall, younger adults tended to lean liberal while older adults tilted conservative. (See chart.) More specifically, groups with strong conservative leanings included the elderly, residents of the Midwest and the South, and people with some or no college education. Groups with strong liberal leanings were adults with advanced degrees, whereas those with moderate liberal leanings included younger adults (18 to 29 and 30 to 49), women, and residents of the East. Gallup found little variation by income group compared to the national average. However, these broad trends conceal a significant gender divide, with young women under 30 years of age being vocally left-leaning and young men being fiercely right-leaning on a variety of issues from immigration to sexual harassment. According to Gallup, the gap as of early 2024 was 30 percentage points. According to political scientists Roger Eatwell and Matthew Goodwin, early research suggests that more of the youngest of Americans eligible to vote in the late 2010s identified as conservatives than members of Generation X when they were teenagers during the 1980s, at 30%. Growing up during an era of significant economic hardship and rapid ethnic change, many feel anxious about the flow of immigrants into their country. As is the case with many European countries, empirical evidence poses real challenges to the popular argument that the surge of nationalism and populism is an ephemeral phenomenon due to 'angry white old men' who would inevitably be replaced by younger and more liberal voters; there is no guarantee that the West, in general, is on a one-way trip towards a liberal and progressive future. As of 2023, many members of Generation Z remained open to new political ideas and as such could be captured by either major political party of the United States. Many expected more substantive information on policies before casting their vote. Prior to the 2024 presidential election, the top issues for college students were healthcare reforms, the cost of education, and civil rights. 
Most do not care about Middle Eastern conflicts and the associated protests, but are more interested in lucrative careers after graduation. Because they spend so much time on social media networks, people below the age of 30 have little concern over online privacy and national security and many oppose restrictions or bans on popular platforms, such as TikTok. Moreover, for the Democratic Party, heavily dependent on the youth vote, allowing continued access to the platform and being present on it means ensuring continued support among this age group. Elections. Despite the hype surrounding the political engagement and record turnout among young voters, their overall voting power has actually declined. In round terms, the share of voters between the ages of 18 and 24 will fall from 13% in 2000 to 12% in 2020 while that of voters aged 65 and over will rise from 18% to 23% during the same period, according to Richard Fry of the Pew Research Center. A consistent trend in the U.K., the U.S., and many other countries is that older people are more likely to vote than their younger countrymen, and they tend to vote for more right-leaning (or conservative) candidates. According to Sean Simpson of Ipsos, people are more likely to vote when they have more at stake, such as children to raise, homes to maintain, and income taxes to pay. A survey conducted before the 2020 U.S. presidential election by Barnes & Noble Education on 1,500 college students nationwide found that just one third of respondents believed who they vote for is "private information" and three quarters of them found it difficult to find unbiased news sources. For the first wave of Generation Z, 2018 was the first midterm election in their adult lives, where they cast 4% of the votes. A Reuters-Ipsos survey of 16,000 registered voters aged 18 to 34 conducted in the first three months of 2018 showed that support for the Democratic Party among such voters fell by nine percent between 2016 and 2018 and that an increasing number favored the Republican Party's approach to the economy. This is despite the fact that almost two-thirds of young voters disapproved of the performance of Republican President Donald J. Trump. Although American voters below the age of 30 helped Joe Biden win the 2020 U.S. presidential election, their support for him fell quickly afterwards. By late 2021, only 29% of adults in this age group approved of his performance as president whereas 50% disapproved, a gap of 21 points, the largest of all age groups. In the 2022 midterm election, voters below the age of 30 were the only major age group supporting the Democratic Party, and their numbers were large enough to prevent a 'red wave'. Trust in the institutions. In 2019, the Pew Research Center interviewed over 2,000 Americans aged 18 and over on their views of various components of the federal government. They found that 54% of the people between the ages of 18 and 29 wanted larger government compared to 43% who preferred smaller government and fewer services. Older people were more likely to pick the second option. 2018 polls conducted by the Pew Research Center found that 70% of Americans aged 13 to 17 wanted the government to play a more active role in solving their problems. A 2022 poll by Pew showed that overall, medical experts, the military, and scientists were among the most trusted groups in the United States. 
But while a majority of Americans believed it was important for their country to remain a global leader in science, people aged 18 to 29 were somewhat less inclined to think so compared to older cohorts and were slightly more optimistic about the standing of U.S. science on the international stage. Gun ownership. The March for Our Lives, a protest taking place in the aftermath of the Stoneman Douglas High School shooting in 2018, was described by various media outlets as being led by students and young people. Some even described it as the political "awakening" of Generation Z or that these protesters were "the voice of a generation on gun control." While this massive protest was indeed organized by the survivors of the Parkland shooting, albeit with the assistance of well-resourced and older benefactors, the reality was a little more complicated. According to a field survey by "The Washington Post" interviewing every fifth person at the protest, only ten percent of the participants were 18 years of age or younger. Meanwhile, the adult participants of the protest had an average age of just under 49. Polls conducted by Gallup and the Pew Research Center found that support for stricter gun laws among people aged 18 to 29 and 18 to 36, respectively, is statistically no different from that of the general population. According to Gallup, 57% of Americans are in favor of stronger gun control legislation. In a 2017 poll, Pew found that among the age group 18 to 29, 27% personally owned a gun and 16% lived with a gun owner, for a total of 43% living in a household with at least one gun. Nationwide, a similar percentage of American adults lived in a household with a gun. (See chart.) According to the CDC, the leading causes of death in the United States in 2016 were cancer, heart disease, accidents, chronic lower respiratory diseases, stroke, Alzheimer's disease, diabetes, influenza and pneumonia, kidney diseases, and suicide. Economics and the environment. Harvard University's Institute of Politics Youth Poll from 2019 found that support for single-payer universal healthcare and tuition-free college dropped, down 8% to 47% and down 5% to 51%, respectively, if cost estimates were provided. 2018 surveys of teenagers 13 to 17 by the Pew Research Center revealed that 54% of Generation Z believed that climate change is real and is due to human activities while only 10% rejected the scientific consensus on climate change. According to a 2019 CBS News poll on 2,143 U.S. residents, 72% of Americans 18 to 44 years of age believed that it is a matter of personal responsibility to tackle climate change while 61% of older Americans did the same. In addition, 42% of American adults under 45 years old thought that the U.S. could realistically transition to 100% renewable energy by 2050 while 29% deemed it unrealistic and 29% were unsure. Those numbers for older Americans are 34%, 40%, and 25%, respectively. Differences in opinion might be due to education as younger Americans are more likely to have been taught about climate change in schools than their elders. As of 2019, only 17% of electricity in the U.S. is generated from renewable energy, of which 7% is from hydroelectric dams, 6% from wind turbines, and 1% from solar panels. There are no rivers for new dams. Meanwhile, nuclear power plants generate about 20%, but their number is declining as they are being deactivated and not replaced. 
According to the Hispanic Heritage Foundation, about eight out of ten members of Generation Z identify as "fiscal conservatives." In 2018, the International Federation of Accountants released a report on a survey of 3,388 individuals aged 18 to 23 hailing from G20 countries, with a sample size of 150 to 300 per country. They found that members of Generation Z prefer a nationalist to a globalist approach to public policy by 51% to 32%, a margin of 19%. In the United States, 52% of Generation Z wanted their government to focus more on national problems, a 24% margin. Internationally, nationalism was strongest in China (by a 44% margin), India (30%), South Africa (37%), and Russia (32%), while support for globalism was strongest in France (20% margin) and Germany (3%). In general, for members of Generation Z, the top three priorities for public policy are the stability of the national economy, the quality of education, and the availability of jobs; the bottom issues, on the other hand, were addressing income and wealth inequality, making regulations smarter and more effective, and improving the effectiveness of international taxation. Moreover, healthcare is a top priority for Generation Z in Canada, France, Germany, and the United States. A 2018 Gallup poll found that people aged 18 to 29 have a more favorable view of socialism than capitalism, 51% to 45%. Nationally, 56% of Americans prefer capitalism compared to 37% who favor socialism. Older Americans consistently prefer capitalism to socialism. Whether the current attitudes of Millennials and Generation Z on capitalism and socialism will persist or dissipate as they grow older remains to be seen. Abortion, sexuality, and family values. In 2016, the Varkey Foundation and Populus conducted an international study examining the attitudes of 20,000 people aged 15 to 21 in twenty countries. They found that 66% of people aged 15 to 21 favored legal abortion. But there was significant variation among the countries surveyed. The United States stood at 63%, below France (84%) and the United Kingdom (80%), but ahead of Brazil (45%) and Nigeria (24%). Gallup polls conducted in 2019 revealed that 62% of people aged 18 to 29—older members of Generation Z and younger Millennials—supported giving women access to abortion while 33% were opposed. In general, the older someone was, the less likely they were to support abortion. 56% of people aged 65 or over did not approve of abortion compared to 37% who did. (See chart to the right.) Gallup found in 2018 that nationwide, Americans are split on the issue of abortion, with equal numbers of people considering themselves "pro-life" or "pro-choice", 48%. In any case, many participants in the annual March for Life in Washington, D.C. in the early 2020s are members of Generation Z. The same international survey also asked about people's viewpoints on moral questions regarding sex and gender. Overall 89% supported sexual equality, with the U.S. (90%) standing between Canada and China (both 94%) and Nigeria (68%). 74% favored recognizing transgender rights, but with large national differences, from an overwhelming majority of 83% in Canada to a bare majority of 57% in Nigeria. The U.S. was again somewhere in the middle at 75%. 63% approved of same-sex marriage. There were again huge variations among countries. 81% of young Germans and 80% of young Canadians agreed that same-sex couples should be allowed to marry, compared to only 33% of young Turks and 16% of young Nigerians who did. 
As before, 71% of Americans approved, putting the country somewhere in the middle. A 2018 poll conducted by Harris on behalf of the LGBT advocacy group GLAAD found that despite being frequently described as the most tolerant segment of society, people aged 18 to 34—most Millennials and the oldest members of Generation Z—have become less accepting of LGBT individuals compared to previous years. In 2016, 63% of Americans in that age group said they felt comfortable interacting with members of the LGBT community; that number dropped to 53% in 2017 and then to 45% in 2018. On top of that, more people reported discomfort learning that a family member was LGBT (from 29% in 2017 to 36% in 2018), having a child learning LGBT history (30% to 39%), or having an LGBT doctor (27% to 34%). (See right.) Harris found that young women were driving this development; their overall comfort levels dived from 64% in 2017 to 52% in 2018. In general, the fall of comfort levels was the steepest among people aged 18 to 34 between 2016 and 2018. (Seniors aged 72 or above became more tolerant of LGBT doctors or having their (grand) children taking LGBT history lessons during the same period, albeit with a bump in discomfort levels in 2017.) Results from this Harris poll were released on the 50th anniversary of the riots that broke out at the Stonewall Inn, New York City, in June 1969, thought to be the start of the LGBT rights movement. At that time, homosexuality was considered a mental illness or a crime in many U.S. states. 2018 surveys of teenagers 13 to 17 and adults aged 18 or over conducted by the Pew Research Center found that Generation Z has broadly similar views to the Millennials on various political and social issues. 67% were indifferent towards pre-nuptial cohabitation. 49% considered single motherhood to be neither a positive nor a negative for society. 62% saw increased ethnic or racial diversity as good for society. The same was true of 48% for same-sex marriage and 53% for interracial marriage. In most cases, Generation Z and the Millennials tended to hold different views from the Silent Generation, while the Baby Boomers and Generation X were in between. In the case of financial responsibility in a two-parent household, though, majorities from across the generations answered that it should be shared, with 58% for the Silent Generation, 73% for the Baby Boomers, 78% for Generation X, and 79% for both the Millennials and Generation Z. Across all the generations surveyed, at least 84% thought that both parents ought to be responsible for rearing children. About 13% of Generation Z thought that mothers should be the primary caretaker of children, with similar percentages for the other demographic cohorts. Very few thought that fathers should be the ones mainly responsible for taking care of children. Pew, however, noted that the views of this demographic cohort could change in the future as they age and due to new events. Even so, they could play a significant role in the shaping of the political landscape. A 2023 poll by "The Wall Street Journal" and NORC at the University of Chicago found that about 23% of adults below the age of 30 thought that having children was important, 9 percentage points below those aged 65 and above. Immigration. In a 2016 survey conducted by the Varkey Foundation and Populus, the question of whether or not those 15 to 21 favored legal migration received mixed responses. 
Overall, 31% believed their governments should make it easier for immigrants to work and live legally in their countries while 23% said it should be more difficult, a margin of 8%. In the United States, that margin of support was 16%. (See chart above.) According to Gallup, Americans aged 18 to 34 are more likely to view immigration as a "good thing" than their elders. Foreign policy. In 2019, Harvard University's Institute of Politics Youth Poll asked voters aged 18 to 29 – younger Millennials and the first wave of Generation Z – what they would like to be priorities for U.S. foreign policy. They found that the top issues for these voters were countering terrorism and protecting human rights (both 39%), and protecting the environment (34%). Preventing nuclear proliferation and defending U.S. allies were not as important to young American voters. The same poll also found that younger Americans are more positive about international free trade agreements than their elders. Another poll conducted at around the same time by the Center for American Progress found that only 18% of Americans supported liberal internationalism, which has been part of U.S. foreign policy since the time of President Franklin D. Roosevelt, while young Americans were even less likely to support it, contributing to a return towards isolationism, a historical norm dating back to the Founding Fathers. A 2023 poll by "The Economist" and YouGov showed that while people of all ages were generally quite hostile towards Russia, Americans aged 18 to 44 were markedly less likely to view China as an enemy. This might be because of lingering Cold War sentiments from older people. This is broadly consistent with another poll in the same year by the Pew Research Center. Religious tendencies. Globally, religion is growing except in Western Europe and North America. In the United States, Christianity remains the single most popular religion, with three-quarters of Americans adhering to it as of 2017. However, the nation's non-believers continued to grow in numbers in the early 2020s, a trend driven by people between the ages of 18 and 29, 32% of whom said they did not believe in God. Members of Generation Z are more likely to start questioning their parents' religions before the age of 18 than previous generations and those who leave tend to not return. Moreover, a majority of Generation Z disagrees that it is necessary to raise children in a religious household. According to a poll jointly conducted by "The Wall Street Journal" and the National Opinion Research Center (NORC) at the University of Chicago in 2023, only 31% of people aged 18 to 29 consider religion to be important in their lives, compared to 55% of those aged 65 and over. Nevertheless, American youths are fairly religious by global standards. In 2016, Barna found that 58% of teenagers agreed with the statement, "Many religions can lead to eternal life; there is no 'one true religion'." For adults, this number was 62%. About two-thirds of teenagers thought that "a person can be wrong about something that they sincerely believe in" whereas adults were much more likely to agree with that statement, especially the Baby Boomers (85%). 46% of adolescents require factual evidence before believing in something, on par with Millennials. Generation Z was more likely to consider the Bible to be at odds with science than older cohorts except Millennials (See chart). 
The same Barna survey revealed that the percentage of atheists and agnostics was 21% among Generation Z, higher than 15% of Millennials, 13% of Generation X, and 9% of Baby Boomers. Meanwhile, 59% of Generation Z were Christians, compared to 65% of Millennials, 65% of Generation X, and 75% of Baby Boomers. Among churchgoing teenagers, perception of this establishment tended to be overwhelmingly positive. 82% believed the church was relevant and helped them live a meaningful life. 77% thought they could be themselves at church, and 63% deemed the church to be tolerant of different beliefs. Only 27% considered the church to be unsafe for expressing doubts. 24% argued that religion and religious thought were shallow, and 17% thought it was too exclusive. When asked what their biggest barriers to faith were, irreligious members of Generation Z cited what they perceived as internal contradictions of the religion and its believers. According to the Pew Research Center, majorities of school-attending teenagers wear religious symbols and attire, pray before lunch, and invite their peers to join a religious club. Girls are more likely to discuss religion with their friends than boys. An overwhelming majority understood that teachers are "not" allowed to lead classroom prayers (82%). In contrast, 62% answered incorrectly that teachers may not read from the Bible as literature. While bullying has been acknowledged to be a serious problem in American public schools, students are seldom harassed for their religious views. Risky behaviors. General trends. Generation Z is generally more risk-averse than previous generations. In 2013, 66% of teenagers (older members of Generation Z) had tried alcohol, down from 82% in 1991. Also, in 2013, 8% of teenagers never or rarely wore a seat belt when riding in a car with someone else, as opposed to 26% in 1991. Research from the Annie E. Casey Foundation conducted in 2016 found Generation Z youth had lower teen pregnancy rates, less substance abuse, and higher on-time high school graduation rates compared with Millennials. The researchers compared teens from 2008 and 2014 and found a 40% drop in teen pregnancy, a 38% drop in drug and alcohol abuse, and a 28% drop in the percentage of teens who did not graduate on time from high school. Three-quarters of American twelfth-graders believed their peers disapproved of binge drinking. Indeed, members of Generation Z tend to be more worried about mental health issues and getting good grades than unplanned pregnancies or binge drinking. Cigarette-smoking and substance abuse. The share of American teenagers who smoked cigarettes dropped to 5.8% in 2019 from 15.8% in 2011. Indeed, this decline can be traced back to the 1990s, and has been induced by government regulations and public opinion. For comparison, that rate for adults is 14% and falling. On the other hand, electronic cigarette smoking has risen dramatically, from 1.5% in 2011 to 27.5% in 2019 (see chart), leading the federal government to consider e-cigarette smoking an epidemic. Although e-cigarettes contain nicotine, an ingredient that makes tobacco addictive, they are generally considered less harmful than their carcinogenic traditional counterparts, which are also linked to stroke and heart disease, among other deadly conditions. According to the Centers for Disease Control and Prevention (CDC), nicotine consumption has deleterious effects on the development of the brain and can affect learning, memory, and mood among young people. 
As of 2019, there is no evidence linking the availability of electronic cigarettes to a decline in traditional smoking among youths. However, in an anti-vaping ad, the Food and Drug Administration stated, "Teens who vape are more likely to start smoking cigarettes." Public opinion has turned against electronic cigarettes and various state and local governments are seeking to restrict its use, especially as kid-friendly flavors are on sale. Bloomberg reported in 2019 that members of Generation Z were twice as likely as an average American to consume cannabis. About 1% of the number of legal marijuana consumers came from this demographic cohort, and that number tripled in 2019. Generation Z is the first to be born into a time when the legalization of marijuana at the federal level is being seriously considered. As of 2019, cannabis is legal in 33 U.S. states as well as in Canada and Uruguay. Even though Generation Z may not think of cannabis as anything more than a controversial issue, there is mounting concern on its effects on human health. A survey of literature reveals that marijuana usage is linked to, among other things, impaired driving, higher risks of stroke and testicular cancer, memory loss, and certain mental illnesses, such as psychosis. Pregnant women, teenagers, and people prone to mental illnesses are especially vulnerable. Compared to those who do not use cannabis or those who start after they reach 16 years, people who start before that age suffer from reduced cognitive functioning (including planning and decision-making skills), and higher levels of impulsivity. About one in ten marijuana users developed a substance use disorder, meaning they continue to use it even though it causes problems in their lives, and those who use it before the age of 18 are more likely to suffer from it. Marijuana use in the United States is three times above the global average, but in line with other Western democracies. Forty-four percent of American high-school seniors have tried the drug at least once, and the typical age of first-use is 16, similar to the typical age of first-use for alcohol but lower than the first-use age for other illicit drugs. In a 2019 study, Jean Twenge and her collaborators examined surveys from the National Survey on Drug Use and Health of 200,000 adolescents aged 13 to 17 from 2005 to 2017 and 400,000 adults aged 18 and over from 2008 to 2017. They found that while there was a marked increase in the number of teenagers and young adults reporting mental illness, there was no corresponding development among those of 26 years and up. Early sexual intercourse and adolescent pregnancy. Historically, the birth rate of teenagers peaked in the late 1950s and early 1960s. However, at that time, most teenage parents were married. In the early twenty-first century, nine in ten births to the age group 15–19 are to unmarried mothers. Social norms have changed; it is now not unusual for teenagers to delay or avoid sexual intercourse altogether. Younger teenagers are more likely to practice abstinence than their older counterparts. A report published by the Centers for Disease Control and Prevention (CDC) in early 2018 found that the share of high school students who have had sex fell from 47% in 2005 down to 41% in 2015, with the most dramatic drop taking place between 2013 and 2015. On top of that, among never-married teens who have had sex, overwhelming majorities reported they used contraception the first time they did it. There are a few reasons for this. 
First, Millennials and Generation Z are more focused on the consequences of sex than their predecessors were. Second, there has been growing concern over unwanted sexual advances, especially in the wake of the Me-too movement. Numerous individuals have lost their jobs or been expelled from school over allegations of sexual assault. Writing for "The Spectator", Douglas Murray dubbed this the "sexual counter-revolution." Third, as a consequence of the precarious contemporary economy, young adults today are more likely to be living with their parents rather than on their own, with a romantic partner, or a spouse. A 2016 analysis by the CDC discovered that teenage birthrates nosedived between 1991, when they reached a crisis point, and 2014, when they dropped by 60%, a record low. The collapse of birthrates among blacks and (non-white) Hispanics, down 50%, was largely responsible for this development. However, their birthrates remained, on average, twice as high as those of their white counterparts. The birth rates of teenage Asians and Pacific Islanders were even lower, about half that of whites. In a 2014 paper, economists Melissa S. Kearney and Phillip B. Levine, both fellows of the Brookings Institution, were able to show that popular TV programs depicting the reality of teenage parenthood, such as MTV's "16 and Pregnant" and its "Teen Mom" sequels, have played a significant role in the reduction of teenage childbearing. While the CDC did not address the question of abortion, researchers from the Guttmacher Institute were able to show that the fall in teenage birthrates is likely not due to terminated pregnancies. The number of abortions remained the same or decreased in all U.S. states except for Vermont. This contradicts the historically negative correlation between birthrates and abortions. Modern youths also have better access to contraception than did their predecessors when they were at the same age. In addition to the daily birth control pill, injectable and implantable methods are available, and they last longer. A CDC analysis found that the rates of teens using a long-acting and reversible method of contraception, such as an intrauterine device (IUD), jumped from 0.4% in 2005 to 7.1% in 2013. The teen birthrate continued to fall in the late 2010s, down to 17.4 births per 1,000 in 2018. In 2022, it fell to 13.5, the lowest on record. Social trends. Upbringing. The Pew Research Center's analysis of data from the American Community Survey and the Decennial Census revealed that the number of children living outside of the traditional ideal of parents marrying young and staying together till death has risen precipitously between the mid-to-late 20th century and the early 21st century. In 2013, only 43% of children lived with married parents in their first marriage, down from 73% in 1960. Meanwhile, the share of children living with a single parent was 34% in 2013, up from 9% in 1960. The proportion of children not living with their parents barely changed, standing at 5% in 2013; most of them lived with their grandparents. In 2013, 15% of American children lived with married parents at least one of whom had remarried, with little change from previous decades. In the early twenty-first century, American parents are less keen on seeing children upholding the same religious or political beliefs or following the same traditions. They are also less likely to say they would like their children to get married and have children. 
Rather, they emphasize ethical behavior, tolerance, generosity, and financial independence. Parents from wealthier backgrounds are less likely to have children out of wedlock and more likely to stay married, with desirable outcomes for their children, including better social and cognitive development and educational attainment. By contrast, children born into the middle class or the lower class are less likely to have married parents than before. They also tend to have younger parents who did not intend to have them and more siblings. Upper middle-class and wealthy American couples living in the cities tend to have fewer children and to invest heavily in their children in the form of breastfeeding for at least a year, giving them premium healthcare plans, sending them to private schools, and letting them eat organic foods. Said parents also hire nannies and housekeepers to reduce the time they spend doing house chores so that they can spend more time on culturally and educationally enriching activities with their children. In fact, the amount of time parents spend with their children has gone up since the mid-1960s, especially among educated couples, even though mothers of the 2010s are more likely to participate in the work force than in the 1960s. These separate trends lead children toward divergent future prospects. Having the right family background facilitates the development of human capital. An early start to the accumulation of cultural capital helps children stand out from the competition as they mature, making them more likely to be admitted to prestigious universities. Historian Tara Westover calls this the "great sorting of America's youth." Time-spending habits and leisure. By analyzing data collected by the Bureau of Labor Statistics, the Pew Research Center found that Americans who were aged 15–17 in the years 2014–17 spent on average 16 more minutes doing homework and 22 more minutes asleep each day, compared to their predecessors one decade prior. However, other surveys suggest that teens are getting too few hours of sleep each day. At the same time, they spent an average of 23 minutes less on paid jobs, and 16 minutes less on socializing each day, compared to their counterparts ten years ago. Meanwhile, the amount of time spent on sports, clothes shopping, and reading for pleasure has not changed. In general, Generation Z is less likely to engage in "fringe" behavior. As is the case with previous generations, there were differences in how teenage boys and girls spent their free time. Boys generally spent more time on leisure activities, such as playing sports and on their electronic devices. Girls, on the other hand, spent more time on homework, housework, and activities related to appearance. Girls also spent more time on volunteering and on unpaid care work than boys. 35% of girls and 23% of boys said they felt pressured to look good. Members of Generation Z tend to be lonelier than ever before. Despite the technological proficiency they possess, a clear majority, 72%, generally prefers person-to-person contact as opposed to online interaction. In the early 2020s, chess became a popular pastime, even an obsession, for many schoolchildren in the United States, thanks to the influence of the Netflix miniseries "The Queen's Gambit" (2020) and many online personalities, such as International Master Levy Rozman (Gotham Chess). Fashions of courtship. 
Even though smartphone applications such as Tinder allow for easy hook-ups and one-night stands, Millennials and Generation Z are quite serious and cautious when it comes to long-term romantic relationships, according to Justin Garcia, who studies sex at the Kinsey Institute of Indiana University. This is in contrast to generations past, who married earlier and after shorter periods of courtship. Data from the 2019 General Social Survey revealed that 51% of Americans aged 18 to 24 had no steady partner, higher than other cohorts. Moreover, this number has grown in recent years. Sexual orientation and gender identity. In February 2021, Gallup reported that 15.9% of American adults in Generation Z (those born between 1997 and 2002) identified as LGBT. Of those adults, 11.5% were bisexual while 2% said they were lesbian, gay, or transgender. Overall, a greater share of American adults in Generation Z identifies as LGBT than those in previous generational cohorts. Use of information and communications technologies (ICT). Use of ICT in general. Following the commercialization of the Internet in the 1990s, members of Generation Z have acquired a "digital bond to the Internet" since birth. With modern electronic telecommunications technology becoming more compact and affordable, the popularity of smartphones in the United States has grown exponentially. According to the Pew Research Center, by the 2010s, a majority of American teenagers had at least a basic phone if not a smartphone. The fact that an increasing majority own a cell phone has become one of the defining traits of this generation. About one quarter of teens are almost constantly online and 80% feel distressed if separated from their electronic gadgets. Generation Z spends on average six hours each day on the Internet, much of it playing video games. That much of Generation Z is growing up with constant access to Internet-enabled devices has undermined parental authority and control, prompting concerns over the sort of information children are exposed to while surfing the World Wide Web. Digital literacy. Despite being labeled as 'digital natives', the 2018 International Computer and Information Literacy Study (ICILS), conducted on 42,000 eighth-graders (or equivalents) from 14 countries and educational systems, found that only two percent of these people were sufficiently proficient with information devices to justify that description, and only 19% could work independently with computers to gather information and to manage their work. ICILS assesses students on two main categories: Computer and Information Literacy (CIL), and Computational Thinking (CT). For CIL, there are four levels, one to four, with Level 4 being the highest. Although at least 80% of students from most countries tested reached Level 1, only two percent on average reached Level 4. In the U.S., 90% reached Level 1, 66% Level 2, 25% Level 3, and 2% Level 4. CT scores are divided into three regions: the Upper, Middle, and Lower Regions. International averages for these were 18%, 50%, and 32%, respectively. U.S. results were 20%, 45%, and 35%, respectively. In general, female eighth-graders outperformed their male counterparts in CIL by an international average of 18 points but were narrowly outclassed by their male counterparts in CT. (Narrow gaps made estimates of averages have higher coefficients of variation.) 
In the United States, where the computer-based tests were administered by the National Center for Education Statistics, although eighth-graders scored above the international average on CIL, being behind only students from Denmark, Moscow, South Korea, and Finland, they were about average on CT. Among American eighth-graders, 72% said they searched for information on the Internet at least once a week or every school day, and 65% reported they were autodidactic information finders on the Internet. As such, the role of Generation Z in the future "digital economy" remains uncertain as they still lack the skills they need to join the workforce. At least initially, they struggle with familiar office items, such as a printer or a scanner. Use of social media networks. Adults aged 18 to 24 of the late 2010s stood out in their usage of social media, with an overwhelming majority being on at least one platform. YouTube was the most popular of all social media sites. Members of Generation Z are more likely to "follow" others on social media than "share" and use different types of social media for different purposes. Very few expressed concern about third parties being able to access their data as modern teenagers share more personal information more often compared to previous generations. Majorities have uploaded photographs of themselves, stated their hobbies or interests, given their school names, posted their locations, and revealed their relationship statuses. On the other hand, only a minority of Generation Z engaged in political conversations on social media networks. While teens may dislike certain aspects of Facebook, such as excessive sharing, they continue to use it because participation is important in terms of socializing with friends and peers. On the other hand, Twitter is quickly gaining in popularity. Teens typically keep their Facebook accounts private and make their Twitter accounts public. This is partly because of the increasing number of adults on the former. Speed and reliability are important factors in members of Generation Z's choice of social networking platform. This need for quick communication is apparent in the popularity of apps like Vine or Snapchat and the prevalent use of emojis. By the early 2020s, TikTok has become one of the most popular social networks among teenagers and young adults, so much so that they are willing to ignore their own government's concerns over issues of user privacy and national security, as well as a possible nationwide ban. A significant number of people from Generation Z have made their debut in social media before their exit from the womb, as they are the subject of their parents' social media posts. As they grow older, they do have opinions about pictures or videos of them being posted online. It is a tug-of-war between a child's privacy and a parent's pride or desire to share. While some parents do ask for their children's permission before posting, at least some of the time, others simply disregard what their children think. A 2010 report by cybersecurity firm AVG stated that 92% of American children under the age of two had a digital footprint and one in three had their information and photographs posted online within weeks after they were born. One of the reasons why children want greater control over their image online is that a college admissions officer or a prospective recruiter might look them up on the Internet. 
Indeed, a clear majority of employers check social media accounts during the hiring process and factor what they see into their recruitment decision. As a result, members of Generation Z are careful to make themselves presentable to potential employers with their social media accounts. Not only do they take advantage of the various tools that allow them to control who sees what and are more cautious about what they post, they also try to cultivate a "personal brand" online. According to the Pew Research Center, 57% refrain from posting something if they think it might "reflect badly on them in the future." It is quite common for young people to have an alternate Instagram account, so much so that there is a name for it, "finstas," or "fake Instagram accounts." There are growing concerns and evidence on the negative impact of social media networks on mental health. By nature, these platforms encourage social comparison and competition, even if shared photographs might have been digitally manipulated. Teenagers and young adults tend to have worse body-image issues the more time they spend on social media, a trend most pronounced among girls. Teenagers who spend so much time on these websites have correspondingly less time for in-person interactions with friends and family and schoolwork, and they are more likely to be exposed to misinformation and inappropriate content. In fact, pornography is reaching an increasingly large youth audience on social media. As online social networks expanded, the amount of toxic content generated mostly by young people also grew. Moreover, social media networks can be addictive, stimulating the brain in the same manner as gambling and substance abuse do. Nevertheless, social media users' ability to control impulsive desires was less impaired than that of gamblers or drug addicts. Hence, an obvious solution to this problem is to reduce screen time. Instagram users were more likely to report unhappiness than users of any other social media platform, while FaceTime had the highest rate of happiness. Sean Parker, the founding president of Facebook, admitted that his company was "exploiting a vulnerability in human psychology." In response, legislation has been proposed or enacted at various levels of the U.S. government and some school boards to tackle the issue. There are also concerns that young people get drawn into online communities that promote politically extreme or radical beliefs, and spread incendiary statements or misinformation, even if they, at an intellectual level, realize that the algorithms of social media promote any materials that garner attention, especially negative ones. This could distort their outlooks and further exacerbate political polarization. Online dating and romance. While technology has added new ways for people to communicate with a romantic interest or partner, some traditions survived. Only 6% of teen boys waited to be asked out compared to 47% of teen girls. 69% of boys asked someone out in person, compared to 45% of girls, and 27% of boys did by text messaging, compared to 25% of girls. Social media networks proved useful in helping teenage lovers strengthen their ties; this was especially true for boys. On the other hand, although boys and girls liked to use the same means of communications, texting was by far the most popular method. Girls were much more likely than boys to text their lovers daily, 79% compared to 66%. 
A press release by the social media company Tinder showed that the age group 18 to 24 became the majority of users on their platform in 2019. The company boasted 7.86 million users in the United States that year. The most popular topics of discussion were in the fields of entertainment (especially music, film, and television) and politics (especially the key words "climate change," "social justice," "the environment," and "gun control"). For comparison, Millennials were three times as likely to mention their travels in their autobiographical notes. According to Tinder, this was because users were trying to find like-minded users. Tinder also noted that the surge in the number of matches in June might be due to the end of the school year.
[ { "math_id": 0, "text": "N = 1,021,209" } ]
https://en.wikipedia.org/wiki?curid=58203815
58205
Vector processor
Computer processor which works on arrays of several numbers at once In computing, a vector processor or array processor is a central processing unit (CPU) that implements an instruction set where its instructions are designed to operate efficiently and effectively on large one-dimensional arrays of data called "vectors". This is in contrast to scalar processors, whose instructions operate on single data items only, and in contrast to some of those same scalar processors having additional single instruction, multiple data (SIMD) or SIMD within a register (SWAR) arithmetic units. Vector processors can greatly improve performance on certain workloads, notably numerical simulation and similar tasks. Vector processing techniques also operate in video-game console hardware and in graphics accelerators. Vector machines appeared in the early 1970s and dominated supercomputer design through the 1970s into the 1990s, notably the various Cray platforms. The rapid fall in the price-to-performance ratio of conventional microprocessor designs led to a decline in vector supercomputers during the 1990s. History. Early research and development. Vector processing development began in the early 1960s at the Westinghouse Electric Corporation in their "Solomon" project. Solomon's goal was to dramatically increase math performance by using a large number of simple coprocessors under the control of a single master central processing unit (CPU). The CPU fed a single common instruction to all of the arithmetic logic units (ALUs), one per cycle, but with a different data point for each one to work on. This allowed the Solomon machine to apply a single algorithm to a large data set, fed in the form of an array. In 1962, Westinghouse cancelled the project, but the effort was restarted by the University of Illinois at Urbana–Champaign as the ILLIAC IV. Their version of the design originally called for a 1 GFLOPS machine with 256 ALUs, but, when it was finally delivered in 1972, it had only 64 ALUs and could reach only 100 to 150 MFLOPS. Nevertheless, it showed that the basic concept was sound, and, when used on data-intensive applications, such as computational fluid dynamics, the ILLIAC was the fastest machine in the world. The ILLIAC approach of using separate ALUs for each data element is not common to later designs, and is often referred to under a separate category, massively parallel computing. Around this time Flynn categorized this type of processing as an early form of single instruction, multiple threads (SIMT). International Computers Limited sought to avoid many of the difficulties with the ILLIAC concept with its own Distributed Array Processor (DAP) design, categorising the ILLIAC and DAP as cellular array processors that potentially offered substantial performance benefits over conventional vector processor designs such as the CDC STAR-100 and Cray 1. Computer for operations with functions. A computer for operations with functions was presented and developed by Kartsev in 1967. Supercomputers. The first vector supercomputers were the Control Data Corporation STAR-100 and Texas Instruments Advanced Scientific Computer (ASC), which were introduced in 1974 and 1972, respectively. The basic ASC (i.e., "one pipe") ALU used a pipeline architecture that supported both scalar and vector computations, with peak performance reaching approximately 20 MFLOPS, readily achieved when processing long vectors. 
Expanded ALU configurations supported "two pipes" or "four pipes" with a corresponding 2X or 4X performance gain. Memory bandwidth was sufficient to support these expanded modes. The STAR-100 was otherwise slower than CDC's own supercomputers like the CDC 7600, but at data-related tasks they could keep up while being much smaller and less expensive. However the machine also took considerable time decoding the vector instructions and getting ready to run the process, so it required very specific data sets to work on before it actually sped anything up. The vector technique was first fully exploited in 1976 by the famous Cray-1. Instead of leaving the data in memory like the STAR-100 and ASC, the Cray design had eight vector registers, which held sixty-four 64-bit words each. The vector instructions were applied between registers, which is much faster than talking to main memory. Whereas the STAR-100 would apply a single operation across a long vector in memory and then move on to the next operation, the Cray design would load a smaller section of the vector into registers and then apply as many operations as it could to that data, thereby avoiding many of the much slower memory access operations. The Cray design used pipeline parallelism to implement vector instructions rather than multiple ALUs. In addition, the design had completely separate pipelines for different instructions, for example, addition/subtraction was implemented in different hardware than multiplication. This allowed a batch of vector instructions to be pipelined into each of the ALU subunits, a technique they called "vector chaining". The Cray-1 normally had a performance of about 80 MFLOPS, but with up to three chains running it could peak at 240 MFLOPS and averaged around 150 – far faster than any machine of the era. Other examples followed. Control Data Corporation tried to re-enter the high-end market again with its ETA-10 machine, but it sold poorly and they took that as an opportunity to leave the supercomputing field entirely. In the early and mid-1980s Japanese companies (Fujitsu, Hitachi and Nippon Electric Corporation (NEC) introduced register-based vector machines similar to the Cray-1, typically being slightly faster and much smaller. Oregon-based Floating Point Systems (FPS) built add-on array processors for minicomputers, later building their own minisupercomputers. Throughout, Cray continued to be the performance leader, continually beating the competition with a series of machines that led to the Cray-2, Cray X-MP and Cray Y-MP. Since then, the supercomputer market has focused much more on massively parallel processing rather than better implementations of vector processors. However, recognising the benefits of vector processing, IBM developed Virtual Vector Architecture for use in supercomputers coupling several scalar processors to act as a vector processor. Although vector supercomputers resembling the Cray-1 are less popular these days, NEC has continued to make this type of computer up to the present day with their SX series of computers. Most recently, the SX-Aurora TSUBASA places the processor and either 24 or 48 gigabytes of memory on an HBM 2 module within a card that physically resembles a graphics coprocessor, but instead of serving as a co-processor, it is the main computer with the PC-compatible computer into which it is plugged serving support functions. GPU. 
Modern graphics processing units (GPUs) include an array of shader pipelines which may be driven by compute kernels, and can be considered vector processors (using a similar strategy for hiding memory latencies). As shown in Flynn's 1972 paper the key distinguishing factor of SIMT-based GPUs is that it has a single instruction decoder-broadcaster but that the cores receiving and executing that same instruction are otherwise reasonably normal: their own ALUs, their own register files, their own Load/Store units and their own independent L1 data caches. Thus although all cores simultaneously execute the exact same instruction in lock-step with each other they do so with completely different data from completely different memory locations. This is "significantly" more complex and involved than "Packed SIMD", which is strictly limited to execution of parallel pipelined arithmetic operations only. Although the exact internal details of today's commercial GPUs are proprietary secrets, the MIAOW team was able to piece together anecdotal information sufficient to implement a subset of the AMDGPU architecture. Recent development. Several modern CPU architectures are being designed as vector processors. The RISC-V vector extension follows similar principles as the early vector processors, and is being implemented in commercial products such as the Andes Technology AX45MPV. There are also several open source vector processor architectures being developed, including ForwardCom and Libre-SOC. Comparison with modern architectures. As of 2016[ [update]] most commodity CPUs implement architectures that feature fixed-length SIMD instructions. On first inspection these can be considered a form of vector processing because they operate on multiple (vectorized, explicit length) data sets, and borrow features from vector processors. However, by definition, the addition of SIMD cannot, by itself, qualify a processor as an actual "vector processor", because SIMD is fixed-length, and vectors are variable-length. The difference is illustrated below with examples, showing and comparing the three categories: Pure SIMD, Predicated SIMD, and Pure Vector Processing. Other CPU designs include some multiple instructions for vector processing on multiple (vectorized) data sets, typically known as MIMD (Multiple Instruction, Multiple Data) and realized with VLIW (Very Long Instruction Word) and EPIC (Explicitly Parallel Instruction Computing). The Fujitsu FR-V VLIW/vector processor combines both technologies. Difference between SIMD and vector processors. SIMD instruction sets lack crucial features when compared to vector instruction sets. The most important of these is that vector processors, inherently by definition and design, have always been variable-length since their inception. Whereas pure (fixed-width, no predication) SIMD is often mistakenly claimed to be "vector" (because SIMD processes data which happens to be vectors), through close analysis and comparison of historic and modern ISAs, actual vector ISAs may be observed to have the following features that no SIMD ISA has: Predicated SIMD (part of Flynn's taxonomy) which is comprehensive individual element-level predicate masks on every vector instruction as is now available in ARM SVE2. And AVX-512, almost qualifies as a vector processor. Predicated SIMD uses fixed-width SIMD ALUs but allows locally controlled (predicated) activation of units to provide the appearance of variable length vectors. Examples below help explain these categorical distinctions. 
SIMD, because it uses fixed-width batch processing, is unable by design to cope with iteration and reduction. This is illustrated further with examples, below. Additionally, vector processors can be more resource-efficient by using slower hardware and saving power, but still achieving throughput and having less latency than SIMD, through vector chaining. Consider both a SIMD processor and a vector processor working on 4 64-bit elements, doing a LOAD, ADD, MULTIPLY and STORE sequence. If the SIMD width is 4, then the SIMD processor must LOAD four elements entirely before it can move on to the ADDs, must complete all the ADDs before it can move on to the MULTIPLYs, and likewise must complete all of the MULTIPLYs before it can start the STOREs. This is by definition and by design. Having to perform 4-wide simultaneous 64-bit LOADs and 64-bit STOREs is very costly in hardware (256 bit data paths to memory). Having 4x 64-bit ALUs, especially MULTIPLY, likewise. To avoid these high costs, a SIMD processor would have to have 1-wide 64-bit LOAD, 1-wide 64-bit STORE, and only 2-wide 64-bit ALUs. As shown in the diagram, which assumes a multi-issue execution model, the consequences are that the operations now take longer to complete. If multi-issue is not possible, then the operations take even longer because the LD may not be issued (started) at the same time as the first ADDs, and so on. If there are only 4-wide 64-bit SIMD ALUs, the completion time is even worse: only when all four LOADs have completed may the SIMD operations start, and only when all ALU operations have completed may the STOREs begin. A vector processor, by contrast, even if it is "single-issue" and uses no SIMD ALUs, only having 1-wide 64-bit LOAD, 1-wide 64-bit STORE (and, as in the Cray-1, the ability to run MULTIPLY simultaneously with ADD), may complete the four operations faster than a SIMD processor with 1-wide LOAD, 1-wide STORE, and 2-wide SIMD. This more efficient resource utilization, due to vector chaining, is a key advantage and difference compared to SIMD. SIMD, by design and definition, cannot perform chaining except to the entire group of results. Description. In general terms, CPUs are able to manipulate one or two pieces of data at a time. For instance, most CPUs have an instruction that essentially says "add A to B and put the result in C". The data for A, B and C could be—in theory at least—encoded directly into the instruction. However, in efficient implementation things are rarely that simple. The data is rarely sent in raw form, and is instead "pointed to" by passing in an address to a memory location that holds the data. Decoding this address and getting the data out of the memory takes some time, during which the CPU traditionally would sit idle waiting for the requested data to show up. As CPU speeds have increased, this memory latency has historically become a large impediment to performance; see . In order to reduce the amount of time consumed by these steps, most modern CPUs use a technique known as instruction pipelining in which the instructions pass through several sub-units in turn. The first sub-unit reads the address and decodes it, the next "fetches" the values at those addresses, and the next does the math itself. With pipelining the "trick" is to start decoding the next instruction even before the first has left the CPU, in the fashion of an assembly line, so the address decoder is constantly in use. 
Any particular instruction takes the same amount of time to complete, a time known as the "latency", but the CPU can process an entire batch of operations, in an overlapping fashion, much faster and more efficiently than if it did so one at a time. Vector processors take this concept one step further. Instead of pipelining just the instructions, they also pipeline the data itself. The processor is fed instructions that say not just to add A to B, but to add all of the numbers "from here to here" to all of the numbers "from there to there". Instead of constantly having to decode instructions and then fetch the data needed to complete them, the processor reads a single instruction from memory, and it is simply implied in the definition of the instruction "itself" that the instruction will operate again on another item of data, at an address one increment larger than the last. This allows for significant savings in decoding time. To illustrate what a difference this can make, consider the simple task of adding two groups of 10 numbers together. In a normal programming language one would write a "loop" that picked up each of the pairs of numbers in turn, and then added them. To the CPU, this would look something like this: move $10, count ; count := 10 loop: load r1, a load r2, b add r3, r1, r2 ; r3 := r1 + r2 store r3, c add a, a, $4 ; move on add b, b, $4 add c, c, $4 dec count ; decrement jnez count, loop ; loop back if count is not yet 0 ret But to a vector processor, this task looks considerably different: move $10, count ; count = 10 vload v1, a, count vload v2, b, count vadd v3, v1, v2 vstore v3, c, count ret Note the complete lack of looping in the instructions, because it is the "hardware" which has performed 10 sequential operations: effectively the loop count is on an explicit "per-instruction" basis. Cray-style vector ISAs take this a step further and provide a global "count" register, called vector length (VL): setvli $10 # Set vector length VL=10 vload v1, a # 10 loads from a vload v2, b # 10 loads from b vadd v3, v1, v2 # 10 adds vstore v3, c # 10 stores into c ret There are several savings inherent in this approach. Additionally, in more modern vector processor ISAs, "Fail on First" or "Fault First" has been introduced (see below) which brings even more advantages. But more than that, a high performance vector processor may have multiple functional units adding those numbers in parallel. The checking of dependencies between those numbers is not required as a vector instruction specifies multiple independent operations. This simplifies the control logic required, and can further improve performance by avoiding stalls. The math operations thus completed far faster overall, the limiting factor being the time required to fetch the data from memory. Not all problems can be attacked with this sort of solution. Including these types of instructions necessarily adds complexity to the core CPU. That complexity typically makes "other" instructions run slower—i.e., whenever it is not adding up many numbers in a row. The more complex instructions also add to the complexity of the decoders, which might slow down the decoding of the more common instructions such as normal adding. ("This can be somewhat mitigated by keeping the entire ISA to RISC principles: RVV only adds around 190 vector instructions even with the advanced features.") Vector processors were traditionally designed to work best only when there are large amounts of data to be worked on. 
For this reason, these sorts of CPUs were found primarily in supercomputers, as the supercomputers themselves were, in general, found in places such as weather prediction centers and physics labs, where huge amounts of data are "crunched". However, as shown above and demonstrated by RISC-V RVV the "efficiency" of vector ISAs brings other benefits which are compelling even for Embedded use-cases. Vector instructions. The vector pseudocode example above comes with a big assumption that the vector computer can process more than ten numbers in one batch. For a greater quantity of numbers in the vector register, it becomes unfeasible for the computer to have a register that large. As a result, the vector processor either gains the ability to perform loops itself, or exposes some sort of vector control (status) register to the programmer, usually known as a vector Length. The self-repeating instructions are found in early vector computers like the STAR-100, where the above action would be described in a single instruction (somewhat like ). They are also found in the x86 architecture as the prefix. However, only very simple calculations can be done effectively in hardware this way without a very large cost increase. Since all operands have to be in memory for the STAR-100 architecture, the latency caused by access became huge too. Interestingly, though, Broadcom included space in all vector operations of the Videocore IV ISA for a field, but unlike the STAR-100 which uses memory for its repeats, the Videocore IV repeats are on all operations including arithmetic vector operations. The repeat length can be a small range of power of two or sourced from one of the scalar registers. The Cray-1 introduced the idea of using processor registers to hold vector data in batches. The batch lengths (vector length, VL) could be dynamically set with a special instruction, the significance compared to Videocore IV (and, crucially as will be shown below, SIMD as well) being that the repeat length does not have to be part of the instruction encoding. This way, significantly more work can be done in each batch; the instruction encoding is much more elegant and compact as well. The only drawback is that in order to take full advantage of this extra batch processing capacity, the memory load and store speed correspondingly had to increase as well. This is sometimes claimed to be a disadvantage of Cray-style vector processors: in reality it is part of achieving high performance throughput, as seen in GPUs, which face exactly the same issue. Modern SIMD computers claim to improve on early Cray by directly using multiple ALUs, for a higher degree of parallelism compared to only using the normal scalar pipeline. Modern vector processors (such as the SX-Aurora TSUBASA) combine both, by issuing multiple data to multiple internal pipelined SIMD ALUs, the number issued being dynamically chosen by the vector program at runtime. Masks can be used to selectively load and store data in memory locations, and use those same masks to selectively disable processing element of SIMD ALUs. Some processors with SIMD (AVX-512, ARM SVE2) are capable of this kind of selective, per-element ("predicated") processing, and it is these which somewhat deserve the nomenclature "vector processor" or at least deserve the claim of being capable of "vector processing". SIMD processors without per-element predication (MMX, SSE, AltiVec) categorically do not. 
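To make the idea of per-element predication concrete, the following plain-C sketch models what a single masked multiply-accumulate operation does conceptually; it is not the syntax of any real ISA or intrinsics library, and every name in it is invented for illustration.

#include <stddef.h>
#include <stdint.h>

/* Conceptual model of one predicated vector operation: in hardware every
   lane below would run in parallel, but a lane whose mask bit is 0 neither
   loads, computes nor stores anything. */
void predicated_madd(size_t lanes, const uint8_t mask[],
                     int a, const int x[], int y[])
{
    for (size_t i = 0; i < lanes; i++) {   /* "lanes" = the hardware SIMD width */
        if (mask[i])                       /* predicate bit for this element    */
            y[i] = a * x[i] + y[i];        /* only active lanes are touched     */
    }
}

A mask of all ones behaves like ordinary packed SIMD; a partially-set mask is what allows the short final iteration of a loop to be handled without a separate scalar cleanup path.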
Modern GPUs, which have many small compute units each with their own independent SIMD ALUs, use Single Instruction Multiple Threads (SIMT). SIMT units run from a shared single broadcast synchronised Instruction Unit. The "vector registers" are very wide and the pipelines tend to be long. The "threading" part of SIMT involves the way data is handled independently on each of the compute units. In addition, GPUs such as the Broadcom Videocore IV and other external vector processors like the NEC SX-Aurora TSUBASA may use fewer vector units than the width implies: instead of having 64 units for a 64-number-wide register, the hardware might instead do a pipelined loop over 16 units for a hybrid approach. The Broadcom Videocore IV is also capable of this hybrid approach: nominally stating that its SIMD QPU Engine supports 16-long FP array operations in its instructions, it actually does them 4 at a time, as (another) form of "threads". Vector instruction example. This example starts with an algorithm ("IAXPY"), first show it in scalar instructions, then SIMD, then predicated SIMD, and finally vector instructions. This incrementally helps illustrate the difference between a traditional vector processor and a modern SIMD one. The example starts with a 32-bit integer variant of the "DAXPY" function, in C: void iaxpy(size_t n, int a, const int x[], int y[]) { for (size_t i = 0; i &lt; n; i++) y[i] = a * x[i] + y[i]; In each iteration, every element of y has an element of x multiplied by a and added to it. The program is expressed in scalar linear form for readability. Scalar assembler. The scalar version of this would load one of each of x and y, process one calculation, store one result, and loop: loop: load32 r1, x ; load one 32bit data load32 r2, y mul32 r1, a, r1 ; r1 := r1 * a add32 r3, r1, r2 ; r3 := r1 + r2 store32 r3, y addl x, x, $4 ; x := x + 4 addl y, y, $4 subl n, n, $1 ; n := n - 1 jgz n, loop ; loop back if n &gt; 0 out: ret The STAR-like code remains concise, but because the STAR-100's vectorisation was by design based around memory accesses, an extra slot of memory is now required to process the information. Two times the latency is also needed due to the extra requirement of memory access. ; Assume tmp is pre-allocated vmul tmp, a, x, n ; tmp[i] = a * x[i] vadd y, y, tmp, n ; y[i] = y[i] + tmp[i] ret Pure (non-predicated, packed) SIMD. A modern packed SIMD architecture, known by many names (listed in Flynn's taxonomy), can do most of the operation in batches. The code is mostly similar to the scalar version. It is assumed that both x and y are properly aligned here (only start on a multiple of 16) and that n is a multiple of 4, as otherwise some setup code would be needed to calculate a mask or to run a scalar version. It can also be assumed, for simplicity, that the SIMD instructions have an option to automatically repeat scalar operands, like ARM NEON can. If it does not, a "splat" (broadcast) must be used, to copy the scalar argument across a SIMD register: splatx4 v4, a ; v4 = a,a,a,a The time taken would be basically the same as a vector implementation of described above. vloop: load32x4 v1, x load32x4 v2, y mul32x4 v1, a, v1 ; v1 := v1 * a add32x4 v3, v1, v2 ; v3 := v1 + v2 store32x4 v3, y addl x, x, $16 ; x := x + 16 addl y, y, $16 subl n, n, $4 ; n := n - 4 jgz n, vloop ; go back if n &gt; 0 out: ret Note that both x and y pointers are incremented by 16, because that is how long (in bytes) four 32-bit integers are. 
The decision was made that the algorithm "shall" only cope with 4-wide SIMD, therefore the constant is hard-coded into the program. Unfortunately for SIMD, the clue was in the assumption above, "that n is a multiple of 4" as well as "aligned access", which, clearly, is a limited specialist use-case. Realistically, for general-purpose loops such as in portable libraries, where n cannot be limited in this way, the overhead of setup and cleanup for SIMD in order to cope with non-multiples of the SIMD width, can far exceed the instruction count inside the loop itself. Assuming worst-case that the hardware cannot do misaligned SIMD memory accesses, a real-world algorithm will: Eight-wide SIMD requires repeating the inner loop algorithm first with four-wide SIMD elements, then two-wide SIMD, then one (scalar), with a test and branch in between each one, in order to cover the first and last remaining SIMD elements (0 &lt;= n &lt;= 7). This more than "triples" the size of the code, in fact in extreme cases it results in an "order of magnitude" increase in instruction count! This can easily be demonstrated by compiling the iaxpy example for AVX-512, using the options to gcc. Over time as the ISA evolves to keep increasing performance, it results in ISA Architects adding 2-wide SIMD, then 4-wide SIMD, then 8-wide and upwards. It can therefore be seen why AVX-512 exists in x86. Without predication, the wider the SIMD width the worse the problems get, leading to massive opcode proliferation, degraded performance, extra power consumption and unnecessary software complexity. Vector processors on the other hand are designed to issue computations of variable length for an arbitrary count, n, and thus require very little setup, and no cleanup. Even compared to those SIMD ISAs which have masks (but no instruction), Vector processors produce much more compact code because they do not need to perform explicit mask calculation to cover the last few elements (illustrated below). Predicated SIMD. Assuming a hypothetical predicated (mask capable) SIMD ISA, and again assuming that the SIMD instructions can cope with misaligned data, the instruction loop would look like this: vloop: # prepare mask. few ISAs have min though min t0, n, $4 ; t0 = min(n, 4) shift m, $1, t0 ; m = 1«t0 sub m, m, $1 ; m = (1«t0)-1 # now do the operation, masked by m bits load32x4 v1, x, m load32x4 v2, y, m mul32x4 v1, a, v1, m ; v1 := v1 * a add32x4 v3, v1, v2, m ; v3 := v1 + v2 store32x4 v3, y, m # update x, y and n for next loop addl x, t0*4 ; x := x + t0*4 addl y, t0*4 subl n, n, t0 ; n := n - t0 # loop? jgz n, vloop ; go back if n &gt; 0 out: ret Here it can be seen that the code is much cleaner but a little complex: at least, however, there is no setup or cleanup: on the last iteration of the loop, the predicate mask wil be set to either 0b0000, 0b0001, 0b0011, 0b0111 or 0b1111, resulting in between 0 and 4 SIMD element operations being performed, respectively. One additional potential complication: some RISC ISAs do not have a "min" instruction, needing instead to use a branch or scalar predicated compare. It is clear how predicated SIMD at least merits the term "vector capable", because it can cope with variable-length vectors by using predicate masks. The final evolving step to a "true" vector ISA, however, is to not have any evidence in the ISA "at all" of a SIMD width, leaving that entirely up to the hardware. Pure (true) vector ISA. 
For Cray-style vector ISAs such as RVV, an instruction called "setvl" (set vector length) is used. The hardware first defines how many data values it can process in one "vector": this could be either actual registers or it could be an internal loop (the hybrid approach, mentioned above). This maximum amount (the number of hardware "lanes") is termed "MVL" (Maximum Vector Length). Note that, as seen in SX-Aurora and Videocore IV, MVL may be an actual hardware lane quantity "or a virtual one". "(Note: As mentioned in the ARM SVE2 Tutorial, programmers must not make the mistake of assuming a fixed vector width: consequently MVL is not a quantity that the programmer needs to know. This can be a little disconcerting after years of SIMD mindset)." On calling setvl with the number of outstanding data elements to be processed, "setvl" is permitted (essentially required) to limit that to the Maximum Vector Length (MVL) and thus returns the "actual" number that can be processed by the hardware in subsequent vector instructions, and sets the internal special register, "VL", to that same amount. ARM refers to this technique as "vector length agnostic" programming in its tutorials on SVE2. Below is the Cray-style vector assembler for the same SIMD style loop, above. Note that t0 (which, containing a convenient copy of VL, can vary) is used instead of hard-coded constants: vloop: setvl t0, n # VL=t0=min(MVL, n) vld32 v0, x # load vector x vld32 v1, y # load vector y vmadd32 v1, v0, a # v1 += v0 * a vst32 v1, y # store Y add y, t0*4 # advance y by VL*4 add x, t0*4 # advance x by VL*4 sub n, t0 # n -= VL (t0) bnez n, vloop # repeat if n != 0 This is essentially not very different from the SIMD version (processes 4 data elements per loop), or from the initial Scalar version (processes just the one). n still contains the number of data elements remaining to be processed, but t0 contains the copy of VL – the number that is "going" to be processed in each iteration. t0 is subtracted from n after each iteration, and if n is zero then all elements have been processed. A number of things to note, when comparing against the Predicated SIMD assembly variant: Thus it can be seen, very clearly, how vector ISAs reduce the number of instructions. Also note, that just like the predicated SIMD variant, the pointers to x and y are advanced by t0 times four because they both point to 32 bit data, but that n is decremented by straight t0. Compared to the fixed-size SIMD assembler there is very little apparent difference: x and y are advanced by hard-coded constant 16, n is decremented by a hard-coded 4, so initially it is hard to appreciate the significance. The difference comes in the realisation that the vector hardware could be capable of doing 4 simultaneous operations, or 64, or 10,000, it would be the exact same vector assembler for all of them "and there would still be no SIMD cleanup code". Even compared to the predicate-capable SIMD, it is still more compact, clearer, more elegant and uses less resources. Not only is it a much more compact program (saving on L1 Cache size), but as previously mentioned, the vector version can issue far more data processing to the ALUs, again saving power because Instruction Decode and Issue can sit idle. Additionally, the number of elements going in to the function can start at zero. This sets the vector length to zero, which effectively disables all vector instructions, turning them into no-ops, at runtime. 
Thus, unlike non-predicated SIMD, even when there are no elements to process there is still no wasted cleanup code. Vector reduction example. This example starts with an algorithm which involves reduction. Just as with the previous example, it will be first shown in scalar instructions, then SIMD, and finally vector instructions, starting in c: void (size_t n, int a, const int x[]) { int y = 0; for (size_t i = 0; i &lt; n; i++) y += x[i]; return y; Here, an accumulator (y) is used to sum up all the values in the array, x. Scalar assembler. The scalar version of this would load each of x, add it to y, and loop: set y, 0 ; y initialised to zero loop: load32 r1, x ; load one 32bit data add32 y, y, r1 ; y := y + r1 addl x, x, $4 ; x := x + 4 subl n, n, $1 ; n := n - 1 jgz n, loop ; loop back if n &gt; 0 out: ret y ; returns result, y This is very straightforward. "y" starts at zero, 32 bit integers are loaded one at a time into r1, added to y, and the address of the array "x" moved on to the next element in the array. SIMD reduction. This is where the problems start. SIMD by design is incapable of doing arithmetic operations "inter-element". Element 0 of one SIMD register may be added to Element 0 of another register, but Element 0 may not be added to anything other than another Element 0. This places some severe limitations on potential implementations. For simplicity it can be assumed that n is exactly 8: addl r3, x, $16 ; for 2nd 4 of x load32x4 v1, x ; first 4 of x load32x4 v2, r3 ; 2nd 4 of x add32x4 v1, v2, v1 ; add 2 groups At this point four adds have been performed: but with 4-wide SIMD being incapable by design of adding for example, things go rapidly downhill just as they did with the general case of using SIMD for general-purpose IAXPY loops. To sum the four partial results, two-wide SIMD can be used, followed by a single scalar add, to finally produce the answer, but, frequently, the data must be transferred out of dedicated SIMD registers before the last scalar computation can be performed. Even with a general loop (n not fixed), the only way to use 4-wide SIMD is to assume four separate "streams", each offset by four elements. Finally, the four partial results have to be summed. Other techniques involve shuffle: examples online can be found for AVX-512 of how to do "Horizontal Sum" Aside from the size of the program and the complexity, an additional potential problem arises if floating-point computation is involved: the fact that the values are not being summed in strict order (four partial results) could result in rounding errors. Vector ISA reduction. Vector instruction sets have arithmetic reduction operations "built-in" to the ISA. If it is assumed that n is less or equal to the maximum vector length, only three instructions are required: setvl t0, n # VL=t0=min(MVL, n) vld32 v0, x # load vector x vredadd32 y, v0 # reduce-add into y The code when n is larger than the maximum vector length is not that much more complex, and is a similar pattern to the first example ("IAXPY"). set y, 0 vloop: setvl t0, n # VL=t0=min(MVL, n) vld32 v0, x # load vector x vredadd32 y, y, v0 # add all x into y add x, t0*4 # advance x by VL*4 sub n, t0 # n -= VL (t0) bnez n, vloop # repeat if n != 0 ret y The simplicity of the algorithm is stark in comparison to SIMD. Again, just as with the IAXPY example, the algorithm is length-agnostic (even on Embedded implementations where maximum vector length could be only one). 
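For readers more comfortable in C, the behaviour of the setvl-based reduction loop above can be modelled roughly as follows; MVL stands for whatever maximum the hardware happens to implement, the chunked inner loop represents what a single vredadd32 performs in hardware, and the function name is invented for illustration.

#include <stddef.h>

#define MVL 8   /* assumed maximum vector length; the model works for any value */

int isum_vl_model(size_t n, const int x[])
{
    int y = 0;
    while (n > 0) {
        size_t vl = (n < MVL) ? n : MVL;    /* what "setvl" returns: min(MVL, n) */

        /* one vredadd32: the hardware reduce-adds vl elements in one go */
        for (size_t i = 0; i < vl; i++)
            y += x[i];

        x += vl;                            /* advance x by VL elements */
        n -= vl;                            /* n -= VL */
    }
    return y;
}

The same model runs unchanged whether MVL is 1, 8 or 10,000, mirroring the point that the vector assembler needs no per-width variants and no cleanup code.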
Implementations in hardware may, if they are certain that the right answer will be produced, perform the reduction in parallel. Some vector ISAs offer a parallel reduction mode as an explicit option, for when the programmer knows that any potential rounding errors do not matter, and low latency is critical. This example again highlights a key critical fundamental difference between true vector processors and those SIMD processors, including most commercial GPUs, which are inspired by features of vector processors. Insights from examples. Compared to any SIMD processor claiming to be a vector processor, the order of magnitude reduction in program size is almost shocking. However, this level of elegance at the ISA level has quite a high price tag at the hardware level: Overall then there is a choice to either have These stark differences are what distinguishes a vector processor from one that has SIMD. Vector processor features. Where many SIMD ISAs borrow or are inspired by the list below, typical features that a vector processor will have are: GPU vector processing features. With many 3D shader applications needing trigonometric operations as well as short vectors for common operations (RGB, ARGB, XYZ, XYZW) support for the following is typically present in modern GPUs, in addition to those found in vector processors: Fault (or Fail) First. Introduced in ARM SVE2 and RISC-V RVV is the concept of speculative sequential Vector Loads. ARM SVE2 has a special register named "First Fault Register", where RVV modifies (truncates) the Vector Length (VL). The basic principle of ffirst is to attempt a large sequential Vector Load, but to allow the hardware to arbitrarily truncate the "actual" amount loaded to either the amount that would succeed without raising a memory fault or simply to an amount (greater than zero) that is most convenient. The important factor is that "subsequent" instructions are notified or may determine exactly how many Loads actually succeeded, using that quantity to only carry out work on the data that has actually been loaded. Contrast this situation with SIMD, which is a fixed (inflexible) load width and fixed data processing width, unable to cope with loads that cross page boundaries, and even if they were they are unable to adapt to what actually succeeded, yet, paradoxically, if the SIMD program were to even attempt to find out in advance (in each inner loop, every time) what might optimally succeed, those instructions only serve to hinder performance because they would, by necessity, be part of the critical inner loop. This begins to hint at the reason why ffirst is so innovative, and is best illustrated by memcpy or strcpy when implemented with standard 128-bit non-predicated non-ffirst SIMD. For IBM POWER9 the number of hand-optimised instructions to implement strncpy is in excess of 240. By contrast, the same strncpy routine in hand-optimised RVV assembler is a mere 22 instructions. The above SIMD example could potentially fault and fail at the end of memory, due to attempts to read too many values: it could also cause significant numbers of page or misaligned faults by similarly crossing over boundaries. 
In contrast, by allowing the vector architecture the freedom to decide how many elements to load, the first part of a strncpy, if beginning initially on a sub-optimal memory boundary, may return just enough loads such that on "subsequent" iterations of the loop the batches of vectorised memory reads are optimally aligned with the underlying caches and virtual memory arrangements. Additionally, the hardware may choose to use the opportunity to end any given loop iteration's memory reads "exactly" on a page boundary (avoiding a costly second TLB lookup), with speculative execution preparing the next virtual memory page whilst data is still being processed in the current loop. All of this is determined by the hardware, not the program itself. Performance and speed up. Let r be the vector speed ratio and f be the vectorization ratio. If the time taken for the vector unit to add an array of 64 numbers is 10 times faster than its equivalent scalar counterpart, r = 10. Also, if the total number of operations in a program is 100, out of which only 10 are scalar (after vectorization), then f = 0.9, i.e., 90% of the work is done by the vector unit. It follows the achievable speed up of: formula_0 So, even if the performance of the vector unit is very high (formula_1) there is a speedup less than formula_2, which suggests that the ratio f is crucial to the performance. This ratio depends on the efficiency of the compilation like adjacency of the elements in memory. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "r/[(1-f)*r+f]" }, { "math_id": 1, "text": "r=\\infty" }, { "math_id": 2, "text": "1/(1-f)" } ]
https://en.wikipedia.org/wiki?curid=58205
582075
Lev Pontryagin
Soviet mathematician Lev Semyonovich Pontryagin (, also written Pontriagin or Pontrjagin, first name sometimes anglicized as Leon) (3 September 1908 – 3 May 1988) was a Soviet mathematician. Completely blind from the age of 14, he made major discoveries in a number of fields of mathematics, including algebraic topology, differential topology and optimal control. Early life and career. He was born in Moscow and lost his eyesight completely due to an unsuccessful eye surgery after a primus stove explosion when he was 14. His mother Tatyana Andreyevna, who did not know mathematical symbols, read mathematical books and papers (notably those of Heinz Hopf, J. H. C. Whitehead, and Hassler Whitney) to him, and later worked as his secretary. His mother used alternative names for math symbols, such as "tails up" for the set-union symbol formula_0. In 1925 he entered Moscow State University, where he was strongly influenced by the lectures of Pavel Alexandrov who would become his doctoral thesis advisor. After graduating in 1929, he obtained a position at Moscow State University. In 1934 he joined the Steklov Institute in Moscow. In 1970 he became vice president of the International Mathematical Union. Work. Pontryagin worked on duality theory for homology while still a student. He went on to lay foundations for the abstract theory of the Fourier transform, now called Pontryagin duality. Using these tools, he was able to solve the case of Hilbert's fifth problem for abelian groups in 1934. In 1935, he was able to compute the homology groups of the classical compact Lie groups, which he would later call his greatest achievement. With René Thom, he is regarded as one of the co-founders of cobordism theory, and co-discoverers of the central idea of this theory, that framed cobordism and stable homotopy are equivalent. This led to the introduction around 1940 of a theory of certain characteristic classes, now called Pontryagin classes, designed to vanish on a manifold that is a boundary. In 1942 he introduced the cohomology operations now called Pontryagin squares. Moreover, in operator theory there are specific instances of Krein spaces called Pontryagin spaces. Starting in 1952, he worked in optimal control theory. His maximum principle is fundamental to the modern theory of optimization. He also introduced the idea of a bang–bang principle, to describe situations where the applied control at each moment is either the maximum positive 'steer', or the maximum negative 'steer'. Pontryagin authored several influential monographs as well as popular textbooks in mathematics. Pontryagin's students include Dmitri Anosov, Vladimir Boltyansky, Revaz Gamkrelidze, Yevgeny Mishchenko, Mikhail Postnikov, Vladimir Rokhlin, and Mikhail Zelikin. Controversy and antisemitism allegations. Pontryagin participated in a few notorious political campaigns in the Soviet Union. In 1930, he and several other young members of the Moscow Mathematical Society publicly denounced as counter-revolutionary the Society's head Dmitri Egorov, who openly supported the Russian Orthodox Church and had recently been arrested. They then proceeded to follow their plan of reorganizing the Society. Pontryagin was accused of anti-Semitism on several occasions. For example, he attacked Nathan Jacobson for being a "mediocre scientist" representing the "Zionism movement", while both men were vice-presidents of the International Mathematical Union. 
When a prominent Soviet Jewish mathematician, Grigory Margulis, was selected by the IMU to receive the Fields Medal at the upcoming 1978 ICM, Pontryagin, who was a member of the executive committee of the IMU at the time, vigorously objected. Although the IMU stood by its decision to award Margulis the Fields Medal, Margulis was denied a Soviet exit visa by the Soviet authorities and was unable to attend the 1978 ICM in person. Pontryagin rejected charges of antisemitism in an article published in "Science" in 1979. In his memoirs Pontryagin claims that he struggled with Zionism, which he considered a form of racism. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\cup" } ]
https://en.wikipedia.org/wiki?curid=582075
58208186
Inquisitive semantics
Framework in logic and natural language semantics Inquisitive semantics is a framework in logic and natural language semantics. In inquisitive semantics, the semantic content of a sentence captures both the information that the sentence conveys and the issue that it raises. The framework provides a foundation for the linguistic analysis of statements and questions. It was originally developed by Ivano Ciardelli, Jeroen Groenendijk, Salvador Mascarenhas, and Floris Roelofsen. Basic notions. The essential notion in inquisitive semantics is that of an "inquisitive proposition". Inquisitive propositions encode informational content via the region of logical space that their information states cover. For instance, the inquisitive proposition formula_0 encodes the information that {"w"} is the actual world. The inquisitive proposition formula_1 encodes that the actual world is either formula_2 or formula_3. An inquisitive proposition encodes inquisitive content via its maximal elements, known as "alternatives". For instance, the inquisitive proposition formula_1 has two alternatives, namely formula_4 and formula_5. Thus, it raises the issue of whether the actual world is formula_2 or formula_3 while conveying the information that it must be one or the other. The inquisitive proposition formula_6 encodes the same information but does not raise an issue since it contains only one alternative. The informational content of an inquisitive proposition can be isolated by pooling its constituent information states as shown below. Inquisitive propositions can be used to provide a semantics for the connectives of propositional logic since they form a Heyting algebra when ordered by the subset relation. For instance, for every proposition "P" there exists a relative pseudocomplement formula_8, which amounts to formula_9. Similarly, any two propositions "P" and "Q" have a meet and a join, which amount to formula_10 and formula_11 respectively. Thus inquisitive propositions can be assigned to formulas of formula_12 as shown below. Given a model formula_13 where "W" is a set of possible worlds and "V" is a valuation function: The operators ! and ? are used as abbreviations in the manner shown below. Conceptually, the !-operator can be thought of as cancelling the issues raised by whatever it applies to while leaving its informational content untouched. For any formula formula_20, the inquisitive proposition formula_21 expresses the same information as formula_22, but it may differ in that it raises no nontrivial issues. For example, if formula_22 is the inquisitive proposition "P" from a few paragraphs ago, then formula_21 is the inquisitive proposition "Q". The ?-operator trivializes the information expressed by whatever it applies to, while converting information states that would establish that its issues are unresolvable into states that resolve it. This is very abstract, so consider another example. Imagine that logical space consists of four possible worlds, "w"1, "w"2, "w"3, and "w"4, and consider a formula formula_20 such that formula_22 contains {"w"1}, {"w"2}, and of course formula_23. This proposition conveys that the actual world is either "w"1 or "w"2 and raises the issue of which of those worlds it actually is. Therefore, the issue it raises would not be resolved if we learned that the actual world is in the information state {"w"3, "w"4}. Rather, learning this would show that the issue raised by our toy proposition is unresolvable. 
As a result, the proposition formula_24 contains all the states of formula_22, along with {"w"3, "w"4} and all of its subsets. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\{ \\{ w \\}, \\emptyset \\}" }, { "math_id": 1, "text": "\\{ \\{ w \\}, \\{ v \\}, \\emptyset \\}" }, { "math_id": 2, "text": "w" }, { "math_id": 3, "text": "v" }, { "math_id": 4, "text": "\\{ w \\}" }, { "math_id": 5, "text": "\\{ v \\}" }, { "math_id": 6, "text": "\\{ \\{w,v\\}, \\{ w \\}, \\{ v \\}, \\emptyset \\}" }, { "math_id": 7, "text": "\\operatorname{info}(P) = \\{w \\mid w \\in t \\text{ for some } t\\in P\\}" }, { "math_id": 8, "text": "P^*" }, { "math_id": 9, "text": "\\{s \\subseteq W \\mid s \\cap t = \\emptyset \\text{ for all } t \\in P \\}" }, { "math_id": 10, "text": "P\\cap Q" }, { "math_id": 11, "text": "P \\cup Q" }, { "math_id": 12, "text": "\\mathcal{L}" }, { "math_id": 13, "text": "\\mathfrak{M} = \\langle W, V \\rangle " }, { "math_id": 14, "text": "[\\![p]\\!] = \\{s \\subseteq W \\mid \\forall w \\in s, V(w, p) = 1\\} " }, { "math_id": 15, "text": " [\\![ \\neg \\varphi ]\\!] = \\{s \\subseteq W \\mid s \\cap t = \\emptyset \\text{ for all } t \\in [\\![\\varphi]\\!] \\} " }, { "math_id": 16, "text": " [\\![ \\varphi \\land \\psi]\\!] = [\\![\\varphi]\\!] \\cap [\\![\\psi]\\!] " }, { "math_id": 17, "text": " [\\![ \\varphi \\lor \\psi]\\!] = [\\![\\varphi]\\!] \\cup [\\![\\psi]\\!] " }, { "math_id": 18, "text": "!\\varphi \\equiv \\neg \\neg \\varphi " }, { "math_id": 19, "text": " ?\\varphi \\equiv \\varphi \\lor \\neg \\varphi " }, { "math_id": 20, "text": "\\varphi" }, { "math_id": 21, "text": "[\\![!\\varphi]\\!]" }, { "math_id": 22, "text": "[\\![\\varphi]\\!]" }, { "math_id": 23, "text": "\\emptyset" }, { "math_id": 24, "text": "[\\![?\\varphi]\\!]" } ]
https://en.wikipedia.org/wiki?curid=58208186
5821113
Pulay stress
The Pulay stress or Pulay forces (named for Peter Pulay) is an error that occurs in the stress tensor (or Jacobian matrix) obtained from self-consistent field calculations (Hartree–Fock or density functional theory) due to the incompleteness of the basis set. A plane-wave density functional calculation on a crystal with specified lattice vectors will typically include in the basis set all plane waves with energies below the specified energy cutoff. This corresponds to all points on the reciprocal lattice that lie within a sphere whose radius is related to the energy cutoff. Consider what happens when the lattice vectors are varied, resulting in a change in the reciprocal lattice vectors. The points on the reciprocal lattice which represent the basis set will no longer correspond to a sphere, but an ellipsoid. This change in the basis set will result in errors in the calculated ground state energy change. The Pulay stress is often nearly isotropic, and tends to result in an underestimate of the equilibrium volume. Pulay stress can be reduced by increasing the energy cutoff. Another way to mitigate the effect of Pulay stress on the equilibrium cell shape is to calculate the energy at different lattice vectors with a fixed energy cutoff. Similarly, the error occurs in any calculation where the basis set explicitly depends on the position of atomic nuclei (which are to change during the geometry optimization). In this case, the Hellmann–Feynman theorem – which is used to avoid derivation of many-parameter wave function (expanded in a basis set) – is only valid for the complete basis set. Otherwise, the terms in theorem's expression containing derivatives of the wavefunction persist, giving rise to additional forces – the Pulay forces: formula_0 The presence of Pulay forces makes the optimized geometry parameters converge slower with increasing basis set. The way to eliminate the erroneous forces is to use nuclear-position-independent basis functions, to explicitly calculate and then subtract them from the conventionally obtained forces, or to self-consistently optimize the center of localization of the orbitals. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n\\begin{align}\n\\frac{\\mathrm{d} E}{\\mathrm{d}\\mathbf{R}} &= \\frac{\\mathrm{d}}{\\mathrm{d}\\mathbf{R}}\\langle\\psi|\\hat{H}|\\psi\\rangle \\\\\n&=\\bigg\\langle\\frac{\\mathrm{d}\\psi}{\\mathrm{d}\\mathbf{R}}\\bigg|\\hat{H}\\bigg|\\psi\\bigg\\rangle + \\bigg\\langle\\psi\\bigg|\\hat{H}\\bigg|\\frac{\\mathrm{d}\\psi}{\\mathrm{d}\\mathbf{R}}\\bigg\\rangle + \\bigg\\langle\\psi\\bigg|\\frac{\\mathrm{d}\\hat{H}}{\\mathrm{d}\\mathbf{R}}\\bigg|\\psi\\bigg\\rangle \\\\\n&=\\underbrace{ E\\bigg\\langle\\frac{\\mathrm{d}\\psi}{\\mathrm{d}\\mathbf{R}}\\bigg|\\psi\\bigg\\rangle + E\\bigg\\langle\\psi\\bigg|\\frac{\\mathrm{d}\\psi}{\\mathrm{d}\\mathbf{R}}\\bigg\\rangle }_{ \\ 0 \\ for \\ complete \\ basis \\ set } + \\bigg\\langle\\psi\\bigg|\\frac{\\mathrm{d}\\hat{H}}{\\mathrm{d}\\mathbf{R}}\\bigg|\\psi\\bigg\\rangle.\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=5821113
582127
Antenna tuner
Telecommunications device An antenna tuner, a matchbox, transmatch, antenna tuning unit (ATU), antenna coupler, or feedline coupler is a device connected between a radio transmitter or receiver and its antenna to improve power transfer between them by matching the impedance of the radio to the antenna's feedline. Antenna tuners are particularly important for use with transmitters. Transmitters feed power into a resistive load, very often 50 ohms, for which the transmitter is optimally designed for power output, efficiency, and low distortion. If the load seen by the transmitter departs from this design value due to improper tuning of the antenna/feedline combination the power output will change, distortion may occur and the transmitter may overheat. ATUs are a standard part of almost all radio transmitters; they may be a circuit included inside the transmitter itself or a separate piece of equipment connected between the transmitter and the antenna. In transmitters in which the antenna is mounted separate from the transmitter and connected to it by a transmission line (feedline), there may be a second ATU (or matching network) at the antenna to match the impedance of the antenna to the transmission line. In low power transmitters with attached antennas, such as cell phones and walkie-talkies, the ATU is fixed to work with the antenna. In high power transmitters like radio stations, the ATU is adjustable to accommodate changes in the antenna or transmitter, and adjusting the ATU to match the transmitter to the antenna is an important procedure done after any changes to these components have been made. This adjustment is done with an instrument called a SWR meter. In radio receivers ATUs are not so important, because in the low frequency part of the radio spectrum the signal to noise ratio (SNR) is dominated by atmospheric noise. It does not matter if the impedance of the antenna and receiver are mismatched so some of the incoming power from the antenna is reflected and does not reach the receiver, because the signal can be amplified to make up for it. However in high frequency receivers the receiver's SNR is dominated by noise in the receiver's front end, so it is important that the receiving antenna is impedance-matched to the receiver to give maximum signal amplitude in the front end stages, to overcome noise. Overview. An antenna's impedance is different at different frequencies. An antenna tuner matches a radio with a fixed impedance (typically 50 Ohms for modern transceivers) to the "combination" of the feedline and the antenna; useful when the impedance seen at the input end of the feedline is unknown, complex, or otherwise different from the transceiver. Coupling through an ATU allows the use of one antenna on a broad range of frequencies. However, despite its name, an antenna ‘"tuner" ’ actually matches the transmitter only to the complex impedance reflected back to the input end of the feedline. If both tuner and transmission line were lossless, tuning at the transmitter end would indeed produce a match at every point in the transmitter-feedline-antenna system. However, in practical systems feedline losses limit the ability of the antenna ‘tuner’ to match the antenna or change its resonant frequency. If the loss of power is very low in the line carrying the transmitter's signal into the antenna, a tuner at the transmitter end can produce a worthwhile degree of matching and tuning for the antenna and feedline network as a whole. 
With lossy feedlines (such as commonly used 50 Ohm coaxial cable) maximum power transfer only occurs if matching is done at both ends of the line. If there is still a high SWR (multiple reflections) in the feedline beyond the ATU, any loss in the feedline is multiplied several times by the transmitted waves reflecting back and forth between the tuner and the antenna, heating the wire instead of sending out a signal. Even with a matching unit at both ends of the feedline – the near ATU matching the transmitter to the feedline and the remote ATU matching the feedline to the antenna – losses in the circuitry of the two ATUs will reduce power delivered to the antenna. Therefore, operating an antenna far from its design frequency and compensating with a transmatch between the transmitter and the feedline is not as efficient as using a resonant antenna with a matched-impedance feedline, nor as efficient as a matched feedline from the transmitter to a remote antenna tuner attached directly to the antenna. Broad band matching methods. Transformers, autotransformers, and baluns are sometimes incorporated into the design of narrow band antenna tuners and antenna cabling connections. They will all usually have little effect on the resonant frequency of either the antenna or the narrow band transmitter circuits, but can widen the range of impedances that the antenna tuner can match, and/or convert between balanced and unbalanced cabling where needed. Ferrite transformers. Solid-state power amplifiers operating from 1–30 MHz typically use one or more wideband transformers wound on ferrite cores. MOSFETs and bipolar junction transistors are designed to operate into a low impedance, so the transformer primary typically has a single turn, while the 50 Ohm secondary will have 2 to 4 turns. This feedline system design has the advantage of reducing the retuning required when the operating frequency is changed. A similar design can match an antenna to a transmission line; For example, many TV antennas have a 300 Ohm impedance and feed the signal to the TV via a 75 Ohm coaxial line. A small ferrite core transformer makes the broad band impedance transformation. This transformer does not need, nor is it capable of adjustment. For receive-only use in a TV the small SWR variation with frequency is not a major problem. It should be added that many ferrite based transformers perform a balanced to unbalanced transformation along with the impedance change. When the "bal"anced to "un"balanced function is present these transformers are called a balun (otherwise an unun). The most common baluns have either a 1:1 or a 1:4 "impedance" transformation. Autotransformers. There are several designs for impedance matching using an autotransformer, which is a single-wire transformer with different connection points or "taps" spaced along the windings. They are distinguished mainly by their "impedance" transform ratio (1:1, 1:4, 1:9, etc., the square of the winding ratio), and whether the input and output sides share a common ground, or are matched from a cable that is grounded on one side (unbalanced) to an ungrounded (usually balanced) cable. When autotransformers connect "bal"anced and "un"balanced lines they are called baluns, just as two-winding transformers. When two differently-grounded cables or circuits must be connected but the grounds kept independent, a full, two-winding transformer with the desired ratio is used instead. 
The circuit pictured at the right has three identical windings wrapped in the same direction around either an "air" core (for very high frequencies) or ferrite core (for middle, or low frequencies). The three equal windings shown are wired for a common ground shared by two unbalanced lines (so this design is called an unun), and can be used as 1:1, 1:4, or 1:9 impedance match, depending on the tap chosen. (The same windings could be connected differently to make a balun instead.) For example, if the right-hand side is connected to a resistive load of 10 Ohms, the user can attach a source at any of the three ungrounded terminals on the left side of the autotransformer to get a different impedance. Notice that on the left side, the line with more windings measures greater impedance for the same 10 Ohm load on the right. Narrow band design. The "narrow-band" methods described below cover a very much smaller span of frequencies, by comparison with the broadband methods described above. Antenna matching methods that use transformers tend to cover a wide range of frequencies. A single, typical, commercially available balun can cover frequencies from 3.5–30.0 MHz, or nearly the entire shortwave radio band. Matching to an antenna using a cut segment of transmission line (described below) is perhaps the most efficient of all matching schemes in terms of electrical power, but typically can only cover a range about 3.5–3.7 MHz wide – a very small range indeed, compared to a broadband balun. Antenna coupling or feedline matching circuits are also narrowband for any single setting, but can be re-tuned more conveniently. However they are perhaps the least efficient in terms of power-loss (aside from having no impedance matching at all!). Transmission line antenna tuning methods. The insertion of a special section of transmission line, whose characteristic impedance differs from that of the main line, can be used to match the main line to the antenna. An inserted line with the proper impedance and connected at the proper location can perform complicated matching effects with very high efficiency, but spans a very limited frequency range. The simplest example this method is the quarter-wave impedance transformer formed by a section of mismatched transmission line. If a quarter-wavelength of 75 Ohm coaxial cable is linked to a 50 Ohm load, the SWR in the 75 Ohm quarter wavelength of line can be calculated as 75Ω / 50Ω = 1.5; the quarter-wavelength of line transforms the mismatched impedance to 112.5 Ohms (75 Ohms × 1.5 = 112.5 Ohms). Thus this inserted section matches a 112 Ohm antenna to a 50 Ohm main line. The &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄6 wavelength coaxial transformer is a useful way to match 50 to 75 Ohms using the same general method. The theoretical basis is discussion by the inventor, and wider application of the method is found here: Branham, P. (1959). "A Convenient Transformer for matching Co-axial lines". Geneva: CERN. A second common method is the use of a stub: A shorted, or open section of line is connected in parallel with the main line. With coax this is done using a ‘T’-connector. The length of the stub and its location can be chosen so as to produce a matched line below the stub, regardless of the complex impedance or SWR of the antenna itself. The J-pole antenna is an example of an antenna with a built-in stub match. Basic lumped circuit matching using the L network. The basic circuit required when lumped capacitances and inductors are used is shown below. 
This circuit is important in that many automatic antenna tuners use it, and also because more complex circuits can be analyzed as groups of L-networks. This is called an L network not because it contains an inductor, (in fact some L-networks consist of two capacitors), but because the two components are at right angles to each other, having the shape of a rotated and sometimes reversed English letter ‘L’. The ‘T’ (“Tee”) network and the π (“Pi”) network also have a shape similar to the English and Greek letters they are named after. This basic network is able to act as an impedance transformer. If the output has an impedance consisting of resistance "R"load and reactance "j" "X"load, while the input is to be attached to a source which has an impedance of "R"source resistance and "j" "X"source reactance, then formula_0 and formula_1. In this example circuit, "X"L and "X"C can be swapped. All the ATU circuits below create this network, which exists between systems with different impedances. For instance, if the source has a resistive impedance of 50 Ω and the load has a resistive impedance of 1000 Ω : formula_2 formula_3 If the frequency is 28 MHz, As, formula_4 then, formula_5 So, formula_6 While as, formula_7 then, formula_8 Theory and practice. A parallel network, consisting of a resistive element (1000 Ω) and a reactive element (−"j" 229.415 Ω), will have the same impedance and power factor as a series network consisting of resistive (50 Ω) and reactive elements (−"j" 217.94 Ω). By adding another element in series (which has a reactive impedance of +"j" 217.94 Ω), the impedance is 50 Ω (resistive). Types of L networks and their use. The L-network can have eight different configurations, six of which are shown here. The two missing configurations are the same as the bottom row, but with the parallel element (wires vertical) on the right side of the series element (wires horizontal), instead of on the left, as shown. In discussion of the diagrams that follows the in connector comes from the transmitter or "source"; the out connector goes to the antenna or "load". The general rule (with some exceptions, described below) is that the series element of an "L"-network goes on the side with the lowest impedance. So for example, the three circuits in the left column and the two in the bottom row have the series (horizontal) element on the out side are generally used for stepping up from a low-impedance input (transmitter) to a high-impedance output (antenna), similar to the example analyzed in the section above. The top two circuits in the right column, with the series (horizontal) element on the in side, are generally useful for stepping down from a higher input to a lower output impedance. The general rule only applies to loads that are mainly resistive, with very little reactance. In cases where the load is highly reactive – such as an antenna fed with a signals whose frequency is far away from any resonance – the opposite configuration may be required. If far from resonance, the bottom two step down (high-in to low-out) circuits would instead be used to connect for a step up (low-in to high-out that is mostly reactance). The low- and high-pass versions of the four circuits shown in the top two rows use only one inductor and one capacitor. 
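The 50 Ω-to-1000 Ω example worked through above can be reproduced numerically. The Python sketch below is an illustrative check of the purely resistive special case (source and load reactances zero); the variable names are mine, and the 28 MHz frequency is the one used in the text.

import math

R_source, R_load = 50.0, 1000.0        # purely resistive source and load, ohms
f = 28e6                               # operating frequency, Hz

X_L = math.sqrt(R_source * (R_load - R_source))            # series (inductive) reactance
X_C = R_load * math.sqrt(R_source / (R_load - R_source))   # shunt (capacitive) reactance

L = X_L / (2 * math.pi * f)            # series inductance, henries
C = 1 / (2 * math.pi * f * X_C)        # shunt capacitance, farads

print(f"X_L = {X_L:.2f} ohms, X_C = {X_C:.2f} ohms")    # about 217.94 and 229.42 ohms
print(f"L = {L * 1e6:.3f} uH,  C = {C * 1e12:.2f} pF")  # about 1.239 uH and 24.78 pF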
Normally, the low-pass would be preferred with a transmitter, in order to attenuate harmonics, but the high-pass configuration may be chosen if the components are more conveniently obtained, or if the radio already contains an internal low-pass filter, or if attenuation of low frequencies is desirable – for example when a local AM station broadcasting on a medium frequency may be overloading a high frequency receiver. The Low "R", high "C" circuit is shown feeding a short vertical antenna, such as would be the case for a compact, mobile antenna or otherwise on frequencies below an antenna's lowest natural resonant frequency. Here the inherent capacitance of a short, random wire antenna is so high that the L-network is best realized with two inductors, instead of aggravating the problem by using a capacitor. The Low "R", high "L" circuit is shown feeding a small loop antenna. Below resonance this type of antenna has so much inductance, that more inductance from adding a coil would make the reactance even worse. Therefore, the L-network is composed of two capacitors. An L-network is the simplest circuit that will achieve the desired transformation; for any one given antenna and frequency, once a circuit is selected from the eight possible configurations (of which six are shown above) only one set of component values will match the in impedance to the out impedance. In contrast, the circuits described below all have three or more components, and hence have many more choices for inductance and capacitance that will produce an impedance match. The radio operator must experiment, test, and use judgement to choose among the many adjustments that produce the same impedance match. Antenna system losses. Loss in Antenna tuners. Every means of impedance match will introduce some power loss. This will vary from a few percent for a transformer with a ferrite core, to 50% or more for a complex ATU that is improperly tuned or working at the limits of its tuning range. With the narrow band tuners, the L-network has the lowest loss, partly because it has the fewest components, but mainly because it necessarily operates at the lowest formula_9 possible for a given impedance transformation. With the L-network, the loaded formula_9 is not adjustable, but is fixed midway between the source and load impedances. Since most of the loss in practical tuners will be in the coil, choosing either the low-pass or high-pass network may reduce the loss somewhat. The L-network using only capacitors will have the lowest loss, but this network only works where the load impedance is very inductive, making it a good choice for a small loop antenna. Inductive impedance also occurs with straight-wire antennas used at frequencies slightly above a resonant frequency, where the antenna is too long – for example, between a quarter and a half wave long at the operating frequency. However, problematic straight-wire antennas are typically too short for the frequency in use. With the high-pass T-network, the loss in the tuner can vary from a few percent – if tuned for lowest loss – to over 50% if the tuner is not properly adjusted. Using the maximum available capacitance will give less loss, than if one simply tunes for a match without regard for the settings. This is because using more capacitance means using fewer inductor turns, and the loss is mainly in the inductor. 
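The remark above that the L-network necessarily operates at the lowest possible loaded Q can be made concrete with the usual rule of thumb Q = √(Rhigh/Rlow − 1), together with the approximation that the fraction of power dissipated in the coil is roughly Qloaded/Qunloaded. The Python sketch below is a back-of-the-envelope illustration only; the unloaded coil Q is an assumed value, not a figure from the article.

import math

def l_network_loaded_q(r_high, r_low):
    """Loaded Q forced on an L-network that matches r_low to r_high (rule of thumb)."""
    return math.sqrt(r_high / r_low - 1)

q_loaded = l_network_loaded_q(1000.0, 50.0)   # the 50-to-1000 ohm example above
q_unloaded = 200.0                            # assumed unloaded Q of a decent coil
loss = q_loaded / q_unloaded                  # approximate fraction of power lost in the coil
print(f"loaded Q ~ {q_loaded:.2f}, coil loss ~ {100 * loss:.1f}%")   # ~4.36 and ~2.2%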
With the SPC tuner the losses will be somewhat higher than with the T-network, since the added capacitance across the inductor will shunt some reactive current to ground which must be cancelled by additional current in the inductor. The trade-off is that the effective inductance of the coil is increased, thus allowing operation at lower frequencies than would otherwise be possible. If additional filtering is desired, the inductor can be deliberately set to larger values, thus providing a partial band pass effect. Either the high-pass T, low-pass π, or the SPC tuner can be adjusted in this manner. The additional attenuation at harmonic frequencies can be increased significantly with only a small percentage of additional loss at the tuned frequency. When adjusted for minimum loss, the SPC tuner will have better harmonic rejection than the high-pass T due to its internal tank circuit. Either type is capable of good harmonic rejection if a small additional loss is acceptable. The low-pass π has exceptional harmonic attenuation at "any" setting, including the lowest-loss. ATU location. An ATU will be inserted somewhere along the line connecting the radio transmitter or receiver to the antenna. The antenna feedpoint is usually high in the air (for example, a dipole antenna) or far away (for example, an end-fed random wire antenna). A transmission line, or feedline, must carry the signal between the transmitter and the antenna. The ATU can be placed anywhere along the feedline: at the transmitter, at the antenna, or somewhere in between. Antenna tuning is best done as close to the antenna as possible to minimize loss, increase bandwidth, and reduce voltage and current on the transmission line. Also, when the information being transmitted has frequency components whose wavelength is a significant fraction of the electrical length of the feed line, distortion of the transmitted information will occur if there are standing waves on the line. Analog TV and FM stereo broadcasts are affected in this way. For those modes, matching at the antenna is required. When possible, an automatic or remotely-controlled tuner in a weather-proof case at or near the antenna is convenient and makes for an efficient system. With such a tuner, it is possible to match a wide range of antennas (including stealth antennas). When the ATU must be located near the radio for convenient adjustment, any significant SWR will increase the loss in the feedline. For that reason, when using an ATU at the transmitter, low-loss, high-impedance feedline is a great advantage (open-wire line, for example). A short length of low-loss coaxial line is acceptable, but with longer lossy lines the additional loss due to SWR becomes very high. It is very important to remember that when matching the transmitter to the line, as is done when the ATU is near the transmitter, there is no change in the SWR in the feedline. The backlash currents reflected from the antenna are retro-reflected by the ATU – usually several times between the two – and so are invisible on the transmitter-side of the ATU. The result of the multiple reflections is compounded loss, higher voltage or higher currents, and narrowed bandwidth, none of which can be corrected by the ATU. Standing wave ratio. It is a common misconception that a high standing wave ratio (SWR) "per se" causes loss. 
A well-adjusted ATU feeding an antenna through a low-loss line may have only a small percentage of additional loss compared with an intrinsically matched antenna, even with a high SWR (4:1, for example). An ATU sitting beside the transmitter just re-reflects energy reflected from the antenna (“backlash current”) back yet again along the feedline to the antenna (“retro-reflection”). High losses arise from RF resistance in the feedline and antenna, and those multiple reflections due to high SWR cause feedline losses to be compounded. Using low-loss, high-impedance feedline with an ATU results in very little loss, even with multiple reflections. However, if the feedline-antenna combination is ‘lossy’ then an identical high SWR may lose a considerable fraction of the transmitter's power output. High impedance lines – such as most parallel-wire lines – carry power mostly as high voltage rather than high current, and current alone determines the power lost to line resistance. So despite high SWR, very little power is lost in high-impedance line compared low-impedance line – typical coaxial cable, for example. For that reason, radio operators can be more casual about using tuners with high-impedance feedline. Without an ATU, the SWR from a mismatched antenna and feedline can present an improper load to the transmitter, causing distortion and loss of power or efficiency with heating and/or burning of the output stage components. Modern solid state transmitters will automatically reduce power when high SWR is detected, so some solid-state power stages only produce weak signals if the SWR rises above 1.5 to 1. Were it not for that problem, even the losses from an SWR of 2:1 could be tolerated, since only 11 percent of transmitted power would be reflected and 89 percent sent out through to the antenna. So the main loss of output power with high SWR is due to the transmitter "backing off" its output when challenged with backlash current. Tube transmitters and amplifiers usually have an adjustable output network that can feed mismatched loads up to perhaps 3:1 SWR without trouble. In effect the built-in π-network of the transmitter output stage acts as an ATU. Further, since tubes are electrically robust (even though mechanically fragile), tube-based circuits can tolerate very high backlash current without damage. Broadcast Applications. AM broadcast transmitters. One of the oldest applications for antenna tuners is in AM and shortwave broadcasting transmitters. AM transmitters usually use a vertical antenna (tower) which can be from 0.20 to 0.68 wavelengths long. At the base of the tower an ATU is used to match the antenna to the 50 Ohm transmission line from the transmitter. The most commonly used circuit is a T-network, using two series inductors with a shunt capacitor between them. When multiple towers are used the ATU network may also provide for a phase adjustment so that the currents in each tower can be phased relative to the others to produce a desired pattern. These patterns are often required by law to include nulls in directions that could produce interference as well as to increase the signal in the target area. Adjustment of the ATUs in a multitower array is a complex and time consuming process requiring considerable expertise. High-power shortwave transmitters. For International Shortwave (50 kW and above), frequent antenna tuning is done as part of frequency changes which may be required on a seasonal or even a daily basis. 
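As a numerical footnote to the standing-wave-ratio discussion above, the figure of 11 percent reflected power at an SWR of 2:1 follows from the standard relation between SWR and the reflection coefficient; the short Python check below is illustrative only.

def reflected_fraction(swr):
    """Fraction of forward power reflected at a mismatch presenting the given SWR."""
    gamma = (swr - 1) / (swr + 1)     # magnitude of the voltage reflection coefficient
    return gamma ** 2

for swr in (1.5, 2.0, 3.0):
    r = reflected_fraction(swr)
    print(f"SWR {swr}:1 -> {100 * r:.1f}% reflected, {100 * (1 - r):.1f}% forward")
# SWR 2:1 gives about 11.1% reflected and 88.9% forward, as stated above.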
Modern shortwave transmitters typically include built-in impedance-matching circuitry for SWR up to 2:1 , and can adjust their output impedance within 15 seconds. The matching networks in transmitters sometimes incorporate a balun or an external one can be installed at the transmitter in order to feed a balanced line. Balanced transmission lines of 300 Ohms or more were more-or-less standard for all shortwave transmitters and antennas in the past, even by amateurs. Most shortwave broadcasters have continued to use high-impedance feeds even before the advent of automatic impedance matching. The most commonly used shortwave antennas for international broadcasting are the HRS antenna (curtain array), which cover a 2 to 1 frequency range and the log-periodic antenna which cover up to 8 to 1 frequency range. Within that range, the SWR will vary, but is usually kept below 1.7 to 1 – within the range of SWR that can be tuned by antenna matching built-into many modern transmitters. Hence, when feeding these antennas, a modern transmitter will be able to tune itself as needed to match at any frequency. Automatic antenna tuning. Automatic antenna tuning is used in flagship mobile phones, transceivers for amateur radio, and in land mobile, marine, and tactical HF radio transceivers. Each antenna tuning system (AT) shown in the figure has an "antenna port", which is directly or indirectly coupled to an antenna, and another port, referred to as "radio port" (or as "user port"), for transmitting and / or receiving radio signals through the AT and the antenna. Each AT shown in the figure has a single antenna-port, (SAP) AT, but a multiple antenna-port (MAP) AT may be needed for MIMO radio transmission. Several control schemes can be used in a radio transceiver or transmitter to automatically adjust an antenna tuner (AT). The control schemes are based on one of the two configurations, (a) and (b), shown in the diagram. For both configurations, the transmitter comprises: The TSPU incorporates all the parts of the transmitting not otherwise shown in the diagram. The TX port of the TSPU delivers a test signal. The SU delivers, to the TSPU, one or more output signals indicating the response to the test signal, one or more electrical variables (such as voltage, current, incident or forward voltage, etc.). The response sensed at the "radio port" in the case of configuration (a) or at the "antenna port" in the case of configuration (b). Note that neither configuration (a) nor (b) is ideal, since the line between the antenna and the AT attenuates SWR; response to a test signal is most accurately tested at or near the antenna feedpoint. Broydé &amp; Clavelier (2020) distinguish five types of antenna tuner control schemes, as follows: The control schemes may be compared as regards: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "X_\\text{L} = \\sqrt{\\Big(R_\\text{source}+jX_\\text{source}\\Big)\\Big((R_\\text{source}+jX_\\text{source})-(R_\\text{load}+jX_\\text{load})\\Big)}" }, { "math_id": 1, "text": "X_\\text{C} = (R_\\text{load}+jX_\\text{load})\\sqrt{\\frac{(R_\\text{source}+jX_\\text{source})}{(R_\\text{load}+jX_\\text{load})-(R_\\text{source}+jX_\\text{source})}}" }, { "math_id": 2, "text": "X_\\text{L} = \\sqrt{(50)(50-1000)} = \\sqrt{(-47500)}= j\\, 217.94\\ \\text{Ohms}" }, { "math_id": 3, "text": "X_\\text{C} = 1000 \\sqrt{\\frac{50}{(1000-50)}} = 1000\\,\\times\\,0.2294\\ \\text{Ohms} = 229.4\\ \\text{Ohms}" }, { "math_id": 4, "text": "X_\\text{C} = \\frac{1}{2\\pi fC}" }, { "math_id": 5, "text": "2\\pi fX_\\text{C} = \\frac{1}{C}" }, { "math_id": 6, "text": "\\frac{1}{2\\pi fX_\\text{C}} = C = 24.78\\ p \\text{F}" }, { "math_id": 7, "text": "X_\\text{L} = 2\\pi fL\\!" }, { "math_id": 8, "text": " L = \\frac{X_\\text{L}}{2\\pi f} = 1.239\\ \\mu \\text{H}" }, { "math_id": 9, "text": "Q" } ]
https://en.wikipedia.org/wiki?curid=582127
58213939
Shooting and bouncing rays
The shooting and bouncing rays (SBR) method in computational electromagnetics was first developed for the computation of radar cross section (RCS). Since then, the method has been generalized to be used also for installed antenna performance. The SBR method is an approximate method that applies at high frequencies. The method can be implemented for GPU computing, which makes the computation very efficient. Theory. The first step in the SBR method is to use geometrical optics (GO, ray tracing) to compute equivalent currents, either on metallic structures or on an exit aperture. The scattered field is thereafter computed by integrating these currents using physical optics (PO), via Kirchhoff's diffraction formula. The current formula_0 on a perfect electrical conductor (PEC) is related to the incident magnetic field formula_1 by formula_2. This approximation holds best for short wavelengths, and it assumes that the radius of curvature of the scatterer is large compared to the wavelength. Extending SBR for edge diffraction. Since the approximation described above assumes that the radius of curvature is large compared to the wavelength, the diffraction from edges needs to be handled separately. The SBR method can be extended with the physical theory of diffraction (PTD) in order to include edge diffraction in the model. Implementation in commercial software. The SBR method is implemented in the following commercial codes: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
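The single physical-optics step formula_2 from the Theory section above can be evaluated directly once the surface normal and the incident magnetic field at a ray hit point are known. The numpy sketch below is only an illustration of that step, not of a complete SBR ray tracer, and the field values are invented.

import numpy as np

def po_surface_current(n_hat, h_incident):
    """Physical-optics equivalent current J = 2 n x H_i on a PEC surface."""
    return 2.0 * np.cross(n_hat, h_incident)

# A flat PEC plate with its normal along +z, with the incident magnetic field
# at the hit point taken along +x (arbitrary illustrative values, in A/m):
n_hat = np.array([0.0, 0.0, 1.0])
h_incident = np.array([1.0, 0.0, 0.0])
print(po_surface_current(n_hat, h_incident))    # -> [0. 2. 0.], a current along +y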
[ { "math_id": 0, "text": "\\vec{J}" }, { "math_id": 1, "text": "\\vec{H}_i " }, { "math_id": 2, "text": " \\vec{J} = 2\\hat{n}\\times\\vec{H}_i " } ]
https://en.wikipedia.org/wiki?curid=58213939
5821699
Herbrandization
Proof of Herbrand's theorem The Herbrandization of a logical formula (named after Jacques Herbrand) is a construction that is dual to the Skolemization of a formula. Thoralf Skolem had considered the Skolemizations of formulas in prenex form as part of his proof of the Löwenheim–Skolem theorem (Skolem 1920). Herbrand worked with this dual notion of Herbrandization, generalized to apply to non-prenex formulas as well, in order to prove Herbrand's theorem (Herbrand 1930). The resulting formula is not necessarily equivalent to the original one. Whereas Skolemization only preserves satisfiability, Herbrandization, being Skolemization's dual, preserves validity: the resulting formula is valid if and only if the original one is. Definition and examples. Let formula_0 be a formula in the language of first-order logic. We may assume that formula_0 contains no variable that is bound by two different quantifier occurrences, and that no variable occurs both bound and free. (That is, formula_0 could be relettered to ensure these conditions, in such a way that the result is an equivalent formula.) The "Herbrandization" of formula_0 is then obtained as follows: first, all free variables of formula_0 are replaced by constant symbols; second, all quantifiers on variables that are either (1) universally quantified and within an even number of negations, or (2) existentially quantified and within an odd number of negations, are deleted; finally, each such variable formula_1 is replaced with a function symbol formula_2, where formula_3 are the variables that are still quantified, and whose quantifiers govern formula_1. For instance, consider the formula formula_4. There are no free variables to replace. The variables formula_5 are the kind we consider for the second step, so we delete the quantifiers formula_6 and formula_7. Finally, we then replace formula_8 with a constant formula_9 (since there were no other quantifiers governing formula_8), and we replace formula_10 with a function symbol formula_11: formula_12 The "Skolemization" of a formula is obtained similarly, except that in the second step above, we would delete quantifiers on variables that are either (1) existentially quantified and within an even number of negations, or (2) universally quantified and within an odd number of negations. Thus, considering the same formula_0 from above, its Skolemization would be: formula_13 To understand the significance of these constructions, see Herbrand's theorem or the Löwenheim–Skolem theorem. References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
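The three steps can be watched in action on a small abstract-syntax-tree version of the example above. The Python sketch below is my own illustration rather than anything from the article: it handles only ∀, ∃, ¬ and ∧, and it assumes the formula is already relettered and has no free variables, so the first step is a no-op here.

def substitute(f, var, term):
    """Replace every occurrence of variable `var` in the formula/term f by `term`."""
    if isinstance(f, str):
        return term if f == var else f
    return tuple(substitute(part, var, term) for part in f)

def herbrandize(f, negations=0, weak=()):
    """Delete the quantifiers targeted by Herbrandization and replace their
    variables by constants or by functions of the governing kept variables."""
    op = f[0]
    if op == 'not':
        return ('not', herbrandize(f[1], negations + 1, weak))
    if op == 'and':
        return ('and', herbrandize(f[1], negations, weak), herbrandize(f[2], negations, weak))
    if op in ('forall', 'exists'):
        _, v, body = f
        removed = (op == 'forall') == (negations % 2 == 0)   # step two, as described above
        if removed:
            term = ('f_' + v, weak) if weak else 'c_' + v    # step three
            return herbrandize(substitute(body, v, term), negations, weak)
        return (op, v, herbrandize(body, negations, weak + (v,)))
    return f                                                 # atomic predicate: unchanged

# forall y . exists x . ( R(y, x)  and  not exists z . S(x, z) )
F = ('forall', 'y', ('exists', 'x',
        ('and', ('R', 'y', 'x'),
                ('not', ('exists', 'z', ('S', 'x', 'z'))))))
print(herbrandize(F))
# ('exists', 'x', ('and', ('R', 'c_y', 'x'), ('not', ('S', 'x', ('f_z', ('x',))))))

Flipping the equality in the `removed` test to an inequality turns the same sketch into Skolemization, which is exactly the duality described above.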
[ { "math_id": 0, "text": "F" }, { "math_id": 1, "text": "v" }, { "math_id": 2, "text": "f_v(x_1,\\dots,x_k)" }, { "math_id": 3, "text": "x_1,\\dots,x_k" }, { "math_id": 4, "text": "F := \\forall y \\exists x [R(y,x) \\wedge \\neg\\exists z S(x,z)]" }, { "math_id": 5, "text": "y,z" }, { "math_id": 6, "text": "\\forall y" }, { "math_id": 7, "text": "\\exists z" }, { "math_id": 8, "text": "y" }, { "math_id": 9, "text": "c_y" }, { "math_id": 10, "text": "z" }, { "math_id": 11, "text": "f_z(x)" }, { "math_id": 12, "text": " F^H = \\exists x [R(c_y,x) \\wedge \\neg S(x,f_z(x))]. " }, { "math_id": 13, "text": " F^S = \\forall y [R(y,f_x(y)) \\wedge \\neg\\exists z S(f_x(y),z)]. " } ]
https://en.wikipedia.org/wiki?curid=5821699
582228
Rabi cycle
Quantum mechanical phenomenon In physics, the Rabi cycle (or Rabi flop) is the cyclic behaviour of a two-level quantum system in the presence of an oscillatory driving field. A great variety of physical processes belonging to the areas of quantum computing, condensed matter, atomic and molecular physics, and nuclear and particle physics can be conveniently studied in terms of two-level quantum mechanical systems, and exhibit Rabi flopping when coupled to an optical driving field. The effect is important in quantum optics, magnetic resonance and quantum computing, and is named after Isidor Isaac Rabi. A two-level system is one that has two possible energy levels. These two levels are a ground state with lower energy and an excited state with higher energy. If the energy levels are not degenerate (i.e. not having equal energies), the system can absorb a quantum of energy and transition from the ground state to the "excited" state. When an atom (or some other two-level system) is illuminated by a coherent beam of photons, it will cyclically absorb photons and re-emit them by stimulated emission. One such cycle is called a Rabi cycle, and the inverse of its duration is the Rabi frequency of the system. The effect can be modeled using the Jaynes–Cummings model and the Bloch vector formalism. Mathematical description. A detailed mathematical description of the effect can be found on the page for the Rabi problem. For example, for a two-state atom (an atom in which an electron can either be in the excited or ground state) in an electromagnetic field with frequency tuned to the excitation energy, the probability of finding the atom in the excited state is found from the Bloch equations to be formula_1 where formula_2 is the Rabi frequency. More generally, one can consider a system where the two levels under consideration are not energy eigenstates. Therefore, if the system is initialized in one of these levels, time evolution will make the population of each of the levels oscillate with some characteristic frequency, whose angular frequency is also known as the Rabi frequency. The state of a two-state quantum system can be represented as vectors of a two-dimensional complex Hilbert space, which means that every state vector formula_3 is represented by complex coordinates: formula_4 where formula_5 and formula_6 are the coordinates. If the vectors are normalized, formula_5 and formula_6 are related by formula_7. The basis vectors will be represented as formula_8 and formula_9. All observable physical quantities associated with this system are 2 × 2 Hermitian matrices, which means that the Hamiltonian of the system is also a similar matrix. Derivations. One can construct an oscillation experiment through the following steps: first, prepare the system in a fixed state, for example formula_0; second, let the state evolve freely under a Hamiltonian H for a time t; finally, find the probability formula_10 that the state is still in formula_0. If formula_0 is an eigenstate of H, formula_11 and there will be no oscillations. Also if the two states formula_12 and formula_0 are degenerate, every state including formula_0 is an eigenstate of H. As a result, there will be no oscillations. On the other hand, if H has no degenerate eigenstates, and the initial state is not an eigenstate, then there will be oscillations. The most general form of the Hamiltonian of a two-state system is given by formula_13 Here, formula_14 and formula_15 are real numbers. This matrix can be decomposed as formula_16 The matrix formula_17 is the 2 formula_18 2 identity matrix and the matrices formula_19 are the Pauli matrices.
This decomposition simplifies the analysis of the system especially in the time-independent case where the values of formula_20 and formula_15are constants. Consider the case of a spin-1/2 particle in a magnetic field formula_21. The interaction Hamiltonian for this system is formula_22, formula_23 where formula_24 is the magnitude of the particle's magnetic moment, formula_25 is the Gyromagnetic ratio and formula_26 is the vector of Pauli matrices. Here the eigenstates of Hamiltonian are eigenstates of formula_27, that is formula_12 and formula_0, with corresponding eigenvalues of formula_28. The probability that a system in the state formula_29 can be found in the arbitrary state formula_30 is given by formula_31. Let the system be prepared in state formula_32 at time formula_33. Note that formula_32 is an eigenstate of formula_34: formula_35 Here the Hamiltonian is time independent. Thus by solving the stationary Schrödinger equation, the state after time t is given by formula_36 with total energy of the system formula_37. So the state after time t is given by: formula_38. Now suppose the spin is measured in x-direction at time t. The probability of finding spin-up is given by:formula_39where formula_2 is a characteristic angular frequency given by formula_40, where it has been assumed that formula_41. So in this case the probability of finding spin-up in x-direction is oscillatory in time formula_42 when the system's spin is initially in the formula_32 direction. Similarly, if we measure the spin in the formula_43-direction, the probability of measuring spin as formula_44 of the system is formula_45. In the degenerate case where formula_46, the characteristic frequency is 0 and there is no oscillation. Notice that if a system is in an eigenstate of a given Hamiltonian, the system remains in that state. This is true even for time dependent Hamiltonians. Taking for example formula_47; if the system's initial spin state is formula_48, then the probability that a measurement of the spin in the y-direction results in formula_49 at time formula_42 is formula_50. By Pauli matrices. Consider a Hamiltonian of the formformula_51The eigenvalues of this matrix are given byformula_52where formula_53 and formula_54, so we can take formula_55. Now, eigenvectors for formula_56 can be found from equationformula_57Soformula_58Applying the normalization condition on the eigenvectors, formula_59. Soformula_60Let formula_61 and formula_62. So formula_63. So we get formula_64. That is formula_65, using the identity formula_66. The phase of formula_67 relative to formula_68 should be formula_69. Choosing formula_67 to be real, the eigenvector for the eigenvalue formula_56 is given byformula_70Similarly, the eigenvector for eigenenergy formula_71 isformula_72From these two equations, we can writeformula_73Suppose the system starts in state formula_74 at time formula_75; that is,formula_76For a time-independent Hamiltonian, after time "t", the state evolves asformula_77If the system is in one of the eigenstates formula_78 or formula_79, it will remain the same state. However, for a time-dependent Hamiltonian and a general initial state as shown above, the time evolution is non trivial. The resulting formula for the Rabi oscillation is valid because the state of the spin may be viewed in a reference frame that rotates along with the field. The probability amplitude of finding the system at time t in the state formula_80 is given by formula_81. 
Now the probability that a system in the state formula_82 will be found to be in the state formula_83 is given by formula_84 This can be simplified to P₀→₁(t) = sin²(θ) sin²((E₊ − E₋)t/2ħ). (1) This shows that there is a finite probability of finding the system in state formula_80 when the system is originally in the state formula_74. The probability is oscillatory with angular frequency formula_85, which is simply the unique Bohr frequency of the system and is also called the Rabi frequency. Formula (1) is known as the Rabi formula. Now, after time t, the probability that the system is in state formula_74 is given by formula_86, which is also oscillatory. These types of oscillations of two-level systems are called Rabi oscillations, which arise in many problems such as neutrino oscillation, the ionized hydrogen molecule, quantum computing, the ammonia maser, etc. Applications. The Rabi effect is important in quantum optics, magnetic resonance and quantum computing. Quantum computing. Any two-state quantum system can be used to model a qubit. Consider a spin-formula_87 system with magnetic moment formula_88 placed in a classical magnetic field formula_89. Let formula_90 be the gyromagnetic ratio for the system. The magnetic moment is thus formula_91. The Hamiltonian of this system is then given by formula_92 where formula_93 and formula_94. One can find the eigenvalues and eigenvectors of this Hamiltonian by the above-mentioned procedure. Now, let the qubit be in state formula_95 at time formula_96. Then, at time formula_97, the probability of it being found in state formula_80 is given by formula_98 where formula_99. This phenomenon is called Rabi oscillation. Thus, the qubit oscillates between the formula_74 and formula_80 states. The maximum amplitude for oscillation is achieved at formula_100, which is the condition for resonance. At resonance, the transition probability is given by formula_101. To go from state formula_74 to state formula_80, it is sufficient to adjust the time formula_97 during which the rotating field acts such that formula_102 or formula_103. This is called a formula_104 pulse. If a time intermediate between 0 and formula_105 is chosen, we obtain a superposition of formula_74 and formula_80. In particular for formula_106, we have a formula_107 pulse, which acts as: formula_108. This operation has crucial importance in quantum computing. The equations are essentially identical in the case of a two-level atom in the field of a laser when the generally well-satisfied rotating wave approximation is made. Then formula_109 is the energy difference between the two atomic levels, formula_2 is the frequency of the laser wave, and the Rabi frequency formula_110 is proportional to the product of the transition electric dipole moment of the atom formula_111 and the electric field formula_112 of the laser wave; that is, formula_113. In summary, Rabi oscillations are the basic process used to manipulate qubits. These oscillations are obtained by exposing qubits to periodic electric or magnetic fields during suitably adjusted time intervals. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
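As a quick numerical companion to the qubit-drive formula and the π-pulse condition above, the short Python sketch below evaluates the transition probability; it is an illustration only, the symbols follow the text (ω₁ the drive strength, ω − ω₀ the detuning), and the 1 MHz figure is an assumed value.

import math

def p_transition(t, omega_1, detuning):
    """Probability of |0> -> |1> under a drive of strength omega_1 at the given detuning."""
    big_omega = math.hypot(detuning, omega_1)          # generalized Rabi frequency
    return (omega_1 / big_omega) ** 2 * math.sin(big_omega * t / 2) ** 2

omega_1 = 2 * math.pi * 1e6          # assumed drive strength, rad/s

t_pi = math.pi / omega_1             # the pi-pulse duration from the text
print(p_transition(t_pi, omega_1, detuning=0.0))        # -> 1.0   (full transfer)
print(p_transition(t_pi / 2, omega_1, detuning=0.0))    # -> 0.5   (pi/2 pulse)

# Off resonance the oscillation is faster but the transfer is never complete:
print(max(p_transition(k * t_pi / 100, omega_1, omega_1) for k in range(200)))  # ~0.5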
[ { "math_id": 0, "text": "|1\\rangle" }, { "math_id": 1, "text": "|c_b(t)|^2 \\propto \\sin^2(\\omega t/2)," }, { "math_id": 2, "text": "\\omega" }, { "math_id": 3, "text": "|\\psi\\rangle" }, { "math_id": 4, "text": "|\\psi\\rangle = \\begin{pmatrix} c_1 \\\\ c_2 \\end{pmatrix} = c_1 \\begin{pmatrix} 1 \\\\ 0 \\end{pmatrix} + c_2 \\begin{pmatrix} 0 \\\\ 1 \\end{pmatrix}," }, { "math_id": 5, "text": "c_1" }, { "math_id": 6, "text": "c_2" }, { "math_id": 7, "text": "|c_1|^2 + |c_2|^2 = 1" }, { "math_id": 8, "text": "|0\\rangle = \\begin{pmatrix} 1 \\\\ 0 \\end{pmatrix}" }, { "math_id": 9, "text": "|1\\rangle = \\begin{pmatrix} 0 \\\\ 1 \\end{pmatrix}" }, { "math_id": 10, "text": "P(t)" }, { "math_id": 11, "text": "P(t)=1" }, { "math_id": 12, "text": "|0\\rangle" }, { "math_id": 13, "text": " \\mathbf{H} = \\begin{pmatrix} a_0+a_3 & a_1-ia_2\\\\ a_1+ia_2 & a_0-a_3\\end{pmatrix}" }, { "math_id": 14, "text": " a_0,a_1, a_2 " }, { "math_id": 15, "text": "a_3" }, { "math_id": 16, "text": " \\mathbf{H} = a_0\\cdot\\sigma_0 + a_1\\cdot\\sigma_1 + a_2\\cdot\\sigma_2 + a_3\\cdot\\sigma_3 ;" }, { "math_id": 17, "text": "\\sigma_0" }, { "math_id": 18, "text": "\\times" }, { "math_id": 19, "text": " \\sigma_k \\; (k = 1,2,3)" }, { "math_id": 20, "text": " a_0,a_1,a_2" }, { "math_id": 21, "text": "\\mathbf{B} = B\\mathbf{\\hat z}" }, { "math_id": 22, "text": " \\mathbf{H}=-\\boldsymbol{\\mu}\\cdot\\mathbf{B}=-\\gamma\\mathbf{S}\\cdot\\mathbf{B}=-\\gamma \\ B\\ S_z " }, { "math_id": 23, "text": " S_z = \\frac{\\hbar}{2}\\, \\sigma_3 =\n\\frac{\\hbar}{2} \\begin{pmatrix}1&0\\\\ 0&-1 \\end{pmatrix}, " }, { "math_id": 24, "text": "\\mu" }, { "math_id": 25, "text": "\\gamma" }, { "math_id": 26, "text": "\\boldsymbol{\\sigma}" }, { "math_id": 27, "text": "\\sigma_3" }, { "math_id": 28, "text": "E_+ = \\frac{\\hbar}{2} \\gamma B \\ , \\ E_-= -\\frac{\\hbar}{2} \\gamma B" }, { "math_id": 29, "text": "|\\psi\\rang" }, { "math_id": 30, "text": "|\\phi\\rangle " }, { "math_id": 31, "text": "{|\\langle\\phi|\\psi\\rangle|}^2" }, { "math_id": 32, "text": "\\left| +X \\right\\rangle" }, { "math_id": 33, "text": "t=0 " }, { "math_id": 34, "text": "\\sigma_1 " }, { "math_id": 35, "text": "|\\psi(0)\\rang= \\frac{1}{\\sqrt{2}}\\begin{pmatrix} 1 \\\\ 1 \\end{pmatrix}= \\frac{1}{\\sqrt{2}}\\begin{pmatrix} 1 \\\\ 0\\end{pmatrix}+ \\frac{1}{\\sqrt{2}}\\begin{pmatrix}0\\\\1\\end{pmatrix}." 
}, { "math_id": 36, "text": "\\left|\\psi(t)\\right\\rang= \\exp\\left[{\\frac{-i\\mathbf{H}t}{\\hbar}}\\right] \\left|\\psi(0) \\right\\rang = \\begin{pmatrix} \\exp\\left[{\\tfrac{-i E_+ t}{\\hbar}}\\right] & 0 \\\\\n0 & \\exp\\left[{\\tfrac{-i E_- t}{\\hbar}}\\right]\n\\end{pmatrix} |\\psi(0)\\rang," }, { "math_id": 37, "text": "E" }, { "math_id": 38, "text": "|\\psi(t)\\rang=e^{\\frac{-iE_+t}{\\hbar}}\\frac{1}{\\sqrt{2}}|0\\rangle + e^{\\frac{-iE_-t}{\\hbar}}\\frac{1}{\\sqrt{2}}|1\\rangle " }, { "math_id": 39, "text": "{\\left|\\langle +X|\\psi(t)\\rangle\\right|}^2\n= {\\left|\n\\frac{{\\left\\langle 0 \\right| + \\left\\langle 1 \\right|}}{\\sqrt{2}}\n \\left({\n \\frac{1}{\\sqrt{2}} \\exp \\left[\\frac{-i E_+ t}{\\hbar} \\right]\n \\left|0 \\right\\rangle\n + \\frac{1}{\\sqrt{2}} \\exp \\left[\\frac{-i E_- t}{\\hbar} \\right]\n \\left|1 \\right\\rangle\n }\\right)\n\\right|}^2\n= \\cos^2\\left( \\frac{\\omega t}{2} \\right) ,\n" }, { "math_id": 40, "text": " \\omega = \\frac{E_+ - E_-}{\\hbar}=\\gamma B" }, { "math_id": 41, "text": "E_- \\leq E_+ " }, { "math_id": 42, "text": "t" }, { "math_id": 43, "text": "\\left| +Z \\right\\rangle" }, { "math_id": 44, "text": "\\tfrac{\\hbar}{2}" }, { "math_id": 45, "text": "\\tfrac{1}{2}" }, { "math_id": 46, "text": "E_+ = E_-" }, { "math_id": 47, "text": "\\hat{H} = -\\gamma\\ S_z B \\sin(\\omega t)" }, { "math_id": 48, "text": "\\left| +Y \\right\\rangle " }, { "math_id": 49, "text": "+\\tfrac{\\hbar}{2}" }, { "math_id": 50, "text": "{\\left| \\left\\langle \\, +Y|\\psi(t) \\right\\rangle \\right|}^2 \\,\n= \\cos^2 \\left(\\frac{\\gamma B}{2\\omega} \\cos \\left({\\omega t}\\right) \\right)" }, { "math_id": 51, "text": " \\hat{H} = \nE_0\\cdot\\sigma_0 +\nW_1\\cdot\\sigma_1 +\nW_2\\cdot\\sigma_2 +\n\\Delta\\cdot\\sigma_3 =\n\\begin{pmatrix}\nE_0 + \\Delta & W_1 - iW_2 \\\\\nW_1 + iW_2 & E_0 - \\Delta\n\\end{pmatrix}." }, { "math_id": 52, "text": "\\begin{align}\n\\lambda_+ &= E_+ = E_0 + \\sqrt{{\\Delta}^2 + {W_1}^2 + {W_2}^2} = E_0 + \\sqrt{{\\Delta}^2+ {\\left\\vert W \\right\\vert}^2} \\\\\n\\lambda_- &= E_- = E_0 - \\sqrt{{\\Delta}^2 + {W_1}^2 + {W_2}^2} = E_0 - \\sqrt{{\\Delta}^2 + {\\left\\vert W \\right\\vert}^2},\n\\end{align}" }, { "math_id": 53, "text": "\\mathbf{W} = W_1 + i W_2" }, { "math_id": 54, "text": "{\\left\\vert W \\right\\vert}^2 = {W_1}^2 + {W_2}^2 = WW^*" }, { "math_id": 55, "text": "\\mathbf{W} = {\\left\\vert W \\right\\vert} e^{i \\phi}" }, { "math_id": 56, "text": "E_+" }, { "math_id": 57, "text": "\\begin{pmatrix} E_0 + \\Delta & W_1 - i W_2 \\\\ W_1 + i W_2 & E_0 - \\Delta \\end{pmatrix} \\begin{pmatrix} a \\\\ b \\end{pmatrix} = E_+ \\begin{pmatrix} a \\\\ b \\end{pmatrix}." }, { "math_id": 58, "text": " b = -\\frac{a \\left(E_0 + \\Delta - E_+ \\right)} {W_1 - i W_2}. " }, { "math_id": 59, "text": "{\\left\\vert a \\right\\vert}^2 + {\\left\\vert b \\right\\vert}^2 = 1" }, { "math_id": 60, "text": "{\\left\\vert a \\right\\vert}^2 + {\\left\\vert a \\right\\vert}^2\\left(\\frac{\\Delta}{\\left\\vert W \\right\\vert} - \\frac{\\sqrt{{\\Delta}^2 + {\\left\\vert W \\right\\vert}^2}}{\\left\\vert W \\right\\vert}\\right)^2 = 1 . 
" }, { "math_id": 61, "text": "\\sin\\theta=\\frac{\\left\\vert W \\right\\vert}{\\sqrt{{\\Delta}^2+ {\\left\\vert W \\right\\vert}^2}}" }, { "math_id": 62, "text": "\\cos\\theta = \\frac{\\Delta}{\\sqrt{{\\Delta}^2+ {\\left\\vert W \\right\\vert}^2}}" }, { "math_id": 63, "text": "\\tan\\theta = \\frac{\\left\\vert W \\right\\vert}{\\Delta}" }, { "math_id": 64, "text": "{\\left\\vert a \\right\\vert}^2+{\\left\\vert a \\right\\vert}^2\\frac{({1-\\cos\\theta})^2}{\\sin^2\\theta}=1" }, { "math_id": 65, "text": "{\\left\\vert a \\right\\vert}^2=\\cos^2\\left(\\tfrac{\\theta}{2}\\right)" }, { "math_id": 66, "text": "\\tan(\\tfrac{\\theta}{2}) = \\tfrac{1-\\cos(\\theta)}{\\sin(\\theta)}" }, { "math_id": 67, "text": "a" }, { "math_id": 68, "text": "b" }, { "math_id": 69, "text": "-\\phi" }, { "math_id": 70, "text": "\\left|E_+\\right\\rang = \n\\begin{pmatrix}\n\\cos \\left(\\tfrac{\\theta}{2}\\right) \\\\\ne^{i\\phi}\\sin\\left(\\tfrac{\\theta}{2}\\right)\n\\end{pmatrix}\n= \\cos \\left(\\tfrac{\\theta}{2}\\right) \\left|0\\right\\rang \n+ e^{i\\phi} \\sin \\left(\\tfrac{\\theta}{2}\\right) \\left|1\\right\\rang." }, { "math_id": 71, "text": "E_-" }, { "math_id": 72, "text": "\\left|E_-\\right\\rang =\n\\sin \\left(\\tfrac{\\theta}{2}\\right) \\left|0\\right\\rang \n- e^{i\\phi} \\cos \\left(\\tfrac{\\theta}{2}\\right) \\left|1\\right\\rang." }, { "math_id": 73, "text": "\\begin{align}\n\\left|0\\right\\rang &=\n\\cos \\left(\\tfrac{\\theta}{2}\\right) \\left|E_+\\right\\rang \n+ \\sin \\left(\\tfrac{\\theta}{2}\\right) \\left|E_-\\right\\rang\n\\\\\n\\left|1\\right\\rang &=\ne^{-\\imath\\phi} \\sin \\left(\\tfrac{\\theta}{2}\\right) \\left|E_+\\right\\rang \n- e^{-\\imath\\phi} \\cos \\left(\\tfrac{\\theta}{2}\\right) \\left|E_-\\right\\rang.\n\\end{align}" }, { "math_id": 74, "text": "|0\\rang" }, { "math_id": 75, "text": "t = 0" }, { "math_id": 76, "text": "\\left| \\psi\\left( 0 \\right) \\right\\rang =\n\\left|0\\right\\rang =\n\\cos \\left(\\tfrac{\\theta}{2}\\right) \\left|E_+\\right\\rang \n+ \\sin \\left(\\tfrac{\\theta}{2}\\right) \\left|E_-\\right\\rang." }, { "math_id": 77, "text": "\\left| \\psi\\left( t \\right) \\right\\rang =\ne^{\\frac{-i \\hat{H} t}{\\hbar}} \\left| \\psi\\left( 0 \\right) \\right\\rang =\n\\cos \\left(\\tfrac{\\theta}{2}\\right) e^{\\frac{-i E_+ t}{\\hbar}} \\left|E_+\\right\\rang \n+ \\sin \\left(\\tfrac{\\theta}{2}\\right) e^{\\frac{-i E_- t}{\\hbar}} \\left|E_-\\right\\rang." 
}, { "math_id": 78, "text": "|E_+\\rang" }, { "math_id": 79, "text": "|E_-\\rang" }, { "math_id": 80, "text": "|1\\rang" }, { "math_id": 81, "text": "\\left \\langle\\ 1 | \\psi(t) \\right\\rangle = \ne^{i\\phi} \\sin \\left(\\tfrac{\\theta}{2}\\right) \\cos\\left(\\tfrac{\\theta}{2}\\right)\n\\left( e^{\\frac{-i E_+ t}{\\hbar}}-e^{\\frac{-i E_- t}{\\hbar}} \\right)\n" }, { "math_id": 82, "text": "|\\psi(t)\\rang" }, { "math_id": 83, "text": "|1\\rang" }, { "math_id": 84, "text": "\n\\begin{align}\nP_{0\\to 1}(t) &= {|\\langle\\ 1|\\psi(t)\\rangle|}^2\n\\\\ &= \ne^{-\\imath\\phi} \\sin\\left(\\frac{\\theta}{2}\\right) \\cos\\left(\\frac{\\theta}{2}\\right)\n\\left(e^{\\frac{+\\imath E_+ t}{\\hbar}}-e^{\\frac{+\\imath E_-t}{\\hbar}}\\right) \ne^{+\\imath\\phi} \\sin\\left(\\frac{\\theta}{2}\\right)\\cos\\left(\\frac{\\theta}{2}\\right)\n\\left( e^{\\frac{-\\imath E_+ t}{\\hbar}} - e^{\\frac{-\\imath E_-t}{\\hbar}} \\right)\n\\\\&= \\frac{\\sin^2{\\theta}}{4} \n\\left(2 - 2\\cos\\left( \\frac{\\left (E_+-E_- \\right)t}{\\hbar} \\right) \\right)\n\\end{align}\n" }, { "math_id": 85, "text": "\\omega =\\frac{E_+-E_-}{2\\hbar}=\\frac{\\sqrt{{\\Delta}^2+ {\\left\\vert W \\right\\vert}^2}}{\\hbar}" }, { "math_id": 86, "text": "{|\\langle\\ 0|\\psi(t)\\rangle|}^2=1-\\sin^2(\\theta)\\sin^2\\left(\\frac{(E_+-E_-)t}{2\\hbar}\\right)" }, { "math_id": 87, "text": " \\tfrac{1}{2} " }, { "math_id": 88, "text": " \\boldsymbol{\\mu} " }, { "math_id": 89, "text": " \\boldsymbol{B} =\nB_0\\ \\hat{z} +\nB_1 \\left(\\cos{(\\omega t)}\\ \\hat{x} - \\sin{(\\omega t)} \\ \\hat{y} \\right)" }, { "math_id": 90, "text": " \\gamma " }, { "math_id": 91, "text": " \\boldsymbol{\\mu} = \\frac{\\hbar}{2} \\gamma \\boldsymbol{\\sigma} " }, { "math_id": 92, "text": "\\mathbf{H}=-\\boldsymbol{\\mu}\\cdot\\mathbf{B}= -\\frac{\\hbar}{2}\\omega_0\\sigma_z-\\frac{\\hbar}{2}\\omega_1(\\sigma_x\\cos\\omega t-\\sigma_y\\sin\\omega t)" }, { "math_id": 93, "text": "\\omega_0=\\gamma B_0" }, { "math_id": 94, "text": "\\omega_1=\\gamma B_1" }, { "math_id": 95, "text": " |0\\rang" }, { "math_id": 96, "text": " t = 0 " }, { "math_id": 97, "text": " t " }, { "math_id": 98, "text": " P_{0\\to1}(t)=\\left(\\frac{\\omega_1}{\\Omega}\\right)^2\\sin^2\\left(\\frac{\\Omega t}{2}\\right)" }, { "math_id": 99, "text": "\\Omega=\\sqrt{(\\omega-\\omega_0)^2+\\omega_1^2}" }, { "math_id": 100, "text": "\\omega=\\omega_0" }, { "math_id": 101, "text": " P_{0\\to1}(t)=\\sin^2\\left(\\frac{\\omega_1 t}{2}\\right)" }, { "math_id": 102, "text": "\\frac{\\omega_1 t}{2}=\\frac{\\pi}{2}" }, { "math_id": 103, "text": " t=\\frac{\\pi}{\\omega_1}" }, { "math_id": 104, "text": "\\pi" }, { "math_id": 105, "text": " \\frac{\\pi}{\\omega_1}" }, { "math_id": 106, "text": " t=\\frac{\\pi}{2\\omega_1}" }, { "math_id": 107, "text": "\\frac{\\pi}{2}" }, { "math_id": 108, "text": "|0\\rang \\to \\frac{|0\\rang+i|1\\rang}{\\sqrt{2}}" }, { "math_id": 109, "text": "\\hbar\\omega_0" }, { "math_id": 110, "text": "\\omega_1" }, { "math_id": 111, "text": "\\vec{d}" }, { "math_id": 112, "text": "\\vec{E}" }, { "math_id": 113, "text": "\\omega_1 \\propto \\hbar \\ \\vec{d} \\cdot \\vec{E}" } ]
https://en.wikipedia.org/wiki?curid=582228
58223172
Flag bundle
In algebraic geometry, the flag bundle of a flag formula_0 of vector bundles on an algebraic scheme "X" is the algebraic scheme over "X": formula_1 such that formula_2 is a flag formula_3 of vector spaces, where formula_4 is a vector subspace of formula_5 of dimension "i". If "X" is a point, then a flag bundle is a flag variety, and if the length of the flag is one, then it is the Grassmann bundle; hence, a flag bundle is a common generalization of these two notions. Construction. A flag bundle can be constructed inductively. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "E_{\\bullet}: E = E_l \\supsetneq \\cdots \\supsetneq E_1 \\supsetneq 0" }, { "math_id": 1, "text": "p: \\operatorname{Fl}(E_{\\bullet}) \\to X" }, { "math_id": 2, "text": "p^{-1}(x)" }, { "math_id": 3, "text": "V_{\\bullet}" }, { "math_id": 4, "text": "V_i" }, { "math_id": 5, "text": "(E_i)_x" } ]
https://en.wikipedia.org/wiki?curid=58223172
5822377
Shadows of the Mind
Book by Roger Penrose Shadows of the Mind: A Search for the Missing Science of Consciousness is a 1994 book by mathematical physicist Roger Penrose that serves as a followup to his 1989 book "The Emperor's New Mind: Concerning Computers, Minds and The Laws of Physics". Penrose hypothesizes that: Argument. Mathematical thought. In 1931, the mathematician and logician Kurt Gödel proved his incompleteness theorems, showing that any effectively generated theory capable of expressing elementary arithmetic cannot be both consistent and complete. Further to that, for any consistent formal theory that proves certain basic arithmetic truths, there is an arithmetical statement that is true, but not provable in the theory. The essence of Penrose's argument is that while a formal proof system cannot, because of the theorem, prove its own incompleteness, Gödel-type results are provable by human mathematicians. He takes this disparity to mean that human mathematicians are not describable as formal proof systems and are not running an algorithm, so that the computational theory of mind is false, and computational approaches to artificial general intelligence are unfounded. (The argument was first given by Penrose in "The Emperor's New Mind" (1989) and is developed further in "Shadows of The Mind". An earlier version of the argument was given by J. R. Lucas in 1959. For this reason, the argument is sometimes called the Penrose-Lucas argument.) Objective reduction. Penrose's theory of Objective Reduction predicts the relationship between quantum mechanics and general relativity. Penrose proposes that a quantum state remains in superposition until the difference in space-time curvature reaches a significant level. This idea is inspired by quantum gravity, because it uses both the physical constants formula_0 and formula_1. It is an alternative to the Copenhagen interpretation, which posits that superposition fails under observation, and the many-worlds hypothesis, which states that each alternative outcome of a superposition becomes real in a separate world. Penrose's idea is a type of objective collapse theory. In these theories the wavefunction is a physical wave, which undergoes wave function collapse as a physical process, with observers playing no special role. Penrose theorises that the wave function cannot be sustained in superposition beyond a certain energy difference between the quantum states. He gives an approximate value for this difference: a Planck mass worth of matter, which he calls the "'one-graviton' level". He then hypothesizes that this energy difference causes the wave function to collapse to a single state, with a probability based on its amplitude in the original wave function, a procedure taken from standard quantum mechanics. Orchestrated objective reduction. When he wrote his first consciousness book, "The Emperor's New Mind" in 1989, Penrose lacked a detailed proposal for how such quantum processes could be implemented in the brain. Subsequently, Stuart Hameroff read "The Emperor's New Mind" and suggested to Penrose that microtubules within brain cells were suitable candidate sites for quantum processing and ultimately for consciousness. The Orch-OR theory arose from the cooperation of these two scientists and was developed in Penrose's second consciousness book "Shadows of the Mind" (1994). Hameroff's contribution to the theory derived from studying brain cells (neurons). 
His interest centred on the cytoskeleton, which provides an internal supportive structure for neurons, and particularly on the microtubules, which are the important component of the cytoskeleton. As neuroscience has progressed, the role of the cytoskeleton and microtubules has assumed greater importance. In addition to providing a supportive structure for the cell, the known functions of the microtubules include transport of molecules, including neurotransmitter molecules bound for the synapses, and control of the cell's movement, growth and shape. Criticism. Gödelian argument and nature of human thought. Penrose's views on the human thought process are not widely accepted in certain scientific circles (Drew McDermott, David Chalmers and others). According to Marvin Minsky, because people can construe false ideas to be factual, the process of thinking is not limited to formal logic. Further, AI programs can also conclude that false statements are true, so error is not unique to humans. In May 1995, Stanford mathematician Solomon Feferman attacked Penrose's approach on multiple grounds, including the mathematical validity of his Gödelian argument and theoretical background. In 1996, Penrose offered a consolidated reply to many of the criticisms of "Shadows". John Searle criticises Penrose's appeal to Gödel as resting on the fallacy that all computational algorithms must be capable of mathematical description. As a counter-example, Searle cites the assignment of license plate numbers (LPN) to specific vehicle identification numbers (VIN), to register a vehicle. According to Searle, no mathematical function can be used to connect a known VIN with its LPN, but the process of assignment is quite simple—namely, "first come, first served"—and can be performed entirely by a computer. Microtubule hypothesis. Penrose and Stuart Hameroff have constructed the Orch-OR theory in which human consciousness is the result of quantum gravity effects in microtubules. However, in 2000, Max Tegmark calculated in an article he published in "Physical Review E" that the time scale of neuron firing and excitations in microtubules is slower than the decoherence time by a factor of at least 1010. Tegmark's article has been widely cited by critics of the Penrose-Hameroff hypothesis. The reception of the article is summed up by this statement in his support: "Physicists outside the fray, such as IBM's John Smolin, say the calculations confirm what they had suspected all along. 'We're not working with a brain that's near absolute zero. It's reasonably unlikely that the brain evolved quantum behavior', he says." In other words, there is a missing link between physics and neuroscience, and to date, it is too premature to claim that the Orch-OR hypothesis is right. In response to Tegmark's claims, Hagan, Tuszynski and Hameroff claimed that Tegmark did not address the Orch-OR model, but instead a model of his own construction. This involved superpositions of quanta separated by 24 nm rather than the much smaller separations stipulated for Orch-OR. As a result, Hameroff's group claimed a decoherence time seven orders of magnitude greater than Tegmark's, although still far below 25 ms. Hameroff's group also suggested that the Debye layer of counterions could screen thermal fluctuations, and that the surrounding actin gel might enhance the ordering of water, further screening noise. 
They also suggested that incoherent metabolic energy could further order water, and finally that the configuration of the microtubule lattice might be suitable for quantum error correction, a means of resisting quantum decoherence. In 2007, Gregory S. Engel claimed that all arguments concerning the brain being "too warm and wet" have been dispelled, as multiple "warm and wet" quantum processes have been discovered. Notes and references. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\scriptstyle \\hbar" }, { "math_id": 1, "text": "\\scriptstyle G" } ]
https://en.wikipedia.org/wiki?curid=5822377
582263
Michelson interferometer
Common configuration for optical interferometry The Michelson interferometer is a common configuration for optical interferometry and was invented by the 19/20th-century American physicist Albert Abraham Michelson. Using a beam splitter, a light source is split into two arms. Each of those light beams is reflected back toward the beamsplitter which then combines their amplitudes using the superposition principle. The resulting interference pattern that is not directed back toward the source is typically directed to some type of photoelectric detector or camera. For different applications of the interferometer, the two light paths can be with different lengths or incorporate optical elements or even materials under test. The Michelson interferometer (among other interferometer configurations) is employed in many scientific experiments and became well known for its use by Michelson and Edward Morley in the famous Michelson–Morley experiment (1887) in a configuration which would have detected the Earth's motion through the supposed luminiferous aether that most physicists at the time believed was the medium in which light waves propagated. The null result of that experiment essentially disproved the existence of such an aether, leading eventually to the special theory of relativity and the revolution in physics at the beginning of the twentieth century. In 2015, another application of the Michelson interferometer, LIGO, made the first direct observation of gravitational waves. That observation confirmed an important prediction of general relativity, validating the theory's prediction of space-time distortion in the context of large scale cosmic events (known as strong field tests). Configuration. A Michelson interferometer consists minimally of mirrors "M1" &amp; "M2" and a beam splitter "M" (although a diffraction grating is also used). In Fig 2, a source "S" emits light that hits the beam splitter (in this case, a plate beamsplitter) surface "M" at point "C". "M" is partially reflective, so part of the light is transmitted through to point "B" while some is reflected in the direction of "A". Both beams recombine at point "C' "to produce an interference pattern incident on the detector at point "E" (or on the retina of a person's eye). If there is a slight angle between the two returning beams, for instance, then an imaging detector will record a sinusoidal "fringe pattern" as shown in Fig. 3b. If there is perfect spatial alignment between the returning beams, then there will not be any such pattern but rather a constant intensity over the beam dependent on the differential pathlength; this is difficult, requiring very precise control of the beam paths. Fig. 2 shows use of a coherent (laser) source. Narrowband spectral light from a discharge or even white light can also be used, however to obtain significant interference contrast it is required that the differential pathlength is reduced below the coherence length of the light source. That can be only micrometers for white light, as discussed below. If a lossless beamsplitter is employed, then one can show that optical energy is conserved. At every point on the interference pattern, the power that is "not" directed to the detector at "E" is rather present in a beam (not shown) returning in the direction of the source. As shown in Fig. 3a and 3b, the observer has a direct view of mirror "M1" seen through the beam splitter, and sees a reflected image "M'2" of mirror "M2". 
The fringes can be interpreted as the result of interference between light coming from the two virtual images "S'1" and "S'2" of the original source "S". The characteristics of the interference pattern depend on the nature of the light source and the precise orientation of the mirrors and beam splitter. In Fig. 3a, the optical elements are oriented so that "S'1" and "S'2" are in line with the observer, and the resulting interference pattern consists of circles centered on the normal to "M1" and "M'2" (fringes of equal inclination). If, as in Fig. 3b, "M1" and "M'2" are tilted with respect to each other, the interference fringes will generally take the shape of conic sections (hyperbolas), but if "M1" and "M'2" overlap, the fringes near the axis will be straight, parallel, and equally spaced (fringes of equal thickness). If S is an extended source rather than a point source as illustrated, the fringes of Fig. 3a must be observed with a telescope set at infinity, while the fringes of Fig. 3b will be localized on the mirrors. Source bandwidth. White light has a tiny coherence length and is difficult to use in a Michelson (or Mach–Zehnder) interferometer. Even a narrowband (or "quasi-monochromatic") spectral source requires careful attention to issues of chromatic dispersion when used to illuminate an interferometer. The two optical paths must be practically equal for all wavelengths present in the source. This requirement can be met if both light paths cross an equal thickness of glass of the same dispersion. In Fig. 4a, the horizontal beam crosses the beam splitter three times, while the vertical beam crosses the beam splitter once. To equalize the dispersion, a so-called compensating plate identical to the substrate of the beam splitter may be inserted into the path of the vertical beam. In Fig. 4b, we see using a cube beam splitter already equalizes the pathlengths in glass. The requirement for dispersion equalization is eliminated by using extremely narrowband light from a laser. The extent of the fringes depends on the coherence length of the source. In Fig. 3b, the yellow sodium light used for the fringe illustration consists of a pair of closely spaced lines, D1 and D2, implying that the interference pattern will blur after several hundred fringes. Single longitudinal mode lasers are highly coherent and can produce high contrast interference with differential pathlengths of millions or even billions of wavelengths. On the other hand, using white (broadband) light, the central fringe is sharp, but away from the central fringe the fringes are colored and rapidly become indistinct to the eye. Early experimentalists attempting to detect the Earth's velocity relative to the supposed luminiferous aether, such as Michelson and Morley (1887) and Miller (1933), used quasi-monochromatic light only for initial alignment and coarse path equalization of the interferometer. Thereafter they switched to white (broadband) light, since using white light interferometry they could measure the point of "absolute phase" equalization (rather than phase modulo 2π), thus setting the two arms' pathlengths equal. More importantly, in a white light interferometer, any subsequent "fringe jump" (differential pathlength shift of one wavelength) would always be detected. Applications. The Michelson interferometer configuration is used in a number of different applications. Fourier transform spectrometer. Fig. 
5 illustrates the operation of a Fourier transform spectrometer, which is essentially a Michelson interferometer with one mirror movable. (A practical Fourier transform spectrometer would substitute corner cube reflectors for the flat mirrors of the conventional Michelson interferometer, but for simplicity, the illustration does not show this.) An interferogram is generated by making measurements of the signal at many discrete positions of the moving mirror. A Fourier transform converts the interferogram into an actual spectrum. Fourier transform spectrometers can offer significant advantages over dispersive (i.e., grating and prism) spectrometers under certain conditions. (1) The Michelson interferometer's detector in effect monitors all wavelengths simultaneously throughout the entire measurement. When using a noisy detector, such as at infrared wavelengths, this offers an increase in signal-to-noise ratio while using only a single detector element; (2) the interferometer does not require a limited aperture as do grating or prism spectrometers, which require the incoming light to pass through a narrow slit in order to achieve high spectral resolution. This is an advantage when the incoming light is not of a single spatial mode. For more information, see Fellgett's advantage. Twyman–Green interferometer. The Twyman–Green interferometer is a variation of the Michelson interferometer used to test small optical components, invented and patented by Twyman and Green in 1916. The basic characteristics distinguishing it from the Michelson configuration are the use of a monochromatic point light source and a collimator. Michelson (1918) criticized the Twyman–Green configuration as being unsuitable for the testing of large optical components, since the available light sources had limited coherence length. Michelson pointed out that constraints on geometry forced by the limited coherence length required the use of a reference mirror of equal size to the test mirror, making the Twyman–Green impractical for many purposes. Decades later, the advent of laser light sources answered Michelson's objections. The use of a figured reference mirror in one arm allows the Twyman–Green interferometer to be used for testing various forms of optical component, such as lenses or telescope mirrors. Fig. 6 illustrates a Twyman–Green interferometer set up to test a lens. A point source of monochromatic light is expanded by a diverging lens (not shown), then is collimated into a parallel beam. A convex spherical mirror is positioned so that its center of curvature coincides with the focus of the lens being tested. The emergent beam is recorded by an imaging system for analysis. Laser unequal path interferometer. The "LUPI" is a Twyman–Green interferometer that uses a coherent laser light source. The high coherence length of a laser allows unequal path lengths in the test and reference arms and permits economical use of the Twyman–Green configuration in testing large optical components. A similar scheme has been used by Tajammal M in his PhD thesis (Manchester University UK, 1995) to balance two arms of an LDA system. This system used fibre optic direction coupler. Gravitational wave detection. Michelson interferometry is the leading method for the direct detection of gravitational waves. This involves detecting tiny strains in space itself, affecting two long arms of the interferometer unequally, due to a strong passing gravitational wave. 
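For a sense of scale, the following back-of-the-envelope sketch (our own illustration with assumed values; real detectors add resonant arm cavities and other enhancements that this ignores) converts a gravitational-wave strain into the differential arm-length change and the corresponding fringe phase shift for an illustrative arm length and laser wavelength.

```python
import math

def michelson_gw_response(strain, arm_length_m, wavelength_m):
    """Rough single-pass estimate: a strain h stretches one arm and shrinks the
    other, so the differential length change is about h * L; the light travels
    each arm twice, giving a fringe phase shift of 4 * pi * delta_L / lambda."""
    delta_L = strain * arm_length_m
    delta_phi = 4.0 * math.pi * delta_L / wavelength_m
    return delta_L, delta_phi

# Assumed, illustrative numbers: 4 km arms, 1064 nm laser, strain of 1e-21.
dL, dphi = michelson_gw_response(1e-21, 4.0e3, 1064e-9)
print(f"differential arm-length change: {dL:.1e} m")   # about 4e-18 m
print(f"fringe phase shift: {dphi:.1e} rad")           # about 5e-11 rad
```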
In 2015 the first detection of gravitational waves was accomplished using the two Michelson interferometers, each with 4 km arms, which comprise the Laser Interferometer Gravitational-Wave Observatory. This was the first experimental validation of gravitational waves, predicted by Albert Einstein's General Theory of Relativity. With the addition of the Virgo interferometer in Europe, it became possible to calculate the direction from which the gravitational waves originate, using the tiny arrival-time differences between the three detectors. In 2020, India was constructing a fourth Michelson interferometer for gravitational wave detection. Miscellaneous applications. Fig. 7 illustrates use of a Michelson interferometer as a tunable narrow band filter to create dopplergrams of the Sun's surface. When used as a tunable narrow band filter, Michelson interferometers exhibit a number of advantages and disadvantages when compared with competing technologies such as Fabry–Pérot interferometers or Lyot filters. Michelson interferometers have the largest field of view for a specified wavelength, and are relatively simple in operation, since tuning is via mechanical rotation of waveplates rather than via high voltage control of piezoelectric crystals or lithium niobate optical modulators as used in a Fabry–Pérot system. Compared with Lyot filters, which use birefringent elements, Michelson interferometers have a relatively low temperature sensitivity. On the negative side, Michelson interferometers have a relatively restricted wavelength range, and require use of prefilters which restrict transmittance. The reliability of Michelson interferometers has tended to favor their use in space applications, while the broad wavelength range and overall simplicity of Fabry–Pérot interferometers has favored their use in ground-based systems. Another application of the Michelson interferometer is in optical coherence tomography (OCT), a medical imaging technique using low-coherence interferometry to provide tomographic visualization of internal tissue microstructures. As seen in Fig. 8, the core of a typical OCT system is a Michelson interferometer. One interferometer arm is focused onto the tissue sample and scans the sample in an X-Y longitudinal raster pattern. The other interferometer arm is bounced off a reference mirror. Reflected light from the tissue sample is combined with reflected light from the reference. Because of the low coherence of the light source, interferometric signal is observed only over a limited depth of sample. X-Y scanning therefore records one thin optical slice of the sample at a time. By performing multiple scans, moving the reference mirror between each scan, an entire three-dimensional image of the tissue can be reconstructed. Recent advances have striven to combine the nanometer phase retrieval of coherent interferometry with the ranging capability of low-coherence interferometry. Others applications include delay line interferometer which convert phase modulation into amplitude modulation in DWDM networks, the characterization of high-frequency circuits, and low-cost THz power generation. Atmospheric and space applications. The Michelson Interferometer has played an important role in studies of the upper atmosphere, revealing temperatures and winds, employing both space-borne, and ground-based instruments, by measuring the Doppler widths and shifts in the spectra of airglow and aurora. 
For example, the Wind Imaging Interferometer, WINDII, on the Upper Atmosphere Research Satellite, UARS, (launched on September 12, 1991) measured the global wind and temperature patterns from 80 to 300 km by using the visible airglow emission from these altitudes as a target and employing optical Doppler interferometry to measure the small wavelength shifts of the narrow atomic and molecular airglow emission lines induced by the bulk velocity of the atmosphere carrying the emitting species. The instrument was an all-glass field-widened achromatically and thermally compensated phase-stepping Michelson interferometer, along with a bare CCD detector that imaged the airglow limb through the interferometer. A sequence of phase-stepped images was processed to derive the wind velocity for two orthogonal view directions, yielding the horizontal wind vector. The principle of using a polarizing Michelson Interferometer as a narrow band filter was first described by Evans who developed a birefringent photometer where the incoming light is split into two orthogonally polarized components by a polarizing beam splitter, sandwiched between two halves of a Michelson cube. This led to the first polarizing wide-field Michelson interferometer described by Title and Ramsey which was used for solar observations; and led to the development of a refined instrument applied to measurements of oscillations in the Sun's atmosphere, employing a network of observatories around the Earth known as the Global Oscillations Network Group (GONG). The Polarizing Atmospheric Michelson Interferometer, PAMI, developed by Bird et al., and discussed in "Spectral Imaging of the Atmosphere", combines the polarization tuning technique of Title and Ramsey with the Shepherd "et al." technique of deriving winds and temperatures from emission rate measurements at sequential path differences, but the scanning system used by PAMI is much simpler than the moving mirror systems in that it has no internal moving parts, instead scanning with a polarizer external to the interferometer. The PAMI was demonstrated in an observation campaign where its performance was compared to a Fabry–Pérot spectrometer, and employed to measure E-region winds. More recently, the Helioseismic and Magnetic Imager (HMI), on the Solar Dynamics Observatory, employs two Michelson Interferometers with a polarizer and other tunable elements, to study solar variability and to characterize the Sun's interior along with the various components of magnetic activity. HMI takes high-resolution measurements of the longitudinal and vector magnetic field over the entire visible disk thus extending the capabilities of its predecessor, the SOHO's MDI instrument (See Fig. 9). HMI produces data to determine the interior sources and mechanisms of solar variability and how the physical processes inside the Sun are related to surface magnetic field and activity. It also produces data to enable estimates of the coronal magnetic field for studies of variability in the extended solar atmosphere. HMI observations will help establish the relationships between the internal dynamics and magnetic activity in order to understand solar variability and its effects. In one example of the use of the MDI, Stanford scientists reported the detection of several sunspot regions in the deep interior of the Sun, 1–2 days before they appeared on the solar disc. 
The detection of sunspots in the solar interior may thus provide valuable warnings about upcoming surface magnetic activity which could be used to improve and extend the predictions of space weather forecasts. Technical topics. Step-phase interferometer. This is a Michelson interferometer in which the mirror in one arm is replaced with a Gires–Tournois etalon. The highly dispersed wave reflected by the Gires–Tournois etalon interferes with the original wave as reflected by the other mirror. Because the phase change from the Gires–Tournois etalon is an almost step-like function of wavelength, the resulting interferometer has special characteristics. It has an application in fiber-optic communications as an optical interleaver. Both mirrors in a Michelson interferometer can be replaced with Gires–Tournois etalons. The step-like relation of phase to wavelength is thereby more pronounced, and this can be used to construct an asymmetric optical interleaver. Phase-conjugating interferometry. The reflection from phase-conjugating mirror of two light beams inverses their phase difference formula_0 to the opposite one formula_1. For this reason the interference pattern in twin-beam interferometer changes drastically. Compared to conventional Michelson interference curve with period of half-wavelength formula_2: formula_3 where formula_4 is second-order correlation function, the interference curve in phase-conjugating interferometer has much longer period defined by frequency shift formula_5 of reflected beams: formula_6 where visibility curve is nonzero when optical path difference formula_7 exceeds coherence length of light beams. The nontrivial features of phase fluctuations in optical phase-conjugating mirror had been studied via Michelson interferometer with two independent PC-mirrors . The phase-conjugating Michelson interferometry is a promising technology for coherent summation of laser amplifiers. Constructive interference in an array containing formula_8 beamsplitters of formula_9 laser beams synchronized by phase conjugation may increase the brightness of amplified beams as formula_10. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
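Looping back to the Fourier transform spectrometer described above, the sketch below (with invented numbers, not data from any real instrument) shows the core idea: an interferogram recorded against mirror path difference is turned into a spectrum by a Fourier transform. The two line positions are chosen to fall exactly on FFT bins so the recovered peaks are easy to read off.

```python
import numpy as np

# Interferogram of a source with two narrow spectral lines (values are
# illustrative, loosely like the sodium D doublet near 16960 cm^-1).
n, d = 2**15, 1.0 / 65536.0               # samples and step in path difference (cm)
x = np.arange(n) * d                      # mirror path difference, 0 .. 0.5 cm
sigma1, sigma2 = 16956.0, 16974.0         # line wavenumbers, cm^-1

interferogram = np.cos(2 * np.pi * sigma1 * x) + np.cos(2 * np.pi * sigma2 * x)

# The spectrum is proportional to the Fourier transform of the interferogram.
spectrum = np.abs(np.fft.rfft(interferogram))
wavenumbers = np.fft.rfftfreq(n, d=d)     # wavenumber axis, cm^-1

top = np.argsort(spectrum)[-2:]           # the two strongest spectral bins
print(np.sort(wavenumbers[top]))          # [16956. 16974.]
```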
[ { "math_id": 0, "text": "\\Delta \\varphi" }, { "math_id": 1, "text": "-\\Delta \\varphi" }, { "math_id": 2, "text": "\\lambda/2" }, { "math_id": 3, "text": "I(\\Delta L) \\sim [1+ \\gamma(\\Delta L) \\cos (2k\\Delta L)]," }, { "math_id": 4, "text": "\\gamma(\\Delta L)" }, { "math_id": 5, "text": "\\delta \\omega = \\Delta k c" }, { "math_id": 6, "text": "I(\\Delta L) \\sim [1+ [\\gamma(\\Delta L)+0.25] \\cos (\\Delta k\\Delta L)]," }, { "math_id": 7, "text": "\\Delta L > \\ell_\\text{coh}" }, { "math_id": 8, "text": "N/2" }, { "math_id": 9, "text": "N" }, { "math_id": 10, "text": "N^2" } ]
https://en.wikipedia.org/wiki?curid=582263
582273
Hydraulic diameter
Measure of a channel flow efficiency The hydraulic diameter, "D"H, is a commonly used term when handling flow in non-circular tubes and channels. Using this term, one can calculate many things in the same way as for a round tube. When the cross-section is uniform along the tube or channel length, it is defined as formula_0 where A is the cross-sectional area of the flow, P is the wetted perimeter of the cross-section. More intuitively, the hydraulic diameter can be understood as a function of the hydraulic radius "R"H, which is defined as the cross-sectional area of the channel divided by the wetted perimeter. Here, the wetted perimeter includes all surfaces acted upon by shear stress from the fluid. formula_1 formula_2 Note that for the case of a circular pipe, formula_3 The need for the hydraulic diameter arises due to the use of a single dimension in the case of a dimensionless quantity such as the Reynolds number, which prefers a single variable for flow analysis rather than the set of variables as listed in the table below. The Manning formula contains a quantity called the hydraulic radius. Despite what the name may suggest, the hydraulic diameter is "not" twice the hydraulic radius, but four times larger. Hydraulic diameter is mainly used for calculations involving turbulent flow. Secondary flows can be observed in non-circular ducts as a result of turbulent shear stress in the turbulent flow. Hydraulic diameter is also used in calculation of heat transfer in internal-flow problems. Non-uniform and non-circular cross-section channels. In the more general case, channels with non-uniform non-circular cross-sectional area, such as the Tesla valve, the hydraulic diameter is defined as: formula_4 where V is the total wetted volume of the channel, S is the total wetted surface area. This definition is reduced to formula_5 for uniform non-circular cross-section channels, and formula_6 for circular pipes. List of hydraulic diameters. For a fully filled duct or pipe whose cross-section is a convex regular polygon, the hydraulic diameter is equivalent to the diameter formula_7 of a circle inscribed within the wetted perimeter. This can be seen as follows: The formula_8-sided regular polygon is a union of formula_8 triangles, each of height formula_9 and base formula_10. Each such triangle contributes formula_11 to the total area and formula_12 to the total perimeter, giving formula_13 for the hydraulic diameter. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
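A minimal numerical sketch of the uniform-cross-section definition above; the function and the example geometries are illustrations, not part of any standard library.

```python
import math

def hydraulic_diameter(area, wetted_perimeter):
    """D_H = 4 A / P for a uniform cross-section."""
    return 4.0 * area / wetted_perimeter

# Fully filled circular pipe of radius R: D_H = 4*pi*R^2 / (2*pi*R) = 2R.
R = 0.05
print(hydraulic_diameter(math.pi * R**2, 2 * math.pi * R))                 # ~0.1

# Fully filled rectangular duct, width a, height b: D_H = 2ab / (a + b).
a, b = 0.30, 0.10
print(hydraulic_diameter(a * b, 2 * (a + b)))                              # ~0.15

# Annulus between inner radius r and outer radius R (both walls wetted):
# D_H = 4*pi*(R^2 - r^2) / (2*pi*(R + r)) = 2*(R - r).
r = 0.02
print(hydraulic_diameter(math.pi * (R**2 - r**2), 2 * math.pi * (R + r)))  # ~0.06
```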
[ { "math_id": 0, "text": " D_\\text{H} = \\frac{4A}{P}, " }, { "math_id": 1, "text": " R_\\text{H} = \\frac{A}{P}, " }, { "math_id": 2, "text": " D_\\text{H} = 4R_\\text{H}, " }, { "math_id": 3, "text": " D_\\text{H} =\\frac{4 \\pi R^{2}}{2 \\pi R}=2R" }, { "math_id": 4, "text": " D_\\text{H} = \\frac{4V}{S}, " }, { "math_id": 5, "text": " \\frac{4A}{P} " }, { "math_id": 6, "text": " 2R " }, { "math_id": 7, "text": "D" }, { "math_id": 8, "text": " N " }, { "math_id": 9, "text": " D/2 " }, { "math_id": 10, "text": " B = D \\tan(\\pi/N)" }, { "math_id": 11, "text": " BD/4 " }, { "math_id": 12, "text": " B " }, { "math_id": 13, "text": " D_\\text{H} = 4\\frac{N BD/4}{NB} = D " } ]
https://en.wikipedia.org/wiki?curid=582273
58229944
Joint Polarization Experiment
The Joint Polarization Experiment (JPOLE) was a test for evaluating the performance of the WSR-88D in order to modify it to include dual polarization. This program was a joint project of the National Weather Service (NWS), the Federal Aviation Administration (FAA), and the US Air Force Meteorological Agency (AFWA), which took place from 2000-2004. It has resulted in the upgrading of the entire meteorological radar network in the United States by adding dual polarization to better determine the type of hydrometeor, and quantities that have fallen. History. During the years preceding JPOLE, the National Center for Atmospheric Research (NCAR) was among the first centers in the field to utilize dual polarization for a weather radar, with staff Dusan S. Zrnic and Alexander V. Ryzhkov. In July 2000, the first planning meeting for JPOLE was held at the National Severe Storms Laboratory (NSSL), and it was determined that the project would take place in two stages: Description. JPOLE was introduced using a testbed NEXRAD mounted in Norman, Oklahoma, on the grounds of the NSSL. The signal from its transmitter was split in two to obtain a conventional horizontal polarization and a vertical polarization. The signals were sent to the antenna by two waveguides and could simultaneously transmit the two signals and furthermore receive the echoes returned by the precipitation in the emitted or orthogonal planes. In general, most hydrometeors have a larger axis in the horizontal (for example, drops of rain become oblates when falling because of the resistance of the air). Because of this, the dipolar axis of the water molecules therefore tends to align in the horizontal and, as such, the radar beam will generally be horizontally polarized to take advantage of maximum return properties. If we send at the same time a pulse with vertical polarization and another with horizontal polarization, we can note a difference of several characteristics between these returns: Differential Reflectivity. If the targets have a flattened shape, by sampling with two waves [of which one is of vertical polarization (V) and the other horizontal (H)], we obtain stronger intensities returning the horizontal axis. On the other hand, if the orthogonal returns are equal, this indicates a round target. This is called differential reflectivity, or (formula_0). Correlation Coefficient. The radar beam probes a larger or smaller volume depending on the characteristics of the transmitting antenna. What comes back is the average of the waves reflected by the individual targets within the volume. Since the targets can change position in time relative to each one another, the intensity of the V and H waves remains constant only if the targets maintain homogeneity. The intensity ratio between the H and V channels returning from successive samples is called the correlation coefficient (formula_1) and therefore gives an idea of the homogeneity, or lack thereof, of the targets in the volume surveyed. Differential Phase Shift. The phase of the wave changes as it passes through media of varying densities. By comparing the phase change rate of the return wave with the distance, the specific differential phase formula_2 can help sample the quantity of material traversed. Unlike the differential reflectivity, correlation coefficient, which are both dependent on reflected power, differential phase is a "propagation effect." 
The range derivative of differential phase, specific differential phase, can be used to localize areas of strong precipitation/attenuation.
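As a rough illustration of the dual-polarization quantities defined above, the sketch below computes a differential reflectivity and a co-polar correlation coefficient from hypothetical horizontal- and vertical-channel returns; the sample values and scaling are invented and do not reflect actual WSR-88D signal processing.

```python
import numpy as np

# Hypothetical complex voltage returns from successive pulses in one range
# gate, for the horizontal (H) and vertical (V) channels.
v_h = np.array([1.00 + 0.10j, 0.95 - 0.05j, 1.05 + 0.02j, 0.98 + 0.07j])
v_v = np.array([0.80 + 0.08j, 0.78 - 0.03j, 0.86 + 0.01j, 0.79 + 0.06j])

p_h = np.mean(np.abs(v_h) ** 2)           # mean H-channel power
p_v = np.mean(np.abs(v_v) ** 2)           # mean V-channel power

# Differential reflectivity Z_dr in dB: ratio of H to V returned power.
z_dr = 10.0 * np.log10(p_h / p_v)

# Co-polar correlation coefficient rho_hv: normalized correlation of the H and
# V returns over the sampled pulses; values near 1 indicate homogeneous targets.
rho_hv = np.abs(np.mean(v_h * np.conj(v_v))) / np.sqrt(p_h * p_v)

print(f"Z_dr   = {z_dr:.2f} dB")          # about 1.8 dB for these made-up values
print(f"rho_hv = {rho_hv:.3f}")
```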
[ { "math_id": 0, "text": "Z_{dr}" }, { "math_id": 1, "text": "\\rho_{hv}" }, { "math_id": 2, "text": "K_{dp}" } ]
https://en.wikipedia.org/wiki?curid=58229944
58232142
The monkey and the coconuts
Mathematical puzzle The monkey and the coconuts is a mathematical puzzle in the field of Diophantine analysis that originated in a short story involving five sailors and a monkey on a desert island who divide up a pile of coconuts; the problem is to find the number of coconuts in the original pile (fractional coconuts not allowed). The problem is notorious for its confounding difficulty to unsophisticated puzzle solvers, though with the proper mathematical approach, the solution is trivial. The problem has become a staple in recreational mathematics collections. General description. The problem can be expressed as: There is a pile of coconuts, owned by five men. One man divides the pile into five equal piles, giving the one left over coconut to a passing monkey, and takes away his own share. The second man then repeats the procedure, dividing the remaining pile into five and taking away his share, as do the third, fourth, and fifth, each of them finding one coconut left over when dividing the pile by five, and giving it to a monkey. Finally, the group divide the remaining coconuts into five equal piles: this time no coconuts are left over. How many coconuts were there in the original pile? The monkey and the coconuts is the best known representative of a class of puzzle problems requiring integer solutions structured as recursive division or fractionating of some discretely divisible quantity, with or without remainders, and a final division into some number of equal parts, possibly with a remainder. The problem is so well known that the entire class is often referred to broadly as "monkey and coconut type problems", though most are not closely related to the problem. Another example: "I have a whole number of pounds of cement, I know not how many, but after addition of a ninth and an eleventh, it was partitioned into 3 sacks, each with a whole number of pounds. How many pounds of cement did I have?" Problems ask for either the initial or terminal quantity. Stated or implied is the smallest positive number that could be a solution. There are two unknowns in such problems, the initial number and the terminal number, but only one equation which is an algebraic reduction of an expression for the relation between them. Common to the class is the nature of the resulting equation, which is a linear Diophantine equation in two unknowns. Most members of the class are determinate, but some are not (the monkey and the coconuts is one of the latter). Familiar algebraic methods are unavailing for solving such equations. History. The origin of the class of such problems has been attributed to the Indian mathematician Mahāvīra in chapter VI, § 131&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2, 132&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2 of his "Ganita-sara-sangraha" (“Compendium of the Essence of Mathematics”), circa 850CE, which dealt with serial division of fruit and flowers with specified remainders. That would make progenitor problems over 1000 years old before their resurgence in the modern era. Problems involving division which invoke the Chinese remainder theorem appeared in Chinese literature as early as the first century CE. Sun Tzu asked: Find a number which leaves the remainders 2, 3 and 2 when divided by 3, 5 and 7, respectively. Diophantus of Alexandria first studied problems requiring integer solutions in the 3rd century CE. 
The Euclidean algorithm for greatest common divisor which underlies the solution of such problems was discovered by the Greek geometer Euclid and published in his "Elements" in 300CE. Prof. David Singmaster, a historian of puzzles, traces a series of less plausibly related problems through the middle ages, with a few references as far back as the Babylonian empire circa 1700 BC. They involve the general theme of adding or subtracting fractions of a pile or specific numbers of discrete objects and asking how many there could have been in the beginning. The next reference to a similar problem is in Jacques Ozanam's "Récréations mathématiques et physiques", 1725. In the realm of pure mathematics, Lagrange in 1770 expounded his continued fraction theorem and applied it to solution of Diophantine equations. The first description of the problem in close to its modern wording appears in Lewis Carroll's diaries in 1888: it involves a pile of nuts on a table serially divided by four brothers, each time with remainder of one given to a monkey, and the final division coming out even. The problem never appeared in any of Carroll's published works, though from other references it appears the problem was in circulation in 1888. An almost identical problem appeared in W.W. Rouse Ball's "Elementary Algebra" (1890). The problem was mentioned in works of period mathematicians, with solutions, mostly wrong, indicating that the problem was new and unfamiliar at the time. The problem became notorious when American novelist and short story writer Ben Ames Williams modified an older problem and included it in a story, "Coconuts", in the October 9, 1926, issue of the "Saturday Evening Post". Here is how the problem was stated by Williams (condensed and paraphrased): Five men and a monkey were shipwrecked on an island. They spent the first day gathering coconuts for food. During the night, one man woke up, and decided to take his share early. So he divided the coconuts in five piles. He had one coconut left over, and he gave that to the monkey. Then he hid his pile and put the rest back together. By and by each of the five men woke up and did the same thing, one after the other: each one taking a fifth of the coconuts that were in the pile when he woke up, and having one left over for the monkey. In the morning they divided what coconuts were left, and they came out in five equal shares. Of course each one must have known there were coconuts missing; but each one was guilty as the others, so they didn't say anything. How many coconuts were there in the original pile? Williams had not included an answer in the story. The magazine was inundated by more than 2,000 letters pleading for an answer to the problem. The "Post" editor, Horace Lorimer, famously fired off a telegram to Williams saying: "FOR THE LOVE OF MIKE, HOW MANY COCONUTS? HELL POPPING AROUND HERE". Williams continued to get letters asking for a solution or proposing new ones for the next twenty years. Martin Gardner featured the problem in his April 1958 Mathematical Games column in "Scientific American". According to Gardner, Williams had modified an older problem to make it more confounding. In the older version there is a coconut for the monkey on the final division; in Williams's version the final division in the morning comes out even. But the available historical evidence does not indicate which versions Williams had access to. Gardner once told his son Jim that it was his favorite problem. 
He said that the Monkey and the Coconuts is "probably the most worked on and least often solved" Diophantine puzzle. Since that time the Williams version of the problem has become a staple of recreational mathematics. The original story containing the problem was reprinted in full in Clifton Fadiman's 1962 anthology "The Mathematical Magpie", a book that the Mathematical Association of America recommends for acquisition by undergraduate mathematics libraries. Numerous variants which vary the number of sailors, monkeys, or coconuts have appeared in the literature. Solutions. &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; A Diophantine problem Diophantine analysis is the study of equations with rational coefficients requiring integer solutions. In Diophantine problems, there are fewer equations than unknowns. The "extra" information required to solve the equations is the condition that the solutions be integers. Any solution must satisfy all equations. Some Diophantine equations have no solution, some have one or a finite number, and others have infinitely many solutions. The monkey and the coconuts reduces to a two-variable linear Diophantine equation of the form ax + by = c, or more generally, (a/d)x + (b/d)y = c/d where d is the greatest common divisor of a and b. By Bézout's identity, the equation is solvable if and only if d divides c. If it does, the equation has infinitely many periodic solutions of the form x = x0 + t · b, y = y0 + t · a where (x0,y0) is a solution and t is a parameter than can be any integer. The problem is not intended to be solved by trial-and-error; there are deterministic methods for solving (x0,y0) in this case (see text). Numerous solutions starting as early as 1928 have been published both for the original problem and Williams modification. Before entering upon a solution to the problem, a couple of things may be noted. If there were no remainders, given there are 6 divisions of 5, 56=15,625 coconuts must be in the pile; on the 6th and last division, each sailor receives 1024 coconuts. No smaller positive number will result in all 6 divisions coming out even. That means that in the problem as stated, any multiple of 15,625 may be added to the pile, and it will satisfy the problem conditions. That also means that the number of coconuts in the original pile is smaller than 15,625, else subtracting 15,625 will yield a smaller solution. But the number in the original pile is not trivially small, like 5 or 10 (that is why this is a hard problem) – it may be in the hundreds or thousands. Unlike trial and error in the case of guessing a polynomial root, trial and error for a Diophantine root will not result in any obvious convergence. There is no simple way of estimating what the solution will be. The original version. Martin Gardner's 1958 Mathematical Games column begins its analysis by solving the original problem (with one coconut also remaining in the morning) because it is easier than Williams's version. Let "F" be the number of coconuts received by each sailor after the final division into 5 equal shares in the morning. Then the number of coconuts left before the morning division is formula_0; the number present when the fifth sailor awoke was formula_1; the number present when the fourth sailor awoke was formula_2; and so on. 
We find that the size "N" of the original pile satisfies the Diophantine equation formula_3 Gardner points out that this equation is "much too difficult to solve by trial and error," but presents a solution he credits to J. H. C. Whitehead (via Paul Dirac): The equation also has solutions in negative integers. Trying out a few small negative numbers it turns out formula_4 is a solution. We add 15625 to "N" and 1024 to "F" to get the smallest positive solution: formula_5. Williams version. Trial and error fails to solve Williams's version, so a more systematic approach is needed. Using a sieve. The search space can be reduced by a series of increasingly larger factors by observing the structure of the problem so that a bit of trial and error finds the solution. The search space is much smaller if one starts with the number of coconuts received by each man in the morning division, because that number is much smaller than the number in the original pile. If "F" is the number of coconuts each sailor receives in the final division in the morning, the pile in the morning is 5"F", which must also be divisible by 4, since the last sailor in the night combined 4 piles for the morning division. So the morning pile, call the number "n", is a multiple of 20. The pile before the last sailor woke up must have been 5/4("n")+1. If only one sailor woke up in the night, then 5/4(20)+1 = 26 works for the minimum number of coconuts in the original pile. But if two sailors woke up, 26 is not divisible by 4, so the morning pile must be some multiple of 20 that yields a pile divisible by 4 before the last sailor wakes up. It so happens that 3*20=60 works for two sailors: applying the recursion formula for "n" twice yields 96 as the smallest number of coconuts in the original pile. 96 is divisible by 4 once more, so for 3 sailors awakening, the pile could have been 121 coconuts. But 121 is not divisible by 4, so for 4 sailors awakening, one needs to make another leap. At this point, the analogy becomes obtuse, because in order to accommodate 4 sailors awakening, the morning pile must be some multiple of 60: if one is persistent, it may be discovered that 17*60=1020 does the trick and the minimum number in the original pile would be 2496. A last iteration on 2496 for 5 sailors awakening, i.e. 5/4(2496)+1 brings the original pile to 3121 coconuts. Blue coconuts. Another device is to use extra objects to clarify the division process. Suppose that in the evening we add four blue coconuts to the pile. Then the first sailor to wake up will find the pile to be evenly divisible by five, instead of having one coconut left over. The sailor divides the pile into fifths such that each blue coconut is in a different fifth; then he takes the fifth with no blue coconut, gives one of his coconuts to the monkey, and puts the other four fifths (including all four blue coconuts) back together. Each sailor does the same. During the final division in the morning, the blue coconuts are left on the side, belonging to no one. Since the whole pile was evenly divided 5 times in the night, it must have contained 55 coconuts: 4 blue coconuts and 3121 ordinary coconuts. The device of using additional objects to aid in conceptualizing a division appeared as far back as 1912 in a solution due to Norman H. Anning. A related device appears in the 17-animal inheritance puzzle: A man wills 17 horses to his three sons, specifying that the eldest son gets half, the next son one-third, and the youngest son, one-ninth of the animals. 
The sons are confounded, so they consult a wise horse trader. He says, "here, borrow my horse." The sons duly divide the horses, discovering that all the divisions come out even, with one horse left over, which they return to the trader. Base 5 numbering. A simple solution appears when the divisions and subtractions are performed in base 5. Consider the subtraction, when the first sailor takes his share (and the monkey's). Let n0,n1... represent the digits of N, the number of coconuts in the original pile, and s0,s1... represent the digits of the sailor's share S, both base 5. After the monkey's share, the least significant digit of N must now be 0; after the subtraction, the least significant digit of N' left by the first sailor must be 1, hence the following (the actual number of digits in N as well as S is unknown, but they are irrelevant just now): n5n4n3n2n1 0 (N5) s4s3s2s1s0 (S5) 1 (N'5) The digit subtracted from 0 base 5 to yield 1 is 4, so s0=4. But since S is (N-1)/5, and dividing by 55 is just shifting the number right one position, n1=s0=4. So now the subtraction looks like: n5n4n3n2 4 0 s4s3s2s1 4 1 Since the next sailor is going to do the same thing on N', the least significant digit of N' becomes 0 after tossing one to the monkey, and the LSD of S' must be 4 for the same reason; the next digit of N' must also be 4. So now it looks like: n5n4n3n2 4 0 s4s3s2s1 4 4 1 Borrowing 1 from n1 (which is now 4) leaves 3, so s1 must be 4, and therefore n2 as well. So now it looks like: n5n4n3 4 4 0 s4s3s2 4 4 4 1 But the same reasoning again applies to N' as applied to N, so the next digit of N' is 4, so s2 and n3 are also 4, etc. There are 5 divisions; the first four must leave an odd number base 5 in the pile for the next division, but the last division must leave an even number base 5 so the morning division will come out even (in 5s). So there are four 4s in N following a LSD of 1: N=444415=312110 A numerical approach. A straightforward numeric analysis goes like this: If N is the initial number, each of 5 sailors transitions the original pile thus: N =&gt; 4(N–1)/5 or equivalently, N =&gt; 4(N+4)/5 – 4. Repeating this transition 5 times gives the number left in the morning: N =&gt; 4(N+4)/5 – 4    =&gt; 16(N+4)/25 – 4    =&gt; 64(N+4)/125 – 4    =&gt; 256(N+4)/625 – 4    =&gt; 1024(N+4)/3125 – 4 Since that number must be an integer and 1024 is relatively prime to 3125, N+4 must be a multiple of 3125. The smallest such multiple is 3125 · 1, so N = 3125 – 4 = 3121; the number left in the morning comes to 1020, which is evenly divisible by 5 as required. Modulo congruence. A simple succinct solution can be obtained by directly utilizing the recursive structure of the problem: There were five divisions of the coconuts into fifths, each time with one left over (putting aside the final division in the morning). The pile remaining after each division must contain an integral number of coconuts. If there were only one such division, then it is readily apparent that 5 · 1+1=6 is a solution. In fact any multiple of five plus one is a solution, so a possible general formula is 5 · "k" – 4, since a multiple of 5 plus 1 is also a multiple of 5 minus 4. So 11, 16, etc also work for one division. If two divisions are done, a multiple of 5 · 5=25 rather than 5 must be used, because 25 can be divided by 5 twice. So the number of coconuts that could be in the pile is "k" · 25 – 4. "k"=1 yielding 21 is the smallest positive number that can be successively divided by 5 twice with remainder 1. 
If there are 5 divisions, then multiples of 55=3125 are required; the smallest such number is 3125 – 4 = 3121. After 5 divisions, there are 1020 coconuts left over, a number divisible by 5 as required by the problem. In fact, after "n" divisions, it can be proven that the remaining pile is divisible by "n", a property made convenient use of by the creator of the problem. A formal way of stating the above argument is: The original pile of coconuts will be divided by 5 a total of 5 times with a remainder of 1, not considering the last division in the morning. Let N = number of coconuts in the original pile. Each division must leave the number of nuts in the same congruence class (mod 5). So, formula_6 (mod 5) (the –1 is the nut tossed to the monkey) formula_7 (mod 5) formula_8 (mod 5) (–4 is the congruence class) So if we began in modulo class –4 nuts then we will remain in modulo class –4. Since ultimately we have to divide the pile 5 times or 5^5, the original pile was 5^5 – 4 = 3121 coconuts. The remainder of 1020 coconuts conveniently divides evenly by 5 in the morning. This solution essentially reverses how the problem was (probably) constructed. The Diophantine equation and forms of solution. The equivalent Diophantine equation for this version is: formula_9 (1) where "N" is the original number of coconuts, and "F" is the number received by each sailor on the final division in the morning. This is only trivially different than the equation above for the predecessor problem, and solvability is guaranteed by the same reasoning. Reordering, formula_10 (2) This Diophantine equation has a solution which follows directly from the Euclidean algorithm; in fact, it has infinitely many periodic solutions positive and negative. If (x0, y0) is a solution of 1024x–15625y=1, then N0=x0 · 8404, F0=y0 · 8404 is a solution of (2), which means any solution must have the form formula_11(3) where formula_12 is an arbitrary parameter that can have any integral value. A reductionist approach. One can take both sides of (1) above modulo 1024, so formula_13 Another way of thinking about it is that in order for formula_14 to be an integer, the RHS of the equation must be an integral multiple of 1024; that property will be unaltered by factoring out as many multiples of 1024 as possible from the RHS. Reducing both sides by multiples of 1024, formula_15 subtracting, formula_16 factoring, formula_17 The RHS must still be a multiple of 1024; since 53 is relatively prime to 1024, 5"F"+4 must be a multiple of 1024. The smallest such multiple is 1 · 1024, so 5"F"+4=1024 and F=204. Substituting into (1) formula_18 Euclidean algorithm. The Euclidean algorithm is quite tedious but a general methodology for solving rational equations ax+by=c requiring integral answers. 
From (2) above, it is evident that 1024 (210) and 15625 (56) are relatively prime and therefore their GCD is 1, but we need the reduction equations for back substitution to obtain "N" and "F" in terms of these two quantities: First, obtain successive remainders until GCD remains: 15625 = 15·1024 + 265 (a) 1024 = 3·265 + 229 (b) 265 = 1·229 + 36 (c) 229 = 6·36 + 13 (d) 36 = 2·13 + 10 (e) 13 = 1·10 + 3 (f) 10 = 3·3 + 1 (g) (remainder 1 is GCD of 15625 and 1024) 1 = 10 – 3(13–1·10) = 4·10 – 3·13 (reorder (g), substitute for 3 from (f) and combine) 1 = 4·(36 – 2·13) – 3·13 = 4·36 – 11·13 (substitute for 10 from (e) and combine) 1 = 4·36 – 11·(229 – 6·36) = –11·229 + 70*36 (substitute for 13 from (d) and combine) 1 = –11·229 + 70·(265 – 1·229) = –81·229 + 70·265 (substitute for 36 from (c) and combine) 1 = –81·(1024 – 3·265) + 70·265 = –81·1024 + 313·265 (substitute for 229 from (b) and combine) 1 = –81·1024 + 313·(15625 – 15·1024) = 313·15625 – 4776·1024 (substitute for 265 from (a) and combine) So the pair (N0,F0) = (-4776·8404, -313*8404); the smallest formula_12 (see (3) in the previous subsection) that will make both N and F positive is 2569, so: formula_19 formula_20 Continued fraction. Alternately, one may use a continued fraction, whose construction is based on the Euclidean algorithm. The continued fraction for &lt;templatestyles src="Fraction/styles.css" /&gt;1024⁄15625 (0.065536 exactly) is [;15,3,1,6,2,1,3]; its convergent terminated after the repetend is &lt;templatestyles src="Fraction/styles.css" /&gt;313⁄4776, giving us x0=–4776 and y0=313. The least value of "t" for which both N and F are non-negative is 2569, so formula_21. This is the smallest positive number that satisfies the conditions of the problem. A generalized solution. When the number of sailors is a parameter, let it be formula_22, rather than a computational value, careful algebraic reduction of the relation between the number of coconuts in the original pile and the number allotted to each sailor in the morning yields an analogous Diophantine relation whose coefficients are expressions in formula_22. The first step is to obtain an algebraic expansion of the recurrence relation corresponding to each sailor's transformation of the pile, formula_23 being the number left by the sailor: formula_24 where formula_25, the number originally gathered, and formula_26 the number left in the morning. Expanding the recurrence by substituting formula_23 for formula_27 formula_22 times yields: formula_28 Factoring the latter term, formula_29 The power series polynomial in brackets of the form formula_30 sums to formula_31 so, formula_32 which simplifies to: formula_33 But formula_34 is the number left in the morning which is a multiple of formula_22 (i.e. formula_35, the number allotted to each sailor in the morning): formula_36 Solving for formula_37(=formula_38), formula_39 The equation is a linear Diophantine equation in two variables, formula_38 and formula_35. formula_22 is a parameter that can be any integer. The nature of the equation and the method of its solution do not depend on formula_22. Number theoretic considerations now apply. For formula_38 to be an integer, it is sufficient that formula_40 be an integer, so let it be formula_41: formula_42 The equation must be transformed into the form formula_43 whose solutions are formulaic. Hence: formula_44, where formula_45 Because formula_22 and formula_46 are relatively prime, there exist integer solutions formula_47 by Bézout's identity. 
This equation can be restated as: formula_48 But ("m"–1)"m" is a polynomial "Z" · "m"–1 if "m" is odd and "Z" · "m"+1 if "m" is even, where "Z" is a polynomial with monomial basis in "m". Therefore "r0"=1 if "m" is odd and "r0"=–1 if "m" is even is a solution. Bézout's identity gives the periodic solution formula_49, so substituting for formula_41 in the Diophantine equation and rearranging: formula_50 where formula_51 for formula_22 odd and formula_52 for formula_22 even and formula_53 is any integer. For a given formula_22, the smallest positive formula_53 will be chosen such that formula_38 satisfies the constraints of the problem statement. In the William's version of the problem, formula_22 is 5 sailors, so formula_54 is 1, and formula_53 may be taken to be zero to obtain the lowest positive answer, so "N" = 1  · 55 – 4 = 3121 for the number of coconuts in the original pile. (It may be noted that the next sequential solution of the equation for "k"=–1, is –12504, so trial and error around zero will not solve the Williams version of the problem, unlike the original version whose equation, fortuitously, had a small magnitude negative solution). Here is a table of the positive solutions formula_38 for the first few formula_22 (formula_53 is any non-negative integer): Other variants and general solutions. Other variants, including the putative predecessor problem, have related general solutions for an arbitrary number of sailors. When the morning division also has a remainder of one, the solution is: formula_55 For formula_56 and formula_57 this yields 15,621 as the smallest positive number of coconuts for the pre-William's version of the problem. In some earlier alternate forms of the problem, the divisions came out even, and nuts (or items) were allocated from the remaining pile "after" division. In these forms, the recursion relation is: formula_58 The alternate form also had two endings, when the morning division comes out even, and when there is one nut left over for the monkey. When the morning division comes out even, the general solution reduces via a similar derivation to: formula_59 For example, when formula_60 and formula_57, the original pile has 1020 coconuts, and after four successive even divisions in the night with a coconut allocated to the monkey after each division, there are 80 coconuts left over in the morning, so the final division comes out even with no coconut left over. When the morning division results in a nut left over, the general solution is: formula_61 where formula_52 if formula_22 is odd, and formula_51 if formula_22 is even. For example, when formula_62, formula_52 and formula_57, the original pile has 51 coconuts, and after three successive divisions in the night with a coconut allocated to the monkey after each division, there are 13 coconuts left over in the morning, so the final division has a coconut left over for the monkey. Other post-Williams variants which specify different remainders including positive ones (i.e. the monkey adds coconuts to the pile), have been treated in the literature. The solution is: formula_63 where formula_51 for formula_22 odd and formula_52 for formula_22 even, formula_64 is the remainder after each division (or number of monkeys) and formula_53 is any integer (formula_64 is negative if the monkeys add coconuts to the pile). 
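The generalized closed form above can be checked numerically: the sketch below (an illustration, not taken from the cited literature) computes the smallest positive N for a few crew sizes with the Williams ending (an even split in the morning) and verifies each value by replaying the divisions.

```python
def smallest_positive_pile(m):
    """Smallest positive N for m sailors with an even morning split:
    N = r0 * m**m - (m - 1) + k * m**(m + 1), r0 = 1 for odd m, -1 for even m,
    taking the smallest k that makes N positive."""
    r0 = 1 if m % 2 == 1 else -1
    n = r0 * m**m - (m - 1)
    while n <= 0:
        n += m**(m + 1)
    return n

def replay(n, m):
    """Check n against the story: m nightly divisions, each leaving one coconut
    for the monkey, then an even split of whatever remains in the morning."""
    for _ in range(m):
        if n % m != 1:
            return False
        n = (n - 1) * (m - 1) // m
    return n % m == 0

for m in range(2, 9):
    n = smallest_positive_pile(m)
    print(m, n, replay(n, m))     # e.g. m = 5 gives 3121, as derived above
```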
Other variants in which the number of men or the remainders vary between divisions are generally outside the class of problems associated with the monkey and the coconuts, though these similarly reduce to linear Diophantine equations in two variables. Their solutions yield to the same techniques and present no new difficulties.
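As a compact recap of the Euclidean-algorithm route described earlier, the following sketch (an illustration, with our own function names) solves the Williams-version equation 1024N - 15625F = 8404 via the extended Euclidean algorithm.

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

# Williams version, equation (2): 1024*N - 15625*F = 8404.
g, x, y = extended_gcd(1024, 15625)     # 1024*x + 15625*y = 1, so x inverts 1024 mod 15625
N = (x * 8404) % 15625                  # smallest non-negative N in the family N0 + 15625*t
F = (1024 * N - 8404) // 15625          # each sailor's share in the morning
print(N, F)                             # 3121 204
```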
[ { "math_id": 0, "text": "5F+1" }, { "math_id": 1, "text": "\\tfrac{5}{4}(5F+1)+1 = \\tfrac{25}{4}F+\\tfrac{9}{4}" }, { "math_id": 2, "text": "\\tfrac{5}{4}(\\tfrac{25}{4}F+\\tfrac{9}{4})+1 = \\tfrac{125}{16}F+\\tfrac{241}{16}" }, { "math_id": 3, "text": "1024N = 15625F + 11529" }, { "math_id": 4, "text": "N = -4, F = -1" }, { "math_id": 5, "text": "N = 15621, F = 1023" }, { "math_id": 6, "text": "N \\equiv 4/5\\cdot(N-1)" }, { "math_id": 7, "text": "5N \\equiv 4N - 4 " }, { "math_id": 8, "text": "N \\equiv -4" }, { "math_id": 9, "text": "1024N=15625F+8404" }, { "math_id": 10, "text": "1024N-15625F=8404" }, { "math_id": 11, "text": " \\begin{cases}\nN=N_0 + 15625\\cdot t \\\\\nF=F_0 + 1024\\cdot t\n\\end{cases}" }, { "math_id": 12, "text": "t" }, { "math_id": 13, "text": "1024N=15625F+8404 \\mod 1024" }, { "math_id": 14, "text": "n" }, { "math_id": 15, "text": "0=(15625F-15\\cdot 1024F)+(8404-8\\cdot 1024) \\mod 1024" }, { "math_id": 16, "text": "0=265F+212 \\mod 1024" }, { "math_id": 17, "text": "0=53 \\cdot(5F+4) \\mod 1024" }, { "math_id": 18, "text": "1024N=15625\\cdot 204+8404 \\Rightarrow N=\\frac{3 195 904}{1024} \\Rightarrow N=3121" }, { "math_id": 19, "text": " N = N_0 + 15625\\cdot 2569 = 3121" }, { "math_id": 20, "text": " F = F_0 + 1024\\cdot 2569 = 204" }, { "math_id": 21, "text": "N= -4776 \\cdot 8404+15625 \\cdot 2569=3121" }, { "math_id": 22, "text": "m" }, { "math_id": 23, "text": "n_i" }, { "math_id": 24, "text": "n_i = \\frac{m-1}{m}(n_{i-1}-1)" }, { "math_id": 25, "text": "n_{i\\rightarrow 0} \\equiv N" }, { "math_id": 26, "text": "n_{i\\rightarrow m}" }, { "math_id": 27, "text": "n_{i-1}" }, { "math_id": 28, "text": "n_m=\\left(\\frac{m-1}{m}\\right)^m\\cdot n_0 - \\left[(\\frac{m-1}{m})^m+...+(\\frac{m-1}{m})^2+\\frac{m-1}{m}\\right]" }, { "math_id": 29, "text": "n_m=\\left(\\frac{m-1}{m}\\right)^m\\cdot n_0 - \\left(\\frac{m-1}{m}\\right)\\cdot \\left[(\\frac{m-1}{m})^{m-1}+...+\\frac{m-1}{m}+1\\right]" }, { "math_id": 30, "text": "x^{m-1}+...+x+1" }, { "math_id": 31, "text": "(1-x^m)/(1-x)" }, { "math_id": 32, "text": "n_m=\\left(\\frac{m-1}{m}\\right)^m\\cdot n_0 - \\left(\\frac{m-1}{m}\\right)\\cdot \\left[\\left(1-(\\frac{m-1}{m})^m\\right)\\bigg/\\left(1-(\\frac{m-1}{m})\\right)\\right]" }, { "math_id": 33, "text": "n_m=\\left(\\frac{m-1}{m}\\right)^m\\cdot n_0 - (m-1)\\cdot \\frac{m^m - (m-1)^m}{m^m}" }, { "math_id": 34, "text": "n_m" }, { "math_id": 35, "text": "F" }, { "math_id": 36, "text": "m\\cdot F = \\left(\\frac{m-1}{m}\\right)^m\\cdot n_0 - (m-1)\\cdot \\frac{m^m - (m-1)^m}{m^m}" }, { "math_id": 37, "text": "n_0" }, { "math_id": 38, "text": "N" }, { "math_id": 39, "text": "N=m^m\\cdot \\frac{m-1+m\\cdot F}{(m-1)^m} - (m-1)" }, { "math_id": 40, "text": "\\frac{m-1+m\\cdot F}{(m-1)^m}" }, { "math_id": 41, "text": "r" }, { "math_id": 42, "text": "r=\\frac{m-1+m\\cdot F}{(m-1)^m}" }, { "math_id": 43, "text": "ax+by=\\pm 1" }, { "math_id": 44, "text": "(m-1)^m\\cdot r - m\\cdot s = -1" }, { "math_id": 45, "text": "s = 1+F" }, { "math_id": 46, "text": "m-1" }, { "math_id": 47, "text": "(r,s)" }, { "math_id": 48, "text": "(m-1)^m\\cdot r\\equiv -1\\mod m" }, { "math_id": 49, "text": "r=r_0+k\\cdot m" }, { "math_id": 50, "text": "N=r_0\\cdot m^m - (m-1) + k\\cdot m^{m+1}" }, { "math_id": 51, "text": "r_0=1" }, { "math_id": 52, "text": "r_0=-1" }, { "math_id": 53, "text": "k" }, { "math_id": 54, "text": "r_0" }, { "math_id": 55, "text": "N=- (m-1) + k\\cdot m^{m+1}" }, { "math_id": 56, "text": "m=5" }, { "math_id": 57, "text": "k=1" }, { "math_id": 58, "text": 
"n_i = \\frac{m-1}{m}n_{i-1}-1" }, { "math_id": 59, "text": "N=-m + k\\cdot m^{m+1}" }, { "math_id": 60, "text": "m=4" }, { "math_id": 61, "text": "N=r_0\\cdot m^m - m + k\\cdot m^{m+1}" }, { "math_id": 62, "text": "m=3" }, { "math_id": 63, "text": "N=r_0\\cdot m^m - c \\cdot(m-1) + k\\cdot m^{m+1}" }, { "math_id": 64, "text": "c" } ]
https://en.wikipedia.org/wiki?curid=58232142
58232370
Nikon Z-mount
Digital camera lens mount Nikon Z-mount (stylised as formula_0) is an interchangeable lens mount developed by Nikon for its mirrorless digital cameras. In late 2018, Nikon released two cameras that use this mount, the full-frame Nikon Z7 and Nikon Z6. In late 2019 Nikon announced their first Z-mount camera with an APS-C sensor, the Nikon Z50. In July 2020 the entry-level full-frame Z5 was introduced. In October 2020, Nikon announced the Nikon Z6II and Nikon Z7II, which succeed the Z6 and Z7, respectively. The APS-C lineup was expanded in July 2021, with the introduction of the retro styled Nikon Zfc, and in October 2021, Nikon unveiled the Nikon Z9, which effectively succeeds the brand's flagship D6 DSLR. The APS-C lineup was further expanded with the Nikon Z30, announced at the end of June 2022. The Nikon Z6III was announced in June 2024. Nikon SLR cameras, both film and digital, have used the Nikon F-mount with its 44 mm diameter since 1959. The Z-mount has a 55 mm diameter. The FTZ lens adapter allows many F-mount lenses to be used on Z-mount cameras. The FTZ allows AF-S, AF-P and AF-I lenses to autofocus on Z-mount cameras. The older screw-drive AF and AF-D lenses will not autofocus with the FTZ adapter, but they do retain metering and Exif data. Z-mount cameras support metering as well as in-body image stabilization (IBIS) with manual focus lenses. The 55 mm throat diameter of the Nikon Z-mount makes it the largest full-frame lens mount. It is much larger than the F-mount and the E-mount used by Sony mirrorless cameras but only slightly larger than the 54 mm of both the Canon EF and RF mounts. It is also slightly larger than the 51.6 mm diameter full-frame mirrorless Leica L-Mount. The Z-mount has also a very short flange distance of 16 mm, which is shorter than all mentioned lens mounts. This flange distance allows for numerous lenses of nearly all other current and previous mounts to be mounted to Z-mount with an adapter. The Z-mount 58 mm &lt;templatestyles src="F//sandbox/styles.css" /&gt;f/0.95 S Noct lens reintroduced the Noct brand historically used by Nikon for lenses with ultra-fast maximum apertures. Nikon published a roadmap outlining which lenses are forthcoming when the Z-mount system was initially announced. The roadmap has been updated multiple times. As of November 2023 the current version of the roadmap indicates a 35 mm lens left to be released within 2023. Z-mount lenses. Nikon uses a new designation system for their Z-mount lenses. The older F-mount Nikkor designations are no longer used, though they overlap in some areas (e.g. the VR and DX labels). Nikon also introduced the S-Line branding for especially high-performance ("superior") lenses, which is akin to Canon's L designation or Sony's "G-Master" branding. Teleconverters. The Nikon teleconverters are only compatible with select Nikon Z lenses. They cannot be used in conjunction with the FTZ adapter. Z-mount teleconverters cannot be mounted on top of each other. The following lenses are compatible with the Nikon teleconverters: Mount adapters. Nikon specifies lens compatibility as in the following table. F-mount teleconverters can be used on compatible lenses, but the Z-mount teleconverters may not be used in conjunction with the FTZ. For details on the lens types, refer to Nikon F-mount. Third-party lenses and adapters. Numerous manufacturers offer purely manual lenses and lens mount adapters for the Z-mount. 
These do not interface electronically with the camera and do not support autofocus or automatic control of the aperture. Some manufacturers offer lenses and adapters with full electronic functionality (autofocus, automatic aperture control, Exif metadata, etc.). Third-party lenses and adapters often rely on reverse engineering the electronic protocol of a lens mount and might not work properly on new cameras or firmware versions. However, Cosina Voigtländer, Sigma and Tamron licensed the mount from Nikon, enabling full compatibility. Accessories. Nikon Z cameras use the same iTTL flash system as Nikon DSLRs, which remains fully backward compatible, including with third-party flashes and flash transmitters. The Z 9 and Z 8 use the same circular 10-pin accessory port (for a remote shutter release, external GPS receiver, etc.) as previous "pro-grade" Nikons, while the Z 5/6/7 use the rectangular 8-pin accessory port introduced with the D90 and used on most other Nikon DSLRs since. The Z 30/50/fc do not have an accessory port. Most Z cameras use the same batteries as their "peer" DSLRs. Battery grips are available for several models; Nikon does not offer grips for the Z 50, Z 30 and Z fc. The MC-N10 is a remote-control grip for the Z 30, Z fc, Z 5, Z 6, Z 6II, Z 6III, Z 7, Z 7II, Z f, Z 8 and Z 9 (as of October 2023). It connects through a USB-C cable to the camera and replicates the right-hand controls of the camera body. It is designed for film applications and uses an ARRI rosette-type mount.
[ { "math_id": 0, "text": "\\mathbb{Z}" } ]
https://en.wikipedia.org/wiki?curid=58232370
582340
Quadratic sieve
Integer factorization algorithm The quadratic sieve algorithm (QS) is an integer factorization algorithm and, in practice, the second-fastest method known (after the general number field sieve). It is still the fastest for integers under 100 decimal digits or so, and is considerably simpler than the number field sieve. It is a general-purpose factorization algorithm, meaning that its running time depends solely on the size of the integer to be factored, and not on special structure or properties. It was invented by Carl Pomerance in 1981 as an improvement to Schroeppel's linear sieve. Basic aim. The algorithm attempts to set up a congruence of squares modulo "n" (the integer to be factorized), which often leads to a factorization of "n". The algorithm works in two phases: the "data collection" phase, where it collects information that may lead to a congruence of squares; and the "data processing" phase, where it puts all the data it has collected into a matrix and solves it to obtain a congruence of squares. The data collection phase can be easily parallelized to many processors, but the data processing phase requires large amounts of memory, and is difficult to parallelize efficiently over many nodes or if the processing nodes do not each have enough memory to store the whole matrix. The block Wiedemann algorithm can be used in the case of a few systems each capable of holding the matrix. The naive approach to finding a congruence of squares is to pick a random number, square it, divide by "n" and hope the least non-negative remainder is a perfect square. For example, formula_0. This approach finds a congruence of squares only rarely for large "n", but when it does find one, more often than not, the congruence is nontrivial and the factorization is complete. This is roughly the basis of Fermat's factorization method. The quadratic sieve is a modification of Dixon's factorization method. The general running time required for the quadratic sieve (to factor an integer "n") is conjectured to be formula_1 in the L-notation. The constant "e" is the base of the natural logarithm. The approach. To factorize the integer "n", Fermat's method entails a search for a single number "a", with √"n" &lt; "a" &lt; "n"−1, such that the remainder of "a"2 divided by "n" is a square. But these "a" are hard to find. The quadratic sieve consists of computing the remainder of "a"2/"n" for several "a", then finding a subset of these whose product is a square. This will yield a congruence of squares. For example, consider attempting to factor the number 1649. We have: formula_2. None of the integers formula_3 is a square, but the product formula_4 is a square. We also had formula_5 since formula_6. The observation that formula_4 thus gives a congruence of squares formula_7 Hence formula_8 for some integer formula_9. We can then factor formula_10 using the Euclidean algorithm to calculate the greatest common divisor. So the problem has now been reduced to: given a set of integers, find a subset whose product is a square. By the fundamental theorem of arithmetic, any positive integer can be written uniquely as a product of prime powers. We do this in a vector format; for example, the prime-power factorization of 504 is 2^3 · 3^2 · 5^0 · 7^1, so it is represented by the exponent vector (3,2,0,1). Multiplying two integers then corresponds to adding their exponent vectors. A number is a square when its exponent vector is even in every coordinate.
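The factor-extraction step just described can be made concrete with a short Python sketch (an illustrative fragment, not an optimized implementation); it reuses the 1649 example and a toy factor base of {2, 5}, both taken from the discussion above:

    from math import gcd, isqrt

    def exponent_vector(m, primes):
        # exponent of each factor-base prime in m; m must be smooth over the primes
        vec = []
        for p in primes:
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            vec.append(e)
        assert m == 1, "value is not smooth over the given factor base"
        return vec

    primes = [2, 5]
    v32, v200 = exponent_vector(32, primes), exponent_vector(200, primes)
    total = [a + b for a, b in zip(v32, v200)]
    print(total, all(e % 2 == 0 for e in total))   # [8, 2] True, so 32*200 is a square

    # turn the square into factors of 1649, as in the example above
    n, x = 1649, 41 * 43                  # 41^2 * 43^2 is congruent to 32*200 (mod 1649)
    y = isqrt(32 * 200)                   # 80
    print(gcd(x - y, n), gcd(x + y, n))   # 17 97

Here the exponent vector of 32·200 is even in every coordinate, so the product is a square, and the greatest common divisors recover the factors 17 and 97 found above.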
For example, with exponent vectors, (3,2,0,1) + (1,0,0,1) = (4,2,0,2), so (504)(14) is a square. Searching for a square requires knowledge only of the parity of the numbers in the vectors, so it is sufficient to compute these vectors mod 2: (1,0,0,1) + (1,0,0,1) = (0,0,0,0). So given a set of (0,1)-vectors, we need to find a subset which adds to the zero vector mod 2. This is a linear algebra problem since the ring formula_11 can be regarded as the Galois field of order 2; that is, we can divide by all non-zero numbers (there is only one, namely 1) when calculating modulo 2. It is a theorem of linear algebra that with more vectors than each vector has entries, a linear dependency always exists. It can be found by Gaussian elimination. However, simply squaring many random numbers mod "n" produces a very large number of different prime factors, and so very long vectors and a very large matrix. The trick is to look specifically for numbers "a" such that "a"2 mod "n" has only small prime factors (they are smooth numbers). They are harder to find, but using only smooth numbers keeps the vectors and matrices smaller and more tractable. The quadratic sieve searches for smooth numbers using a technique called sieving, discussed later, from which the algorithm takes its name. The algorithm. To summarize, the basic quadratic sieve algorithm has these main steps: choose a smoothness bound "B" and take as the factor base the primes up to "B" modulo which "n" is a quadratic residue; use sieving to collect values of "x" near the square root of "n" for which "y"("x") = "x"2 − "n" is smooth over the factor base ("relations"), until there are more relations than primes in the factor base; reduce the exponent vectors of the relations modulo 2 and use linear algebra over GF(2) to find a subset whose product is a square; and derive from that subset a congruence of squares, from which a factor of "n" is extracted with a greatest common divisor computation (trying another dependency if the factor found is trivial). The remainder of this article explains details and extensions of this basic algorithm. How QS optimizes finding congruences. The quadratic sieve attempts to find pairs of integers "x" and "y"("x") (where "y"("x") is a function of "x") satisfying a much weaker condition than "x"2 ≡ "y"2 (mod "n"). It selects a set of primes called the "factor base", and attempts to find "x" such that the least absolute remainder of "y"("x") = "x"2 mod "n" factorizes completely over the factor base. Such "y" values are said to be "smooth" with respect to the factor base. The factorization of a value of "y"("x") that splits over the factor base, together with the value of "x", is known as a "relation". The quadratic sieve speeds up the process of finding relations by taking "x" close to the square root of "n". This ensures that "y"("x") will be smaller, and thus have a greater chance of being smooth. formula_13 formula_14 This implies that "y" is on the order of 2"x"[√"n"]. However, it also implies that "y" grows linearly with "x" times the square root of "n". Another way to increase the chance of smoothness is by simply increasing the size of the factor base. However, it is necessary to find at least one smooth relation more than the number of primes in the factor base, to ensure the existence of a linear dependency. Partial relations and cycles. Even if "y"("x") is not smooth for some value of "x", it may be possible to merge two such "partial relations" to form a full one, if the two "y"s are products of the same prime(s) outside the factor base. [Note that this is equivalent to extending the factor base.] For example, if the factor base is {2, 3, 5, 7} and "n" = 91, there are partial relations: formula_15 formula_16 Multiply these together: formula_17 and multiply both sides by (11−1)2 modulo 91. 11−1 modulo 91 is 58, so: formula_18 formula_19 producing a full relation. Such a full relation (obtained by combining partial relations) is called a "cycle". Sometimes, forming a cycle from two partial relations leads directly to a congruence of squares, but rarely. Checking smoothness by sieving. There are several ways to check for smoothness of the "y"s.
The most obvious is by trial division, although this increases the running time for the data collection phase. Another method that has some acceptance is the elliptic curve method (ECM). In practice, a process called "sieving" is typically used. If "f"("x") is the polynomial formula_20 we have formula_21 Thus solving "f(x)" ≡ 0 (mod "p") for "x" generates a whole sequence of numbers "y" for which "y"="f"("x"), all of which are divisible by "p". This is finding a square root modulo a prime, for which there exist efficient algorithms, such as the Shanks–Tonelli algorithm. (This is where the quadratic sieve gets its name: "y" is a quadratic polynomial in "x", and the sieving process works like the Sieve of Eratosthenes.) The sieve starts by setting every entry in a large array "A"[] of bytes to zero. For each "p", solve the quadratic equation mod "p" to get two roots "α" and "β", and then add an approximation to log("p") to every entry for which "y"("x") = 0 mod "p" ... that is, "A"["kp" + "α"] and "A"["kp" + "β"]. It is also necessary to solve the quadratic equation modulo small powers of "p" in order to recognise numbers divisible by small powers of a factor-base prime. At the end of the factor base, any "A"[] containing a value above a threshold of roughly log("x"2−"n") will correspond to a value of "y"("x") which splits over the factor base. The information about exactly which primes divide "y"("x") has been lost, but it has only small factors, and there are many good algorithms for factoring a number known to have only small factors, such as trial division by small primes, SQUFOF, Pollard rho, and ECM, which are usually used in some combination. There are many "y"("x") values that work, so the factorization process at the end doesn't have to be entirely reliable; often the processes misbehave on say 5% of inputs, requiring a small amount of extra sieving. Example of basic sieve. This example will demonstrate standard quadratic sieve without logarithm optimizations or prime powers. Let the number to be factored "N" = 15347, therefore the ceiling of the square root of "N" is 124. Since "N" is small, the basic polynomial is enough: "y"("x") = ("x" + 124)2 − 15347. Data collection. Since "N" is small, only four primes are necessary. The first four primes "p" for which 15347 has a square root mod "p" are 2, 17, 23, and 29 (in other words, 15347 is a quadratic residue modulo each of these primes). These primes will be the basis for sieving. Now we construct our sieve formula_22 of formula_23 and begin the sieving process for each prime in the basis, choosing to sieve the first 0 ≤ X &lt; 100 of Y(X): formula_24 The next step is to perform the sieve. For each "p" in our factor base formula_25 solve the equation formula_26 to find the entries in the array "V" which are divisible by "p". For formula_27 solve formula_28 to get the solution formula_29. Thus, starting at X=1 and incrementing by 2, each entry will be divisible by 2. Dividing each of those entries by 2 yields formula_30 Similarly for the remaining primes "p" in formula_31 the equationformula_32 is solved. Note that for every "p" &gt; 2, there will be 2 resulting linear equations due to there being 2 modular square roots. formula_33 Each equation formula_34 results in formula_35 being divisible by "p" at "x"="a" and each "p"th value beyond that. Dividing "V" by "p" at "a", "a"+"p", "a"+2"p", "a"+3"p", etc., for each prime in the basis finds the smooth numbers which are products of unique primes (first powers). 
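The sieving pass just described can be written as a few lines of Python; the sketch below uses the example values ("N" = 15347, factor base {2, 17, 23, 29}, interval 0 ≤ X &lt; 100) and, purely for brevity, finds the modular square roots by brute force rather than with the Shanks–Tonelli algorithm:

    n = 15347
    base = [2, 17, 23, 29]           # primes p for which n is a quadratic residue mod p
    M = 100                          # sieve interval 0 <= x < 100
    V = [(x + 124) ** 2 - n for x in range(M)]

    for p in base:
        # roots of (x + 124)^2 = n (mod p); brute force is fine at this toy size
        roots = [r for r in range(p) if ((r + 124) ** 2 - n) % p == 0]
        for r in roots:
            for x in range(r, M, p):
                V[x] //= p           # divide out a single factor of p (first powers only)

    print([x for x in range(M) if V[x] == 1])   # [0, 3, 71]

Running it divides out exactly one factor of each prime at the sieved positions; the resulting array "V" is the one displayed next.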
formula_36 Any entry of "V" that equals 1 corresponds to a smooth number. Since formula_37, formula_38, and formula_39 equal one, this corresponds to: Matrix processing. Since smooth numbers "Y" have been found with the property formula_40, the remainder of the algorithm follows equivalently to any other variation of Dixon's factorization method. Writing the exponents of the product of a subset of the equations formula_41 as a matrixformula_42 yields: formula_43 A solution to the equation is given by the left null space, simply formula_44 Thus the product of all three equations yields a square (mod N). formula_45 and formula_46 So the algorithm found formula_47 Testing the result yields GCD(3070860 - 22678, 15347) = 103, a nontrivial factor of 15347, the other being 149. This demonstration should also serve to show that the quadratic sieve is only appropriate when "n" is large. For a number as small as 15347, this algorithm is overkill. Trial division or Pollard rho could have found a factor with much less computation. Multiple polynomials. In practice, many different polynomials are used for "y" so that when "y"("x") starts to become large, resulting in poor density of smooth "y", this growth can be reset by switching polynomials. As usual, we choose "y"("x") to be a square modulo "n", but now with the form formula_48 formula_49 is chosen such that formula_50, so formula_51 for some formula_52. The polynomial y(x) can then be written as formula_53. If "A" is a square or a smooth number, then only the factor formula_54 has to be checked for smoothness. This approach, called Multiple Polynomial Quadratic Sieve (MPQS), is ideally suited for parallelization, since each processor involved in the factorization can be given "n", the factor base and a collection of polynomials, and it will have no need to communicate with the central processor until it has finished sieving with its polynomials. Large primes. One large prime. If, after dividing by all the factors less than "A", the remaining part of the number (the cofactor) is less than "A"2, then this cofactor must be prime. In effect, it can be added to the factor base, by sorting the list of relations into order by cofactor. If y(a) = 7*11*23*137 and y(b) = 3*5*7*137, then y(a)y(b) = 3*5*11*23 * 72 * 1372. This works by reducing the threshold of entries in the sieving array above which a full factorization is performed. More large primes. Reducing the threshold even further, and using an effective process for factoring y(x) values into products of even relatively large primes - ECM is superb for this - can find relations with most of their factors in the factor base, but with two or even three larger primes. Cycle finding then allows combining a set of relations sharing several primes into a single relation. Parameters from realistic example. To illustrate typical parameter choices for a realistic example on a real implementation including the multiple polynomial and large prime optimizations, the tool msieve was run on a 267-bit semiprime, producing the following parameters: Factoring records. Until the discovery of the number field sieve (NFS), QS was the asymptotically fastest known general-purpose factoring algorithm. Now, Lenstra elliptic curve factorization has the same asymptotic running time as QS (in the case where "n" has exactly two prime factors of equal size), but in practice, QS is faster since it uses single-precision operations instead of the multi-precision operations used by the elliptic curve method. 
On April 2, 1994, the factorization of RSA-129 was completed using QS. It was a 129-digit number, the product of two large primes, one of 64 digits and the other of 65 digits. The factor base for this factorization contained 524339 primes. The data collection phase took 5000 MIPS-years, done in distributed fashion over the Internet. The data collected totaled 2GB. The data processing phase took 45 hours on Bellcore's (now Telcordia Technologies) MasPar (massively parallel) supercomputer. This was the largest published factorization by a general-purpose algorithm, until NFS was used to factor RSA-130, completed April 10, 1996. All RSA numbers factored since then have been factored using NFS. The current QS factorization record is the 140-digit (463-bit) RSA-140, which was factored by Patrick Konsor in June 2020 using approximately 6,000 core hours over 6 days. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
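For completeness, the data-processing phase of the 15347 example can be sketched in the same way. The fragment below is an illustration only (names and structure are chosen for the sketch, not taken from any reference implementation); it reduces the exponent vectors of the three smooth relations found above modulo 2 and extracts the nontrivial factor 103 with a greatest common divisor:

    from math import gcd, isqrt

    n = 15347
    base = [2, 17, 23, 29]
    relations = {0: 29, 3: 782, 71: 22678}     # x -> y(x) from the sieve above

    def parity_vector(m):
        vec = []
        for p in base:
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            vec.append(e % 2)
        return vec

    for x, y in relations.items():
        print(x, parity_vector(y))
    # each column sums to 0 mod 2, so the product of all three y values is a square

    x_side, y_prod = 1, 1
    for x, y in relations.items():
        x_side = x_side * (x + 124) % n
        y_prod *= y
    y_side = isqrt(y_prod)                      # 22678
    print(gcd((x_side - y_side) % n, n))        # 103; the cofactor is 15347 // 103 == 149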
[ { "math_id": 0, "text": "80^2 \\equiv 441 = 21^2 \\pmod{5959}" }, { "math_id": 1, "text": "e^{(1 + o(1))\\sqrt{\\ln n \\ln\\ln n}} =L_n\\left[1/2,1\\right]" }, { "math_id": 2, "text": "41^2 \\equiv 32, 42^2 \\equiv 115, 43^2 \\equiv 200 \\pmod{1649}" }, { "math_id": 3, "text": "32, 115, 200" }, { "math_id": 4, "text": "32 \\cdot 200 = 80^2" }, { "math_id": 5, "text": "32 \\cdot 200 \\equiv 41^2 \\cdot 43^2 = (41 \\cdot 43)^2 \\equiv 114^2 \\pmod{1649}" }, { "math_id": 6, "text": "41 \\cdot 43 \\equiv 114 \\pmod{1649}" }, { "math_id": 7, "text": "114^2 \\equiv 80^2 \\pmod{1649}." }, { "math_id": 8, "text": "114^2 - 80^2 = (114 + 80) \\cdot (114 - 80) = 194 \\cdot 34 = k \\cdot 1649" }, { "math_id": 9, "text": "k" }, { "math_id": 10, "text": " 1649 = \\gcd( 194, 1649 ) \\cdot \\gcd( 34, 1649 ) = 97 \\cdot 17" }, { "math_id": 11, "text": "\\mathbb{Z}/2\\mathbb{Z}" }, { "math_id": 12, "text": "(a+b)(a-b)\\equiv0 \\pmod n" }, { "math_id": 13, "text": "y(x)=\\left(\\left\\lceil\\sqrt{n}\\right\\rceil+x\\right)^2-n\\hbox{ (where }x\\hbox{ is a small integer)}" }, { "math_id": 14, "text": "y(x)\\approx 2x\\left\\lceil\\sqrt{n}\\right\\rceil" }, { "math_id": 15, "text": "{21^2\\equiv 7^1\\cdot 11\\pmod{91}}" }, { "math_id": 16, "text": "{29^2\\equiv 2^1\\cdot 11\\pmod{91}}" }, { "math_id": 17, "text": "{(21\\cdot 29)^2\\equiv2^1\\cdot7^1\\cdot11^2\\pmod{91}}" }, { "math_id": 18, "text": "(58\\cdot 21\\cdot 29)^2\\equiv 2^1\\cdot7^1\\pmod{91}" }, { "math_id": 19, "text": "14^2\\equiv 2^1\\cdot7^1\\pmod{91}" }, { "math_id": 20, "text": "f(x)=x^2-n" }, { "math_id": 21, "text": "\\begin{align}\nf(x)&=x^2-n \\\\\nf(x+kp) &= (x+kp)^2-n \\\\\n&= x^2+2xkp+(kp)^2-n \\\\\n&= f(x)+2xkp+(kp)^2\\equiv f(x)\\pmod{p}\n\\end{align}" }, { "math_id": 22, "text": "V_X" }, { "math_id": 23, "text": "Y(X) = (X + \\lceil\\sqrt{N}\\rceil)^2 - N = (X+124)^2-15347" }, { "math_id": 24, "text": "\n\\begin{align}V &= \\begin{bmatrix} Y(0) & Y(1) & Y(2) & Y(3) & Y(4) & Y(5) & \\cdots & Y(99) \\end{bmatrix} \\\\\n & =\\begin{bmatrix} 29 & 278 & 529 & 782 & 1037 & 1294 & \\cdots & 34382 \\end{bmatrix}\\end{align}" }, { "math_id": 25, "text": "\\lbrace 2, 17, 23, 29\\rbrace" }, { "math_id": 26, "text": "Y(X) \\equiv (X + \\lceil\\sqrt{N}\\rceil)^2 - N \\equiv 0 \\pmod{p} " }, { "math_id": 27, "text": "p=2" }, { "math_id": 28, "text": "(X + 124)^2 - 15347 \\equiv 0 \\pmod{2}" }, { "math_id": 29, "text": "X \\equiv \\sqrt{15347}-124 \\equiv 1 \\pmod{2}" }, { "math_id": 30, "text": "V = \\begin{bmatrix} 29 & 139 & 529 & 391 & 1037 & 647 & \\cdots & 17191 \\end{bmatrix}" }, { "math_id": 31, "text": "\\lbrace 17, 23, 29\\rbrace" }, { "math_id": 32, "text": "X \\equiv \\sqrt{15347} - 124 \\pmod{p}" }, { "math_id": 33, "text": "\\begin{align}\n X & \\equiv \\sqrt{15347} - 124 & \\equiv 8 - 124 & \\equiv 3\\pmod{17} \\\\\n & & \\equiv 9 - 124 & \\equiv 4\\pmod{17} \\\\\n X & \\equiv \\sqrt{15347} - 124 & \\equiv 11 - 124 & \\equiv 2\\pmod{23} \\\\\n & & \\equiv 12 - 124 & \\equiv 3\\pmod{23} \\\\\n X & \\equiv \\sqrt{15347} - 124 & \\equiv 8 - 124 & \\equiv 0\\pmod{29} \\\\\n & & \\equiv 21 - 124 & \\equiv 13\\pmod{29} \\\\\n\\end{align}\n" }, { "math_id": 34, "text": "X \\equiv a \\pmod{p}" }, { "math_id": 35, "text": "V_x" }, { "math_id": 36, "text": "V = \\begin{bmatrix} 1 & 139 & 23 & 1 & 61 & 647 & \\cdots & 17191 \\end{bmatrix}" }, { "math_id": 37, "text": "V_0" }, { "math_id": 38, "text": "V_3" }, { "math_id": 39, "text": "V_{71}" }, { "math_id": 40, "text": "Y \\equiv Z^2 \\pmod{N}" }, { "math_id": 41, "text": 
"\\begin{align}\n29 &= 2^0 \\cdot 17^0 \\cdot 23^0 \\cdot 29^1 \\\\\n782 &= 2^1 \\cdot 17^1 \\cdot 23^1 \\cdot 29^0 \\\\ \n22678 &= 2^1 \\cdot 17^1 \\cdot 23^1 \\cdot 29^1 \\\\\n\\end{align}\n" }, { "math_id": 42, "text": "\\pmod{2}" }, { "math_id": 43, "text": "\nS \\cdot \\begin{bmatrix} 0 & 0 & 0 & 1 \\\\ 1 & 1 & 1 & 0 \\\\ 1 & 1 & 1 & 1 \\end{bmatrix} \\equiv \\begin{bmatrix} 0 & 0 & 0 & 0 \\end{bmatrix} \\pmod{2}" }, { "math_id": 44, "text": " S = \\begin{bmatrix}1 & 1 & 1 \\end{bmatrix} " }, { "math_id": 45, "text": "29 \\cdot 782 \\cdot 22678 = 22678^2" }, { "math_id": 46, "text": "124^2 \\cdot 127^2 \\cdot 195^2 = 3070860^2 " }, { "math_id": 47, "text": "22678^2 \\equiv 3070860^2 \\pmod{15347} " }, { "math_id": 48, "text": "y(x)=(Ax+B)^2-n \\qquad A,B\\in\\mathbb{Z}." }, { "math_id": 49, "text": "B" }, { "math_id": 50, "text": "B^2 = n \\pmod A" }, { "math_id": 51, "text": "B^2 - n = AC" }, { "math_id": 52, "text": "C" }, { "math_id": 53, "text": "y(x) = A\\cdot(Ax^2 + 2Bx + C)" }, { "math_id": 54, "text": "(Ax^2 + 2Bx + C)" } ]
https://en.wikipedia.org/wiki?curid=582340
5823879
Gas/oil ratio
When oil is brought to surface temperature and pressure, it is usual for some natural gas to come out of solution. The gas/oil ratio (GOR) is the ratio of the volume of gas that comes out of solution (measured in standard cubic feet, "scf", in field units) to the volume of oil, with both volumes taken at standard conditions. In reservoir simulation gas/oil ratio is usually abbreviated formula_0. A point to check is whether the volume of oil is measured before or after the gas comes out of solution, since the remaining oil volume will decrease when the gas comes out. In fact, gas dissolution and oil volume shrinkage will happen at many stages during the path of the hydrocarbon stream from reservoir through the wellbore and processing plant to export. For light oils and rich gas condensates the ultimate GOR of export streams is strongly influenced by the efficiency with which the processing plant strips liquids from the gas phase. Reported GORs may be calculated from export volumes, which may not be at standard conditions. The GOR is a dimensionless ratio (volume per volume) in metric units, but in field units, it is usually quoted in cubic feet of gas (at standard conditions: 0°C, 100 kPa) per barrel of oil or condensate, scf/bbl. In the states of Texas and Pennsylvania, the statutory definition of a gas well is one where the GOR is greater than 100,000 ft3/bbl or 100 Kcf/bbl. The state of New Mexico also designates a gas well as having over 100 MCFG per barrel. The Oklahoma Geological Survey in 2015 published a map that displays "gas wells" with greater than 20 MCFG per barrel of oil. They go on to display "oil wells" with GOR of less than 5 MCFG/BBL and "oil and gas wells" between these limits. The EPA's 2016 Information Collection Request for Oil and Gas Facilities (EPA ICR No. 2548.01, OMB Control No. 2060-NEW) divided well types into five categories: 1. Heavy Oil (GOR ≤ 300 scf/bbl) 2. Light Oil (300 scf/bbl &lt; GOR ≤ 100,000 scf/bbl) 3. Wet Gas (100,000 &lt; GOR ≤ 1,000,000 scf/bbl) 4. Dry Gas (GOR &gt; 1,000,000 scf/bbl) 5. Coal Bed Methane. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
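As a small illustration of the thresholds quoted above, the following Python sketch classifies a well from measured gas and oil volumes. The function name and example numbers are invented for the illustration, the coal bed methane category is omitted because it is not defined by a GOR threshold, and the zero-oil guard is only a convenience:

    def classify_well(gas_scf, oil_bbl):
        # EPA ICR categories quoted above; GOR in standard cubic feet per barrel
        if oil_bbl == 0:
            return "dry gas (no liquids produced)"
        gor = gas_scf / oil_bbl
        if gor <= 300:
            return "heavy oil"
        if gor <= 100_000:
            return "light oil"
        if gor <= 1_000_000:
            return "wet gas"
        return "dry gas"

    print(classify_well(gas_scf=2_000_000, oil_bbl=400))   # light oil (GOR = 5,000 scf/bbl)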
[ { "math_id": 0, "text": "R_s" } ]
https://en.wikipedia.org/wiki?curid=5823879
582410
Beam splitter
Optical device which splits a beam of light in two A beam splitter or beamsplitter is an optical device that splits a beam of light into a transmitted and a reflected beam. It is a crucial part of many optical experimental and measurement systems, such as interferometers, also finding widespread application in fibre optic telecommunications. Designs. In its most common form, a cube, a beam splitter is made from two triangular glass prisms which are glued together at their base using polyester, epoxy, or urethane-based adhesives. (Before these synthetic resins, natural ones were used, e.g. Canada balsam.) The thickness of the resin layer is adjusted such that (for a certain wavelength) half of the light incident through one "port" (i.e., face of the cube) is reflected and the other half is transmitted due to FTIR (frustrated total internal reflection). Polarizing beam splitters, such as the Wollaston prism, use birefringent materials to split light into two beams of orthogonal polarization states. Another design is the use of a half-silvered mirror. This is composed of an optical substrate, which is often a sheet of glass or plastic, with a partially transparent thin coating of metal. The thin coating can be aluminium deposited from aluminium vapor using a physical vapor deposition method. The thickness of the deposit is controlled so that part (typically half) of the light, which is incident at a 45-degree angle and not absorbed by the coating or substrate material, is transmitted and the remainder is reflected. A very thin half-silvered mirror used in photography is often called a pellicle mirror. To reduce loss of light due to absorption by the reflective coating, so-called "Swiss-cheese" beam-splitter mirrors have been used. Originally, these were sheets of highly polished metal perforated with holes to obtain the desired ratio of reflection to transmission. Later, metal was sputtered onto glass so as to form a discontinuous coating, or small areas of a continuous coating were removed by chemical or mechanical action to produce a very literally "half-silvered" surface. Instead of a metallic coating, a dichroic optical coating may be used. Depending on its characteristics (thin-film interference), the ratio of reflection to transmission will vary as a function of the wavelength of the incident light. Dichroic mirrors are used in some ellipsoidal reflector spotlights to split off unwanted infrared (heat) radiation, and as output couplers in laser construction. A third version of the beam splitter is a dichroic mirrored prism assembly which uses dichroic optical coatings to divide an incoming light beam into a number of spectrally distinct output beams. Such a device was used in three-pickup-tube color television cameras and the three-strip Technicolor movie camera. It is currently used in modern three-CCD cameras. An optically similar system is used in reverse as a beam-combiner in three-LCD projectors, in which light from three separate monochrome LCD displays is combined into a single full-color image for projection. Beam splitters with single-mode fiber for PON networks use the single-mode behavior to split the beam. The splitter is done by physically splicing two fibers "together" as an X. Arrangements of mirrors or prisms used as camera attachments to photograph stereoscopic image pairs with one lens and one exposure are sometimes called "beam splitters", but that is a misnomer, as they are effectively a pair of periscopes redirecting rays of light which are already non-coincident. 
In some very uncommon attachments for stereoscopic photography, mirrors or prism blocks similar to beam splitters perform the opposite function, superimposing views of the subject from two different perspectives through color filters to allow the direct production of an anaglyph 3D image, or through rapidly alternating shutters to record sequential field 3D video. Phase shift. Beam splitters are sometimes used to recombine beams of light, as in a Mach–Zehnder interferometer. In this case there are two incoming beams, and potentially two outgoing beams. But the amplitudes of the two outgoing beams are the sums of the (complex) amplitudes calculated from each of the incoming beams, and it may result that one of the two outgoing beams has amplitude zero. In order for energy to be conserved (see next section), there must be a phase shift in at least one of the outgoing beams. For example (see red arrows in picture on the right), if a polarized light wave in air hits a dielectric surface such as glass, and the electric field of the light wave is in the plane of the surface, then the reflected wave will have a phase shift of π, while the transmitted wave will not have a phase shift; the blue arrow does not pick up a phase-shift, because it is reflected from a medium with a lower refractive index. The behavior is dictated by the Fresnel equations. This does not apply to partial reflection by conductive (metallic) coatings, where other phase shifts occur in all paths (reflected and transmitted). In any case, the details of the phase shifts depend on the type and geometry of the beam splitter. Classical lossless beam splitter. For beam splitters with two incoming beams, using a classical, lossless beam splitter with electric fields "Ea" and "Eb" each incident at one of the inputs, the two output fields "Ec" and "Ed" are linearly related to the inputs through formula_0 where the 2×2 element formula_1 is the beam-splitter transfer matrix and "r" and "t" are the reflectance and transmittance along a particular path through the beam splitter, that path being indicated by the subscripts. (The values depend on the polarization of the light.) If the beam splitter removes no energy from the light beams, the total output energy can be equated with the total input energy, reading formula_2 Inserting the results from the transfer equation above with formula_3 produces formula_4 and similarly for then formula_5 formula_6 When both formula_7 and formula_8 are non-zero, and using these two results we obtain formula_9 where "formula_10" indicates the complex conjugate. It is now easy to show that formula_11 where formula_12 is the identity, i.e. the beam-splitter transfer matrix is a unitary matrix. Expanding, it can be written each "r" and "t" as a complex number having an amplitude and phase factor; for instance, formula_13. The phase factor accounts for possible shifts in phase of a beam as it reflects or transmits at that surface. Then is obtained formula_14 Further simplifying, the relationship becomes formula_15 which is true when formula_16 and the exponential term reduces to -1. Applying this new condition and squaring both sides, it becomes formula_17 where substitutions of the form formula_18 were made. This leads to the result formula_19 and similarly, formula_20 It follows that formula_21. 
Having determined the constraints describing a lossless beam splitter, the initial expression can be rewritten as formula_22 Applying different values for the amplitudes and phases can account for many different forms of the beam splitter that can be seen widely used. The transfer matrix appears to have 6 amplitude and phase parameters, but it also has 2 constraints: formula_21 and formula_16. To include the constraints and simplify to 4 independent parameters, we may write formula_23 (and from the constraint formula_24), so that formula_25 where formula_26 is the phase difference between the transmitted beams and similarly for formula_27, and formula_28 is a global phase. Lastly using the other constraint that formula_21 we define formula_29 so that formula_30, hence formula_31 A 50:50 beam splitter is produced when formula_32. The dielectric beam splitter above, for example, has formula_33 i.e. formula_34, while the "symmetric" beam splitter of Loudon has formula_35 i.e. formula_36. Use in experiments. Beam splitters have been used in both thought experiments and real-world experiments in the area of quantum theory and relativity theory and other fields of physics. These include: Quantum mechanical description. In quantum mechanics, the electric fields are operators as explained by second quantization and Fock states. Each electrical field operator can further be expressed in terms of modes representing the wave behavior and amplitude operators, which are typically represented by the dimensionless creation and annihilation operators. In this theory, the four ports of the beam splitter are represented by a photon number state formula_37 and the action of a creation operation is formula_38. The following is a simplified version of Ref. The relation between the classical field amplitudes formula_39, and formula_40 produced by the beam splitter is translated into the same relation of the corresponding quantum creation (or annihilation) operators formula_41, and formula_42, so that formula_43 where the transfer matrix is given in classical lossless beam splitter section above: formula_44 Since formula_1 is unitary, formula_45, i.e. formula_46 This is equivalent to saying that if we start from the vacuum state formula_47 and add a photon in port "a" to produce formula_48 then the beam splitter creates a superposition on the outputs of formula_49 The probabilities for the photon to exit at ports "c" and "d" are therefore formula_50 and formula_51, as might be expected. Likewise, for any input state formula_52 formula_53 and the output is formula_54 Using the multi-binomial theorem, this can be written formula_55 where formula_56 and the formula_57 is a binomial coefficient and it is to be understood that the coefficient is zero if formula_58 etc. The transmission/reflection coefficient factor in the last equation may be written in terms of the reduced parameters that ensure unitarity: formula_59 where it can be seen that if the beam splitter is 50:50 then formula_60 and the only factor that depends on "j" is the formula_61 term. This factor causes interesting interference cancellations. For example, if formula_62 and the beam splitter is 50:50, then formula_63 where the formula_64 term has cancelled. Therefore the output states always have even numbers of photons in each arm. A famous example of this is the Hong–Ou–Mandel effect, in which the input has formula_65, the output is always formula_66 or formula_67, i.e. 
the probability of output with a photon in each mode (a coincidence event) is zero. Note that this is true for all types of 50:50 beam splitter irrespective of the details of the phases, and the photons need only be indistinguishable. This contrasts with the classical result, in which equal output in both arms for equal inputs on a 50:50 beam splitter does appear for specific beam splitter phases (e.g. a symmetric beam splitter formula_68), and for other phases where the output goes to one arm (e.g. the dielectric beam splitter formula_69) the output is always in the same arm, not random in either arm as is the case here. From the correspondence principle we might expect the quantum results to tend to the classical one in the limits of large "n", but the appearance of large numbers of indistinguishable photons at the input is a non-classical state that does not correspond to a classical field pattern, which instead produces a statistical mixture of different formula_70 known as Poissonian light. Rigorous derivation is given in the Fearn–Loudon 1987 paper and extended in Ref to include statistical mixtures with the density matrix. Non-symmetric beam-splitter. In general, for a non-symmetric beam-splitter, namely a beam-splitter for which the transmission and reflection coefficients are not equal, one can define an angle formula_71 such that formula_72 where formula_73 and formula_74 are the reflection and transmission coefficients. Then the unitary operation associated with the beam-splitter is then formula_75 Application for quantum computing. In 2000 Knill, Laflamme and Milburn (KLM protocol) proved that it is possible to create a universal quantum computer solely with beam splitters, phase shifters, photodetectors and single photon sources. The states that form a qubit in this protocol are the one-photon states of two modes, i.e. the states |01⟩ and |10⟩ in the occupation number representation (Fock state) of two modes. Using these resources it is possible to implement any single qubit gate and 2-qubit probabilistic gates. The beam splitter is an essential component in this scheme since it is the only one that creates entanglement between the Fock states. Similar settings exist for continuous-variable quantum information processing. In fact, it is possible to simulate arbitrary Gaussian (Bogoliubov) transformations of a quantum state of light by means of beam splitters, phase shifters and photodetectors, given two-mode squeezed vacuum states are available as a prior resource only (this setting hence shares certain similarities with a Gaussian counterpart of the KLM protocol). The building block of this simulation procedure is the fact that a beam splitter is equivalent to a squeezing transformation under "partial" time reversal. Reflection beam splitters. Reflection beam splitters reflect parts of the incident radiation in different directions. These partial beams show exactly the same intensity. Typically, reflection beam splitters are made of metal and have a broadband spectral characteristic. Due to their compact design, beam splitters of this type are particularly easy to install in infrared detectors. At this application, the radiation enters through the aperture opening of the detector and is split into several beams of equal intensity but different directions by internal highly reflective microstructures. Each beam hits a sensor element with an upstream optical filter. 
Particularly in NDIR gas analysis, this design enables measurement with only one beam with a minimal beam cross-section, which significantly increases the interference immunity of the measurement. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
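Returning to the quantum-mechanical description above, the Hong–Ou–Mandel cancellation for a 50:50 beam splitter can be checked with a few lines of computer algebra. The sketch below is an illustration using SymPy and the dielectric convention quoted earlier (both input operators map to equal-weight combinations of the outputs, with a sign flip on one path); it treats the output creation operators as commuting symbols, which is valid because creation operators of different modes commute:

    from sympy import symbols, expand, sqrt, factorial

    c, d = symbols('c d')            # stand-ins for the output creation operators

    # dielectric 50:50 convention from the text: a -> (c + d)/sqrt(2), b -> (c - d)/sqrt(2)
    a = (c + d) / sqrt(2)
    b = (c - d) / sqrt(2)

    # input |1,1>: one photon in each input port, i.e. the operator a*b acting on vacuum
    out = expand(a * b).as_poly(c, d)
    for (nc, nd), coeff in out.terms():
        amp = coeff * sqrt(factorial(nc) * factorial(nd))
        print(f"|{nc},{nd}>: amplitude {amp}")

The |1,1⟩ term cancels, leaving equal-weight |2,0⟩ and |0,2⟩ components, as stated above.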
[ { "math_id": 0, "text": "\n\\mathbf{E}_\\text{out} =\n\\begin{bmatrix} E_c \\\\ E_d \\end{bmatrix} =\n\\begin{bmatrix} r_{ac}& t_{bc} \\\\ t_{ad}& r_{bd} \\end{bmatrix}\n\\begin{bmatrix} E_a \\\\ E_b \\end{bmatrix} =\n\\tau\\mathbf{E}_\\text{in}, \n" }, { "math_id": 1, "text": "\\tau" }, { "math_id": 2, "text": "\n|E_c|^2+|E_d|^2=|E_a|^2+|E_b|^2.\n" }, { "math_id": 3, "text": "E_b=0" }, { "math_id": 4, "text": "\n|r_{ac}|^2+|t_{ad}|^2=1,\n" }, { "math_id": 5, "text": "E_a=0" }, { "math_id": 6, "text": "\n|r_{bd}|^2+|t_{bc}|^2=1.\n" }, { "math_id": 7, "text": "E_a" }, { "math_id": 8, "text": "E_b" }, { "math_id": 9, "text": "\nr_{ac}t^{\\ast}_{bc}+t_{ad}r^{\\ast}_{bd}=0,\n" }, { "math_id": 10, "text": "^\\ast" }, { "math_id": 11, "text": "\\tau^\\dagger\\tau=\\mathbf{I}" }, { "math_id": 12, "text": "\\mathbf{I}" }, { "math_id": 13, "text": "r_{ac}=|r_{ac}|e^{i\\phi_{ac}}" }, { "math_id": 14, "text": "\n|r_{ac}||t_{bc}|e^{i(\\phi_{ac}-\\phi_{bc})}+|t_{ad}||r_{bd}|e^{i(\\phi_{ad}-\\phi_{bd})}=0.\n" }, { "math_id": 15, "text": "\n\\frac{|r_{ac}|}{|t_{ad}|}=-\\frac{|r_{bd}|}{|t_{bc}|}e^{i(\\phi_{ad}-\\phi_{bd}+\\phi_{bc}-\\phi_{ac})}\n" }, { "math_id": 16, "text": "\\phi_{ad}-\\phi_{bd}+\\phi_{bc}-\\phi_{ac}=\\pi" }, { "math_id": 17, "text": "\n\\frac{1-|t_{ad}|^2}{|t_{ad}|^2}=\\frac{1-|t_{bc}|^2}{|t_{bc}|^2},\n" }, { "math_id": 18, "text": "|r_{ac}|^2=1-|t_{ad}|^2" }, { "math_id": 19, "text": "\n|t_{ad}|=|t_{bc}|\\equiv T,\n" }, { "math_id": 20, "text": "\n|r_{ac}|=|r_{bd}|\\equiv R.\n" }, { "math_id": 21, "text": "R^2+T^2=1" }, { "math_id": 22, "text": "\n\\begin{bmatrix} E_c \\\\ E_d \\end{bmatrix} =\n\\begin{bmatrix} Re^{i\\phi_{ac}}& Te^{i\\phi_{bc}} \\\\ Te^{i\\phi_{ad}}& Re^{i\\phi_{bd}} \\end{bmatrix}\n\\begin{bmatrix} E_a \\\\ E_b \\end{bmatrix}.\n" }, { "math_id": 23, "text": "\\phi_{ad}=\\phi_0+\\phi_T, \\phi_{bc}=\\phi_0-\\phi_T, \\phi_{ac}=\\phi_0+\\phi_R" }, { "math_id": 24, "text": "\\phi_{bd}=\\phi_0-\\phi_R-\\pi" }, { "math_id": 25, "text": "\n\\begin{align}\n\\phi_T & = \\tfrac{1}{2}\\left(\\phi_{ad} - \\phi_{bc} \\right)\\\\\n\\phi_R & = \\tfrac{1}{2}\\left(\\phi_{ac} - \\phi_{bd} +\\pi \\right)\\\\\n\\phi_0 & = \\tfrac{1}{2}\\left(\\phi_{ad} + \\phi_{bc} \\right) \n\\end{align}\n" }, { "math_id": 26, "text": "2\\phi_T" }, { "math_id": 27, "text": "2\\phi_R" }, { "math_id": 28, "text": "\\phi_0" }, { "math_id": 29, "text": "\\theta = \\arctan(R/T) " }, { "math_id": 30, "text": "T=\\cos\\theta,R=\\sin\\theta" }, { "math_id": 31, "text": "\n\\tau=e^{i\\phi_0}\\begin{bmatrix} \n\\sin\\theta e^{i\\phi_R} & \\cos\\theta e^{-i\\phi_T} \\\\ \n\\cos\\theta e^{i\\phi_T} & -\\sin\\theta e^{-i\\phi_R} \\end{bmatrix}.\n" }, { "math_id": 32, "text": "\\theta=\\pi/4" }, { "math_id": 33, "text": "\n\\tau=\\frac{1}{\\sqrt{2}}\\begin{bmatrix} \n1 & 1 \\\\ \n1 & -1 \\end{bmatrix},\n" }, { "math_id": 34, "text": "\\phi_T = \\phi_R =\\phi_0=0" }, { "math_id": 35, "text": "\n\\tau=\\frac{1}{\\sqrt{2}}\\begin{bmatrix} \n1 & i \\\\ \ni & 1 \\end{bmatrix},\n" }, { "math_id": 36, "text": "\\phi_T = 0, \\phi_R =-\\pi/2, \\phi_0=\\pi/2" }, { "math_id": 37, "text": "|n\\rangle" }, { "math_id": 38, "text": "\\hat{a}^\\dagger|n\\rangle=\\sqrt{n+1}|n+1\\rangle" }, { "math_id": 39, "text": "{E}_{a},{E}_{b}, {E}_{c}" }, { "math_id": 40, "text": "{E}_{d}" }, { "math_id": 41, "text": "\\hat{a}_a^\\dagger,\\hat{a}_b^\\dagger, \\hat{a}_c^\\dagger" }, { "math_id": 42, "text": "\\hat{a}_d^\\dagger" }, { "math_id": 43, "text": 
"\n\\left(\\begin{matrix}\n\\hat{a}_c^\\dagger\\\\\n\\hat{a}_d^\\dagger\n\\end{matrix}\\right)= \\tau\n\\left(\\begin{matrix}\n\\hat{a}_a^\\dagger\\\\\n\\hat{a}_b^\\dagger\n\\end{matrix}\\right)\n" }, { "math_id": 44, "text": "\n\\tau=\\left(\\begin{matrix}\nr_{ac} & t_{bc}\\\\\nt_{ad} & r_{bd}\n\\end{matrix}\\right)\n=e^{i\\phi_0}\\left(\\begin{matrix} \n\\sin\\theta e^{i\\phi_R} & \\cos\\theta e^{-i\\phi_T} \\\\ \n\\cos\\theta e^{i\\phi_T} & -\\sin\\theta e^{-i\\phi_R} \\end{matrix}\\right).\n" }, { "math_id": 45, "text": "\\tau^{-1}=\\tau^\\dagger" }, { "math_id": 46, "text": "\n\\left(\\begin{matrix}\n\\hat{a}_a^\\dagger\\\\\n\\hat{a}_b^\\dagger\n\\end{matrix}\\right)= \n\\left(\\begin{matrix}\nr_{ac}^\\ast & t_{ad}^\\ast\\\\\nt_{bc}^\\ast & r_{bd}^\\ast\n\\end{matrix}\\right)\n\\left(\\begin{matrix}\n\\hat{a}_c^\\dagger\\\\\n\\hat{a}_d^\\dagger\n\\end{matrix}\\right).\n" }, { "math_id": 47, "text": "|00\\rangle_{ab}" }, { "math_id": 48, "text": "|\\psi_\\text{in}\\rangle=\\hat{a}_a^\\dagger|00\\rangle_{ab}=|10\\rangle_{ab}," }, { "math_id": 49, "text": "|\\psi_\\text{out}\\rangle=\\left(r_{ac}^\\ast\\hat{a}_c^\\dagger+t_{ad}^\\ast\\hat{a}_d^\\dagger\\right)|00\\rangle_{cd}=r_{ac}^\\ast|10\\rangle_{cd}+t_{ad}^\\ast|01\\rangle_{cd}." }, { "math_id": 50, "text": "|r_{ac}|^2" }, { "math_id": 51, "text": "|t_{ad}|^2" }, { "math_id": 52, "text": "|nm\\rangle_{ab}" }, { "math_id": 53, "text": "\n|\\psi_\\text{in}\\rangle=|nm\\rangle_{ab}\n=\\frac{1}{\\sqrt{n!}}\\left(\\hat{a}_a^\\dagger\\right)^n\\frac{1}{\\sqrt{m!}}\\left(\\hat{a}_b^\\dagger\\right)^m|00\\rangle_{ab}\n" }, { "math_id": 54, "text": "\n|\\psi_\\text{out}\\rangle\n=\\frac{1}{\\sqrt{n!}}\n\\left(r_{ac}^\\ast\\hat{a}_c^\\dagger+t_{ad}^\\ast\\hat{a}_d^\\dagger\\right)^n\n\\frac{1}{\\sqrt{m!}}\n\\left(t_{bc}^\\ast\\hat{a}_c^\\dagger+r_{bd}^\\ast\\hat{a}_d^\\dagger\\right)^m\n|00\\rangle_{cd}.\n" }, { "math_id": 55, "text": "\n\\begin{align}\n|\\psi_\\text{out}\\rangle\n&=\\frac{1}{\\sqrt{n!m!}} \\sum_{j=0}^n \\sum_{k=0}^m \n\\binom{n}{j}\n\\left( r_{ac}^\\ast \\hat{a}_c^\\dagger \\right)^j \\left( t_{ad}^\\ast \\hat{a}_d^\\dagger \\right) ^{(n-j)} \n\\binom{m}{k}\n\\left( t_{bc}^\\ast \\hat{a}_c^\\dagger \\right)^k \\left( r_{bd}^\\ast \\hat{a}_d^\\dagger \\right) ^{(m-k)} \n|00\\rangle_{cd}\n \\\\\n&=\\frac{1}{\\sqrt{n!m!}} \\sum_{N=0}^{n+m} \\sum_{j=0}^N \n\\binom{n}{j}\nr_{ac}^{\\ast j} t_{ad}^{\\ast (n-j)} \n\\binom{m}{N-j} \nt_{bc}^{\\ast (N-j)} r_{bd}^{\\ast (m-N+j)} \n\\left(\\hat{a}_c^\\dagger\\right)^N\n\\left( \\hat{a}_d^\\dagger\\right)^{M}|00\\rangle_{cd}, \\\\\n&=\\frac{1}{\\sqrt{n!m!}} \\sum_{N=0}^{n+m} \\sum_{j=0}^N \n\\binom{n}{j}\n\\binom{m}{N-j}\nr_{ac}^{\\ast j} t_{ad}^{\\ast (n-j)} t_{bc}^{\\ast (N-j)} r_{bd}^{\\ast (m-N+j)} \n\\sqrt{N!M!} \\quad |N,M\\rangle_{cd},\\end{align}\n" }, { "math_id": 56, "text": "M=n+m-N" }, { "math_id": 57, "text": "\\tbinom{n}{j}" }, { "math_id": 58, "text": "j\\notin\\{ 0,n \\}" }, { "math_id": 59, "text": "\nr_{ac}^{\\ast j} t_{ad}^{\\ast (n-j)} t_{bc}^{\\ast (N-j)} r_{bd}^{\\ast (m-N+j)}\n=(-1)^j\\tan^{2j}\\theta(-\\tan\\theta)^{m-N}\\cos^{n+m}\\theta\\exp-i\\left[(n+m)(\\phi_0+\\phi_T)-m(\\phi_R+\\phi_T)+N(\\phi_R-\\phi_T)\\right].\n" }, { "math_id": 60, "text": "\\tan\\theta=1" }, { "math_id": 61, "text": "(-1)^j" }, { "math_id": 62, "text": "n=m" }, { "math_id": 63, "text": "\n\\begin{align}\n\\left(\\hat{a}_a^\\dagger\\right)^n\\left(\\hat{a}_b^\\dagger\\right)^m\n&\\to \\left[\\hat{a}_a^\\dagger\\hat{a}_b^\\dagger\\right]^n \\\\\n&= 
\\left[\\left(r_{ac}^\\ast\\hat{a}_c^\\dagger+t_{ad}^\\ast\\hat{a}_d^\\dagger\\right)\n \\left(t_{bc}^\\ast\\hat{a}_c^\\dagger+r_{bd}^\\ast\\hat{a}_d^\\dagger\\right) \\right]^n \\\\\n&= \\left[\\frac{e^{-i\\phi_0}}{\\sqrt{2}}\\right]^{2n}\n \\left[\\left(e^{-i\\phi_R}\\hat{a}_c^\\dagger+e^{-i\\phi_T}\\hat{a}_d^\\dagger\\right)\n \\left(e^{i\\phi_T}\\hat{a}_c^\\dagger-e^{i\\phi_R}\\hat{a}_d^\\dagger\\right) \\right]^n \\\\\n&= \\frac{e^{-2in\\phi_0}}{2^n}\\left[e^{i(\\phi_T-\\phi_R)} \\left(\\hat{a}_c^\\dagger\\right)^2 \n +e^{-i(\\phi_T-\\phi_R)}\\left(\\hat{a}_d^\\dagger\\right)^2 \\right]^n\n\\end{align}\n" }, { "math_id": 64, "text": " \\hat{a}_c^\\dagger \\hat{a}_d^\\dagger " }, { "math_id": 65, "text": "n=m=1" }, { "math_id": 66, "text": "|20\\rangle_{cd}" }, { "math_id": 67, "text": "|02\\rangle_{cd}" }, { "math_id": 68, "text": "\\phi_0=\\phi_T=0,\\phi_R=\\pi/2" }, { "math_id": 69, "text": "\\phi_0=\\phi_T=\\phi_R=0" }, { "math_id": 70, "text": "|n,m\\rangle" }, { "math_id": 71, "text": "\\theta" }, { "math_id": 72, "text": "\\begin{cases}\n|R| = \\sin(\\theta)\\\\\n|T| = \\cos(\\theta)\n\\end{cases}" }, { "math_id": 73, "text": "R" }, { "math_id": 74, "text": "T" }, { "math_id": 75, "text": "\n\\hat{U}=e^{i\\theta\\left(\\hat{a}_{a}^{\\dagger}\\hat{a}_{b}+\\hat{a}_{a}\\hat{a}_{b}^{\\dagger}\\right)}.\n" } ]
https://en.wikipedia.org/wiki?curid=582410
582440
Luhn algorithm
Simple checksum formula The Luhn algorithm or Luhn formula, also known as the "modulus 10" or "mod 10" algorithm, named after its creator, IBM scientist Hans Peter Luhn, is a simple check digit formula used to validate a variety of identification numbers. It is described in US patent 2950048A. The algorithm is in the public domain and is in wide use today. It is specified in ISO/IEC 7812-1. It is not intended to be a cryptographically secure hash function; it was designed to protect against accidental errors, not malicious attacks. Most credit cards and many government identification numbers use the algorithm as a simple method of distinguishing valid numbers from mistyped or otherwise incorrect numbers. Description. The check digit is computed as follows: drop the check digit from the number (if it is already present), leaving the "payload"; starting from the rightmost payload digit and moving left, double the value of every second digit, beginning with that rightmost digit; if doubling a digit gives a value greater than 9, replace it by the sum of its decimal digits (equivalently, subtract 9); then sum all of the resulting digits. The check digit is the smallest non-negative amount that must be added to this sum, formula_1, to make it a multiple of 10. It may be computed as formula_3 (also written formula_5); when formula_1 is not a multiple of 10 this equals formula_0, and the algebraically equivalent forms formula_2 and formula_4 give the same value. Example for computing check digit. Assume an example of an account number 1789372997 (just the "payload", check digit not yet included): moving from the right, every second digit is doubled, so the digits 7, 9, 7, 9 and 7 in those positions become 14, 18, 14, 18 and 14, which reduce to 5, 9, 5, 9 and 5, while the digits 9, 2, 3, 8 and 1 in the remaining positions are left unchanged. The sum of the resulting digits is 56. The check digit is equal to formula_6. This makes the full account number read 17893729974. Strengths and weaknesses. The Luhn algorithm will detect all single-digit errors, as well as almost all transpositions of adjacent digits. It will not, however, detect transposition of the two-digit sequence "09" to "90" (or vice versa). It will detect most of the possible twin errors (it will not detect "22" ↔ "55", "33" ↔ "66" or "44" ↔ "77"). Other, more complex check-digit algorithms (such as the Verhoeff algorithm and the Damm algorithm) can detect more transcription errors. The Luhn mod N algorithm is an extension that supports non-numerical strings. Because the algorithm operates on the digits in a right-to-left manner and zero digits affect the result only if they cause a shift in position, zero-padding the beginning of a string of numbers does not affect the calculation. Therefore, systems that pad to a specific number of digits (by converting 1234 to 0001234 for instance) can perform Luhn validation before or after the padding and achieve the same result. The algorithm appeared in a United States patent for a simple, hand-held, mechanical device for computing the checksum. The device took the mod 10 sum by mechanical means. The "substitution digits", that is, the results of the double and reduce procedure, were not produced mechanically. Rather, the digits were marked in their permuted order on the body of the machine. Pseudocode implementation. The following function takes a card number, including the check digit, as an array of integers and outputs true if the check digit is correct, false otherwise.

    function isValid(cardNumber[1..length])
        sum := 0
        parity := length mod 2
        for i from 1 to (length - 1) do
            if i mod 2 = parity then
                sum := sum + cardNumber[i]
            elseif cardNumber[i] > 4 then
                sum := sum + 2 * cardNumber[i] - 9
            else
                sum := sum + 2 * cardNumber[i]
            end if
        end for
        return cardNumber[length] == ((10 - (sum mod 10)) mod 10)
    end function

Code implementation. C#.

    bool IsValidLuhn(int[] digits)
    {
        int sum = 0;
        // walk the payload from right to left, doubling every second digit
        for (int i = digits.Length - 2; i >= 0; --i)
            sum += (i & 1) == (digits.Length & 1)
                ? digits[i] > 4 ? digits[i] * 2 - 9 : digits[i] * 2
                : digits[i];
        return (10 - sum % 10) % 10 == digits[^1];
    }

Java.
    public static boolean isValidLuhn(String number) {
        int n = number.length();
        int total = 0;
        boolean even = true;
        // iterate from right to left, double every 'even' value
        for (int i = n - 2; i >= 0; i--) {
            int digit = number.charAt(i) - '0';
            if (digit < 0 || digit > 9) {
                // value may only contain digits
                return false;
            }
            if (even) {
                digit <<= 1; // double value
            }
            even = !even;
            total += digit > 9 ? digit - 9 : digit;
        }
        int checksum = number.charAt(n - 1) - '0';
        return (total + checksum) % 10 == 0;
    }

TypeScript.

    // pass long card numbers as strings to avoid loss of precision
    function luhnCheck(input: string | number): boolean {
        const number = input.toString();
        const digits = number.replace(/\D/g, '').split('').map(Number);
        let sum = 0;
        let isSecond = false;
        for (let i = digits.length - 1; i >= 0; i--) {
            let digit = digits[i];
            if (isSecond) {
                digit *= 2;
                if (digit > 9) {
                    digit -= 9;
                }
            }
            sum += digit;
            isSecond = !isSecond;
        }
        return sum % 10 === 0;
    }

JavaScript.

    // pass long card numbers as strings to avoid loss of precision
    function luhnCheck(input) {
        const number = input.toString();
        const digits = number.replace(/\D/g, "").split("").map(Number);
        let sum = 0;
        let isSecond = false;
        for (let i = digits.length - 1; i >= 0; i--) {
            let digit = digits[i];
            if (isSecond) {
                digit *= 2;
                if (digit > 9) {
                    digit -= 9;
                }
            }
            sum += digit;
            isSecond = !isSecond;
        }
        return sum % 10 === 0;
    }

Python.

    """Luhn algorithm implementation for check digit validation."""

    class LuhnAlgorithm:
        """Validate a number using the Luhn algorithm.

        Arguments:
            input_value (str): The input value to validate.

        Returns:
            bool: True if the number is valid, False otherwise.
        """

        def __init__(self, input_value: str) -> None:
            self.input_value = input_value.replace(' ', '')

        def last_digit_and_remaining_numbers(self) -> tuple:
            """Return the check digit and the remaining (payload) digits."""
            return int(self.input_value[-1]), self.input_value[:-1]

        def __checksum(self) -> bool:
            last_digit, remaining_numbers = self.last_digit_and_remaining_numbers()
            # double every second digit from the right; doubled values above 9
            # are reduced by summing their decimal digits
            nums = [int(num) if idx % 2 != 0
                    else int(num) * 2 if int(num) * 2 <= 9
                    else int(num) * 2 % 10 + int(num) * 2 // 10
                    for idx, num in enumerate(reversed(remaining_numbers))]
            return (sum(nums) + last_digit) % 10 == 0

        def verify(self) -> bool:
            """Verify a number using the Luhn algorithm."""
            return self.__checksum()

Uses. The Luhn algorithm is used in a variety of systems, including payment card numbers (the primary account number on most credit and debit cards), IMEI numbers of mobile phones, and several national identification numbers, such as the Canadian Social Insurance Number.
[ { "math_id": 0, "text": "(10 - (s \\bmod 10))" }, { "math_id": 1, "text": "s" }, { "math_id": 2, "text": "9 - ((s + 9)\\bmod 10)" }, { "math_id": 3, "text": "(10 - s)\\bmod 10" }, { "math_id": 4, "text": "10\\lceil s/10\\rceil - s" }, { "math_id": 5, "text": "(10 - s)\\bmod 10" }, { "math_id": 6, "text": "(10 - (56\\operatorname{mod} 10)) = 4" } ]
https://en.wikipedia.org/wiki?curid=582440
5824570
Nodec space
In topology and related areas of mathematics, a topological space formula_0 is a nodec space if every nowhere dense subset of formula_0 is closed. This concept was introduced and studied by .
[ { "math_id": 0, "text": "X" } ]
https://en.wikipedia.org/wiki?curid=5824570
58246920
Equal Earth projection
Pseudocylindrical equal-area map projection The Equal Earth map projection is an equal-area pseudocylindrical global map projection, invented by Bojan Šavrič, Bernhard Jenny, and Tom Patterson in 2018. It is inspired by the widely used Robinson projection, but unlike the Robinson projection, retains the relative size of areas. The projection equations are simple to implement and fast to evaluate. The features of the Equal Earth projection include: According to the creators, the projection was created in response to the decision of the Boston public schools to adopt the Gall-Peters projection for world maps in March 2017, in order to show the relative sizes of equatorial and non-equatorial regions accurately. The decision generated controversy in the world of cartography due to this projection’s extreme distortion in the polar regions. At that time Šavrič, Jenny, and Patterson sought alternative equal-area map projections for world maps, but could not find any that met their aesthetic criteria. Therefore, they created a new projection that had more visual appeal than existing equal-area projections. As with the earlier Natural Earth projection (2012) introduced by Patterson, a visual method was used to choose the parameters of the projection. A combination of Putniņš P4ʹ and Eckert IV projections was used as the basis. Mathematical formulae for the projection were derived from a polynomial used to define the spacing of parallels. Formulation. The projection is formulated as the equations formula_0 where formula_1, formula_2 refers to the latitude, and formula_3 to the longitude. Use. The first known thematic map published using the Equal Earth projection is a map of the global mean temperature anomaly for July 2018, produced by NASA's Goddard Institute for Space Studies. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
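The formulation above translates directly into code. The following Python sketch evaluates the forward projection on a unit sphere (multiply the results by the chosen Earth radius to obtain projected coordinates in metres); the function name and sample points are illustrative only:

    from math import asin, cos, sin, sqrt, radians

    A1, A2, A3, A4 = 1.340264, -0.081106, 0.000893, 0.003796

    def equal_earth(lat_deg, lon_deg):
        lam, phi = radians(lon_deg), radians(lat_deg)
        theta = asin(sqrt(3) / 2 * sin(phi))
        denom = 9 * A4 * theta**8 + 7 * A3 * theta**6 + 3 * A2 * theta**2 + A1
        x = 2 * sqrt(3) * lam * cos(theta) / (3 * denom)
        y = A4 * theta**9 + A3 * theta**7 + A2 * theta**3 + A1 * theta
        return x, y

    print(equal_earth(0, 0))     # (0.0, 0.0)
    print(equal_earth(45, 90))   # about (1.16, 0.86)

Note that no iteration is needed in the forward direction: the auxiliary angle follows directly from the arcsine, and the polynomial only spaces the parallels.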
[ { "math_id": 0, "text": "\n\\begin{align}\n x &= \\frac{2\\sqrt{3}\\, \\lambda \\cos{\\theta}}{3\\,(9\\,A_4\\,\\theta^8 + 7\\,A_3\\,\\theta^6 + 3\\,A_2\\,\\theta^2 + A_1)} \\\\\n y &= A_4\\,\\theta^9 + A_3\\,\\theta^7 + A_2\\,\\theta^3 + A_1\\, \\theta\n\\end{align}\n" }, { "math_id": 1, "text": "\n\\begin{align}\n &\\sin{\\theta} = \\frac{\\sqrt{3}}{2}\\sin{\\varphi} \\\\\n &A_1 = 1.340264,\\ A_2 = -0.081106,\\ A_3 = 0.000893,\\ A_4 = 0.003796\n\\end{align}\n" }, { "math_id": 2, "text": "\\varphi" }, { "math_id": 3, "text": "\\lambda" } ]
https://en.wikipedia.org/wiki?curid=58246920
5824808
Bounded quantifier
Logical quantification that ranges over a subset of the universe of discourse In the study of formal theories in mathematical logic, bounded quantifiers (a.k.a. restricted quantifiers) are often included in a formal language in addition to the standard quantifiers "∀" and "∃". Bounded quantifiers differ from "∀" and "∃" in that bounded quantifiers restrict the range of the quantified variable. The study of bounded quantifiers is motivated by the fact that determining whether a sentence with only bounded quantifiers is true is often not as difficult as determining whether an arbitrary sentence is true. Examples. Examples of bounded quantifiers in the context of real analysis include formula_0 ("for all positive "x""), formula_1 ("there exists a negative "y""), and formula_2 ("for all real "x""); combining them gives bounded statements such as formula_3, which asserts that every positive real number is the square of some negative number. Bounded quantifiers in arithmetic. Suppose that "L" is the language of Peano arithmetic (the language of second-order arithmetic or arithmetic in all finite types would work as well). There are two types of bounded quantifiers: formula_4 and formula_5. These quantifiers bind the number variable "n" using a numeric term "t" not containing "n" but which may have other free variables. ("Numeric terms" here means terms such as "1 + 1", "2", "2 × 3", ""m" + 3", etc.) These quantifiers are defined by the following rules (formula_6 denotes formulas): formula_7 formula_8 There are several motivations for these quantifiers. In general, a relation on natural numbers is definable by a bounded formula if and only if it is computable in the linear-time hierarchy, which is defined similarly to the polynomial hierarchy, but with linear time bounds instead of polynomial. Consequently, all predicates definable by a bounded formula are Kalmár elementary, context-sensitive, and primitive recursive. In the arithmetical hierarchy, an arithmetical formula that contains only bounded quantifiers is called formula_12, formula_13, and formula_14. The superscript 0 is sometimes omitted. Bounded quantifiers in set theory. Suppose that "L" is the language formula_15 of the Zermelo–Fraenkel set theory, where the ellipsis may be replaced by term-forming operations such as a symbol for the powerset operation. There are two bounded quantifiers: formula_16 and formula_17. These quantifiers bind the set variable "x" and contain a term "t" which may not mention "x" but which may have other free variables. The semantics of these quantifiers is determined by the following rules: formula_18 formula_19 A ZF formula that contains only bounded quantifiers is called formula_20, formula_21, and formula_22. This forms the basis of the Lévy hierarchy, which is defined analogously to the arithmetical hierarchy. Bounded quantifiers are important in Kripke–Platek set theory and constructive set theory, where only Δ0 separation is included. That is, these theories include separation for formulas with only bounded quantifiers, but not separation for other formulas. In KP the motivation is the fact that whether a set "x" satisfies a bounded quantifier formula only depends on the collection of sets that are close in rank to "x" (as the powerset operation can only be applied finitely many times to form a term). In constructive set theory, it is motivated on predicative grounds.
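One way to see why sentences with only bounded quantifiers are comparatively easy to decide is that, over the natural numbers, every bounded quantifier can be replaced by a finite search. The following Python sketch (an illustration; the property and its encoding are chosen only for the example) evaluates a primality predicate written with bounded quantifiers only:

    def is_prime_bounded(n):
        # "n is prime" written with bounded quantifiers only:
        #   n > 1  and  (for all a < n)(for all b < n):  a*b != n  or  a <= 1  or  b <= 1
        # because every quantifier is bounded by n, truth is decided by finite search
        return n > 1 and all(
            a * b != n or a <= 1 or b <= 1
            for a in range(n)
            for b in range(n)
        )

    print([n for n in range(2, 30) if is_prime_bounded(n)])
    # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]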
[ { "math_id": 0, "text": "\\forall x > 0" }, { "math_id": 1, "text": "\\exists y < 0" }, { "math_id": 2, "text": "\\forall x \\isin \\mathbb{R}" }, { "math_id": 3, "text": "\\forall x > 0 \\quad \\exists y < 0 \\quad (x = y^2)" }, { "math_id": 4, "text": "\\forall n < t" }, { "math_id": 5, "text": "\\exists n < t" }, { "math_id": 6, "text": "\\phi" }, { "math_id": 7, "text": "\\exists n < t\\, \\phi \\Leftrightarrow \\exists n ( n < t \\land \\phi)" }, { "math_id": 8, "text": "\\forall n < t\\, \\phi \\Leftrightarrow \\forall n ( n < t \\rightarrow \\phi)" }, { "math_id": 9, "text": "\\exists n < t \\, \\phi" }, { "math_id": 10, "text": "\\forall n < t\\, \\phi" }, { "math_id": 11, "text": "\\langle 0,1,+,\\times, <, =\\rangle" }, { "math_id": 12, "text": "\\Sigma^0_0" }, { "math_id": 13, "text": "\\Delta^0_0" }, { "math_id": 14, "text": "\\Pi^0_0" }, { "math_id": 15, "text": "\\langle \\in, \\ldots, =\\rangle" }, { "math_id": 16, "text": "\\forall x \\in t" }, { "math_id": 17, "text": "\\exists x \\in t" }, { "math_id": 18, "text": "\\exists x \\in t\\ (\\phi) \\Leftrightarrow \\exists x ( x \\in t \\land \\phi)" }, { "math_id": 19, "text": "\\forall x \\in t\\ (\\phi) \\Leftrightarrow \\forall x ( x \\in t \\rightarrow \\phi)" }, { "math_id": 20, "text": "\\Sigma_0" }, { "math_id": 21, "text": "\\Delta_0" }, { "math_id": 22, "text": "\\Pi_0" } ]
https://en.wikipedia.org/wiki?curid=5824808
58248385
Walsh–Lebesgue theorem
The Walsh–Lebesgue theorem is a famous result from harmonic analysis proved by the American mathematician Joseph L. Walsh in 1929, using results proved by Lebesgue in 1907. The theorem states the following: Let "K" be a compact subset of the Euclidean plane ℝ2 such that the relative complement of formula_0 with respect to ℝ2 is connected. Then, every real-valued continuous function on formula_1 ("i.e." the boundary of "K") can be approximated uniformly on formula_1 by (real-valued) harmonic polynomials in the real variables x and y. Generalizations. The Walsh–Lebesgue theorem has been generalized to Riemann surfaces and to ℝn. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;This Walsh-Lebesgue theorem has also served as a catalyst for entire chapters in the theory of function algebras such as the theory of Dirichlet algebras and logmodular algebras. In 1974 Anthony G. O'Farrell gave a generalization of the Walsh–Lebesgue theorem by means of the 1964 Browder–Wermer theorem with related techniques.
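As a purely numerical illustration of the statement (a sketch, not a proof), one can take "K" to be the closed unit disk, whose complement in the plane is connected, and fit a continuous boundary function by harmonic polynomials: the real and imaginary parts of ("x" + "iy")^"k" are harmonic polynomials in "x" and "y". The target function, truncation degree, and least-squares fitting below are illustrative choices.

```python
# Numerical illustration: approximate a continuous function on the boundary of the
# closed unit disk by harmonic polynomials Re(z^k) and Im(z^k), z = x + iy.
import numpy as np

theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
x, y = np.cos(theta), np.sin(theta)          # points on the boundary circle
target = np.abs(x)                           # a continuous boundary function

degree = 12                                  # illustrative truncation degree
z = x + 1j * y
columns = [np.ones_like(theta)]              # the constant harmonic polynomial
for k in range(1, degree + 1):
    columns.append((z ** k).real)            # Re(z^k)
    columns.append((z ** k).imag)            # Im(z^k)
A = np.column_stack(columns)

coeffs, *_ = np.linalg.lstsq(A, target, rcond=None)
print("max error on the sampled boundary:", np.max(np.abs(A @ coeffs - target)))
```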
[ { "math_id": 0, "text": "K" }, { "math_id": 1, "text": "\\partial{K}" } ]
https://en.wikipedia.org/wiki?curid=58248385
5825279
James Earl Baumgartner
American logician (1943–2011) James Earl Baumgartner (March 23, 1943 – December 28, 2011) was an American mathematician who worked in set theory, mathematical logic and foundations, and topology. Baumgartner was born in Wichita, Kansas, began his undergraduate study at the California Institute of Technology in 1960, then transferred to the University of California, Berkeley, from which he received his PhD in 1970 for a dissertation titled "Results and Independence Proofs in Combinatorial Set Theory". His advisor was Robert Vaught. He became a professor at Dartmouth College in 1969, and spent his entire career there. One of Baumgartner's results is the consistency of the statement that any two formula_0-dense sets of reals are order isomorphic (a set of reals is formula_0-dense if it has exactly formula_0 points in every open interval). With András Hajnal he proved the Baumgartner–Hajnal theorem, which states that the partition relation formula_1 holds for formula_2 and formula_3. He died in 2011 of a heart attack at his home in Hanover, New Hampshire. The mathematical context in which Baumgartner worked spans Suslin's problem, Ramsey theory, uncountable order types, disjoint refinements, almost disjoint families, cardinal arithmetic, filters, ideals, and partition relations, iterated forcing and Axiom A, proper forcing and the proper forcing axiom, chromatic number of graphs, a thin very-tall superatomic Boolean algebra, closed unbounded sets, and partition relations. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\aleph_1" }, { "math_id": 1, "text": "\\omega_1\\to(\\alpha)^2_n" }, { "math_id": 2, "text": "\\alpha<\\omega_1" }, { "math_id": 3, "text": "n<\\omega" } ]
https://en.wikipedia.org/wiki?curid=5825279
58252879
T2*-weighted imaging
Type of neuroimaging "T"2*-weighted imaging is an MRI sequence to quantify observable or effective "T"2 (T2* or "T2-star"). In this sequence, hemorrhages and hemosiderin deposits become hypointense. Physics. "T"2*-weighted imaging is built from the basic physics of magnetic resonance imaging where there is spin–spin relaxation, that is, the transverse component of the magnetization vector exponentially decays towards its equilibrium value. It is characterized by the "spin–spin relaxation time", known as T2. In an idealized system, all nuclei in a given chemical environment, in a magnetic field, relax with the same frequency. However, in real systems, there are minor differences in chemical environment which can lead to a distribution of resonance frequencies around the ideal. Over time, this distribution can lead to a dispersion of the tight distribution of magnetic spin vectors, and loss of signal (free induction decay). In fact, for most magnetic resonance experiments, this "relaxation" dominates. This results in dephasing. However, decoherence because of magnetic field inhomogeneity is not a true "relaxation" process; it is not random, but dependent on the location of the molecule in the magnet. For molecules that aren't moving, the deviation from ideal relaxation is consistent over time, and the signal can be recovered by performing a spin echo experiment. The corresponding transverse relaxation time constant is thus "T"2*, which is usually much smaller than "T"2. The relation between them is: formula_0 where "γ" represents the gyromagnetic ratio, and Δ"B"0 the difference in strength of the locally varying field. Unlike "T"2, "T"2* is influenced by magnetic field gradient irregularities. The "T"2* relaxation time is always shorter than the "T"2 relaxation time and is typically milliseconds for water samples in imaging magnets. "T"2*-weighted imaging can be created as a postexcitation refocused gradient echo (GRE) sequence with small flip angle. The sequence of gradient echo "T"2*-weighted imaging (GRE T2*WI) requires a high uniformity of the magnetic field. Clinical applications. "T"2*-weighted sequences are used to detect deoxygenated hemoglobin, methemoglobin, or hemosiderin in lesions and tissues. Diseases with such patterns include intracranial hemorrhage, arteriovenous malformation, cavernoma, hemorrhage in a tumor, punctate hemorrhages in diffuse axonal injury, superficial siderosis, thrombosed aneurysm, phleboliths in vascular lesions, and some forms of calcification. "T"2*-weighted GRE sequences can detect microhemorrhages as seen in most vestibular schwannomas, thereby differentiating them from meningiomas. The "T"2*-weighted GRE sequence can detect a "middle cerebral artery susceptibility sign", which is a dark linear filling defect that is wider than the corresponding artery on the contralateral side. This sign is 83% sensitive and 100% specific for thrombotic occlusion of the internal carotid artery. It can detect hemosiderin deposition in joints as seen in arthropathy caused by hemophilia, as well as pigmented villonodular synovitis. "T"2*-weighted sequences are very useful for evaluation of articular cartilages and ligaments because a relatively long "T"2* makes the articular cartilage become more hyperintense, while bone becomes hypointense. "T"2*-weighted sequences can be used with MRI contrast, mainly ferucarbotran or superparamagnetic iron oxide (SPIO), to depict liver lesions. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
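As a small numerical illustration of the relation formula_0, the sketch below computes an effective "T"2* from an assumed "T"2 and an assumed local field inhomogeneity; the numbers are made-up example values, not clinical reference data.

```python
# Illustrative arithmetic for the relation 1/T2* = 1/T2 + gamma * delta_B0.
# The values below are made-up examples, not clinical reference data.

gamma = 2.675e8        # proton gyromagnetic ratio, rad s^-1 T^-1 (approximate)
T2 = 0.080             # assumed spin-spin relaxation time: 80 ms
delta_B0 = 1e-7        # assumed local field inhomogeneity, in tesla

T2_star = 1.0 / (1.0 / T2 + gamma * delta_B0)
print(f"T2 = {T2 * 1e3:.1f} ms, T2* = {T2_star * 1e3:.1f} ms")   # T2* is shorter than T2
```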
[ { "math_id": 0, "text": "\\frac{1}{T_2^*}=\\frac{1}{T_2}+\\frac{1}{T_{\\rm inhom}} = \\frac{1}{T_2}+\\gamma \\Delta B_0 " } ]
https://en.wikipedia.org/wiki?curid=58252879
582530
Lefschetz fixed-point theorem
Counts the fixed points of a continuous mapping from a compact topological space to itself In mathematics, the Lefschetz fixed-point theorem is a formula that counts the fixed points of a continuous mapping from a compact topological space formula_0 to itself by means of traces of the induced mappings on the homology groups of formula_0. It is named after Solomon Lefschetz, who first stated it in 1926. The counting is subject to an imputed multiplicity at a fixed point called the fixed-point index. A weak version of the theorem is enough to show that a mapping without "any" fixed point must have rather special topological properties (like a rotation of a circle). Formal statement. For a formal statement of the theorem, let formula_1 be a continuous map from a compact triangulable space formula_0 to itself. Define the Lefschetz number formula_2 of formula_3 by formula_4 the alternating (finite) sum of the matrix traces of the linear maps induced by formula_3 on formula_5, the singular homology groups of formula_0 with rational coefficients. A simple version of the Lefschetz fixed-point theorem states: if formula_6 then formula_3 has at least one fixed point, i.e., there exists at least one formula_7 in formula_0 such that formula_8. In fact, since the Lefschetz number has been defined at the homology level, the conclusion can be extended to say that any map homotopic to formula_3 has a fixed point as well. Note, however, that the converse is not true in general: formula_2 may be zero even if formula_3 has fixed points, as is the case for the identity map on odd-dimensional spheres. Sketch of a proof. First, by applying the simplicial approximation theorem, one shows that if formula_3 has no fixed points, then (possibly after subdividing formula_0) formula_3 is homotopic to a fixed-point-free simplicial map (i.e., it sends each simplex to a different simplex). This means that the diagonal values of the matrices of the linear maps induced on the simplicial chain complex of formula_0 must all be zero. Then one notes that, in general, the Lefschetz number can also be computed using the alternating sum of the matrix traces of the aforementioned linear maps (this is true for almost exactly the same reason that the Euler characteristic has a definition in terms of homology groups; see below for the relation to the Euler characteristic). In the particular case of a fixed-point-free simplicial map, all of the diagonal values are zero, and thus the traces are all zero. Lefschetz–Hopf theorem. A stronger form of the theorem, also known as the Lefschetz–Hopf theorem, states that, if formula_3 has only finitely many fixed points, then formula_9 where formula_10 is the set of fixed points of formula_3, and formula_11 denotes the index of the fixed point formula_7. From this theorem one deduces the Poincaré–Hopf theorem for vector fields. Relation to the Euler characteristic. The Lefschetz number of the identity map on a finite CW complex can be easily computed by realizing that each formula_12 can be thought of as an identity matrix, and so each trace term is simply the dimension of the appropriate homology group. Thus the Lefschetz number of the identity map is equal to the alternating sum of the Betti numbers of the space, which in turn is equal to the Euler characteristic formula_13. Thus we have formula_14 Relation to the Brouwer fixed-point theorem. 
The Lefschetz fixed-point theorem generalizes the Brouwer fixed-point theorem, which states that every continuous map from the formula_15-dimensional closed unit disk formula_16 to formula_16 must have at least one fixed point. This can be seen as follows: formula_16 is compact and triangulable, all its homology groups except formula_17 are zero, and every continuous map formula_18 induces the identity map formula_19, whose trace is one; all this together implies that formula_2 is non-zero for any continuous map formula_18. Historical context. Lefschetz presented his fixed-point theorem in 1926. Lefschetz's focus was not on fixed points of maps, but rather on what are now called coincidence points of maps. Given two maps formula_3 and formula_20 from an orientable manifold formula_0 to an orientable manifold formula_21 of the same dimension, the "Lefschetz coincidence number" of formula_3 and formula_20 is defined as formula_22 where formula_23 is as above, formula_24 is the homomorphism induced by formula_20 on the cohomology groups with rational coefficients, and formula_25 and formula_26 are the Poincaré duality isomorphisms for formula_0 and formula_21, respectively. Lefschetz proved that if the coincidence number is nonzero, then formula_3 and formula_20 have a coincidence point. He noted in his paper that letting formula_27 and letting formula_20 be the identity map gives a simpler result, which we now know as the fixed-point theorem. Frobenius. Let formula_0 be a variety defined over the finite field formula_28 with formula_29 elements and let formula_30 be the base change of formula_0 to the algebraic closure of formula_28. The Frobenius endomorphism of formula_30 (often the "geometric Frobenius", or just "the Frobenius"), denoted by formula_31, maps a point with coordinates formula_32 to the point with coordinates formula_33. Thus the fixed points of formula_31 are exactly the points of formula_0 with coordinates in formula_28; the set of such points is denoted by formula_34. The Lefschetz trace formula holds in this context, and reads: formula_35 This formula involves the trace of the Frobenius on the étale cohomology, with compact supports, of formula_30 with values in the field of formula_36-adic numbers, where formula_36 is a prime coprime to formula_29. If formula_0 is smooth and equidimensional, this formula can be rewritten in terms of the "arithmetic Frobenius" formula_37, which acts as the inverse of formula_31 on cohomology: formula_38 This formula involves usual cohomology, rather than cohomology with compact supports. The Lefschetz trace formula can also be generalized to algebraic stacks over finite fields. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
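As a concrete illustration of the alternating trace sum (an example chosen here, not taken from the text above), consider a map of the 2-torus induced by an integer matrix "A": the induced traces on the homology groups "H"0, "H"1, "H"2 are 1, tr "A" and det "A", so the Lefschetz number equals det("I" − "A").

```python
# Illustrative sketch: Lefschetz number of a 2-torus endomorphism as an alternating
# sum of traces. A map of R^2/Z^2 induced by an integer matrix A acts on homology
# with trace 1 on H_0, trace tr(A) on H_1, and trace det(A) on H_2, so
# Lambda_f = 1 - tr(A) + det(A) = det(I - A); since this is nonzero below,
# the map has a fixed point.
import numpy as np

A = np.array([[2, 1],
              [1, 1]])                                         # "Arnold cat map" matrix

traces = [1, int(np.trace(A)), int(round(np.linalg.det(A)))]   # traces on H_0, H_1, H_2
lefschetz = sum((-1) ** k * t for k, t in enumerate(traces))
print("Lefschetz number:", lefschetz)                          # -1
print("det(I - A):      ", int(round(np.linalg.det(np.eye(2) - A))))   # -1, as expected
```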
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "f\\colon X \\rightarrow X\\," }, { "math_id": 2, "text": "\\Lambda_f" }, { "math_id": 3, "text": "f" }, { "math_id": 4, "text": "\\Lambda_f:=\\sum_{k\\geq 0}(-1)^k\\mathrm{tr}(f_*|H_k(X,\\Q))," }, { "math_id": 5, "text": "H_k(X,\\Q)" }, { "math_id": 6, "text": "\\Lambda_f \\neq 0\\," }, { "math_id": 7, "text": "x" }, { "math_id": 8, "text": "f(x) = x" }, { "math_id": 9, "text": "\\sum_{x \\in \\mathrm{Fix}(f)} \\mathrm{ind}(f,x) = \\Lambda_f," }, { "math_id": 10, "text": "\\mathrm{Fix}(f)" }, { "math_id": 11, "text": "\\mathrm{ind}(f,x)" }, { "math_id": 12, "text": "f_\\ast" }, { "math_id": 13, "text": "\\chi(X)" }, { "math_id": 14, "text": "\\Lambda_{\\mathrm{id}} = \\chi(X).\\ " }, { "math_id": 15, "text": "n" }, { "math_id": 16, "text": "D^n" }, { "math_id": 17, "text": "H_0" }, { "math_id": 18, "text": "f\\colon D^n \\to D^n" }, { "math_id": 19, "text": "f_* \\colon H_0(D^n, \\Q) \\to H_0(D^n, \\Q)" }, { "math_id": 20, "text": "g" }, { "math_id": 21, "text": "Y" }, { "math_id": 22, "text": "\\Lambda_{f,g} = \\sum (-1)^k \\mathrm{tr}( D_X \\circ g^* \\circ D_Y^{-1} \\circ f_*)," }, { "math_id": 23, "text": "f_*" }, { "math_id": 24, "text": "g_*" }, { "math_id": 25, "text": "D_X" }, { "math_id": 26, "text": "D_Y" }, { "math_id": 27, "text": "X= Y" }, { "math_id": 28, "text": "k" }, { "math_id": 29, "text": "q" }, { "math_id": 30, "text": "\\bar X" }, { "math_id": 31, "text": "F_q" }, { "math_id": 32, "text": "x_1,\\ldots,x_n" }, { "math_id": 33, "text": "x_1^q,\\ldots,x_n^q" }, { "math_id": 34, "text": "X(k)" }, { "math_id": 35, "text": "\\#X(k)=\\sum_i (-1)^i \\mathrm{tr}(F_q^*| H^i_c(\\bar{X},\\Q_{\\ell}))." }, { "math_id": 36, "text": "\\ell" }, { "math_id": 37, "text": "\\Phi_q" }, { "math_id": 38, "text": "\\#X(k)=q^{\\dim X}\\sum_i (-1)^i \\mathrm{tr}((\\Phi_q^{-1})^*| H^i(\\bar X,\\Q_\\ell))." } ]
https://en.wikipedia.org/wiki?curid=582530
5825422
Robbins algebra
In abstract algebra, a Robbins algebra is an algebra containing a single binary operation, usually denoted by formula_0, and a single unary operation usually denoted by formula_1 satisfying the following axioms. For all elements "a", "b", and "c": (1) associativity, formula_2; (2) commutativity, formula_3; and (3) the "Robbins equation", formula_4. For many years, it was conjectured, but unproven, that all Robbins algebras are Boolean algebras. This was proved in 1996, so the term "Robbins algebra" is now simply a synonym for "Boolean algebra". History. In 1933, Edward Huntington proposed a new set of axioms for Boolean algebras, consisting of (1) and (2) above, plus the "Huntington equation": formula_5. From these axioms, Huntington derived the usual axioms of Boolean algebra. Very soon thereafter, Herbert Robbins posed the Robbins conjecture, namely that the Huntington equation could be replaced with what came to be called the Robbins equation, and the result would still be Boolean algebra. formula_0 would interpret Boolean join and formula_1 Boolean complement. Boolean meet and the constants 0 and 1 are easily defined from the Robbins algebra primitives. Pending verification of the conjecture, the system of Robbins was called "Robbins algebra." Verifying the Robbins conjecture required proving Huntington's equation, or some other axiomatization of a Boolean algebra, as theorems of a Robbins algebra. Huntington, Robbins, Alfred Tarski, and others worked on the problem, but failed to find a proof or counterexample. William McCune proved the conjecture in 1996, using the automated theorem prover EQP. For a complete proof of the Robbins conjecture in one consistent notation and following McCune closely, see Mann (2003). Dahn (1998) simplified McCune's machine proof.
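The easy direction of the conjecture — that every Boolean algebra satisfies the Robbins equation — can be checked mechanically in the two-element Boolean algebra; since every Boolean algebra embeds in a power of the two-element one, an identity in formula_0 and formula_1 that holds there holds in all Boolean algebras. The brute-force sketch below is only an illustration and says nothing about McCune's hard converse.

```python
# Brute-force check, in the two-element Boolean algebra, of the Robbins equation
# and of Huntington's equation. This is only the easy direction: it shows Boolean
# algebras are Robbins algebras; McCune's 1996 theorem is the converse.
from itertools import product

B = (False, True)
v = lambda a, b: a or b          # interprets the binary operation (join)
n = lambda a: not a              # interprets the unary operation (complement)

robbins = all(n(v(n(v(a, b)), n(v(a, n(b))))) == a for a, b in product(B, B))
huntington = all(v(n(v(n(a), b)), n(v(n(a), n(b)))) == a for a, b in product(B, B))
print("Robbins equation holds in {0,1}:   ", robbins)     # True
print("Huntington equation holds in {0,1}:", huntington)  # True
```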
[ { "math_id": 0, "text": "\\lor" }, { "math_id": 1, "text": "\\neg" }, { "math_id": 2, "text": "a \\lor \\left(b \\lor c \\right) = \\left(a \\lor b \\right) \\lor c" }, { "math_id": 3, "text": "a \\lor b = b \\lor a" }, { "math_id": 4, "text": "\\neg \\left( \\neg \\left(a \\lor b \\right) \\lor \\neg \\left(a \\lor \\neg b \\right) \\right) = a" }, { "math_id": 5, "text": "\\neg(\\neg a \\lor b) \\lor \\neg(\\neg a \\lor \\neg b) = a." } ]
https://en.wikipedia.org/wiki?curid=5825422
58259190
Virasoro conformal block
Special functions used to build correlation functions in 2D CFTs In two-dimensional conformal field theory, Virasoro conformal blocks (named after Miguel Ángel Virasoro) are special functions that serve as building blocks of correlation functions. On a given punctured Riemann surface, Virasoro conformal blocks form a particular basis of the space of solutions of the conformal Ward identities. Zero-point blocks on the torus are characters of representations of the Virasoro algebra; four-point blocks on the sphere reduce to hypergeometric functions in special cases, but are in general much more complicated. In two dimensions as in other dimensions, conformal blocks play an essential role in the conformal bootstrap approach to conformal field theory. Definition. Definition from OPEs. Using operator product expansions (OPEs), an formula_0-point function on the sphere can be written as a combination of three-point structure constants, and universal quantities called formula_0-point conformal blocks. Given an formula_0-point function, there are several types of conformal blocks, depending on which OPEs are used. In the case formula_1, there are three types of conformal blocks, corresponding to three possible decompositions of the same four-point function. Schematically, these decompositions read formula_2 where formula_3 are structure constants and formula_4 are conformal blocks. The sums are over representations of the conformal algebra that appear in the CFT's spectrum. OPEs involve sums over the spectrum, i.e. over representations and over states in representations, but the sums over states are absorbed in the conformal blocks. In two dimensions, the symmetry algebra factorizes into two copies of the Virasoro algebra, called left-moving and right-moving. If the fields are factorized too, then the conformal blocks factorize as well, and the factors are called Virasoro conformal blocks. Left-moving Virasoro conformal blocks are locally holomorphic functions of the fields' positions formula_5; right-moving Virasoro conformal blocks are the same functions of formula_6. The factorization of a conformal block into Virasoro conformal blocks is of the type formula_7 where formula_8 are representations of the left- and right-moving Virasoro algebras respectively. Definition from Virasoro Ward identities. Conformal Ward identities are the linear equations that correlation functions obey, as a result of conformal symmetry. In two dimensions, conformal Ward identities decompose into left-moving and right-moving Virasoro Ward identities. Virasoro conformal blocks are solutions of the Virasoro Ward identities. OPEs define specific bases of Virasoro conformal blocks, such as the s-channel basis in the case of four-point blocks. The blocks that are defined from OPEs are special cases of the blocks that are defined from Ward identities. Properties. Any linear holomorphic equation that is obeyed by a correlation function, must also hold for the corresponding conformal blocks. In addition, specific bases of conformal blocks come with extra properties that are not inherited from the correlation function. Conformal blocks that involve only primary fields have relatively simple properties. Conformal blocks that involve descendant fields can then be deduced using local Ward identities. An s-channel four-point block of primary fields depends on the four fields' conformal dimensions formula_9 on their positions formula_10 and on the s-channel conformal dimension formula_11. 
It can be written as formula_12 where the dependence on the Virasoro algebra's central charge is kept implicit. Linear equations. From the corresponding correlation function, conformal blocks inherit linear equations: global and local Ward identities, and BPZ equations if at least one field is degenerate. In particular, in an formula_0-point block on the sphere, global Ward identities reduce the dependence on the formula_0 field positions to a dependence on formula_13 cross-ratios. In the case formula_14 formula_15 where formula_16 and formula_17 is the cross-ratio, and the reduced block formula_18 coincides with the original block where three positions are sent to formula_19 formula_20 Singularities. Like correlation functions, conformal blocks are singular when two fields coincide. Unlike correlation functions, conformal blocks have very simple behaviours at some of these singularities. As a consequence of their definition from OPEs, s-channel four-point blocks obey formula_21 for some coefficients formula_22 On the other hand, s-channel blocks have complicated singular behaviours at formula_23: it is t-channel blocks that are simple at formula_24, and u-channel blocks that are simple at formula_25 In a four-point block that obeys a BPZ differential equation, formula_26 are regular singular points of the differential equation, and formula_27 is a characteristic exponent of the differential equation. For a differential equation of order formula_28, the formula_28 characteristic exponents correspond to the formula_28 values of formula_11 that are allowed by the fusion rules. Field permutations. Permutations of the fields formula_29 leave the correlation function formula_30 invariant, and therefore relate different bases of conformal blocks with one another. In the case of four-point blocks, t-channel blocks are related to s-channel blocks by formula_31 or equivalently formula_32 Fusing matrix. The change of bases from s-channel to t-channel four-point blocks is characterized by the fusing matrix (or fusion kernel) formula_33, such that formula_34 The fusing matrix is a function of the central charge and conformal dimensions, but it does not depend on the positions formula_35 The momentum formula_36 is defined in terms of the dimension formula_37 by formula_38 The values formula_39 correspond to the spectrum of Liouville theory. We also need to introduce two parameters formula_40 related to the central charge formula_41, formula_42 Assuming formula_43 and formula_44, the explicit expression of the fusing matrix is formula_45 where formula_46 is a double gamma function, formula_47 Although its expression is simpler in terms of momentums formula_48 than in terms of conformal dimensions formula_49, the fusing matrix is really a function of formula_49, i.e. a function of formula_48 that is invariant under formula_50. In the expression for the fusing matrix, the integral is a hyperbolic Barnes integral. Up to normalization, the fusing matrix coincides with Ruijsenaars' hypergeometric function, with the arguments formula_51 and parameters formula_52. The fusing matrix has several different integral representations, and obeys many nontrivial identities. In formula_0-point blocks on the sphere, the change of bases between two sets of blocks that are defined from different sequences of OPEs can always be written in terms of the fusing matrix, and a simple matrix that describes the permutation of the first two fields in an s-channel block, formula_53 Computation of conformal blocks. 
From the definition. The definition from OPEs leads to an expression for an s-channel four-point conformal block as a sum over states in the s-channel representation, of the type formula_54 The sums are over creation modes formula_55 of the Virasoro algebra, i.e. combinations of the type formula_56 of Virasoro generators with formula_57, whose level is formula_58. Such generators correspond to basis states in the Verma module with the conformal dimension formula_11. The coefficient formula_59 is a function of formula_60, which is known explicitly. The matrix element formula_61 is a function of formula_62 which vanishes if formula_63, and diverges for formula_64 if there is a null vector at level formula_0. Up to formula_65, this reads formula_66 Zamolodchikov's recursive representation. In Alexei Zamolodchikov's recursive representation of four-point blocks on the sphere, the cross-ratio formula_68 appears via the nome formula_69 where formula_33 is the hypergeometric function, and we used the Jacobi theta functions formula_70 The representation is of the type formula_71 The function formula_72 is a power series in formula_73, which is recursively defined by formula_74 In this formula, the positions formula_75 of the poles are the dimensions of degenerate representations, which correspond to the momentums formula_76 The residues formula_77 are given by formula_78 where the superscript in formula_79 indicates a product that runs by increments of formula_80. The recursion relation for formula_72 can be solved, giving rise to an explicit (but impractical) formula. While the coefficients of the power series formula_72 need not be positive in unitary theories, the coefficients of formula_81 are positive, due to this combination's interpretation in terms of sums of states in the pillow geometry. And the block's prefactors can be interpreted in terms of the conformal transformation from the sphere to the pillow. The recursive representation can be seen as an expansion around formula_82. It is sometimes called the formula_83-recursion, in order to distinguish it from the formula_41-recursion: another recursive representation, also due to Alexei Zamolodchikov, which expands around formula_84, and generates a series in powers of formula_68. The formula_41-recursion can be generalized to formula_0-point Virasoro conformal blocks on arbitrary Riemann surfaces. The formula_83-recursion can be generalized to one-point blocks on the torus. In other cases, there are no known generalizations of the formula_83-recursion, but there exist modified formula_83-recursions that generate series in powers of formula_68. From the relation to instanton counting. The Alday–Gaiotto–Tachikawa relation between two-dimensional conformal field theory and supersymmetric gauge theory, more specifically, between the conformal blocks of Liouville theory and Nekrasov partition functions of supersymmetric gauge theories in four dimensions, leads to combinatorial expressions for conformal blocks as sums over Young diagrams. Each diagram can be interpreted as a state in a representation of the Virasoro algebra, times an abelian affine Lie algebra. Special cases. Zero-point blocks on the torus. A zero-point block does not depend on field positions, but it depends on the moduli of the underlying Riemann surface. 
In the case of the torus formula_85 that dependence is better written through formula_86 and the zero-point block associated to a representation formula_87 of the Virasoro algebra is formula_88 where formula_89 is a generator of the Virasoro algebra. This coincides with the character of formula_90 The characters of some highest-weight representations are: formula_92 where formula_93 is the Dedekind eta function. formula_95 formula_97 The characters transform linearly under the modular transformations: formula_98 In particular their transformation under formula_99 is described by the modular S-matrix. Using the S-matrix, constraints on a CFT's spectrum can be derived from the modular invariance of the torus partition function, leading in particular to the ADE classification of minimal models. One-point blocks on the torus. An arbitrary one-point block on the torus can be written in terms of a four-point block on the sphere at a different central charge. This relation maps the modulus of the torus to the cross-ratio of the four points' positions, and three of the four fields on the sphere have the fixed momentum formula_100: formula_101 where The recursive representation of one-point blocks on the torus is formula_108 where the residues are formula_109 Under modular transformations, one-point blocks on the torus behave as formula_110 where the modular kernel is formula_111 Hypergeometric blocks. For a four-point function on the sphere formula_112 where one field has a null vector at level two, the second-order BPZ equation reduces to the hypergeometric equation. A basis of solutions is made of the two s-channel conformal blocks that are allowed by the fusion rules, and these blocks can be written in terms of the hypergeometric function, formula_113 with formula_114 Another basis is made of the two t-channel conformal blocks, formula_115 The fusing matrix is the matrix of size two such that formula_116 whose explicit expression is formula_117 Hypergeometric conformal blocks play an important role in the analytic bootstrap approach to two-dimensional CFT. Solutions of the Painlevé VI equation. If formula_118 then certain linear combinations of s-channel conformal blocks are solutions of the Painlevé VI nonlinear differential equation. The relevant linear combinations involve sums over sets of momentums of the type formula_119 This allows conformal blocks to be deduced from solutions of the Painlevé VI equation and vice versa. This also leads to a relatively simple formula for the fusing matrix at formula_120 Curiously, the formula_84 limit of conformal blocks is also related to the Painlevé VI equation. The relation between the formula_84 and the formula_121 limits, mysterious on the conformal field theory side, is explained naturally in the context of four dimensional gauge theories, using blowup equations, and can be generalized to more general pairs formula_122of central charges. Generalizations. Other representations of the Virasoro algebra. The Virasoro conformal blocks that are described in this article are associated to a certain type of representations of the Virasoro algebra: highest-weight representations, in other words Verma modules and their cosets. Correlation functions that involve other types of representations give rise to other types of conformal blocks. For example: Larger symmetry algebras. 
In a theory whose symmetry algebra is larger than the Virasoro algebra, for example a WZW model or a theory with W-symmetry, correlation functions can in principle be decomposed into Virasoro conformal blocks, but that decomposition typically involves too many terms to be useful. Instead, it is possible to use conformal blocks based on the larger algebra: for example, in a WZW model, conformal blocks based on the corresponding affine Lie algebra, which obey Knizhnik–Zamolodchikov equations. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
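As a small numerical illustration of the series expansion quoted earlier in the section on the definition from OPEs, the sketch below evaluates the first two terms of an s-channel four-point block near "z" = 0; the conformal dimensions are arbitrary illustrative values, and higher orders would require the full matrix elements or Zamolodchikov's recursion.

```python
# Sketch: first two terms of the s-channel four-point block around z = 0,
#   F(z) = z**(Ds - D1 - D2) * (1 + (Ds + D1 - D2)*(Ds + D4 - D3)/(2*Ds) * z + O(z**2)),
# as quoted in the text. The dimensions below are arbitrary illustrative values.

def s_channel_block_leading(z, Ds, D1, D2, D3, D4):
    level_one = (Ds + D1 - D2) * (Ds + D4 - D3) / (2.0 * Ds)
    return z ** (Ds - D1 - D2) * (1.0 + level_one * z)

print(s_channel_block_leading(0.01, Ds=0.7, D1=0.1, D2=0.2, D3=0.3, D4=0.4))
```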
[ { "math_id": 0, "text": "N" }, { "math_id": 1, "text": "N=4" }, { "math_id": 2, "text": "\\left\\langle V_1V_2V_3V_4\\right\\rangle\n=\\sum_s C_{12s}C_{s34}\\mathcal{F}^{\\text{(s-channel)}}_s = \\sum_t C_{14t}C_{t23}\\mathcal{F}^{\\text{(t-channel)}}_t = \\sum_u C_{13u}C_{24u}\\mathcal{F}^{\\text{(u-channel)}}_u\\ ,\n" }, { "math_id": 3, "text": "C" }, { "math_id": 4, "text": "\\mathcal{F}" }, { "math_id": 5, "text": "z_i" }, { "math_id": 6, "text": "\\bar z_i" }, { "math_id": 7, "text": " \\mathcal{F}^{\\text{(s-channel)}}_{s_L\\otimes s_R}(\\{z_i\\}) = \\mathcal{F}^{\\text{(s-channel, Virasoro)}}_{s_L}(\\{z_i\\})\\mathcal{F}^{\\text{(s-channel, Virasoro)}}_{s_R}(\\{\\bar z_i\\})\\ ,\n" }, { "math_id": 8, "text": "s_L,s_R" }, { "math_id": 9, "text": "\\Delta_i," }, { "math_id": 10, "text": "z_i," }, { "math_id": 11, "text": "\\Delta_s" }, { "math_id": 12, "text": "\\mathcal{F}^{(s)}_{\\Delta_s}(\\Delta_i|\\{z_i\\})," }, { "math_id": 13, "text": "N-3" }, { "math_id": 14, "text": "N=4," }, { "math_id": 15, "text": "\\mathcal{F}^{(s)}_{\\Delta_s}(\\{\\Delta_i\\}|\\{z_i\\})= z_{23}^{\\Delta_1-\\Delta_2-\\Delta_3+\\Delta_4} z_{13}^{-2\\Delta_1} z_{34}^{\\Delta_1+\\Delta_2-\\Delta_3-\\Delta_4} z_{24}^{-\\Delta_1-\\Delta_2+\\Delta_3-\\Delta_4}\\mathcal{F}^{(s)}_{\\Delta_s} (\\{\\Delta_i \\}|z)," }, { "math_id": 16, "text": "z_{ij}=z_i-z_j," }, { "math_id": 17, "text": "z= \\frac{z_{12}z_{34}}{z_{13}z_{24}}" }, { "math_id": 18, "text": "\\mathcal{F}^{(s)}_{\\Delta_s}(\\{\\Delta_i\\}|z)" }, { "math_id": 19, "text": "(0,\\infty, 1)," }, { "math_id": 20, "text": "\\mathcal{F}^{(s)}_{\\Delta_s}(\\{\\Delta_i\\}|z)= \\mathcal{F}^{(s)}_{\\Delta_s}(\\{\\Delta_i\\}|z,0,\\infty,1)." }, { "math_id": 21, "text": "\\mathcal{F}^{(s)}_{\\Delta_s}(\\{\\Delta_i\\}|z) \\underset{z\\to 0}{=} z^{\\Delta_s-\\Delta_1-\\Delta_2}\\left(1 + \\sum_{n=1}^\\infty c_n z^n\\right)," }, { "math_id": 22, "text": "c_n." }, { "math_id": 23, "text": "z=1,\\infty" }, { "math_id": 24, "text": "z=1" }, { "math_id": 25, "text": "z=\\infty." }, { "math_id": 26, "text": "z=0,1,\\infty" }, { "math_id": 27, "text": "\\Delta_s-\\Delta_1-\\Delta_2" }, { "math_id": 28, "text": "n" }, { "math_id": 29, "text": "V_i(z_i)" }, { "math_id": 30, "text": "\\left\\langle\\prod_{i=1}^NV_i(z_i)\\right\\rangle" }, { "math_id": 31, "text": "\\mathcal{F}^{(t)}_{\\Delta}(\\Delta_1,\\Delta_2,\\Delta_3,\\Delta_4|z_1,z_2,z_3,z_4) = \\mathcal{F}^{(s)}_{\\Delta}(\\Delta_1, \\Delta_4, \\Delta_3,\\Delta_2|z_1,z_4,z_3,z_2)," }, { "math_id": 32, "text": "\\mathcal{F}^{(t)}_{\\Delta}(\\Delta_1,\\Delta_2,\\Delta_3,\\Delta_4|z) = \\mathcal{F}^{(s)}_{\\Delta} (\\Delta_1, \\Delta_4, \\Delta_3, \\Delta_2| 1-z)." }, { "math_id": 33, "text": "F" }, { "math_id": 34, "text": "\\mathcal{F}^{(s)}_{\\Delta_s}(\\{\\Delta_i\\}|\\{z_i\\}) \n= \\int_{i\\mathbb{R}} dP_t\\ F_{\\Delta_s,\\Delta_t}\\begin{bmatrix} \\Delta_2 & \\Delta_3 \\\\ \\Delta_1 & \\Delta_4 \\end{bmatrix} \\mathcal{F}^{(t)}_{\\Delta_t}(\\{\\Delta_i\\}|\\{z_i\\})." }, { "math_id": 35, "text": "z_i." }, { "math_id": 36, "text": "P_t" }, { "math_id": 37, "text": "\\Delta_t" }, { "math_id": 38, "text": " \\Delta = \\frac{c-1}{24}-P^2." }, { "math_id": 39, "text": "P\\in i\\mathbb{R}" }, { "math_id": 40, "text": "Q,b" }, { "math_id": 41, "text": "c" }, { "math_id": 42, "text": " c= 1+6Q^2, \\qquad Q=b+b^{-1}. 
" }, { "math_id": 43, "text": " c\\notin (-\\infty, 1)" }, { "math_id": 44, "text": "P_i\\in i\\R" }, { "math_id": 45, "text": "\\begin{align}\nF_{\\Delta_s,\\Delta_t} &\\begin{bmatrix} \\Delta_2 & \\Delta_3 \\\\ \\Delta_1 & \\Delta_4 \\end{bmatrix} = \\\\\n&= \\left(\\prod_{\\pm}\\frac{\\Gamma_b(Q\\pm 2P_s)}{\\Gamma_b(\\pm 2P_t)}\\right) \\frac{\\Xi_+(P_1,P_4,P_t)\\Xi_+(P_2,P_3,P_t)}{\\Xi_-(P_1,P_2,P_s)\\Xi_-(P_3,P_4,P_s)} \\times \n\\\\\n&\\quad \\times \\int_{\\frac{Q}{4}+i\\R}du \\ S_b \\left (u-P_{12s} \\right ) S_b \\left (u-P_{s34} \\right ) S_b \\left (u-P_{23t} \\right )S_b \\left (u-P_{t14} \\right )\n\\\\ \n& \\qquad \\times \nS_b \\left ( \\tfrac{Q}{2}-u+P_{1234} \\right ) S_b \\left (\\tfrac{Q}{2}-u+P_{st13} \\right ) S_b \\left(\\tfrac{Q}{2}-u+P_{st24} \\right)S_b \\left(\\tfrac{Q}{2}-u \\right )\n\\end{align}" }, { "math_id": 46, "text": "\\Gamma_b" }, { "math_id": 47, "text": "\\begin{align}\nS_b(x) &= \\frac{\\Gamma_b(x)}{\\Gamma_b(Q-x)} \\\\[6pt]\n\\Xi_\\epsilon(P_1,P_2,P_3) &=\\prod_{\\underset{\\epsilon_1\\epsilon_2\\epsilon_3=\\epsilon}{\\epsilon_1,\\epsilon_2,\\epsilon_3=\\pm }} \\Gamma_b\\left(\\tfrac{Q}{2}+\\sum_i\\epsilon_iP_i\\right) \\\\[6pt]\nP_{ijk} &= P_i+P_j+P_k\n\\end{align}" }, { "math_id": 48, "text": "P_i" }, { "math_id": 49, "text": "\\Delta_i" }, { "math_id": 50, "text": "P_i\\to -P_i" }, { "math_id": 51, "text": "P_s,P_t" }, { "math_id": 52, "text": "b,b^{-1},P_1,P_2,P_3,P_4" }, { "math_id": 53, "text": "\\mathcal{F}^{(s)}_{\\Delta_s}(\\Delta_1,\\Delta_2,\\Delta_3,\\Delta_4|z_1,z_2,z_3,z_4) = e^{i\\pi(\\Delta_s-\\Delta_1-\\Delta_2)} \\mathcal{F}^{(s)}_{\\Delta_s}(\\Delta_2,\\Delta_1,\\Delta_3,\\Delta_4|z_2,z_1,z_3,z_4)." }, { "math_id": 54, "text": " \\mathcal{F}^{\\text{(s)}}_{\\Delta_s}(\\{\\Delta_i\\}|z)= z^{\\Delta_s-\\Delta_1-\\Delta_2}\\sum_{L,L'} z^{|L|} f_{12s}^L Q_{L,L'}^s f_{43s}^{L'}\\ .\n" }, { "math_id": 55, "text": "L,L'" }, { "math_id": 56, "text": " L=\\prod_i L_{-n_i}" }, { "math_id": 57, "text": "1\\leq n_1\\leq n_2\\leq \\cdots" }, { "math_id": 58, "text": "|L|=\\sum n_i" }, { "math_id": 59, "text": "f_{12s}^L" }, { "math_id": 60, "text": "\\Delta_1,\\Delta_2,\\Delta_s,L" }, { "math_id": 61, "text": "Q_{L,L'}^s" }, { "math_id": 62, "text": "c,\\Delta_s,L,L'" }, { "math_id": 63, "text": "|L|\\neq |L'|" }, { "math_id": 64, "text": "|L|=N" }, { "math_id": 65, "text": "|L|=1" }, { "math_id": 66, "text": "\n\\mathcal{F}^{\\text{(s)}}_{\\Delta_s}(\\{\\Delta_i\\}|z)= z^{\\Delta_s-\\Delta_1-\\Delta_2}\n\\Bigg\\{ 1 \n+ \\frac{(\\Delta_s+\\Delta_1-\\Delta_2)(\\Delta_s+\\Delta_4-\\Delta_3)}{2\\Delta_s} z + O(z^2)\\Bigg\\}\\ . 
\n" }, { "math_id": 67, "text": "Q_{L_{-1},L_{-1}}^s=\\frac{1}{2\\Delta_s}" }, { "math_id": 68, "text": "z" }, { "math_id": 69, "text": "\nq = \\exp -\\pi \\frac{F(\\frac12,\\frac12,1,1-z)}{F(\\frac12,\\frac12,1,z)} \\underset{z\\to 0}{=} \\frac{z}{16}+\\frac{z^2}{32}+O(z^3) \\quad \\iff \\quad z = \\frac{\\theta_2(q)^4}{\\theta_3(q)^4} \\underset{q\\to 0}{=} 16 q - 128 q^2 + O(q^3)\n" }, { "math_id": 70, "text": "\n\\theta_2(q) = 2q^\\frac14\\sum_{n=0}^\\infty q^{n(n+1)} \\quad , \\quad \\theta_3(q) = \\sum_{n\\in{\\mathbb{Z}}} q^{n^2}\n" }, { "math_id": 71, "text": "\n\\mathcal{F}^{(s)}_{\\Delta}(\\{\\Delta_i\\}|z) = (16q)^{\\Delta -\\frac14 Q^2} z^{\\frac14 Q^2-\\Delta_1-\\Delta_2} (1-z)^{\\frac14 Q^2-\\Delta_1-\\Delta_4} \\theta_3(q)^{3Q^2-4(\\Delta_1+\\Delta_2+\\Delta_3+\\Delta_4)} H_{\\Delta}(\\{\\Delta_i\\}|q)\\ .\n" }, { "math_id": 72, "text": "H_{\\Delta}(\\{\\Delta_i\\}|q)" }, { "math_id": 73, "text": "q" }, { "math_id": 74, "text": "\nH_{\\Delta}(\\{\\Delta_i\\}|q) = 1 + \\sum_{m,n=1}^\\infty \\frac{(16q)^{mn}}{\\Delta-\\Delta_{(m,n)}} R_{m,n} H_{\\Delta_{(m,-n)}}(\\{\\Delta_i\\}|q)\\ .\n" }, { "math_id": 75, "text": "\\Delta_{(m,n)}" }, { "math_id": 76, "text": "\nP_{(m,n)} = \\frac12 \\left(mb+nb^{-1}\\right)\\ .\n" }, { "math_id": 77, "text": "R_{m,n}" }, { "math_id": 78, "text": "\n R_{m,n} = \\frac{2P_{( 0,0)} P_{( m,n)}}{\\prod_{r=1-m}^m \\prod_{s=1-n}^n 2P_{(r,s)}}\n\\prod_{r\\overset{2}{=}1-m}^{m-1} \\prod_{s\\overset{2}{=}1-n}^{n-1} \\prod_\\pm (P_2\\pm P_1 + P_{( r,s)}) (P_3\\pm P_4 +P_{( r,s)})\\ ,\n" }, { "math_id": 79, "text": "\\overset{2}{=}" }, { "math_id": 80, "text": "2" }, { "math_id": 81, "text": "\\prod_{k=1}^\\infty (1-q^{2k})^{-\\frac12} H_{\\Delta}(\\{\\Delta_i\\}|q)" }, { "math_id": 82, "text": "\\Delta=\\infty" }, { "math_id": 83, "text": "\\Delta" }, { "math_id": 84, "text": "c=\\infty" }, { "math_id": 85, "text": "\\frac{\\Complex}{\\Z+\\tau\\Z}," }, { "math_id": 86, "text": "q=e^{2\\pi i\\tau}" }, { "math_id": 87, "text": "\\mathcal{R}" }, { "math_id": 88, "text": "\\chi_\\mathcal{R}(\\tau) = \\operatorname{Tr}_\\mathcal{R} q^{L_0-\\frac{c}{24}}," }, { "math_id": 89, "text": "L_0" }, { "math_id": 90, "text": "\\mathcal{R}." }, { "math_id": 91, "text": "\\Delta=\\tfrac{c-1}{24}-P^2" }, { "math_id": 92, "text": " \\chi_P(\\tau) = \\frac{q^{-P^2}}{\\eta(\\tau)}," }, { "math_id": 93, "text": "\\eta(\\tau)" }, { "math_id": 94, "text": "P_{(r,s)} " }, { "math_id": 95, "text": " \\chi_{(r,s)}(\\tau) = \\chi_{P_{(r,s)}}(\\tau) - \\chi_{P_{(r,-s)}}(\\tau)." }, { "math_id": 96, "text": "b^2 = -\\tfrac{p}{q}" }, { "math_id": 97, "text": "\\chi_{(r,s)}(\\tau) = \\sum_{k\\in\\Z} \\left(\\chi_{P_{(r,s)}+ik\\sqrt{pq}}(\\tau) - \\chi_{P_{(r,-s)}+ik\\sqrt{pq}}(\\tau) \\right). " }, { "math_id": 98, "text": "\\tau \\to \\frac{a\\tau + b}{c\\tau +d}, \\qquad \\begin{pmatrix} a & b \\\\ c & d \\end{pmatrix} \\in SL_2(\\Z)." 
}, { "math_id": 99, "text": "\\tau \\to -\\tfrac{1}{\\tau}" }, { "math_id": 100, "text": "P_{(0,\\frac12)} = \\tfrac{1}{4b}" }, { "math_id": 101, "text": "\nH^\\text{torus}_{P'}(P_1|q^2) = H_{P}\\left(\\left.\\tfrac{1}{4b},P_2,\\tfrac{1}{4b},\\tfrac{1}{4b}\\right|q\\right) \\quad \\text{with}\\quad \\left\\{\\begin{array}{l} b=\\frac{b'}{\\sqrt{2}}\\\\ P_2=\\frac{P_1}{\\sqrt{2}}\\\\ P=\\sqrt{2}P' \\end{array}\\right.\n" }, { "math_id": 102, "text": "H_{P_s}\\left(\\left.P_1,P_2,P_3,P_4\\right|q\\right)" }, { "math_id": 103, "text": "H^\\text{torus}_{P}(P_1|q)" }, { "math_id": 104, "text": "\\mathcal{F}^\\text{torus}_{\\Delta}(\\Delta_1|q) = q^{\\Delta-\\frac{c-1}{24}}\\eta(q)^{-1}H^\\text{torus}_{\\Delta}(\\Delta_1|q)" }, { "math_id": 105, "text": "\\eta(q)" }, { "math_id": 106, "text": "\\tau" }, { "math_id": 107, "text": "\\Delta_1" }, { "math_id": 108, "text": "H^\\text{torus}_{\\Delta}(\\Delta_1|q) = 1 + \\sum_{m,n=1}^\\infty \\frac{q^{mn}}{\\Delta-\\Delta_{(m,n)}} R^\\text{torus}_{m,n} H^\\text{torus}_{\\Delta_{(m,-n)}}(\\Delta_1|q)\\ ,\n" }, { "math_id": 109, "text": "\nR^\\text{torus}_{m,n} = \\frac{2P_{( 0,0)} P_{( m,n)}}{\\prod_{r=1-m}^m \\prod_{s=1-n}^n 2P_{(r,s)}}\n\\prod_{r\\overset{2}{=}1-2m}^{2m-1} \\prod_{s\\overset{2}{=}1-2n}^{2n-1} \\left(P_1+P_{(r,s)}\\right)\\ .\n" }, { "math_id": 110, "text": "\n\\mathcal{F}^\\text{torus}_{P}\\left(P_1|-\\tfrac{1}{\\tau}\\right) = \\int_{i\\mathbb{R}} dP'\\ S_{P,P'}(P_1)\\mathcal{F}^\\text{torus}_{P'}\\left(P_1|\\tau\\right)\\ ,\n" }, { "math_id": 111, "text": "\nS_{P,P'}(P_1) = \\frac{\\sqrt{2}}{S_b(\\frac{Q}{2}+P_1)}\n\\prod_\\pm \\frac{\\Gamma_b(Q\\pm 2P)}{\\Gamma_b(\\pm 2P')} \\frac{\\Gamma_b(\\frac{Q}{2}-P_1\\pm 2P')}{\\Gamma_b(\\frac{Q}{2}-P_1\\pm 2P)} \\int_{i\\mathbb{R}} \\frac{du}{i}\\ e^{4\\pi iPu} \\prod_{\\pm,\\pm} S_b\\left(\\tfrac{Q}{4}+\\tfrac{P_1}{2} \\pm u\\pm P'\\right)\\ .\n" }, { "math_id": 112, "text": "\\left\\langle V_{\\langle 2,1 \\rangle}(x)\\prod_{i=1}^3 V_{\\Delta_i}(z_i)\\right\\rangle " }, { "math_id": 113, "text": "\n\\begin{align}\n\\mathcal{F}^{(s)}_{P_1+\\epsilon\\frac{b}{2}}(z) \n&= z^{\\frac12+\\frac{b^2}{2}+b\\epsilon P_1} (1-z)^{\\frac12+\\frac{b^2}{2}+bP_3} \n\\\\ &\\times\nF\\left(\\tfrac12 + b(\\epsilon P_1+P_2+P_3),\\tfrac12 + b(\\epsilon P_1-P_2+P_3),1 + 2b\\epsilon P_1,z\\right),\n\\end{align}\n" }, { "math_id": 114, "text": "\\epsilon\\in\\{+,-\\}." }, { "math_id": 115, "text": "\n\\begin{align}\n\\mathcal{F}^{(t)}_{P_3+\\epsilon\\frac{b}{2}}(z) \n&= z^{\\frac12+\\frac{b^2}{2}+b P_1} (1-z)^{\\frac12+\\frac{b^2}{2}+b\\epsilon P_3} \n\\\\ &\\times \nF\\left(\\tfrac12 + b( P_1+P_2+\\epsilon P_3),\\tfrac12 + b(P_1-P_2+\\epsilon P_3), 1 + 2b\\epsilon P_3,1-z\\right).\n\\end{align}\n" }, { "math_id": 116, "text": "\\mathcal{F}^{(s)}_{P_1+\\epsilon_1\\frac{b}{2}}(x) = \\sum_{\\epsilon_3=\\pm} F_{\\epsilon_1,\\epsilon_3} \\mathcal{F}^{(t)}_{P_3+\\epsilon_3\\frac{b}{2}}(x), " }, { "math_id": 117, "text": " F_{\\epsilon_1,\\epsilon_3} = \\frac{\\Gamma(1-2b\\epsilon_1P_1)\\Gamma(2b\\epsilon_3P_3)}{\\prod_\\pm \\Gamma(\\frac12+b(-\\epsilon_1P_1\\pm P_2+\\epsilon_3P_3))}. " }, { "math_id": 118, "text": " c=1," }, { "math_id": 119, "text": "P_s+i\\Z." }, { "math_id": 120, "text": "c=1." }, { "math_id": 121, "text": "c=1" }, { "math_id": 122, "text": "c, c'" } ]
https://en.wikipedia.org/wiki?curid=58259190
5826
Complex number
Number with a real and an imaginary part In mathematics, a complex number is an element of a number system that extends the real numbers with a specific element denoted i, called the imaginary unit and satisfying the equation formula_0; every complex number can be expressed in the form formula_1, where a and b are real numbers. Because no real number satisfies the above equation, i was called an imaginary number by René Descartes. For the complex number formula_2, a is called the &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;real part, and b is called the &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;imaginary part. The set of complex numbers is denoted by either of the symbols formula_3 or C. Despite the historical nomenclature, "imaginary" complex numbers have a mathematical existence as firm as that of the real numbers, and they are fundamental tools in the scientific description of the natural world. Complex numbers allow solutions to all polynomial equations, even those that have no solutions in real numbers. More precisely, the fundamental theorem of algebra asserts that every non-constant polynomial equation with real or complex coefficients has a solution which is a complex number. For example, the equation formula_4 has no real solution, because the square of a real number cannot be negative, but has the two nonreal complex solutions formula_5 and formula_6. Addition, subtraction and multiplication of complex numbers can be naturally defined by using the rule formula_7 along with the associative, commutative, and distributive laws. Every nonzero complex number has a multiplicative inverse. This makes the complex numbers a field with the real numbers as a subfield. The complex numbers also form a real vector space of dimension two, with formula_8 as a standard basis. This standard basis makes the complex numbers a Cartesian plane, called the complex plane. This allows a geometric interpretation of the complex numbers and their operations, and conversely some geometric objects and operations can be expressed in terms of complex numbers. For example, the real numbers form the real line, which is pictured as the horizontal axis of the complex plane, while real multiples of formula_9 are the vertical axis. A complex number can also be defined by its geometric polar coordinates: the radius is called the absolute value of the complex number, while the angle from the positive real axis is called the argument of the complex number. The complex numbers of absolute value one form the unit circle. Adding a fixed complex number to all complex numbers defines a translation in the complex plane, and multiplying by a fixed complex number is a similarity centered at the origin (dilating by the absolute value, and rotating by the argument). The operation of complex conjugation is the reflection symmetry with respect to the real axis. The complex numbers form a rich structure that is simultaneously an algebraically closed field, a commutative algebra over the reals, and a Euclidean vector space of dimension two. &lt;templatestyles src="Template:TOC limit/styles.css" /&gt; Definition and basic operations. A complex number is an expression of the form "a" + "bi", where a and b are real numbers, and "i" is an abstract symbol, the so-called imaginary unit, whose meaning will be explained further below. For example, 2 + 3"i" is a complex number. 
For a complex number "a" + "bi", the real number a is called its "real part", and the real number b (not the complex number "bi") is its "imaginary part". The real part of a complex number z is denoted Re("z"), formula_10, or formula_11; the imaginary part is Im("z"), formula_12, or formula_13: for example, formula_14, formula_15. A complex number z can be identified with the ordered pair of real numbers formula_16, which may be interpreted as coordinates of a point in a Euclidean plane with standard coordinates, which is then called the "complex plane" or "Argand diagram". The horizontal axis is generally used to display the real part, with increasing values to the right, and the imaginary part marks the vertical axis, with increasing values upwards. A real number a can be regarded as a complex number "a" + 0"i", whose imaginary part is 0. A purely imaginary number "bi" is a complex number 0 + "bi", whose real part is zero. As with polynomials, it is common to write "a" + 0"i" = "a", 0 + "bi" = "bi", and "a" + (−"b")"i" = "a" − "bi"; for example, 3 + (−4)"i" = 3 − 4"i". The set of all complex numbers is denoted by formula_17 (blackboard bold) or C (upright bold). In some disciplines such as electromagnetism and electrical engineering, j is used instead of i, as i frequently represents electric current, and complex numbers are written as "a" + "bj" or "a" + "jb". Addition and subtraction. Two complex numbers formula_18 and formula_19 are added by separately adding their real and imaginary parts. That is to say: formula_20 Similarly, subtraction can be performed as formula_21 The addition can be geometrically visualized as follows: the sum of two complex numbers a and b, interpreted as points in the complex plane, is the point obtained by building a parallelogram from the three vertices O, and the points of the arrows labeled a and b (provided that they are not on a line). Equivalently, calling these points A, B, respectively, and the fourth point of the parallelogram X, the triangles OAB and XBA are congruent. Multiplication. The product of two complex numbers is computed as follows: formula_22 For example, formula_23 In particular, this includes as a special case the fundamental formula formula_24 This formula distinguishes the complex number "i" from any real number, since the square of any (negative or positive) real number "x" always satisfies formula_25. With this definition of multiplication and addition, familiar rules for the arithmetic of rational or real numbers continue to hold for complex numbers. More precisely, the distributive property and the commutative properties (of addition and multiplication) hold. Therefore, the complex numbers form an algebraic structure known as a "field", the same way as the rational or real numbers do. Complex conjugate, absolute value and argument. The "complex conjugate" of the complex number "z" = "x" + "yi" is defined as formula_26 It is also denoted by some authors by formula_27. Geometrically, the conjugate is the "reflection" of z about the real axis. Conjugating twice gives the original complex number: formula_28 A complex number is real if and only if it equals its own conjugate. The unary operation of taking the complex conjugate of a complex number cannot be expressed by applying only the basic operations of addition, subtraction, multiplication and division. For any complex number "z" = "x" + "yi", the product formula_29 is a "non-negative real" number. 
This allows one to define the "absolute value" (or "modulus" or "magnitude") of "z" to be the square root formula_30 By Pythagoras' theorem, formula_31 is the distance from the origin to the point representing the complex number "z" in the complex plane. In particular, the circle of radius one around the origin consists precisely of the numbers "z" such that formula_32. If formula_33 is a real number, then formula_34: its absolute value as a complex number and as a real number are equal. Using the conjugate, the reciprocal of a nonzero complex number formula_35 can be computed to be formula_36 More generally, the division of an arbitrary complex number formula_37 by a non-zero complex number formula_35 equals formula_38 This process is sometimes called "rationalization" of the denominator (although the denominator in the final expression might be an irrational real number), because it resembles the method to remove roots from simple expressions in a denominator. The "argument" of z (sometimes called the "phase" φ) is the angle of the radius Oz with the positive real axis, and is written as arg "z", expressed in radians in this article. The angle is defined only up to adding integer multiples of formula_39, since a rotation by formula_40 (or 360°) around the origin leaves all points in the complex plane unchanged. One possible choice to uniquely specify the argument is to require it to be within the interval formula_41, which is referred to as the principal value. The argument can be computed from the rectangular form x + yi by means of the arctan (inverse tangent) function. Polar form. For any complex number "z", with absolute value formula_42 and argument formula_43, the equation formula_44 holds. This identity is referred to as the polar form of "z". It is sometimes abbreviated as formula_45. In electronics, one represents a phasor with amplitude r and phase φ in angle notation: formula_46 If two complex numbers are given in polar form, i.e., "z"1 = "r"1(cos "φ"1 + "i" sin "φ"1) and "z"2 = "r"2(cos "φ"2 + "i" sin "φ"2), the product and division can be computed as formula_47 formula_48 In other words, the absolute values are "multiplied" and the arguments are "added" to yield the polar form of the product. The picture at the right illustrates the multiplication of formula_49 Because the real and imaginary part of 5 + 5"i" are equal, the argument of that number is 45 degrees, or "π"/4 (in radians). On the other hand, it is also the sum of the angles at the origin of the red and blue triangles, which are arctan(1/3) and arctan(1/2), respectively. Thus, the formula formula_50 holds. As the arctan function can be approximated highly efficiently, formulas like this – known as Machin-like formulas – are used for high-precision approximations of π. Powers and roots. The "n"-th power of a complex number can be computed using de Moivre's formula, which is obtained by repeatedly applying the above formula for the product: formula_51 For example, the first few powers of the imaginary unit "i" are formula_52. The n nth roots of a complex number z are given by formula_53 for 0 ≤ "k" ≤ "n" − 1. (Here formula_54 is the usual (positive) nth root of the positive real number r.) Because sine and cosine are periodic, other integer values of k do not give other values. For any formula_55, there are, in particular, "n" distinct complex "n"-th roots. For example, there are 4 fourth roots of 1, namely formula_56 In general there is "no" natural way of distinguishing one particular complex nth root of a complex number. 
(This is in contrast to the roots of a positive real number "x", which has a unique positive real "n"-th root, which is therefore commonly referred to as "the" "n"-th root of "x".) One refers to this situation by saying that the nth root is an n-valued function of z. Fundamental theorem of algebra. The fundamental theorem of algebra, of Carl Friedrich Gauss and Jean le Rond d'Alembert, states that for any complex numbers (called coefficients) "a"0, ..., "a""n", the equation formula_57 has at least one complex solution "z", provided that at least one of the higher coefficients "a"1, ..., "a""n" is nonzero. This property does not hold for the field of rational numbers formula_58 (the polynomial "x"2 − 2 does not have a rational root, because √2 is not a rational number) nor the real numbers formula_59 (the polynomial "x"2 + 4 does not have a real root, because "x"2 + 4 is positive for any real number x). Because of this fact, formula_17 is called an algebraically closed field. It is a cornerstone of various applications of complex numbers, as is detailed further below. There are various proofs of this theorem, by either analytic methods such as Liouville's theorem, or topological ones such as the winding number, or a proof combining Galois theory and the fact that any real polynomial of "odd" degree has at least one real root. History. The solution in radicals (without trigonometric functions) of a general cubic equation, when all three of its roots are real numbers, contains the square roots of negative numbers, a situation that cannot be rectified by factoring aided by the rational root test, if the cubic is irreducible; this is the so-called "casus irreducibilis" ("irreducible case"). This conundrum led Italian mathematician Gerolamo Cardano to conceive of complex numbers in around 1545 in his "Ars Magna", though his understanding was rudimentary; moreover, he later described complex numbers as being "as subtle as they are useless". Cardano did use imaginary numbers, but described using them as "mental torture." This was prior to the use of the graphical complex plane. Cardano and other Italian mathematicians, notably Scipione del Ferro, in the 1500s created an algorithm for solving cubic equations which generally had one real solution and two solutions containing an imaginary number. Because they ignored the answers with the imaginary numbers, Cardano found them useless. Work on the problem of general polynomials ultimately led to the fundamental theorem of algebra, which shows that with complex numbers, a solution exists to every polynomial equation of degree one or higher. Complex numbers thus form an algebraically closed field, where any polynomial equation has a root. Many mathematicians contributed to the development of complex numbers. The rules for addition, subtraction, multiplication, and root extraction of complex numbers were developed by the Italian mathematician Rafael Bombelli. A more abstract formalism for the complex numbers was further developed by the Irish mathematician William Rowan Hamilton, who extended this abstraction to the theory of quaternions. The earliest fleeting reference to square roots of negative numbers can perhaps be said to occur in the work of the Greek mathematician Hero of Alexandria in the 1st century AD, where in his "Stereometrica" he considered, apparently in error, the volume of an impossible frustum of a pyramid to arrive at the term formula_60 in his calculations, which today would simplify to formula_61. 
Negative quantities were not conceived of in Hellenistic mathematics and Hero merely replaced it by its positive formula_62 The impetus to study complex numbers as a topic in itself first arose in the 16th century when algebraic solutions for the roots of cubic and quartic polynomials were discovered by Italian mathematicians (Niccolò Fontana Tartaglia and Gerolamo Cardano). It was soon realized (but proved much later) that these formulas, even if one were interested only in real solutions, sometimes required the manipulation of square roots of negative numbers. In fact, it was proved later that the use of complex numbers is unavoidable when all three roots are real and distinct. However, the general formula can still be used in this case, with some care to deal with the ambiguity resulting from the existence of three cubic roots for nonzero complex numbers. Rafael Bombelli was the first to address explicitly these seemingly paradoxical solutions of cubic equations and developed the rules for complex arithmetic, trying to resolve these issues. The term "imaginary" for these quantities was coined by René Descartes in 1637, who was at pains to stress their unreal nature: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;... sometimes only imaginary, that is one can imagine as many as I said in each equation, but sometimes there exists no quantity that matches that which we imagine. ["... quelquefois seulement imaginaires c'est-à-dire que l'on peut toujours en imaginer autant que j'ai dit en chaque équation, mais qu'il n'y a quelquefois aucune quantité qui corresponde à celle qu'on imagine."] A further source of confusion was that the equation formula_63 seemed to be capriciously inconsistent with the algebraic identity formula_64, which is valid for non-negative real numbers a and b, and which was also used in complex number calculations with one of a, b positive and the other negative. The incorrect use of this identity in the case when both a and b are negative, and the related identity formula_65, even bedeviled Leonhard Euler. This difficulty eventually led to the convention of using the special symbol "i" in place of formula_66 to guard against this mistake. Even so, Euler considered it natural to introduce students to complex numbers much earlier than we do today. In his elementary algebra text book, "Elements of Algebra", he introduces these numbers almost at once and then uses them in a natural way throughout. In the 18th century complex numbers gained wider use, as it was noticed that formal manipulation of complex expressions could be used to simplify calculations involving trigonometric functions. For instance, in 1730 Abraham de Moivre noted that the identities relating trigonometric functions of an integer multiple of an angle to powers of trigonometric functions of that angle could be re-expressed by the following de Moivre's formula: formula_67 In 1748, Euler went further and obtained Euler's formula of complex analysis: formula_68 by formally manipulating complex power series and observed that this formula could be used to reduce any trigonometric identity to much simpler exponential identities. The idea of a complex number as a point in the complex plane (above) was first described by Danish–Norwegian mathematician Caspar Wessel in 1799, although it had been anticipated as early as 1685 in Wallis's "A Treatise of Algebra". Wessel's memoir appeared in the Proceedings of the Copenhagen Academy but went largely unnoticed. 
In 1806 Jean-Robert Argand independently issued a pamphlet on complex numbers and provided a rigorous proof of the fundamental theorem of algebra. Carl Friedrich Gauss had earlier published an essentially topological proof of the theorem in 1797 but expressed his doubts at the time about "the true metaphysics of the square root of −1". It was not until 1831 that he overcame these doubts and published his treatise on complex numbers as points in the plane, largely establishing modern notation and terminology: If one formerly contemplated this subject from a false point of view and therefore found a mysterious darkness, this is in large part attributable to clumsy terminology. Had one not called +1, −1, formula_66 positive, negative, or imaginary (or even impossible) units, but instead, say, direct, inverse, or lateral units, then there could scarcely have been talk of such darkness. In the beginning of the 19th century, other mathematicians discovered independently the geometrical representation of the complex numbers: Buée, Mourey, Warren, Français and his brother, Bellavitis. The English mathematician G.H. Hardy remarked that Gauss was the first mathematician to use complex numbers in "a really confident and scientific way" although mathematicians such as Norwegian Niels Henrik Abel and Carl Gustav Jacob Jacobi were necessarily using them routinely before Gauss published his 1831 treatise. Augustin-Louis Cauchy and Bernhard Riemann together brought the fundamental ideas of complex analysis to a high state of completion, commencing around 1825 in Cauchy's case. The common terms used in the theory are chiefly due to the founders. Argand called cos "φ" + "i" sin "φ" the "direction factor", and formula_69 the "modulus"; Cauchy (1821) called cos "φ" + "i" sin "φ" the "reduced form" (l'expression réduite) and apparently introduced the term "argument"; Gauss used "i" for formula_66, introduced the term "complex number" for "a" + "bi", and called "a"2 + "b"2 the "norm". The expression "direction coefficient", often used for cos "φ" + "i" sin "φ", is due to Hankel (1867), and "absolute value," for "modulus," is due to Weierstrass. Later classical writers on the general theory include Richard Dedekind, Otto Hölder, Felix Klein, Henri Poincaré, Hermann Schwarz, Karl Weierstrass and many others. Important work (including a systematization) in complex multivariate calculus has been started at beginning of the 20th century. Important results have been achieved by Wilhelm Wirtinger in 1927. Abstract algebraic aspects. While the above low-level definitions, including the addition and multiplication, accurately describes the complex numbers, there are other, equivalent approaches that reveal the abstract algebraic structure of the complex numbers more immediately. Construction as a quotient field. One approach to formula_70 is via polynomials, i.e., expressions of the form formula_71 where the coefficients "a"0, ..., "a""n" are real numbers. The set of all such polynomials is denoted by formula_72. Since sums and products of polynomials are again polynomials, this set formula_72 forms a commutative ring, called the polynomial ring (over the reals). To every such polynomial "p", one may assign the complex number formula_73, i.e., the value obtained by setting formula_74. This defines a function formula_75 This function is surjective since every complex number can be obtained in such a way: the evaluation of a linear polynomial formula_76 at formula_74 is formula_2. 
However, the evaluation of polynomial formula_77 at "i" is 0, since formula_78 This polynomial is irreducible, i.e., cannot be written as a product of two linear polynomials. Basic facts of abstract algebra then imply that the kernel of the above map is an ideal generated by this polynomial, and that the quotient by this ideal is a field, and that there is an isomorphism formula_79 between the quotient ring and formula_70. Some authors take this as the definition of formula_70. Accepting that formula_17 is algebraically closed, because it is an algebraic extension of formula_80 in this approach, formula_17 is therefore the algebraic closure of formula_81 Matrix representation of complex numbers. Complex numbers "a" + "bi" can also be represented by 2 × 2 matrices that have the form formula_82 Here the entries a and b are real numbers. As the sum and product of two such matrices is again of this form, these matrices form a subring of the ring of 2 × 2 matrices. A simple computation shows that the map formula_83 is a ring isomorphism from the field of complex numbers to the ring of these matrices, proving that these matrices form a field. This isomorphism associates the square of the absolute value of a complex number with the determinant of the corresponding matrix, and the conjugate of a complex number with the transpose of the matrix. The geometric description of the multiplication of complex numbers can also be expressed in terms of rotation matrices by using this correspondence between complex numbers and such matrices. The action of the matrix on a vector ("x", "y") corresponds to the multiplication of "x" + "iy" by "a" + "ib". In particular, if the determinant is 1, there is a real number t such that the matrix has the form formula_84 In this case, the action of the matrix on vectors and the multiplication by the complex number formula_85 are both the rotation of the angle t. Complex analysis. The study of functions of a complex variable is known as "complex analysis" and has enormous practical use in applied mathematics as well as in other branches of mathematics. Often, the most natural proofs for statements in real analysis or even number theory employ techniques from complex analysis (see prime number theorem for an example). Unlike real functions, which are commonly represented as two-dimensional graphs, complex functions have four-dimensional graphs and may usefully be illustrated by color-coding a three-dimensional graph to suggest four dimensions, or by animating the complex function's dynamic transformation of the complex plane. Convergence. The notions of convergent series and continuous functions in (real) analysis have natural analogs in complex analysis. A sequence of complex numbers is said to converge if and only if its real and imaginary parts do. This is equivalent to the (ε, δ)-definition of limits, where the absolute value of real numbers is replaced by the one of complex numbers. From a more abstract point of view, formula_86, endowed with the metric formula_87 is a complete metric space, which notably includes the triangle inequality formula_88 for any two complex numbers "z"1 and "z"2. Complex exponential. Like in real analysis, this notion of convergence is used to construct a number of elementary functions: the "exponential function" exp "z", also written "e""z", is defined as the infinite series, which can be shown to converge for any "z": formula_89 For example, formula_90 is Euler's constant formula_91. 
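As an illustrative sketch (not part of the original article), the exponential series above can be summed numerically and compared with Python's built-in cmath.exp; the function name exp_series and the truncation at 40 terms are arbitrary choices made only for this example.

```python
import cmath

def exp_series(z, terms=40):
    """Approximate exp(z) by summing the first `terms` terms of its power series."""
    total = 0 + 0j
    term = 1 + 0j              # the n = 0 term, z**0 / 0!
    for n in range(terms):
        total += term
        term *= z / (n + 1)    # next term: z**(n+1) / (n+1)!
    return total

print(exp_series(1))           # ~ 2.718281828459045, the value of e quoted above
print(exp_series(1 + 2j))      # partial sum of the series at a complex argument
print(cmath.exp(1 + 2j))       # built-in value for comparison
```

For moderate values of |z| the partial sums converge quickly, which is one reason the series serves as a practical definition and not only a formal one.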
"Euler's formula" states: formula_92 for any real number φ. This formula is a quick consequence of general basic facts about convergent power series and the definitions of the involved functions as power series. As a special case, this includes Euler's identity formula_93 Complex logarithm. For any positive real number "t", there is a unique real number "x" such that formula_94. This leads to the definition of the natural logarithm as the inverse formula_95 of the exponential function. The situation is different for complex numbers, since formula_96 by the functional equation and Euler's identity. For example, "e""iπ" = "e"3"iπ" = −1 , so both iπ and 3"iπ" are possible values for the complex logarithm of −1. In general, given any non-zero complex number "w", any number "z" solving the equation formula_97 is called a complex logarithm of w, denoted formula_98. It can be shown that these numbers satisfy formula_99 where arg is the argument defined above, and ln the (real) natural logarithm. As arg is a multivalued function, unique only up to a multiple of 2"π", log is also multivalued. The principal value of log is often taken by restricting the imaginary part to the interval (−"π", "π"]. This leads to the complex logarithm being a bijective function taking values in the strip formula_100 (that is denoted formula_101 in the above illustration) formula_102 If formula_103 is not a non-positive real number (a positive or a non-real number), the resulting principal value of the complex logarithm is obtained with −"π" &lt; "φ" &lt; "π". It is an analytic function outside the negative real numbers, but it cannot be prolongated to a function that is continuous at any negative real number formula_104, where the principal value is ln "z" = ln(−"z") + "iπ". Complex exponentiation "z""ω" is defined as formula_105 and is multi-valued, except when ω is an integer. For "ω" = 1 / "n", for some natural number n, this recovers the non-uniqueness of nth roots mentioned above. If "z" &gt; 0 is real (and ω an arbitrary complex number), one has a preferred choice of formula_106, the real logarithm, which can be used to define a preferred exponential function. Complex numbers, unlike real numbers, do not in general satisfy the unmodified power and logarithm identities, particularly when naïvely treated as single-valued functions; see failure of power and logarithm identities. For example, they do not satisfy formula_107 Both sides of the equation are multivalued by the definition of complex exponentiation given here, and the values on the left are a subset of those on the right. Complex sine and cosine. The series defining the real trigonometric functions sine and cosine, as well as the hyperbolic functions sinh and cosh, also carry over to complex arguments without change. For the other trigonometric and hyperbolic functions, such as tangent, things are slightly more complicated, as the defining series do not converge for all complex values. Therefore, one must define them either in terms of sine, cosine and exponential, or, equivalently, by using the method of analytic continuation. Holomorphic functions. A function formula_108 → formula_86 is called holomorphic or "complex differentiable" at a point formula_109 if the limit formula_110 exists (in which case it is denoted by formula_111). This mimics the definition for real differentiable functions, except that all quantities are complex numbers. 
Loosely speaking, the freedom of approaching formula_109 in different directions imposes a much stronger condition than being (real) differentiable. For example, the function formula_112 is differentiable as a function formula_113, but is "not" complex differentiable. A real differentiable function is complex differentiable if and only if it satisfies the Cauchy–Riemann equations, which are sometimes abbreviated as formula_114 Complex analysis shows some features not apparent in real analysis. For example, the identity theorem asserts that two holomorphic functions f and g agree if they agree on an arbitrarily small open subset of formula_86. Meromorphic functions, functions that can locally be written as "f"("z")/("z" − "z"0)"n" with a holomorphic function f, still share some of the features of holomorphic functions. Other functions have essential singularities, such as sin(1/"z") at "z" = 0. Applications. Complex numbers have applications in many scientific areas, including signal processing, control theory, electromagnetism, fluid dynamics, quantum mechanics, cartography, and vibration analysis. Some of these applications are described below. Complex conjugation is also employed in inversive geometry, a branch of geometry studying reflections more general than ones about a line. In the network analysis of electrical circuits, the complex conjugate is used in finding the equivalent impedance when the maximum power transfer theorem is looked for. Geometry. Shapes. Three non-collinear points formula_115 in the plane determine the shape of the triangle formula_116. Locating the points in the complex plane, this shape of a triangle may be expressed by complex arithmetic as formula_117 The shape formula_118 of a triangle will remain the same, when the complex plane is transformed by translation or dilation (by an affine transformation), corresponding to the intuitive notion of shape, and describing similarity. Thus each triangle formula_116 is in a similarity class of triangles with the same shape. Fractal geometry. The Mandelbrot set is a popular example of a fractal formed on the complex plane. It is defined by plotting every location formula_119 where iterating the sequence formula_120 does not diverge when iterated infinitely. Similarly, Julia sets have the same rules, except where formula_119 remains constant. Triangles. Every triangle has a unique Steiner inellipse – an ellipse inside the triangle and tangent to the midpoints of the three sides of the triangle. The foci of a triangle's Steiner inellipse can be found as follows, according to Marden's theorem: Denote the triangle's vertices in the complex plane as "a" = "x""A" + "y""A""i", "b" = "x""B" + "y""B""i", and "c" = "x""C" + "y""C""i". Write the cubic equation formula_121, take its derivative, and equate the (quadratic) derivative to zero. Marden's theorem says that the solutions of this equation are the complex numbers denoting the locations of the two foci of the Steiner inellipse. Algebraic number theory. As mentioned above, any nonconstant polynomial equation (in complex coefficients) has a solution in formula_86. "A fortiori", the same is true if the equation has rational coefficients. The roots of such equations are called algebraic numbers – they are a principal object of study in algebraic number theory. Compared to formula_122, the algebraic closure of formula_123, which also contains all algebraic numbers, formula_86 has the advantage of being easily understandable in geometric terms. 
In this way, algebraic methods can be used to study geometric questions and vice versa. With algebraic methods, more specifically applying the machinery of field theory to the number field containing roots of unity, it can be shown that it is not possible to construct a regular nonagon using only compass and straightedge – a purely geometric problem. Another example is the Gaussian integers; that is, numbers of the form "x" + "iy", where x and y are integers, which can be used to classify sums of squares. Analytic number theory. Analytic number theory studies numbers, often integers or rationals, by taking advantage of the fact that they can be regarded as complex numbers, in which analytic methods can be used. This is done by encoding number-theoretic information in complex-valued functions. For example, the Riemann zeta function ζ("s") is related to the distribution of prime numbers. Improper integrals. In applied fields, complex numbers are often used to compute certain real-valued improper integrals, by means of complex-valued functions. Several methods exist to do this; see methods of contour integration. Dynamic equations. In differential equations, it is common to first find all complex roots r of the characteristic equation of a linear differential equation or equation system and then attempt to solve the system in terms of base functions of the form "f"("t") = "e""rt". Likewise, in difference equations, the complex roots r of the characteristic equation of the difference equation system are used to attempt to solve the system in terms of base functions of the form "f"("t") = "r""t". Linear algebra. Since formula_70 is algebraically closed, any non-empty complex square matrix has at least one (complex) eigenvalue. By comparison, real matrices do not always have real eigenvalues, for example rotation matrices (for rotations of the plane for angles other than 0° or 180°) leave no direction fixed, and therefore do not have any "real" eigenvalue. The existence of (complex) eigenvalues, and the ensuing existence of an eigendecomposition, is a useful tool for computing matrix powers and matrix exponentials. Complex numbers often generalize concepts originally conceived in the real numbers. For example, the conjugate transpose generalizes the transpose, hermitian matrices generalize symmetric matrices, and unitary matrices generalize orthogonal matrices. In applied mathematics. Control theory. In control theory, systems are often transformed from the time domain to the complex frequency domain using the Laplace transform. The system's zeros and poles are then analyzed in the "complex plane". The root locus, Nyquist plot, and Nichols plot techniques all make use of the complex plane. In the root locus method, it is important whether zeros and poles are in the left or right half planes, that is, have real part greater than or less than zero. If a linear, time-invariant (LTI) system has poles that are all in the left half plane, it will be stable; if any pole lies in the right half plane, it will be unstable; and if poles lie on the imaginary axis, it will have marginal stability. If a system has zeros in the right half plane, it is a nonminimum phase system. Signal analysis. Complex numbers are used in signal analysis and other fields for a convenient description of periodically varying signals. For given real functions representing actual physical quantities, often in terms of sines and cosines, corresponding complex functions are considered of which the real parts are the original quantities. For a sine wave of a given frequency, the absolute value |"z"| of the corresponding z is the amplitude and the argument arg "z" is the phase. 
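To illustrate the amplitude/phase reading just described, here is a small Python sketch (an illustration added here, not from the original article; the numerical values of the amplitude, phase, and angular frequency are arbitrary sample choices): the complex amplitude of a sinusoid is built from a modulus and an argument, and abs and cmath.phase recover them.

```python
import cmath

# A real signal a*cos(w*t + phi) is the real part of z * exp(i*w*t),
# where z = a * exp(i*phi) is the complex amplitude (the "phasor").
a, phi, w = 2.0, cmath.pi / 3, 50.0   # arbitrary sample values

z = a * cmath.exp(1j * phi)
print(abs(z))                         # 2.0       -> the amplitude
print(cmath.phase(z))                 # 1.0471... -> the phase (pi/3 radians)

for t in (0.0, 0.01, 0.02):           # reconstruct the real signal at a few instants
    print(t, (z * cmath.exp(1j * w * t)).real)
```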
If Fourier analysis is employed to write a given real-valued signal as a sum of periodic functions, these periodic functions are often written as complex-valued functions of the form formula_124 and formula_125 where ω represents the angular frequency and the complex number "A" encodes the phase and amplitude as explained above. This use is also extended into digital signal processing and digital image processing, which use digital versions of Fourier analysis (and wavelet analysis) to transmit, compress, restore, and otherwise process digital audio signals, still images, and video signals. Another example, relevant to the two side bands of amplitude modulation of AM radio, is: formula_126 In physics. Electromagnetism and electrical engineering. In electrical engineering, the Fourier transform is used to analyze varying voltages and currents. The treatment of resistors, capacitors, and inductors can then be unified by introducing imaginary, frequency-dependent resistances for the latter two and combining all three in a single complex number called the impedance. This approach is called phasor calculus. In electrical engineering, the imaginary unit is denoted by j, to avoid confusion with I, which is generally in use to denote electric current, or, more particularly, i, which is generally in use to denote instantaneous electric current. Because the voltage in an AC circuit is oscillating, it can be represented as formula_127 To obtain the measurable quantity, the real part is taken: formula_128 The complex-valued signal "V"("t") is called the analytic representation of the real-valued, measurable signal "v"("t"). Fluid dynamics. In fluid dynamics, complex functions are used to describe potential flow in two dimensions. Quantum mechanics. The complex number field is intrinsic to the mathematical formulations of quantum mechanics, where complex Hilbert spaces provide the context for one such formulation that is convenient and perhaps most standard. The original foundation formulas of quantum mechanics – the Schrödinger equation and Heisenberg's matrix mechanics – make use of complex numbers. Relativity. In special and general relativity, some formulas for the metric on spacetime become simpler if one takes the time component of the spacetime continuum to be imaginary. (This approach is no longer standard in classical relativity, but is used in an essential way in quantum field theory.) Complex numbers are essential to spinors, which are a generalization of the tensors used in relativity. Characterizations, generalizations and related notions. Algebraic characterization. The field formula_17 has the following three properties: first, it has characteristic 0; second, its transcendence degree over the prime field, the field of rational numbers, is the cardinality of the continuum; and third, it is algebraically closed. It can be shown that any field having these properties is isomorphic (as a field) to formula_130 For example, the algebraic closure of the field formula_131 of the p-adic numbers also satisfies these three properties, so these two fields are isomorphic (as fields, but not as topological fields). Also, formula_17 is isomorphic to the field of complex Puiseux series. However, specifying an isomorphism requires the axiom of choice. Another consequence of this algebraic characterization is that formula_17 contains many proper subfields that are isomorphic to formula_17. Characterization as a topological field. The preceding characterization of formula_17 describes only the algebraic aspects of formula_130 That is to say, the properties of nearness and continuity, which matter in areas such as analysis and topology, are not dealt with. 
The following description of formula_17 as a topological field (that is, a field that is equipped with a topology, which allows the notion of convergence) does take into account the topological properties. formula_17 contains a subset "P" (namely the set of positive real numbers) of nonzero elements satisfying the following three conditions: Moreover, formula_17 has a nontrivial involutive automorphism "x" ↦ "x"* (namely the complex conjugation), such that "x x"* is in "P" for any nonzero x in formula_130 Any field F with these properties can be endowed with a topology by taking the sets "B"("x", "p") = { "y" | "p" − ("y" − "x")("y" − "x")* ∈ "P" } as a base, where x ranges over the field and p ranges over "P". With this topology F is isomorphic as a "topological" field to formula_130 The only connected locally compact topological fields are formula_59 and formula_130 This gives another characterization of formula_17 as a topological field, because formula_17 can be distinguished from formula_59 because the nonzero complex numbers are connected, while the nonzero real numbers are not. Other number systems. The process of extending the field formula_133 of reals to formula_3 is an instance of the "Cayley–Dickson construction". Applying this construction iteratively to formula_70 then yields the quaternions, the octonions and the sedenions. This construction turns out to diminish the structural properties of the involved number systems. Unlike the reals, formula_17 is not an ordered field, that is to say, it is not possible to define a relation "z"1 &lt; "z"2 that is compatible with the addition and multiplication. In fact, in any ordered field, the square of any element is necessarily positive, so "i"2 = −1 precludes the existence of an ordering on formula_130 Passing from formula_70 to the quaternions formula_132 loses commutativity, while the octonions (additionally to not being commutative) fail to be associative. The reals, complex numbers, quaternions and octonions are all normed division algebras over formula_133. By Hurwitz's theorem they are the only ones; the sedenions, the next step in the Cayley–Dickson construction, fail to have this structure. The Cayley–Dickson construction is closely related to the regular representation of formula_134 thought of as an formula_133-algebra (an formula_80-vector space with a multiplication), with respect to the basis (1, "i"). This means the following: the formula_133-linear map formula_135 for some fixed complex number w can be represented by a 2 × 2 matrix (once a basis has been chosen). With respect to the basis (1, "i"), this matrix is formula_136 that is, the one mentioned in the section on matrix representation of complex numbers above. While this is a linear representation of formula_3 in the 2 × 2 real matrices, it is not the only one. Any matrix formula_137 has the property that its square is the negative of the identity matrix: "J"2 = −"I". Then formula_138 is also isomorphic to the field formula_134 and gives an alternative complex structure on formula_139 This is generalized by the notion of a linear complex structure. Hypercomplex numbers also generalize formula_140 formula_134 formula_141 and formula_142 For example, this notion contains the split-complex numbers, which are elements of the ring formula_143 (as opposed to formula_144 for complex numbers). In this ring, the equation "a"2 = 1 has four solutions. 
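The claim a few sentences above, that any matrix J of the stated form squares to −I and that the matrices a I + b J then multiply exactly like the complex numbers a + b i, can be checked numerically. The sketch below is an illustration added here (not part of the original article); the values p = 2, q = 5, r = −1 are an arbitrary choice satisfying p² + qr + 1 = 0, and the helper names matmul and comb are ad hoc.

```python
def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def comb(a, b, J):
    """Return the 2x2 matrix a*I + b*J."""
    return [[a * (1 if i == j else 0) + b * J[i][j] for j in range(2)] for i in range(2)]

p, q, r = 2, 5, -1                      # arbitrary integers with p*p + q*r + 1 == 0
J = [[p, q], [r, -p]]
print(matmul(J, J))                     # [[-1, 0], [0, -1]], i.e. -I

a, b, c, d = 3, -2, 1, 4
lhs = matmul(comb(a, b, J), comb(c, d, J))    # (a*I + b*J)(c*I + d*J)
rhs = comb(a * c - b * d, a * d + b * c, J)   # mirrors (a + bi)(c + di)
print(lhs == rhs)                       # True
```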
The field formula_133 is the completion of formula_145 the field of rational numbers, with respect to the usual absolute value metric. Other choices of metrics on formula_146 lead to the fields formula_147 of p-adic numbers (for any prime number p), which are thereby analogous to formula_80. There are no other nontrivial ways of completing formula_146 than formula_133 and formula_148 by Ostrowski's theorem. The algebraic closures formula_149 of formula_147 still carry a norm, but (unlike formula_3) are not complete with respect to it. The completion formula_150 of formula_149 turns out to be algebraically closed. By analogy, this field is called the field of p-adic complex numbers. The fields formula_140 formula_148 and their finite field extensions, including formula_134 are called local fields. 
[ { "math_id": 0, "text": "i^{2}= -1" }, { "math_id": 1, "text": "a + bi" }, { "math_id": 2, "text": "a+bi" }, { "math_id": 3, "text": "\\mathbb C" }, { "math_id": 4, "text": "(x+1)^2 = -9" }, { "math_id": 5, "text": "-1+3i" }, { "math_id": 6, "text": "-1-3i" }, { "math_id": 7, "text": "i^{2}=-1" }, { "math_id": 8, "text": "\\{1,i\\}" }, { "math_id": 9, "text": "i" }, { "math_id": 10, "text": "\\mathcal{Re}(z)" }, { "math_id": 11, "text": "\\mathfrak{R}(z)" }, { "math_id": 12, "text": "\\mathcal{Im}(z)" }, { "math_id": 13, "text": "\\mathfrak{I}(z)" }, { "math_id": 14, "text": " \\operatorname{Re}(2 + 3i) = 2 " }, { "math_id": 15, "text": " \\operatorname{Im}(2 + 3i) = 3 " }, { "math_id": 16, "text": "(\\Re (z),\\Im (z))" }, { "math_id": 17, "text": "\\Complex" }, { "math_id": 18, "text": "a =x+yi" }, { "math_id": 19, "text": "b =u+vi" }, { "math_id": 20, "text": "a + b =(x+yi) + (u+vi) = (x+u) + (y+v)i." }, { "math_id": 21, "text": "a - b =(x+yi) - (u+vi) = (x-u) + (y-v)i." }, { "math_id": 22, "text": "(a+bi) \\cdot (c+di) = ac - bd + (ad+bc)i." }, { "math_id": 23, "text": "(3+2i)(4-i) = 3 \\cdot 4 - (2 \\cdot (-1)) + (3 \\cdot (-1) + 2 \\cdot 4)i = 14 +5i." }, { "math_id": 24, "text": "i^2 = i \\cdot i = -1." }, { "math_id": 25, "text": "x^2 \\ge 0" }, { "math_id": 26, "text": "\\overline z = x-yi." }, { "math_id": 27, "text": "z^*" }, { "math_id": 28, "text": "\\overline{\\overline{z}}=z." }, { "math_id": 29, "text": "z \\cdot \\overline z = (x+iy)(x-iy) = x^2 + y^2" }, { "math_id": 30, "text": "|z|=\\sqrt{x^2+y^2}." }, { "math_id": 31, "text": "|z|" }, { "math_id": 32, "text": "|z| = 1 " }, { "math_id": 33, "text": " z = x = x + 0i " }, { "math_id": 34, "text": " |z|= |x| " }, { "math_id": 35, "text": "z = x + yi" }, { "math_id": 36, "text": "\n\\frac{1}{z}\n= \\frac{\\bar{z}}{z\\bar{z}}\n= \\frac{\\bar{z}}{|z|^2}\n= \\frac{x - yi}{x^2 + y^2}\n= \\frac{x}{x^2 + y^2} - \\frac{y}{x^2 + y^2}i." }, { "math_id": 37, "text": "w = u + vi" }, { "math_id": 38, "text": "\n\\frac{w}{z}\n= \\frac{w\\bar{z}}{|z|^2}\n= \\frac{(u + vi)(x - iy)}{x^2 + y^2}\n= \\frac{ux + vy}{x^2 + y^2} + \\frac{vx - uy}{x^2 + y^2}i.\n" }, { "math_id": 39, "text": " 2\\pi " }, { "math_id": 40, "text": "2\\pi" }, { "math_id": 41, "text": " (-\\pi,\\pi] " }, { "math_id": 42, "text": "r = |z|" }, { "math_id": 43, "text": "\\varphi" }, { "math_id": 44, "text": "z=r(\\cos\\varphi +i\\sin\\varphi) " }, { "math_id": 45, "text": " z = r \\operatorname\\mathrm{cis} \\varphi " }, { "math_id": 46, "text": "z = r \\angle \\varphi . " }, { "math_id": 47, "text": "z_1 z_2 = r_1 r_2 (\\cos(\\varphi_1 + \\varphi_2) + i \\sin(\\varphi_1 + \\varphi_2))." }, { "math_id": 48, "text": "\\frac{z_1}{z_2} = \\frac{r_1}{r_2} \\left(\\cos(\\varphi_1 - \\varphi_2) + i \\sin(\\varphi_1 - \\varphi_2)\\right), \\text{if }z_2 \\ne 0." }, { "math_id": 49, "text": "(2+i)(3+i)=5+5i. " }, { "math_id": 50, "text": "\\frac{\\pi}{4} = \\arctan\\left(\\frac{1}{2}\\right) + \\arctan\\left(\\frac{1}{3}\\right) " }, { "math_id": 51, "text": " z^{n}=\\underbrace{z \\cdot \\dots \\cdot z}_{n \\text{ factors}} = (r(\\cos \\varphi + i\\sin \\varphi ))^n = r^n \\, (\\cos n\\varphi + i \\sin n \\varphi)." 
}, { "math_id": 52, "text": "i, i^2 = -1, i^3 = -i, i^4 = 1, i^5 = i, \\dots" }, { "math_id": 53, "text": "z^{1/n} = \\sqrt[n]r \\left( \\cos \\left(\\frac{\\varphi+2k\\pi}{n}\\right) + i \\sin \\left(\\frac{\\varphi+2k\\pi}{n}\\right)\\right)" }, { "math_id": 54, "text": "\\sqrt[n]r" }, { "math_id": 55, "text": "z \\ne 0" }, { "math_id": 56, "text": "z_1 = 1, z_2 = i, z_3 = -1, z_4 = -i." }, { "math_id": 57, "text": "a_n z^n + \\dotsb + a_1 z + a_0 = 0" }, { "math_id": 58, "text": "\\Q" }, { "math_id": 59, "text": "\\R" }, { "math_id": 60, "text": "\\sqrt{81 - 144}" }, { "math_id": 61, "text": "\\sqrt{-63} = 3i\\sqrt{7}" }, { "math_id": 62, "text": "\\sqrt{144 - 81} = 3\\sqrt{7}." }, { "math_id": 63, "text": "\\sqrt{-1}^2 = \\sqrt{-1}\\sqrt{-1} = -1" }, { "math_id": 64, "text": "\\sqrt{a}\\sqrt{b} = \\sqrt{ab}" }, { "math_id": 65, "text": "\\frac{1}{\\sqrt{a}} = \\sqrt{\\frac{1}{a}}" }, { "math_id": 66, "text": "\\sqrt{-1}" }, { "math_id": 67, "text": "(\\cos \\theta + i\\sin \\theta)^{n} = \\cos n \\theta + i\\sin n \\theta. " }, { "math_id": 68, "text": "e ^{i\\theta } = \\cos \\theta + i\\sin \\theta " }, { "math_id": 69, "text": "r = \\sqrt{a^2 + b^2}" }, { "math_id": 70, "text": "\\C" }, { "math_id": 71, "text": "p(X) = a_nX^n+\\dotsb+a_1X+a_0," }, { "math_id": 72, "text": "\\R[X]" }, { "math_id": 73, "text": "p(i) = a_n i^n + \\dotsb + a_1 i + a_0" }, { "math_id": 74, "text": "X = i" }, { "math_id": 75, "text": "\\R[X] \\to \\C" }, { "math_id": 76, "text": "a+bX" }, { "math_id": 77, "text": "X^2 + 1" }, { "math_id": 78, "text": "i^2 + 1 = 0." }, { "math_id": 79, "text": "\\R[X] / (X^2 + 1) \\stackrel \\cong \\to \\C" }, { "math_id": 80, "text": "\\mathbb{R}" }, { "math_id": 81, "text": "\\R." }, { "math_id": 82, "text": "\n\\begin{pmatrix}\n a & -b \\\\\n b & \\;\\; a\n\\end{pmatrix}.\n" }, { "math_id": 83, "text": "a+ib\\mapsto \\begin{pmatrix}\n a & -b \\\\\n b & \\;\\; a\n\\end{pmatrix}" }, { "math_id": 84, "text": "\\begin{pmatrix}\n \\cos t & - \\sin t \\\\\n \\sin t & \\;\\; \\cos t\n\\end{pmatrix}." }, { "math_id": 85, "text": "\\cos t+i\\sin t" }, { "math_id": 86, "text": "\\mathbb{C}" }, { "math_id": 87, "text": "\\operatorname{d}(z_1, z_2) = |z_1 - z_2|" }, { "math_id": 88, "text": "|z_1 + z_2| \\le |z_1| + |z_2|" }, { "math_id": 89, "text": "\\exp z:= 1+z+\\frac{z^2}{2\\cdot 1}+\\frac{z^3}{3\\cdot 2\\cdot 1}+\\cdots = \\sum_{n=0}^{\\infty} \\frac{z^n}{n!}. " }, { "math_id": 90, "text": "\\exp (1)" }, { "math_id": 91, "text": "e \\approx 2.718" }, { "math_id": 92, "text": "\\exp(i\\varphi) = \\cos \\varphi + i\\sin \\varphi " }, { "math_id": 93, "text": "\\exp(i \\pi) = -1. " }, { "math_id": 94, "text": "\\exp(x) = t" }, { "math_id": 95, "text": "\\ln \\colon \\R^+ \\to \\R ; x \\mapsto \\ln x " }, { "math_id": 96, "text": "\\exp(z+2\\pi i) = \\exp z \\exp (2 \\pi i) = \\exp z" }, { "math_id": 97, "text": "\\exp z = w" }, { "math_id": 98, "text": "\\log w" }, { "math_id": 99, "text": "z = \\log w = \\ln|w| + i\\arg w, " }, { "math_id": 100, "text": "\\R^+ + \\; i \\, \\left(-\\pi, \\pi\\right]" }, { "math_id": 101, "text": "S_0" }, { "math_id": 102, "text": "\\ln \\colon \\; \\Complex^\\times \\; \\to \\; \\; \\; \\R^+ + \\; i \\, \\left(-\\pi, \\pi\\right] ." }, { "math_id": 103, "text": "z \\in \\Complex \\setminus \\left( -\\R_{\\ge 0} \\right)" }, { "math_id": 104, "text": "z \\in -\\R^+ " }, { "math_id": 105, "text": "z^\\omega = \\exp(\\omega \\ln z), " }, { "math_id": 106, "text": "\\ln x" }, { "math_id": 107, "text": "a^{bc} = \\left(a^b\\right)^c." 
}, { "math_id": 108, "text": "f: \\mathbb{C}" }, { "math_id": 109, "text": "z_0" }, { "math_id": 110, "text": "\\lim_{z \\to z_0} {f(z) - f(z_0) \\over z - z_0 }" }, { "math_id": 111, "text": "f'(z_0)" }, { "math_id": 112, "text": "f(z) = \\overline z" }, { "math_id": 113, "text": "\\R^2 \\to \\R^2" }, { "math_id": 114, "text": "\\frac{\\partial f}{\\partial \\overline z} = 0." }, { "math_id": 115, "text": "u, v, w" }, { "math_id": 116, "text": "\\{u, v, w\\}" }, { "math_id": 117, "text": "S(u, v, w) = \\frac {u - w}{u - v}. " }, { "math_id": 118, "text": "S" }, { "math_id": 119, "text": "c" }, { "math_id": 120, "text": "f_c(z)=z^2+c" }, { "math_id": 121, "text": "(x-a)(x-b)(x-c)=0" }, { "math_id": 122, "text": "\\overline{\\mathbb{Q}}" }, { "math_id": 123, "text": "\\mathbb{Q}" }, { "math_id": 124, "text": "x(t) = \\operatorname{Re} \\{X( t ) \\} " }, { "math_id": 125, "text": "X( t ) = A e^{i\\omega t} = a e^{ i \\phi } e^{i\\omega t} = a e^{i (\\omega t + \\phi) } " }, { "math_id": 126, "text": "\\begin{align}\n \\cos((\\omega + \\alpha)t) + \\cos\\left((\\omega - \\alpha)t\\right)\n & = \\operatorname{Re}\\left(e^{i(\\omega + \\alpha)t} + e^{i(\\omega - \\alpha)t}\\right) \\\\\n & = \\operatorname{Re}\\left(\\left(e^{i\\alpha t} + e^{-i\\alpha t}\\right) \\cdot e^{i\\omega t}\\right) \\\\\n & = \\operatorname{Re}\\left(2\\cos(\\alpha t) \\cdot e^{i\\omega t}\\right) \\\\\n & = 2 \\cos(\\alpha t) \\cdot \\operatorname{Re}\\left(e^{i\\omega t}\\right) \\\\\n & = 2 \\cos(\\alpha t) \\cdot \\cos\\left(\\omega t\\right).\n\\end{align}" }, { "math_id": 127, "text": " V(t) = V_0 e^{j \\omega t} = V_0 \\left (\\cos\\omega t + j \\sin\\omega t \\right )," }, { "math_id": 128, "text": " v(t) = \\operatorname{Re}(V) = \\operatorname{Re}\\left [ V_0 e^{j \\omega t} \\right ] = V_0 \\cos \\omega t." }, { "math_id": 129, "text": "\\Complex," }, { "math_id": 130, "text": "\\Complex." }, { "math_id": 131, "text": "\\Q_p" }, { "math_id": 132, "text": "\\mathbb H" }, { "math_id": 133, "text": "\\mathbb R" }, { "math_id": 134, "text": "\\mathbb C," }, { "math_id": 135, "text": "\\begin{align}\n \\mathbb{C} &\\rightarrow \\mathbb{C} \\\\\n z &\\mapsto wz\n \\end{align}" }, { "math_id": 136, "text": "\\begin{pmatrix}\n \\operatorname{Re}(w) & -\\operatorname{Im}(w) \\\\\n \\operatorname{Im}(w) & \\operatorname{Re}(w)\n \\end{pmatrix}," }, { "math_id": 137, "text": "J = \\begin{pmatrix}p & q \\\\ r & -p \\end{pmatrix}, \\quad p^2 + qr + 1 = 0" }, { "math_id": 138, "text": "\\{ z = a I + b J : a,b \\in \\mathbb{R} \\}" }, { "math_id": 139, "text": "\\mathbb R^2." }, { "math_id": 140, "text": "\\mathbb R," }, { "math_id": 141, "text": "\\mathbb H," }, { "math_id": 142, "text": "\\mathbb{O}." }, { "math_id": 143, "text": "\\mathbb R[x]/(x^2-1)" }, { "math_id": 144, "text": "\\mathbb R[x]/(x^2+1)" }, { "math_id": 145, "text": "\\mathbb Q," }, { "math_id": 146, "text": "\\mathbb Q" }, { "math_id": 147, "text": "\\mathbb Q_p" }, { "math_id": 148, "text": "\\mathbb Q_p," }, { "math_id": 149, "text": "\\overline {\\mathbb{Q}_p}" }, { "math_id": 150, "text": "\\mathbb{C}_p" }, { "math_id": 151, "text": "\\mathcal{G}_2^+" } ]
https://en.wikipedia.org/wiki?curid=5826
58264058
Measurable acting group
In mathematics, a measurable acting group is a special group that acts on some space in a way that is compatible with structures of measure theory. Measurable acting groups are found in the intersection of measure theory and group theory, two sub-disciplines of mathematics. Measurable acting groups are the basis for the study of invariant measures in abstract settings, most famously the Haar measure, and the study of stationary random measures. Definition. Let formula_0 be a measurable group, where formula_1 denotes the formula_2-algebra on formula_3 and formula_4 the group law. Let further formula_5 be a measurable space and let formula_6 be the product formula_2-algebra of the formula_2-algebras formula_7 and formula_8. Let formula_3 act on formula_9 with group action formula_10 If formula_11 is a measurable function from formula_12 to formula_13, then it is called a measurable group action. In this case, the group formula_3 is said to act measurably on formula_9. Example: Measurable groups as measurable acting groups. One special case of measurable acting groups are measurable groups themselves. If formula_14, and the group action is the group law, then a measurable group is a group formula_3, acting measurably on formula_3.
[ { "math_id": 0, "text": " (G, \\mathcal G, \\circ) " }, { "math_id": 1, "text": " \\mathcal G " }, { "math_id": 2, "text": " \\sigma " }, { "math_id": 3, "text": " G " }, { "math_id": 4, "text": " \\circ " }, { "math_id": 5, "text": " (S, \\mathcal S) " }, { "math_id": 6, "text": " \\mathcal A \\otimes \\mathcal B " }, { "math_id": 7, "text": " \\mathcal A " }, { "math_id": 8, "text": " \\mathcal B " }, { "math_id": 9, "text": " S " }, { "math_id": 10, "text": " \\Phi \\colon G \\times S \\to S " }, { "math_id": 11, "text": " \\Phi " }, { "math_id": 12, "text": " \\mathcal G \\otimes \\mathcal S " }, { "math_id": 13, "text": " \\mathcal S " }, { "math_id": 14, "text": " S=G " } ]
https://en.wikipedia.org/wiki?curid=58264058
5826615
Representation rigid group
In mathematics, in the representation theory of groups, a group is said to be representation rigid if for every formula_0, it has only finitely many isomorphism classes of complex irreducible representations of dimension formula_0.
[ { "math_id": 0, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=5826615
5826733
Decalage
Aeronautical engineering measurement Decalage on a fixed-wing aircraft is a measure of the relative incidences of wing surfaces. Various sources have defined it in multiple ways, depending on context. In biplanes. Decalage is said to be positive when the upper wing has a higher angle of incidence than the lower wing, and negative when the lower wing's incidence is greater than that of the upper wing. Positive decalage results in greater lift from the upper wing than the lower wing, the difference increasing with the amount of decalage. In a survey of representative biplanes, real-life design decalage is typically zero, with both wings having equal incidence. A notable exception is the Stearman PT-17, which has 4° of incidence in the lower wing, and 3° in the upper wing. Considered from an aerodynamic perspective, it is desirable to have the forward-most wing stall first, which will induce a pitch-down moment, aiding in stall recovery. Biplane designers may use incidence to control stalling behavior, but may also use airfoil selection or other means to accomplish correct behavior. In other fixed-wing aircraft. In other fixed-wing aircraft, "decalage" typically refers to geometric decalage, which is the difference in angle between the wing's chord line and the horizontal stabilizer's chord line. The term "aerodynamic decalage" can refer to a similar angular measure that is taken with respect to each surface's respective zero-lift line rather than its chord line. Aerodynamic decalage may be modified in-flight through control surface deflections, as these change a surface's zero-lift line. Aerodynamic decalage can be used to quickly assess the trim and stability of any two-surface airplane (e.g., a conventional or canard configuration, neglecting fuselage aerodynamics). This is based on the notable property that an airplane in stable (formula_0), trimmed (formula_1), and lifting (formula_2) flight must have positive aerodynamic decalage. In other words, the forward wing will have a higher local wing loading than the aft wing. 
[ { "math_id": 0, "text": "C_{m\\alpha}<0" }, { "math_id": 1, "text": "C_m=0" }, { "math_id": 2, "text": "C_L>0" } ]
https://en.wikipedia.org/wiki?curid=5826733
58267671
Half flux diameter
The half flux diameter or HFD is a measure used by astronomers to describe the star size in an astronomical image. Mainly due to the seeing, stars are not imaged as dots but are spread out in a roughly Gaussian shape. The half flux diameter defines the diameter of a circle around the bright center in which half of the star flux or energy is contained. The other half of the flux is outside this circle. The half flux diameter unit is pixels. The lower the half flux diameter value, the better the seeing and the sharper the image. It is a similar measurement to full width at half maximum (FWHM), but is a more robust measurement, especially for stars out of focus. For a perfect Gaussian shaped star image, both the FWHM and half flux diameter values are theoretically 2.3548 σ. The half flux diameter calculation is an approximate but fast routine, assuming that the half flux diameter line splits the star in equal portions of gravity. Variables: V_i is the flux (the background-subtracted pixel value) of pixel i, d_i is the distance of pixel i from the calculated star center, and N is the number of pixels considered. H is defined so that the flux-weighted moment about it is zero: formula_0 This can be rewritten as: formula_1 H is then: formula_2 HFD is linked to H by: formula_3 Since normally the number of pixels illuminated is small and the calculated star center is not at the center of a pixel, the above summation should be calculated at sub-pixel level or the image should be re-sampled to a higher resolution. Using this approximate method, the half flux diameter of a perfect Gaussian shaped star, highly oversampled, is 2.5066 σ, an offset of +6.4%. 
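As a rough illustration of the summation just described (an addition for this write-up, not from the original article), the sketch below computes H and the HFD for a synthetic, well-sampled Gaussian star; the grid size, the value of sigma, and the assumption of a noiseless, background-free image are simplifying choices made only for the example.

```python
import math

def half_flux_diameter(pixels, cx, cy):
    """Approximate HFD: H = sum(V_i * d_i) / sum(V_i) and HFD = 2 * H.
    `pixels` maps (x, y) to the background-subtracted flux V_i of that pixel;
    (cx, cy) is the calculated star center (may lie at a sub-pixel position)."""
    num = den = 0.0
    for (x, y), v in pixels.items():
        d = math.hypot(x - cx, y - cy)   # distance d_i of pixel i from the center
        num += v * d
        den += v
    return 2.0 * num / den

# Synthetic Gaussian star with sigma = 3 pixels on a 25 x 25 grid (illustrative only).
sigma, size = 3.0, 25
cx = cy = (size - 1) / 2
star = {(x, y): math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
        for x in range(size) for y in range(size)}

print(half_flux_diameter(star, cx, cy))  # close to 2.5066 * sigma, i.e. about 7.5
```

For this well-sampled synthetic star the result is close to the 2.5066 σ value quoted above.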
[ { "math_id": 0, "text": "\n\\sum_{i=1}^N V_i(d_i - H)=0\n" }, { "math_id": 1, "text": "\n\\sum_{i=1}^N V_i H=\\sum_{i=1}^N V_i d_i\n" }, { "math_id": 2, "text": "\nH= \\frac{\\sum_{i=1}^N V_i d_i}{\\sum_{i=1}^N V_i}\n" }, { "math_id": 3, "text": "\nHFD=2 H\n" } ]
https://en.wikipedia.org/wiki?curid=58267671
582702
Quasistatic process
Thermodynamic process in which equilibrium is maintained throughout the process's duration In thermodynamics, a quasi-static process, also known as a quasi-equilibrium process (from Latin "quasi", meaning ‘as if’), is a thermodynamic process that happens slowly enough for the system to remain in internal physical (but not necessarily chemical) thermodynamic equilibrium. An example of this is quasi-static expansion of a mixture of hydrogen and oxygen gas, where the volume of the system changes so slowly that the pressure remains uniform throughout the system at each instant of time during the process. Such an idealized process is a succession of physical equilibrium states, characterized by infinite slowness. Only in a quasi-static thermodynamic process can we exactly define intensive quantities (such as pressure, temperature, specific volume, specific entropy) of the system at any instant during the whole process; otherwise, since no internal equilibrium is established, different parts of the system would have different values of these quantities, so a single value per quantity may not be sufficient to represent the whole system. In other words, when an equation for a change in a state function contains "P" or "T", it implies a quasi-static process. Relation to reversible process. While all reversible processes are quasi-static, most authors do not require a general quasi-static process to maintain equilibrium between system and surroundings and avoid dissipation, which are defining characteristics of a reversible process. For example, quasi-static compression of a system by a piston subject to friction is irreversible; although the system is always in internal thermal equilibrium, the friction ensures the generation of dissipative entropy, which goes against the definition of reversibility. Any engineer would remember to include friction when calculating the dissipative entropy generation. An example of a quasi-static process that is not idealizable as reversible is slow heat transfer between two bodies at two finitely different temperatures, where the heat transfer rate is controlled by a poorly conductive partition between the two bodies. In this case, no matter how slowly the process takes place, the state of the composite system consisting of the two bodies is far from equilibrium, since thermal equilibrium for this composite system requires that the two bodies be at the same temperature. Nevertheless, the entropy change for each body can be calculated using the Clausius equality for reversible heat transfer. 
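To make the last point concrete, here is a small worked sketch (an illustration added here, not part of the original article; the temperatures and the amount of heat transferred are arbitrary sample values). Because each body stays internally in equilibrium, the Clausius equality dS = δQ/T can be applied to each body separately, and the entropy change of the composite system comes out positive, reflecting the irreversibility of heat transfer across a finite temperature difference.

```python
# Slow transfer of a small amount of heat Q from a hot body at T_hot to a cold
# body at T_cold through a poorly conducting partition. Each body remains
# internally at equilibrium, so the Clausius equality applies to each one:
#   dS_body = delta_Q / T_body
T_hot, T_cold = 400.0, 300.0   # kelvin (arbitrary sample values)
Q = 100.0                      # joules, small enough that the temperatures barely change

dS_hot = -Q / T_hot            # the hot body gives up heat
dS_cold = Q / T_cold           # the cold body receives it
print(dS_hot, dS_cold, dS_hot + dS_cold)   # -0.25, 0.333..., 0.0833... J/K (total > 0)
```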
[ { "math_id": 0, "text": "W_{1-2} = \\int P \\, dV = P(V_2 - V_1)" }, { "math_id": 1, "text": "W_{1-2} = \\int P dV = 0" }, { "math_id": 2, "text": "W_{1-2} = \\int P \\, dV," }, { "math_id": 3, "text": "PV = P_1 V_1 = C" }, { "math_id": 4, "text": "W_{1-2} = P_1 V_1 \\ln \\frac{V_2}{V_1}" }, { "math_id": 5, "text": "W_{1-2} = \\frac{P_1 V_1 - P_2 V_2}{n-1}" } ]
https://en.wikipedia.org/wiki?curid=582702
58270684
G. Peter Scott
British mathematician Godfrey Peter Scott, known as Peter Scott, (1944 – 19 September 2023) was a British-American mathematician, known for the Scott core theorem. Education and career. He was born in England to Bernard Scott (a mathematician) and Barbara Scott (a poet and sculptor). After completing his BA at the University of Oxford, Peter Scott received his PhD in 1969 from the University of Warwick under Brian Joseph Sanderson, with thesis "Some Problems in Topology". Scott held appointments at the University of Liverpool from 1968 to 1987, at which time he moved to the University of Michigan, where he was a professor until his retirement in 2018. His research dealt with low-dimensional geometric topology, differential geometry, and geometric group theory. He has done research on the geometric topology of 3-dimensional manifolds, 3-dimensional hyperbolic geometry, minimal surface theory, hyperbolic groups, and Kleinian groups with their associated geometry, topology, and group theory. In 1973, he proved what is now known as the "Scott core theorem" or the "Scott compact core theorem". This states that every 3-manifold formula_0 with finitely generated fundamental group has a compact core formula_1, "i.e.", formula_1 is a compact submanifold such that inclusion induces a homotopy equivalence between formula_1 and formula_0; the submanifold formula_1 is called a "Scott compact core" of the manifold formula_0. He had previously proved that, given a fundamental group formula_2 of a 3-manifold, if formula_2 is finitely generated then formula_2 must be finitely presented. Awards and honours. In 1986, he was awarded the Senior Berwick Prize by the London Mathematical Society. In 2013, he was elected a Fellow of the American Mathematical Society. Death. Scott died of cancer on 19 September 2023.
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": "G" } ]
https://en.wikipedia.org/wiki?curid=58270684
5827181
Diffuse reflectance spectroscopy
Diffuse reflectance spectroscopy, or diffuse reflection spectroscopy, is a subset of absorption spectroscopy. It is sometimes called remission spectroscopy. Remission is the reflection or back-scattering of light by a material, while transmission is the passage of light through a material. The word "remission" implies a direction of scatter, independent of the scattering process. Remission includes both specular and diffusely back-scattered light. The word "reflection" often implies a particular physical process, such as specular reflection. The use of the term "remission spectroscopy" is relatively recent, and found first use in applications related to medicine and biochemistry. While the term is becoming more common in certain areas of absorption spectroscopy, the term "diffuse reflectance" is firmly entrenched, as in diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) and diffuse-reflectance ultraviolet–visible spectroscopy. Mathematical treatments related to diffuse reflectance and transmittance. The mathematical treatments of absorption spectroscopy for scattering materials were originally largely borrowed from other fields. The most successful treatments use the concept of dividing a sample into layers, called plane parallel layers. The treatments are generally those consistent with a two-flux or two-stream approximation. Some of the treatments require all the scattered light, both remitted and transmitted light, to be measured. Others apply only to remitted light, with the assumption that the sample is "infinitely thick" and transmits no light. These are special cases of the more general treatments. There are several general treatments, all of which are compatible with each other, related to the mathematics of plane parallel layers. They are the Stokes formulas, equations of Benford, Hecht finite difference formula, and the Dahm equation. For the special case of infinitesimal layers, the Kubelka–Munk and Schuster–Kortüm treatments also give compatible results. Treatments which involve different assumptions and which yield incompatible results are the Giovanelli exact solutions, and the particle theories of Melamed and Simmons. George Gabriel Stokes. George Gabriel Stokes (not to neglect the later work of Gustav Kirchhoff) is often given credit for having first enunciated the fundamental principles of spectroscopy. In 1862, Stokes published formulas for determining the quantities of light remitted and transmitted from "a pile of plates". He described his work as addressing a "mathematical problem of some interest". He solved the problem using summations of geometric series, but the results are expressed as continuous functions. This means that the results can be applied to fractional numbers of plates, though they have the intended meaning only for an integral number. The results below are presented in a form compatible with discontinuous functions. Stokes used the term "reflexion", not "remission", specifically referring to what is often called regular or specular reflection. In regular reflection, the Fresnel equations describe the physics, which includes both reflection and refraction, at the optical boundary of a plate. A "pile of plates" is still a term of art used to describe a polarizer in which a polarized beam is obtained by tilting a pile of plates at an angle to an unpolarized incident beam. The area of polarization was specifically what interested Stokes in this mathematical problem. 
Stokes formulas for remission from and transmission through a "pile of plates". For a sample that consists of n layers, each having its absorption, remission, and transmission (ART) fractions symbolized by {"a", "r", "t" }, with "a" + "r" + "t" = 1, one may symbolize the ART fractions for the sample as {"Αn", "Rn", "Tn"} and calculate their values by formula_0 formula_1 formula_2 where formula_3 formula_4 and formula_5 Franz Arthur Friedrich Schuster. In 1905, in an article entitled "Radiation through a foggy atmosphere", Arthur Schuster published a solution to the equation of radiative transfer, which describes the propagation of radiation through a medium, affected by absorption, emission, and scattering processes. His mathematics used a two flux approximation; i.e., all light is assumed to travel with a component either in the same direction as the incident beam, or in the opposite direction. He used the word scattering rather than reflection, and considered scatter to be in all directions. He used the symbols k and s for absorption and isotropic scattering coefficients, and repeatedly refers to radiation entering a "layer", which ranges in size from infinitesimal to infinitely thick. In his treatment, the radiation enters the layers at all possible angles, referred to as "diffuse illumination". Kubelka and Munk. In 1931, Paul Kubelka (with Franz Munk) published "An article on the optics of paint", the contents of which has come to be known as the Kubelka-Munk theory. They used absorption and remission (or back-scatter) constants, noting (as translated by Stephen H. Westin) that "an infinitesimal layer of the coating absorbs and scatters a certain constant portion of all the light passing through it". While symbols and terminology are changed here, it seems clear from their language that the terms in their differential equations stand for absorption and backscatter (remission) fractions. They also noted that the reflectance from an infinite number of these infinitesimal layers is "solely a function of the ratio of the absorption and back-scatter (remission) constants "a"0/"r"0, but not in any way on the absolute numerical values of these constants". This turns out to be incorrect for layers of finite thickness, and the equation was modified for spectroscopic purposes (below), but Kubelka-Munk theory has found extensive use in coatings. However, in revised presentations of their mathematical treatment, including that of Kubelka, Kortüm and Hecht (below), the following symbolism became popular, using coefficients rather than fractions: The Kubelka–Munk equation. The Kubelka–Munk equation describes the remission from a sample composed of an infinite number of infinitesimal layers, each having "a"0 as an absorption fraction, and "r"0 as a remission fraction. formula_8 Deane B. Judd. Deane Judd was very interested the effect of light polarization and degree of diffusion on the appearance of objects. He made important contributions to the fields of colorimetry, color discrimination, color order, and color vision. Judd defined the scattering power for a sample as Sd, where d is the particle diameter. This is consistent with the belief that the scattering from a single particle is conceptually more important than the derived coefficients. The above Kubelka–Munk equation can be resolved for the ratio "a"0/"r"0 in terms of "R"∞. 
This led to a very early (perhaps the first) use of the term "remission" in place of "reflectance" when Judd defined a "remission function" as formula_9, where k and s are absorption and scattering coefficients, which replace "a"0 and "r"0 in the Kubelka–Munk equation above. Judd tabulated the remission function as a function of percent reflectance from an infinitely thick sample. This function, when used as a measure of absorption, was sometimes referred to as "pseudo-absorbance", a term which has been used later with other definitions as well. General Electric. In the 1920s and 30s, Albert H. Taylor, Arthur C. Hardy , and others of the General Electric company developed a series of instruments that were capable of easily recording spectral data "in reflection". Their display preference for the data was "% Reflectance". In 1946, Frank Benford published a series of parametric equations that gave results equivalent to the Stokes formulas. The formulas used fractions to express reflectance and transmittance. Equations of Benford. If "A"1, "R"1, and "T"1 are known for the representative layer of a sample, and An, Rn and Tn are known for a layer composed of n representative layers, the ART fractions for a layer with thickness of "n" + 1 are formula_10 formula_11 formula_12 If Ad, Rd and Td are known for a layer with thickness d, the ART fractions for a layer with thickness of "d"/2 are formula_13 formula_14 formula_15 and the fractions for a layer with thickness of 2"d" are formula_16 formula_17 formula_18 If Ax, Rx and Tx are known for layer x and Ay Ry and Ty are known for layer y, the ART fractions for a sample composed of layer x and layer y are formula_19 formula_20 formula_21 The symbol formula_22 refers to the reflectance of layer formula_23 when the direction of illumination is antiparallel to that of the incident beam. The difference in direction is important when dealing with inhomogeneous layers. This consideration was added by Paul Kubelka in 1954. Giovanelli and Chandrasekhar. In 1955, Ron Giovanelli published explicit expressions for several cases of interest which are touted as exact solutions to the radiative transfer equation for a semi-infinite ideal diffuser. His solutions have become the standard against which results from approximate theoretical treatments are measured. Many of the solutions appear deceptively simple due to the work of Subrahmanyan (Chandra) Chandrasekhar. For example, the total reflectance for light incident in the direction μ0 is formula_24 Here ω0 is known as the albedo of single scatter σ/(α+σ), representing the fraction of the radiation lost by scattering in a medium where both absorption (α) and scattering (σ) take place. The function "H"(μ0) is called the H-integral, the values of which were tabulated by Chandrasekhar. Gustav Kortüm. Kortüm was a physical chemist who had a broad range of interests, and published prolifically. His research covered many aspects of light scattering. He began to pull together what was known in various fields into an understanding of how “reflectance spectroscopy” worked. In 1969, the English translation of his book entitled Reflectance Spectroscopy (long in preparation and translation) was published. This book came to dominate thinking of the day for 20 years in the emerging fields of both DRIFTS and NIR Spectroscopy. Kortüm's position was that since regular (or specular) reflection is governed by different laws than diffuse reflection, they should therefore be accorded different mathematical treatments. 
He developed an approach based on Schuster's work by ignoring the emissivity of the clouds in the "foggy atmosphere". If we take α as the fraction of incident light absorbed and σ as the fraction scattered isotropically by a single particle (referred to by Kortüm as the "true coefficients of single scatter"), and define the absorption and isotropic scattering for a layer as formula_25 and formula_26 then: formula_27 This is the same "remission function" as used by Judd, but Kortüm's translator referred to it as "the so-called reflectance function". If we substitute back for the particle properties, we obtain formula_28 and then we obtain the Schuster equation for isotropic scattering: formula_29 Additionally, Kortüm derived "the Kubelka-Munk exponential solution" by defining k and s as the absorption and scattering coefficient per centimeter of the material and substituting: "K" ≡ 2"k" and "S" ≡ 2"s", while pointing out in a footnote that S is a back-scattering coefficient. He wound up with what he called the "Kubelka–Munk function", commonly called the Kubelka–Munk equation: formula_30 Kortüm concluded that "the two constant theory of Kubelka and Munk leads to conclusions accessible to experimental test. In practice these are found to be at least qualitatively confirmed, and suitable conditions fulfilling the assumptions made, quantitatively as well." Kortüm tended to eschew the "particle theories", though he did record that one author, N.T. Melamed of Westinghouse Research Labs, "abandoned the idea of plane parallel layers and substituted them with a statistical summation over individual particles." Hecht and Simmons. In 1966, Harry G. Hecht (with Wesley W. Wendlandt) published a book entitled "Reflectance Spectroscopy", because "unlike transmittance spectroscopy, there were no reference books written on the subject" of "diffuse reflectance spectroscopy", and "the fundamentals were only to be found in the old literature, some of which was not readily accessible". Hecht describes himself as a novice in the field at the time, and said that if he had known that Gustav Kortüm, "a great pillar in the field", was in the process of writing a book on the subject, he "would not have undertaken the task". Hecht was asked to write a review of Kortüm's book and their correspondence concerning it led to Hecht spending a year in Kortüm's laboratories. Kortüm is the author most often cited in the book. One of the features of the remission function emphasized by Hecht was the fact that formula_31 should yield the absorption spectrum displaced by -log "s". While the scattering coefficient might change with particle size, the absorption coefficient, which should be proportional to concentration of an absorber, would be obtainable by a background correction for a spectrum. However, experimental data showed the relationship did not hold in strongly absorbing materials. Many papers were published with various explanations for this failure of the Kubelka-Munk equation. Proposed culprits included: incomplete diffusion, anisotropic scatter ("the invalid assumption that radiation is returned equally in all directions from a given particle"), and presence of regular reflection. The situation resulted in a myriad of models and theories being proposed to correct these supposed deficiencies. The various alternative theories were evaluated and compared. In his book, Hecht reported the mathematics of Stokes and Melamed formulas (which he called “statistical methods”). 
He believed the approach of Melamed, which “involve a summation over individual particles” was more satisfactory than summations over “plane parallel layers”. Unfortunately, Melamed's method failed as the refractive index of the particles approached unity, but he did call attention to the importance of using individual particle properties, as opposed to coefficients that represent averaged properties for a sample. E.L. Simmons used a simplified modification of the particle model to relate diffuse reflectance to fundamental optical constants without the use of the cumbersome equations. In 1975, Simmons evaluated various theories of diffuse reflectance spectroscopy and concluded that a modified particle model theory is probably the most nearly correct. In 1976, Hecht wrote a lengthy paper comprehensively describing the myriad of mathematical treatments that had been proposed to deal with diffuse reflectance. In this paper, Hecht states that he assumed (as did Simmons) that in the plane-parallel treatment, the layers could not be made infinitesimally small, but should be restricted to layers of finite thickness interpreted as the mean particle diameter of the sample. This is also supported by the observation that the ratio of the Kubelka–Munk absorption and scattering coefficients is &lt;templatestyles src="Fraction/styles.css" /&gt;3⁄8 that of corresponding ratio of the Mie coefficients for a sphere. That factor can be rationalized by simple geometric considerations, recognizing that to a first approximation, the absorption is proportional to volume and the scatter is proportional to cross sectional surface area. This is entirely consistent with the Mie coefficients measuring absorption and scatter at a point, and the Kubelka–Munk coefficients measuring scatter by a sphere. To correct this deficiency of the Kubelka–Munk approach, for the case of an infinitely thick sample, Hecht blended the particle and layer methods by replacing the differential equations in the Kubelka–Munk treatment by finite difference equations, and obtained the Hecht finite difference formula: formula_32 Hecht apparently did not know that this result could be generalized, but he realized that the above formula "represents an improvement … and shows the need to consider the particulate nature of scattering media in developing a more precise theory". Karl Norris (USDA), Gerald Birth. Karl Norris pioneered the field of near-infrared spectroscopy. He began by using log(1/"R") as a metric of absorption. While often the samples examined were “infinitely thick”, partially transparent samples were analyzed (especially later) in cells that had a rear reflecting surface (reflector) in a mode called "transflectance". Therefore, the remission from the sample contained light that was back-scattered from the sample, as well as light that was transmitted through the sample, then reflected back to be transmitted through the sample again, thereby doubling the path length. Having no sound theoretical basis for data treatment, Norris used the same electronic processing that was used for absorption data collected in transmission. He pioneered the use of multiple linear regression for analysis of data. Gerry Birth was the founder of the International Diffuse Reflectance Conference (IDRC). He also worked at the USDA. He was known to have a deep desire to have a better understanding of the process of light scattering. 
He teamed up with Harry Hecht (who was active in the early meetings of IDRC) to write the Physics theory chapter, with many photographic illustrations, in an influential Handbook edited by Phil Williams and Karl Norris: "Near-Infrared Technology in the Agricultural and Food Industries". Donald J. Dahm, Kevin D. Dahm. In 1994, Donald and Kevin Dahm began using numerical techniques to calculate remission and transmission from samples of varying numbers of plane parallel layers from absorption and remission fractions for a single layer. Their plan was to "start with a simple model, treat the problem numerically rather than analytically, then look for analytical functions that describe the numerical results. Assuming success with that, the model would be made more complex, allowing more complex analytical expressions to be derived, eventually, leading to an understanding of diffuse reflection at a level that appropriately approximated particulate samples." They were able to show that the fraction of incident light remitted, R, and transmitted, T, by a sample composed of layers, each absorbing a fraction formula_33 and remitting a fraction formula_34 of the light incident upon it, could be quantified by an Absorption/Remission function (symbolized "A"("R","T") and called the ART function), which is constant for a sample composed of any number of identical layers. formula_35 Dahm equation. Also from this process came results for several special cases of two-stream solutions for plane parallel layers. For the case of zero absorption, formula_36 formula_37 formula_38. For the case of infinitesimal layers, formula_39, and the ART function gives results approaching equivalence to the remission function. As the void fraction "v"0 of a layer becomes large, formula_40. The ART function is related to the Kortüm–Schuster equation for isotropic scatter by formula_41. The Dahms argued that the conventional absorption and scattering coefficients, as well as the differential equations which employ them, implicitly assume that a sample is homogeneous at the molecular level. While this is a good approximation for absorption, as the domain of absorption is molecular, the domain of scattering is the particle as a whole. Any approach using continuous mathematics will therefore tend to fail as particles become large. Successful application of theory to a real-world sample using the mathematics of plane parallel layers requires assigning properties to the layers that are representative of the sample as a whole (which does not require extensively reworking the mathematics). Such a layer was termed a representative layer, and the theory was termed the representative layer theory. Furthermore, they argued that it was irrelevant whether the light moving from one layer to another was reflected specularly or diffusely. The reflection and back scatter are lumped together as remission. All light leaving the sample on the same side as the incident beam is termed remission, whether it arises from reflection or back scatter. All light leaving the sample on the opposite side from the incident beam is termed transmission. (In a three-flux or higher treatment, such as Giovanelli's, the forward scatter is not indistinguishable from the directly transmitted light. Additionally, Giovanelli's treatment makes the implied assumption of infinitesimal particles.) They developed a scheme, subject to the limitations of a two-flux model, to calculate the "scatter-corrected absorbance" for a sample.
The decadic absorbance of a scattering sample is defined as −log10("R"+"T") or −log10(1−"A"). For a non scattering sample, "R" = 0, and the expression becomes −log10"T" or log(), which is more familiar. In a non-scattering sample, the absorbance has the property that the numerical value is proportional to sample thickness. Consequently, a scatter-corrected absorbance might reasonably be defined as one that has that property. If one has measured the remission and transmission fractions for a sample, Rs and Ts, then the scatter-corrected absorbance should have half the value for half the sample thickness. By calculating the values for R and T for successively thinner samples ("s", "s", "s", …) using the Benford's equations for half thickness, a place will be reached where, for successive values of n (0,1,2,3...), the expression 2"n" (−log("R"+"T")) becomes constant to within a some specified limit, typically 0.01 absorbance units. This value is the scatter-corrected absorbance. Definitions. Remission. In spectroscopy, "remission" refers to the reflection or back-scattering of light by a material. While seeming similar to the word "re-emission", it is the light which is scattered back from a material, as opposed to that which is "transmitted" through the material. The word "re-emission" connotes no such directional character. Based on the origin of the word "emit", which means "to send out or away", "re-emit" means "to send out again", "transmit" means "to send across or through", and "remit" means "to send back". Plane-parallel layers. In spectroscopy, the term "plane parallel layers" may be employed as a mathematical construct in discussing theory. The layers are considered to be semi-infinite. (In mathematics, semi-infinite objects are objects which are infinite or unbounded in some, but not all, possible ways.) Generally, a semi-infinite layer is envisioned as a being bounded by two flat parallel planes, each extending indefinitely, and normal (perpendicular) to the direction of a collimated (or directed) incident beam. The planes are not necessarily physical surfaces which refract and reflect light, but may just describe a mathematical plane, suspended in space. When the plane parallel layers have surfaces, they have been variously called plates, sheets, or slabs. Representative layer. The term "representative layer" refers to a hypothetical plane parallel layer that has properties relevant to absorption spectroscopy that are representative of a sample as a whole. For particulate samples, a layer is representative if each type of particle in the sample makes up the same fraction of volume and surface area in the layer as in the sample. The void fraction in the layer is also the same as in the sample. Implicit in the representative layer theory is that absorption occurs at the molecular level, but that scatter is from a whole particle. List of principal symbols used. Note: Where a given letter is used in both capital and lower case form (r, R and t ,T ) the capital letter refers to the macroscopic observable and the lower case letter to the corresponding variable for an individual particle or layer of the material. Greek symbols are used for properties of a single particle. 
a – absorption fraction of a single layer r – remission fraction of a single layer t – transmission fraction of a single layer An, Rn, Tn – the absorption, remission, and transmission fractions for a sample composed of n layers α – absorption fraction of a particle β – back-scattering from a particle σ – isotropic scattering from a particle k – absorption coefficient, defined as the fraction of incident light absorbed by a very thin layer divided by the thickness of that layer s – scattering coefficient, defined as the fraction of incident light scattered by a very thin layer divided by the thickness of that layer References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "T_n= \\frac {\\Omega - \\frac {1}{\\Omega}}{\\Omega \\Psi^n- \\frac {1}{\\Omega \\Psi^n}},\\qquad" }, { "math_id": 1, "text": "R_n= \\frac {\\Psi^n - \\frac {1}{\\Psi^n}}{\\Omega \\Psi^n- \\frac {1}{\\Omega \\Psi^n}},\\qquad" }, { "math_id": 2, "text": "A_n = 1 - T_n - R_n," }, { "math_id": 3, "text": "\\Omega = \\frac {1+r^2-t^2+\\Delta}{2r},\\qquad" }, { "math_id": 4, "text": "\\Psi = \\frac {1-r^2+t^2+\\Delta}{2t}" }, { "math_id": 5, "text": "\\Delta = \\sqrt{(1 + r + t)(1 + r - t)(1 - r + t)(1 - r - t)}." }, { "math_id": 6, "text": "K" }, { "math_id": 7, "text": "S" }, { "math_id": 8, "text": "R_\\infty = 1 + \\frac {a_0}{r_0} - \\sqrt{\\frac {a_0^2}{r_0^2} + 2 \\frac {a_0}{r_0} }" }, { "math_id": 9, "text": "\\frac{(1-R_\\infty)^2}{2R_\\infty} = \\frac{k}{s}" }, { "math_id": 10, "text": "T_{n+1} = \\frac {T_n T_1}{1-R_n R_1},\\qquad" }, { "math_id": 11, "text": "R_{n+1} = R_n + \\frac {T_n^2 R_1}{1-R_n R_1},\\qquad" }, { "math_id": 12, "text": "A_{n+1} = 1 - T_{n+1} - R_{n+1}" }, { "math_id": 13, "text": "R_{d/2} = \\frac {R_d}{1+T_d},\\qquad" }, { "math_id": 14, "text": "T_{d/2} = \\sqrt{T_d (1-R_{d/2}^2)},\\qquad" }, { "math_id": 15, "text": "A_{d/2} = 1 - T_{d/2} - R_{d/2}," }, { "math_id": 16, "text": "T_{2d} = \\frac {T_d^2}{1-R_d^2},\\qquad" }, { "math_id": 17, "text": "R_{2d} = R_d (1 + T_{2d}), \\qquad" }, { "math_id": 18, "text": "A_{2d} = 1 - T_{2d} - R_{2d}" }, { "math_id": 19, "text": "T_{x+y} = \\frac {T_x T_y}{1-R_{(-x)} R_y},\\qquad" }, { "math_id": 20, "text": "R_{x+y} = R_x + \\frac {T_x^2 R_y}{1-R_{(-x)} R_y},\\qquad" }, { "math_id": 21, "text": "A_{x+y} = 1 - T_{x+y} - R_{x+y}" }, { "math_id": 22, "text": "R_{(-x)}" }, { "math_id": 23, "text": "x" }, { "math_id": 24, "text": "R(\\mu_0) = 1 - H(\\mu_0) \\sqrt{1-\\omega_0} " }, { "math_id": 25, "text": "k=\\frac {2\\alpha}{\\alpha+\\sigma}" }, { "math_id": 26, "text": "s=\\frac{\\sigma}{\\alpha+\\sigma}" }, { "math_id": 27, "text": "\\frac {(1-R_\\infty)^2}{2 R_\\infty} = \\frac {k}{s}" }, { "math_id": 28, "text": "\\frac {k}{s} = \\frac {\\left( \\frac {2\\alpha}{\\alpha + \\sigma}\\right)}{\\left( \\frac {\\sigma}{\\alpha + \\sigma}\\right)} = 2 \\frac {\\alpha}{\\sigma}" }, { "math_id": 29, "text": "F(R_\\infty) = \\frac {(1-R_\\infty)^2}{2R_\\infty} = 2\\frac{\\alpha}{\\sigma}" }, { "math_id": 30, "text": "F(R_\\infty) \\equiv \\frac {(1-R_\\infty)^2}{2R_\\infty} = \\frac{K}{S}" }, { "math_id": 31, "text": "\\log F(R_\\infty) = \\log k - \\log s" }, { "math_id": 32, "text": "F(R_\\infty) = a\\left(\\frac{1}{r} - 1\\right) - \\frac {a^2}{2r}" }, { "math_id": 33, "text": "a" }, { "math_id": 34, "text": "r" }, { "math_id": 35, "text": "A(R,T) \\equiv \\frac {(1-R_n)^2-T_n^2}{R_n} = \\frac {(2-a-2r)a}{r} = \\frac {a(1+t-r)}{r}." }, { "math_id": 36, "text": "R_n = \\frac {nr}{nr+t},\\qquad" }, { "math_id": 37, "text": "T_n = \\frac {t}{nr+t}," }, { "math_id": 38, "text": "R_n + T_n = 1" }, { "math_id": 39, "text": "A(R_\\infty,0) = \\frac {(2-a-2r)a}{r} \\approx 2 \\frac {a}{r} = 2F(R_\\infty)" }, { "math_id": 40, "text": "\\lim_{v_0 \\to 1} A(R,T) = \\frac {(2-\\alpha-2\\beta)\\alpha}{\\beta} \\approx 2\\frac{\\alpha}{\\beta}" }, { "math_id": 41, "text": "\\lim_{v_0 \\to 1} A(R,T) = 4\\frac{\\alpha}{\\sigma}" } ]
https://en.wikipedia.org/wiki?curid=5827181
5827247
Auction theory
Branch of applied economics regarding the behavior of bidders in auctions Auction theory is a branch of applied economics that deals with how bidders act in auctions and researches how the features of auctions incentivise predictable outcomes. Auction theory is a tool used to inform the design of real-world auctions. Sellers use auction theory to raise higher revenues while allowing buyers to procure at a lower cost. The confluence of the price between the buyer and seller is an economic equilibrium. Auction theorists design rules for auctions to address issues that can lead to market failure. The design of these rulesets encourages optimal bidding strategies in a variety of informational settings. The 2020 Nobel Prize for Economics was awarded to Paul R. Milgrom and Robert B. Wilson "for improvements to auction theory and inventions of new auction formats." Introduction. Auctions facilitate transactions by enforcing a specific set of rules regarding the resource allocations of a group of bidders. Theorists consider auctions to be economic games that have two aspects: format and information. The format defines the rules for the announcement of prices, the placement of bids, the updating of prices, when the auction closes, and the way a winner is picked. The way auctions differ with respect to information regards the asymmetries of information that exist between bidders. In most auctions, bidders have some private information that they choose to withhold from their competitors. For example, bidders usually know their personal valuation of the item, which is unknown to the other bidders and the seller; however, the behaviour of bidders can influence valuations by other bidders. History. A purportedly historical event related to auctions is a custom in Babylonia, namely when men make an offers to women in order to marry them. The more familiar the auction system is, the more situations where auctions are conducted. There are auctions for various things, such as livestock, rare and unusual items, and financial assets. Non-cooperative games have a long history, beginning with Cournot's "duopoly" model. A 1994 Nobel Laureate for Economic Sciences, John Nash, proved a general-existence theorem for non-cooperative games, which moves beyond simple zero-sum games. This theory was generalized by Vickrey (1961) to deal with the unobservable value of each buyer. By the early 1970s, auction theorists had begun defining equilibrium bidding conditions for single-object auctions under most realistic auction formats and information settings. Recent developments in auction theory consider how multiple-object auctions can be performed efficiently. Auction types. There are traditionally four types of auctions that are used for the sale of a single item: Most auction theory revolves around these four "basic" auction types. However, other types have also received some academic study (see ). Developments in the world and in technology have also influenced the current auction system. With the existence of the internet, online auctions have become an option. Auction process. There are six basic activities that complement the auction-based trading process: Auction envelope theorem. The auction envelope theorem defines certain probabilities expected to arise in an auction. Benchmark model. The "benchmark model" for auctions, as defined by McAfee and McMillan (1987), is as follows: Win probability. In an auction a buyer bidding formula_0 wins if the opposing bidders make lower bids. 
The mapping from valuations to bids is strictly increasing; the high-valuation bidder therefore wins. In statistics the probability of having the "first" valuation is written as: formula_1 With independent valuations and N other bidders formula_2 The auction. A buyer's payoff is formula_3 Let formula_4 be the bid that maximizes the buyer's payoff. Therefore formula_5 The equilibrium payoff is therefore formula_6 Necessary condition for the maximum: formula_7 when formula_8 The final step is to take the total derivative of the equilibrium payoff formula_9 The second term is zero. Therefore formula_10 Then formula_10formula_11 Example uniform distribution with two buyers. For the uniform distribution the probability if having a higher value that one other buyer is formula_12. Then formula_13 The equilibrium payoff is therefore formula_14. The win probability is formula_15. formula_6 Then formula_16. Rearranging this expression, formula_17 With three buyers, formula_10formula_18, then formula_19 With formula_20 buyers formula_21 Lebrun (1996) provides a general proof that there are no asymmetric equilibriums. Optimal auctions. Auctions from a buyer's perspective. The revelation principle is a simple but powerful insight. In 1979 proved a general revenue equivalence theorem that applies to all buyers and hence to the seller. Their primary interest was finding out which auction rule would be better for the buyers. For example, there might be a rule that all buyers pay a nonrefundable bid (such auctions are conducted on-line). The equivalence theorem shows that any allocation mechanism or auction that satisfies the four main assumptions of the benchmark model will lead to the same expected revenue for the seller. (Buyer "i" with value "v" has the same "payoff" or "buyer surplus" across all auctions.) Symmetric auctions with correlated valuation distributions. The first model for a broad class of models was Milgrom and Weber's (1983) paper on auctions with affiliated valuations. In a recent working paper on general asymmetric auctions, Riley (2022) characterized equilibrium bids for all valuation distributions. Each buyer's valuation can be positively or negatively correlated. The revelation principle as applied to auctions is that the marginal buyer payoff or "buyer surplus" is P(v), the probability of being the winner. In every participant-efficient auction, the probability of winning is 1 for a high-valuation buyer. The marginal payoff to a buyer is therefore the same in every such auction. The payoff must therefore be the same as well. Auctions from the seller's perspective (revenue maximization). Quite independently and soon after, used the revelation principle to characterize revenue-maximizing sealed high-bid auctions. In the "regular" case this is a participation-efficient auction. Setting a reserve price is therefore optimal for the seller. In the "irregular" case it has since been shown that the outcome can be implemented by prohibiting bids in certain sub-intervals. Relaxing each of the four main assumptions of the benchmark model yields auction formats with unique characteristics. The theory of efficient trading processes developed in a static framework relies heavily on the premise of non-repetition. For example, an auction-seller-optimal design (as derived in Myerson) involves the best lowest price that exceeds both the seller's valuation and the lowest possible buyer's valuation. Game-theoretic models. 
A game-theoretic auction model is a mathematical game represented by a set of players, a set of actions (strategies) available to each player, and a payoff vector corresponding to each combination of strategies. Generally, the players are the buyer(s) and the seller(s). The action set of each player is a set of bid functions or reservation prices (reserves). Each bid function maps the player's value (in the case of a buyer) or cost (in the case of a seller) to a bid price. The payoff of each player under a combination of strategies is the expected utility (or expected profit) of that player under that combination of strategies. Game-theoretic models of auctions and strategic bidding generally fall into either of the following two categories. In a private values model, each participant (bidder) assumes that each of the competing bidders obtains a random "private value" from a probability distribution. In a common value model, the participants have equal valuations of the item, but they do not have perfectly accurate information to arrive at this valuation. In lieu of knowing the exact value of the item, each participant can assume that any other participant obtains a random signal, which can be used to estimate the true value, from a probability distribution common to all bidders. Usually, but not always, the private-values model assumes that the valuations are independent across bidders, whereas a common-value model usually assumes that the valuations are independent up to the common parameters of the probability distribution. A more general category for strategic bidding is the "affiliated values model", in which the bidder's total utility depends on both their individual private signal and some unknown common value. Both the private value and common value models can be perceived as extensions of the general affiliated values model. When it is necessary to make explicit assumptions about bidders' value distributions, most of the published research assumes symmetric bidders. This means that the probability distribution from which the bidders obtain their values (or signals) is identical across bidders. In a private values model which assumes independence, symmetry implies that the bidders' values are "i.i.d." – independently and identically distributed. An important example (which does not assume independence) is Milgrom and Weber's "general symmetric model" (1982). Asymmetric auctions. The earliest paper on asymmetric value distributions is by Vickrey (1961). One buyer's valuation is uniformly distributed over the closed interval [0,1]. The other buyer has a known value of 1/2. Both the equilibrium and uniform bid distributions will support [0,1/2]. Jump-bidding: Suppose that the buyers' valuations are uniformly distributed on [0,1] and [0,2] and buyer 1 has the wider support. Then both continue to bid half their valuations "except" at v=1. The jump bid: buyer 2 jumps from bidding 1/2 to bidding 3/4. If buyer 1 follows suit she halves her profit margin and less than doubles her win probability (because of the tie-breaking rule, a coin toss). So buyer 2 does not jump. This makes buyer 1 much better off. He wins for sure if his valuation is above 1/2. The next paper, by Maskin and Riley (2000), provides a qualitative characterization of equilibrium bids when the "strong buyer" S has a value distribution that dominates that of the "weak buyer" under the assumption of conditional stochastic dominance (first-order stochastic dominance for every right-truncated value distribution).
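In contrast with these asymmetric cases, the symmetric equilibrium derived earlier, in which each of n bidders with values drawn uniformly on [0,1] bids the fraction (n-1)/n of their value, lends itself to a quick numerical check. The following Python sketch is purely illustrative (the function name, the number of bidders and the number of simulated auctions are arbitrary choices); it also anticipates the revenue equivalence result discussed in the next section, since the simulated first-price and second-price revenues agree.

import random

def simulate(n_bidders, rounds, seed=1):
    # symmetric independent private values, each drawn from U[0, 1]
    rng = random.Random(seed)
    first_price = second_price = 0.0
    for _ in range(rounds):
        values = sorted(rng.random() for _ in range(n_bidders))
        # first-price: each bidder shades to ((n - 1) / n) * value; the top bid is paid
        first_price += (n_bidders - 1) / n_bidders * values[-1]
        # second-price: truthful bidding; the second-highest value is paid
        second_price += values[-2]
    return first_price / rounds, second_price / rounds

fp, sp = simulate(n_bidders=4, rounds=200_000)
print(fp, sp)   # both close to (n - 1) / (n + 1) = 0.6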
Another early contribution is Keith Waehrer's 1999 article. Later published research includes Susan Athey's 2001 "Econometrica" article, as well as that by Reny and Zamir (2004). Revenue equivalence. One of the major findings of auction theory is the revenue equivalence theorem. Early equivalence results focused on a comparison of revenues in the most common auctions. The first such proof, for the case of two buyers and uniformly distributed values, was by . In 1979 proved a much more general result. (Quite independently and soon after, this was also derived by ). The revenue equivalence theorem states that any allocation mechanism, or auction that satisfies the four main assumptions of the benchmark model, will lead to the same expected revenue for the seller (and player "i" of type "v" can expect the same surplus across auction types). The basic version of the theorem asserts that, as long as the Symmetric Independent Private Value (SIPV) environment assumption holds, all standard auctions give the same expected profit to the auctioneer and the same expected surplus to the bidder. Winner's curse. The winner's curse is a phenomenon which can occur in "common value" settings—when the actual values to the different bidders are unknown but correlated, and the bidders make bidding decisions based on estimated values. In such cases, the winner will tend to be the bidder with the highest estimate, but the results of the auction will show that the remaining bidders' estimates of the item's value are less than that of the winner, giving the winner the impression that they "bid too much". In an equilibrium of such a game, the winner's curse does not occur because the bidders account for the bias in their bidding strategies. Behaviorally and empirically, however, the winner's curse is a common phenomenon, described in detail by Richard Thaler. Optimal auctions. With identically and independently distributed private valuations, Riley and Samuelson (1981) showed that in any auction or auction-like action (such as the "War of Attrition") the allocation is "participant efficient", i.e. the item is allocated to the buyer submitting the highest bid, with a probability of 1. They then showed that allocation equivalence implied payoff equivalence for all reserve prices. They further showed that discriminating against low-value buyers by setting a minimum, or reserve, price would increase expected revenue. Along with Myerson, they showed that the most profitable reserve price is independent of the number of bidders. The reserve price only comes into play if there is a single bid. Thus it is equivalent to ask what reserve price would maximize the revenue from a single buyer. If values are uniformly distributed over the interval [0, 100], then the probability p(r) that this buyer's value is at least r is p(r) = (100-r)/100. Therefore the expected revenue is p(r)*r = (100 - r)*r/100 = 25 - (r - 50)*(r - 50)/100. Thus, the expected revenue-maximizing reserve price is 50. Also examined is the question of whether it might ever be more profitable to design a mechanism that awards the item to a bidder other than one with the highest value. Surprisingly, this is the case. As Maskin and Riley then showed, this is equivalent to excluding bids over certain intervals above the optimal reserve price. Bulow and Klemperer (1996) have shown that an auction with n bidders and an optimally chosen reserve price generates a smaller profit for the seller than a standard auction with n+1 bidders and no reserve price. JEL classification.
In the Journal of Economic Literature Classification System, game theory is classified as C7, under Mathematical and Quantitative Methods, and auctions are classified as D44, under Microeconomics. Applications to business strategy. Scholars of managerial economics have noted some applications of auction theory in business strategy. Namely, auction theory can be applied to "preemption games" and "attrition games". Preemption games are games where entrepreneurs preempt other firms by entering a market with new technology before it's ready for commercial deployment. The value generated from waiting for the technology to become commercially viable also increases the risk that a competitor will enter the market preemptively. Preemptive games can be modeled as a first-priced sealed auction. Both companies would prefer to enter the market when the technology is ready for commercial deployment; this can be considered the valuation by both companies. However, one firm might hold information stating that technology is viable earlier than the other firm believes. The company with better information would then "bid" to enter the market earlier, even as the risk of failure is higher. Games of attrition are games of preempting other firms to leave the market. This often occurs in the airline industry as these markets are considered highly contestable. As a new airline enters the market, they will decrease prices to gain market share. This forces established airlines to also decrease prices to avoid losing market share. This creates an auction game. Usually, market entrants will use a strategy of attempting to bankrupt established firms. Thus, the auction is measured in how much each firm is willing to lose as they stay in the game of attrition. The firm that lasts the longest in the game wins the market share. This strategy has been used more recently by entertainment streaming services such as Netflix, Hulu, Disney+, and HBO Max which are all loss-making firms attempting to gain market share by bidding to expand entertainment content. Nobel Prize. Two Stanford University professors, Paul Milgrom and Robert Wilson, won the 2020 Nobel Prize in Economics for advancing auction theory by inventing several new auction formats, including the simultaneous multiple-round auction (SMRA), which combines the benefit of both the English (open-outcry), and sealed-bid, auctions. SMRAs are deemed to solve a problem facing the Federal Communications Commission (FCC). If the FCC were to sell all of its telecommunication frequency slots by using a traditional auction method, it would eventually either give away licenses for free or end up with a telecom monopoly in the United States. The process of simultaneous multiple-round auctions is that there are three- to four-round auctions. Every bidder seals their bid, and the auctioneer announces the highest bid to all bidders at the end of each round. All the bidders can adjust and change their auction price and strategy after they listen to the highest bid in a particular round. The auction will continue until the highest bid of the current round is lower than the previous round's highest bid. SMRA's first distinguishing feature is that the auction is taking place simultaneously for different items; therefore, it seriously increases the cost for speculators. For the same reason, sealed bidding can ensure that all bidding reflects the bidder’s valuation of the product. 
The second difference is that the bidding takes place in numerous rounds and the highest price of bidding is announced each round, allowing bidders to learn more about their competitors' preferences and information and to adjust their strategy accordingly, thus decreasing the effect of asymmetric information inside the auction. In addition, multiple-round bidding can maintain the bidder's activity in the auction. It has substantially increased the information the bidder has about the highest bid, because at the end of every round, the host will announce the highest bid after the bidding. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "B(v)" }, { "math_id": 1, "text": "W=F_{({\\scriptstyle\\text{1}})}(v)" }, { "math_id": 2, "text": "W=F(v)^N" }, { "math_id": 3, "text": " u(v,b) = w(b)(v-b)" }, { "math_id": 4, "text": "B" }, { "math_id": 5, "text": "u(v,B)>u(v,b)=W(b)(v-b)" }, { "math_id": 6, "text": "U(v)=W(B)(v-B))" }, { "math_id": 7, "text": "\\partial u/\\partial b=0" }, { "math_id": 8, "text": "b=B" }, { "math_id": 9, "text": "U'(v)=W(B)+\\partial u/\\partial b" }, { "math_id": 10, "text": "U'(v)=W" }, { "math_id": 11, "text": "=F_{({\\scriptstyle\\text{1}})}(v)" }, { "math_id": 12, "text": "F(v)=v" }, { "math_id": 13, "text": "U'(v)=v" }, { "math_id": 14, "text": "U(v)=\\textstyle \\int_{0}^{v} \\displaystyle xdx=(1/2)v^2" }, { "math_id": 15, "text": "W=F(v)=v" }, { "math_id": 16, "text": "(1/2)v^2=v(v-B(v))" }, { "math_id": 17, "text": "B(v)=(1/2)v" }, { "math_id": 18, "text": "=F_{({\\scriptstyle\\text{1}})}(v)=F(v)^2=v^2" }, { "math_id": 19, "text": "B(v)=(2/3)v" }, { "math_id": 20, "text": "N+1" }, { "math_id": 21, "text": "B(v)=(N/(N+1))v" } ]
https://en.wikipedia.org/wiki?curid=5827247
582770
Particle number
Number of particles in a thermodynamic system In thermodynamics, the particle number (symbol N) of a thermodynamic system is the number of constituent particles in that system. The particle number is a fundamental thermodynamic property which is conjugate to the chemical potential. Unlike most physical quantities, the particle number is a dimensionless quantity, specifically a countable quantity. It is an extensive property, as it is directly proportional to the size of the system under consideration and thus meaningful only for closed systems. A constituent particle is one that cannot be broken into smaller pieces at the scale of energy k·T involved in the process (where k is the Boltzmann constant and T is the temperature). For example, in a thermodynamic system consisting of a piston containing water vapour, the particle number is the number of water molecules in the system. The meaning of constituent particles, and thereby of particle numbers, is thus temperature-dependent. Determining the particle number. The concept of particle number plays a major role in theoretical considerations. In situations where the actual particle number of a given thermodynamical system needs to be determined, mainly in chemistry, it is not practically possible to measure it directly by counting the particles. If the material is homogeneous and has a known "amount of substance" "n" expressed in "moles", the particle number "N" can be found by the relation : formula_0, where "NA" is the Avogadro constant. Particle number density. A related intensive system parameter is the particle number density (or particle number concentration PNC), a quantity of kind volumetric number density obtained by dividing the particle number of a system by its volume. This parameter is often denoted by the lower-case letter "n". In quantum mechanics. In quantum mechanical processes, the total number of particles may not be preserved. The concept is therefore generalized to the particle number operator, that is, the observable that counts the number of constituent particles. In quantum field theory, the particle number operator (see Fock state) is conjugate to the phase of the "classical" wave (see coherent state). In air quality. One measure of air pollution used in air quality standards is the atmospheric concentration of particulate matter. This measure is usually expressed in μg/m3 (micrograms per cubic metre). In the current EU emission norms for cars, vans, and trucks and in the upcoming EU emission norm for non-road mobile machinery, particle number measurements and limits are defined, commonly referred to as "PN", with units [#/km] or [#/kWh]. In this case, PN expresses a quantity of particles per unit distance (or work). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " N = nN_A" } ]
https://en.wikipedia.org/wiki?curid=582770
58278053
Schwarz lantern
Near-cylindrical polyhedron with large area In mathematics, the Schwarz lantern is a polyhedral approximation to a cylinder, used as a pathological example of the difficulty of defining the area of a smooth (curved) surface as the limit of the areas of polyhedra. It is formed by stacked rings of isosceles triangles, arranged within each ring in the same pattern as an antiprism. The resulting shape can be folded from paper, and is named after mathematician Hermann Schwarz and for its resemblance to a cylindrical paper lantern. It is also known as Schwarz's boot, Schwarz's polyhedron, or the Chinese lantern. As Schwarz showed, for the surface area of a polyhedron to converge to the surface area of a curved surface, it is not sufficient to simply increase the number of rings and the number of isosceles triangles per ring. Depending on the relation of the number of rings to the number of triangles per ring, the area of the lantern can converge to the area of the cylinder, to a limit arbitrarily larger than the area of the cylinder, or to infinity—in other words, the area can diverge. The Schwarz lantern demonstrates that sampling a curved surface by close-together points and connecting them by small triangles is inadequate to ensure an accurate approximation of area, in contrast to the accurate approximation of arc length by inscribed polygonal chains. The phenomenon that closely sampled points can lead to inaccurate approximations of area has been called the Schwarz paradox. The Schwarz lantern is an instructive example in calculus and highlights the need for care when choosing a triangulation for applications in computer graphics and the finite element method. History and motivation. Archimedes approximated the circumference of circles by the lengths of inscribed or circumscribed regular polygons. More generally, the length of any smooth or rectifiable curve can be defined as the supremum of the lengths of polygonal chains inscribed in them. However, for this to work correctly, the vertices of the polygonal chains must lie on the given curve, rather than merely near it. Otherwise, in a counterexample sometimes known as the staircase paradox, polygonal chains of vertical and horizontal line segments of total length formula_0 can lie arbitrarily close to a diagonal line segment of length formula_1, converging in distance to the diagonal segment but not converging to the same length. The Schwarz lantern provides a counterexample for surface area rather than length, and shows that for area, requiring vertices to lie on the approximated surface is not enough to ensure an accurate approximation. German mathematician Hermann Schwarz (1843–1921) devised his construction in the late 19th century as a counterexample to the erroneous definition in J. A. Serret's 1868 book , which incorrectly states that: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Let a portion of curved surface be bounded by a contour formula_2; we will define the area of this surface to be the limit formula_3 tended towards by the area of an inscribed polyhedral surface formed from triangular faces and bounded by a polygonal contour formula_4 whose limit is the contour formula_2. It must be shown that the limit formula_3 exists and that it is independent of the law according to which the faces of the inscribed polyhedral surface shrink. Independently of Schwarz, Giuseppe Peano found the same counterexample. 
At the time, Peano was a student of Angelo Genocchi, who, from communication with Schwarz, already knew about the difficulty of defining surface area. Genocchi informed Charles Hermite, who had been using Serret's erroneous definition in his course. Hermite asked Schwarz for details, revised his course, and published the example in the second edition of his lecture notes (1883). The original note from Schwarz to Hermite was not published until the second edition of Schwarz's collected works in 1890. An instructive example of the value of careful definitions in calculus, the Schwarz lantern also highlights the need for care in choosing a triangulation for applications in computer graphics and for the finite element method for scientific and engineering simulations. In computer graphics, scenes are often described by triangulated surfaces, and accurate rendering of the illumination of those surfaces depends on the direction of the surface normals. A poor choice of triangulation, as in the Schwarz lantern, can produce an accordion-like surface whose normals are far from the normals of the approximated surface, and the closely-spaced sharp folds of this surface can also cause problems with aliasing. The failure of Schwarz lanterns to converge to the cylinder's area only happens when they include highly obtuse triangles, with angles close to 180°. In restricted classes of Schwarz lanterns using angles bounded away from 180°, the area converges to the same area as the cylinder as the number of triangles grows to infinity. The finite element method, in its most basic form, approximates a smooth function (often, the solution to a physical simulation problem in science or engineering) by a piecewise-linear function on a triangulation. The Schwarz lantern's example shows that, even for simple functions such as the height of a cylinder above a plane through its axis, and even when the function values are calculated accurately at the triangulation vertices, a triangulation with angles close to 180° can produce highly inaccurate simulation results. This motivates mesh generation methods for which all angles are bounded away from 180°, such as nonobtuse meshes. Construction. The discrete polyhedral approximation considered by Schwarz can be described by two parameters: formula_6, the number of rings of triangles in the Schwarz lantern; and formula_7, half of the number of triangles per ring. For a single ring (formula_5), the resulting surface consists of the triangular faces of an antiprism of order formula_7. For larger values of formula_6, the Schwarz lantern is formed by stacking formula_6 of these antiprisms. To construct a Schwarz lantern that approximates a given right circular cylinder, the cylinder is sliced by parallel planes into formula_6 congruent cylindrical rings. These rings have formula_8 circular boundaries—two at the ends of the given cylinder, and formula_9 more where it was sliced. In each circle, formula_7 vertices of the Schwarz lantern are spaced equally, forming a regular polygon. These polygons are rotated by an angle of formula_10 from one circle to the next, so that each edge from a regular polygon and the nearest vertex on the next circle form the base and apex of an isosceles triangle. These triangles meet edge-to-edge to form the Schwarz lantern, a polyhedral surface that is topologically equivalent to the cylinder. 
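The construction just described can be reproduced numerically. The following Python sketch is only an illustration (the function names, the radius, the length, and the parameters formula_6 and formula_7 are arbitrary choices): it builds the rings of vertices, sums the areas of all of the triangles directly from their coordinates, and its output can be compared with the closed-form area expression derived in the next section. It also hints at the behavior discussed under "Limits" below: when the number of rings grows like the square of the number of triangles per ring, the total triangle area noticeably exceeds the cylinder area.

import math

def lantern_area(r, length, m, n):
    # m + 1 rings of n vertices each; consecutive rings are offset by pi/n
    rings = []
    for j in range(m + 1):
        z = j * length / m
        offset = j * math.pi / n
        rings.append([(r * math.cos(2 * math.pi * k / n + offset),
                       r * math.sin(2 * math.pi * k / n + offset), z)
                      for k in range(n)])

    def tri_area(p, q, s):
        # half the norm of the cross product of two edge vectors
        u = [q[i] - p[i] for i in range(3)]
        v = [s[i] - p[i] for i in range(3)]
        c = (u[1] * v[2] - u[2] * v[1],
             u[2] * v[0] - u[0] * v[2],
             u[0] * v[1] - u[1] * v[0])
        return 0.5 * math.sqrt(sum(x * x for x in c))

    total = 0.0
    for j in range(m):
        lo, hi = rings[j], rings[j + 1]
        for k in range(n):
            # triangle with its base on the lower ring and apex on the upper ring
            total += tri_area(lo[k], lo[(k + 1) % n], hi[k])
            # triangle with its base on the upper ring and apex on the lower ring
            total += tri_area(hi[k], hi[(k + 1) % n], lo[(k + 1) % n])
    return total

print(lantern_area(1.0, 1.0, 10, 50))       # about 6.28, close to 2*pi*r*l
print(lantern_area(1.0, 1.0, 50 ** 2, 50))  # about 31.6, well above 2*pi*r*l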
Ignoring top and bottom vertices, each vertex touches two apex angles and four base angles of congruent isosceles triangles, just as it would in a tessellation of the plane by triangles of the same shape. As a consequence, the Schwarz lantern can be folded from a flat piece of paper, with this tessellation as its crease pattern. This crease pattern has been called the "Yoshimura pattern", after the work of Y. Yoshimura on the Yoshimura buckling pattern of cylindrical surfaces under axial compression, which can be similar in shape to the Schwarz lantern. Area. The area of the Schwarz lantern, for any cylinder and any particular choice of the parameters formula_6 and formula_7, can be calculated by a straightforward application of trigonometry. A cylinder of radius formula_11 and length formula_12 has area formula_13. For a Schwarz lantern with parameters formula_6 and formula_7, each band is a shorter cylinder of length formula_14, approximated by formula_15 isosceles triangles. The length of the base of each triangle can be found from the formula for the edge length of a regular formula_7-gon, namely formula_16 The height formula_17 of each triangle can be found by applying the Pythagorean theorem to a right triangle formed by the apex of the triangle, the midpoint of the base, and the midpoint of the arc of the circle bounded by the endpoints of the base. The two sides of this right triangle are the length formula_14 of the cylindrical band, and the sagitta of the arc, giving the formula formula_18 Combining the formula for the area of each triangle from its base and height, and the total number formula_19 of the triangles, gives the Schwarz lantern a total area of formula_20 Limits. The Schwarz lanterns, for large values of both parameters, converge uniformly to the cylinder that they approximate. However, because there are two free parameters formula_6 and formula_7, the limiting area of the Schwarz lantern, as both formula_6 and formula_7 become arbitrarily large, can be evaluated in different orders, with different results. If formula_6 is fixed while formula_7 grows, and the resulting limit is then evaluated for arbitrarily large choices of formula_6, one obtains formula_21 the correct area for the cylinder. In this case, the inner limit already converges to the same value, and the outer limit is superfluous. Geometrically, substituting each cylindrical band by a band of very sharp isosceles triangles accurately approximates its area. On the other hand, reversing the ordering of the limits gives formula_22 In this case, for a fixed choice of formula_7, as formula_6 grows and the length formula_14 of each cylindrical band becomes arbitrarily small, each corresponding band of isosceles triangles becomes nearly planar. Each triangle approaches the triangle formed by two consecutive edges of a regular formula_15-gon, and the area of the whole band of triangles approaches formula_15 times the area of one of these planar triangles, a finite number. However, the number formula_6 of these bands grows arbitrarily large; because the lantern's area grows in approximate proportion to formula_6, it also becomes arbitrarily large. It is also possible to fix a functional relation between formula_6 and formula_7, and to examine the limit as both parameters grow large simultaneously, maintaining this relation. Different choices of this relation can lead to either of the two behaviors described above, convergence to the correct area or divergence to infinity. 
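These behaviors can also be seen by evaluating the closed-form area expression directly. A minimal Python sketch follows (the function name and the choice of radius and length equal to 1 are arbitrary); the intermediate case formula_26, which converges to a value larger than the cylinder area, is the one taken up at the end of this section.

import math

def schwarz_area(m, n, r=1.0, length=1.0):
    # the closed-form total area of the lantern given above
    return (2 * m * n * r * math.sin(math.pi / n)
            * math.sqrt((length / m) ** 2
                        + r ** 2 * (1 - math.cos(math.pi / n)) ** 2))

for n in (10, 100, 1000):
    print(schwarz_area(n, n),       # m = n: approaches 2*pi*r*l
          schwarz_area(n ** 2, n),  # m = n^2: approaches a larger constant
          schwarz_area(n ** 3, n))  # m = n^3: grows without bound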
For instance, setting formula_23 (for an arbitrary constant formula_24) and taking the limit for large formula_7 leads to convergence to the correct area, while setting formula_25 leads to divergence. A third type of limiting behavior is obtained by setting formula_26. For this choice, formula_27 In this case, the area of the Schwarz lantern, parameterized in this way, converges, but to a larger value than the area of the cylinder. Any desired larger area can be obtained by making an appropriate choice of the constant formula_24. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "2" }, { "math_id": 1, "text": "\\sqrt 2" }, { "math_id": 2, "text": "C" }, { "math_id": 3, "text": "S" }, { "math_id": 4, "text": "\\Gamma" }, { "math_id": 5, "text": "m=1" }, { "math_id": 6, "text": "m" }, { "math_id": 7, "text": "n" }, { "math_id": 8, "text": "m+1" }, { "math_id": 9, "text": "m-1" }, { "math_id": 10, "text": "\\pi/n" }, { "math_id": 11, "text": "r" }, { "math_id": 12, "text": "\\ell" }, { "math_id": 13, "text": "2\\pi r\\ell" }, { "math_id": 14, "text": "\\ell/m" }, { "math_id": 15, "text": "2n" }, { "math_id": 16, "text": "2r\\sin\\frac{\\pi}{n}." }, { "math_id": 17, "text": "h" }, { "math_id": 18, "text": "h^2 = \\left(\\frac{\\ell}{m}\\right)^2+\\left(r\\left(1-\\cos\\frac{\\pi}{n}\\right)\\right)^2." }, { "math_id": 19, "text": "2mn" }, { "math_id": 20, "text": "\nA(m,n)=2mn\\left(r\\sin\\frac{\\pi}{n}\\right)\n\\sqrt{\\left(\\frac{\\ell}{m}\\right)^2+r^2\\left(1-\\cos\\frac{\\pi}{n}\\right)^2}." }, { "math_id": 21, "text": "\\lim_{m\\to\\infty} \\lim_{n\\to\\infty} A(m,n)=2\\pi r\\ell," }, { "math_id": 22, "text": "\\lim_{n\\to\\infty} \\lim_{m\\to\\infty} A(m,n)=\\infty." }, { "math_id": 23, "text": "m=cn" }, { "math_id": 24, "text": "c" }, { "math_id": 25, "text": "m=cn^3" }, { "math_id": 26, "text": "m=cn^2" }, { "math_id": 27, "text": "\\lim_{n\\to\\infty} A(cn^2,n)=\n2\\pi r\\sqrt{\\ell^2+\\frac{r^2\\pi^4c^2}{4}}." }, { "math_id": 28, "text": "n=2" } ]
https://en.wikipedia.org/wiki?curid=58278053
58278312
Aspect's experiment
Quantum mechanics experiment Aspect's experiment was the first quantum mechanics experiment to demonstrate the violation of Bell's inequalities with photons using distant detectors. Its 1982 result allowed for further validation of the quantum entanglement and locality principles. It also offered an experimental answer to Albert Einstein, Boris Podolsky, and Nathan Rosen's paradox which had been proposed about fifty years earlier. It was the first experiment to remove the locality loophole, as it was able to modify the angle of the polarizers while the photons were in flight, faster than what light would take to reach the other polarizer, removing the possibility of communications between detectors. The experiment was led by French physicist Alain Aspect at the Institut d'optique théorique et appliquée in Orsay between 1980 and 1982. Its importance was immediately recognized by the scientific community. Although the methodology carried out by Aspect presents a potential flaw, the detection loophole, his result is considered decisive and led to numerous other experiments (the so-called Bell tests) which confirmed Aspect's original experiment. For his work on this topic, Aspect was awarded part of the 2022 Nobel Prize in Physics. Background. Entanglement and the EPR paradox. The Einstein–Podolsky–Rosen (EPR) paradox is a thought experiment proposed by physicists Albert Einstein, Boris Podolsky and Nathan Rosen which argues that the description of physical reality provided by quantum mechanics is incomplete. In the 1935 EPR paper titled "Can Quantum-Mechanical Description of Physical Reality be Considered Complete?", they argued for the existence of "elements of reality" that were not part of quantum theory, and speculated that it should be possible to construct a theory containing these hidden variables. Resolutions of the paradox have important implications for the interpretation of quantum mechanics. The thought experiment involves a pair of particles prepared in what would later become known as an entangled state. Einstein, Podolsky, and Rosen pointed out that, in this state, if the position of the first particle were measured, the result of measuring the position of the second particle could be predicted. If instead the momentum of the first particle were measured, then the result of measuring the momentum of the second particle could be predicted. They argued that no action taken on the first particle could instantaneously affect the other, since this would involve information being transmitted faster than light, which is forbidden by the theory of relativity. They invoked a principle, later known as the "EPR criterion of reality", positing that: "If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of reality corresponding to that quantity." From this, they inferred that the second particle must have a definite value of both position and of momentum prior to either quantity being measured. But quantum mechanics considers these two observables incompatible and thus does not associate simultaneous values for both to any system. Einstein, Podolsky, and Rosen therefore concluded that quantum theory does not provide a complete description of reality. Bell's inequalities. In 1964, Irish physicist John Stewart Bell carried the analysis of quantum entanglement much further. 
He deduced that if measurements are performed independently on the two separated particles of an entangled pair, then the assumption that the outcomes depend upon hidden variables within each half implies a mathematical constraint on how the outcomes on the two measurements are correlated. This constraint would later be named the Bell inequalities. Bell then showed that quantum physics predicts correlations that violate this inequality. Consequently, the only way that hidden variables could explain the predictions of quantum physics is if they are "nonlocal", which is to say that somehow the two particles are able to influence one another instantaneously no matter how widely they ever become separated. In 1969, John Clauser and Michael Horne, along with Horne's doctoral advisor Abner Shimony and Francis Pipkin's doctoral student Richard Holt, came up with the CHSH inequality, a reformulation of the Bell inequality that could be better tested with experiments. Early experiments in the United States. The first rudimentary experiment designed to test Bell's theorem was performed in 1972 by Clauser and Stuart Freedman at the University of California, Berkeley. In 1973, at Harvard University, the experiments of Pipkin and Holt suggested the opposite conclusion, finding no violation of the Bell inequalities. Edward S. Fry and Randall C. Thompson, at Texas A&M University, reattempted the experiment in 1973 and agreed with Clauser. These experiments were only a limited test, because the choice of detector settings was made before the photons had left the source. Advised by John Bell, Alain Aspect worked to develop an experiment to remove this limitation. In France. Alain Aspect completed his doctoral thesis in 1971 working on holography and then went abroad to teach at the École Normale in Cameroon. He returned to France in 1974 and joined the Institut d'optique in Orsay to work on his habilitation thesis. Physicist Christian Imbert handed him various papers by Bell, and Aspect worked for five years on the construction and preliminary tests of his experiment. He published his first experimental results in 1981, and completed his habilitation in 1983 with the final results of his experiment. The referees included André Maréchal and Christian Imbert from the Institut d'optique, Franck Laloë, Bernard d'Espagnat, Claude Cohen-Tannoudji, and John Bell. Theoretical scheme. The conceptual scheme from which John Bell demonstrated his inequalities is the following: a source of entangled photons S simultaneously emits two photons formula_0 and formula_1 whose polarization is prepared so that both photons' state vector is: formula_2 This formula simply means that the photons are in a superposed state: they are in a linear combination of both photons vertically polarized plus both photons horizontally polarized, with an equal probability. These two photons are then measured using two polarizers P1 and P2, each with a configurable measuring angle: "α" and "β". The result of each polarizer's measurement can be (+) or (−) according to whether the measured polarization is parallel or perpendicular to the polarizer's angle of measurement. One noteworthy aspect is that the polarizers imagined for this ideal experiment give a measurable result both in the (−) and (+) situations. Not all real polarizers are able to do this: some detect the (+) situation for example, but are unable to detect anything in the (−) situation (the photon never leaves the polarizer). 
Early experiments used the latter sort of polarizer. Alain Aspect's polarizers proved better able to detect both scenarios and were therefore much closer to the ideal experiment. Given the apparatus and the initial state of polarization given to the photons, quantum mechanics is able to predict the probabilities of measuring (+,+), (−,−), (+,−) and (−,+) on the polarizers (P1,P2), oriented at the angles ("α","β"). As a reminder, in quantum mechanics: formula_3; formula_4. The quantity of interest is a correlation function given by formula_5 with formula_6 where ("α"',"β"') are a set of different angles. According to the CHSH inequality, formula_7, a type of Bell inequality. However, quantum mechanics predicts a maximal violation of this inequality for |"α"−"β"| = |"α'"−"β"| = |"α'"−"β'"| = 22.5° and |"α"−"β"' | = 67.5°. Proposal. In 1975, since a decisive experiment based on the violation of Bell's inequalities and verifying the veracity of quantum entanglement was still missing, Alain Aspect proposed in an article an experiment meticulous enough to be irrefutable: "Proposed experiment to test the non-separability of quantum mechanics". Alain Aspect specified his experiment so that it would be as decisive as possible. Namely: Experiments. Alain Aspect carried out a series of three increasingly complex experiments from 1980 to 1982. The first round of experiments reproduced the experimental tests of Clauser, Holt and Fry. In the second round of experiments he added two-channel polarizers, which improved the efficiency of the detections. These two rounds of experiments were carried out with the help of research engineer Gérard Roger and physicist Philippe Grangier, an undergraduate student at the time. The third round of experiments took place in 1982, and was carried out in collaboration with Roger and physicist Jean Dalibard, a young student at the time. This last round, the closest to the initial specifications, will be described here. Photon source. The first experiments testing Bell's inequalities used low-intensity photon sources and required a continuous week of measurement to complete. One of Aspect's first improvements consisted in using a photon source several orders of magnitude more efficient. This source allowed a detection rate of 100 photons per second, thus shortening the length of the experiment to 100 "seconds". The source used is a calcium radiative cascade, excited with a krypton laser. Polarizers with adjustable orientation at a remote position. One of the main points of this experiment was to make sure that the correlation between the measurements P1 and P2 had not been the result of "classical" effects, especially experimental artefacts. As an example, when P1 and P2 are prepared with fixed angles "α" and "β", it can be surmised that this state generates parasitic correlations through current or ground loops, or some other effects. As a matter of fact, both polarizers belong to the same setup and could influence one another through the various circuits of the experimental device, and generate correlations upon measurement. One can then imagine that the fixed orientation of the polarizers impacts, one way or the other, the state in which the photon pair is emitted. In such a case, the correlations between the measurement results could be explained by local hidden variables within the photons, upon their emission. Alain Aspect had mentioned these observations to John Bell himself. 
One way of ruling out these kinds of effects is to determine the (α,β) orientation of the polarizers at the last moment—after the photons have been emitted, and before their detection—and to keep the polarizers far enough from each other that no signal can travel from one to the other in time. This method assures that the orientation of the polarizers during the emission has no bearing on the result (since the orientation is still undetermined at the time of emission). It also assures that the polarizers do not influence each other, being too distant from one another. As a consequence, Aspect's experimental set-up has polarizers P1 and P2 set 6 metres apart from the source, and 12 metres apart from one another. With this setup, only 20 nanoseconds elapse between the emission of the photons and their detection. During this extremely short period of time, the experimenter has to decide on the polarizers' orientation and then orient them. Since it is physically impossible to modify a polarizer's orientation within such a time span, two polarizers—one for each side—were used and pre-oriented in different directions. A high-frequency switching device randomly directed each photon towards one polarizer or the other. The setup was equivalent to a single polarizer whose measurement angle tilts at random. Since it was also not possible to have the emission of the photons trigger the switching, the switches changed state periodically every 10 nanoseconds (asynchronously with the photons' emission), thus ensuring that the device would switch at least once between the emission of a photon and its detection. Two-channel polarizers. Another important characteristic of the 1982 experiment was the use of two-channel polarizers which allowed a measurable result in situations (+) and (−). The polarizers used until Aspect's experiment could detect situation (+), but not situation (−). These single-channel polarizers had two major inconveniences: The two-channel polarizers Aspect used in his experiment avoided these two inconveniences and allowed him to use Bell's formulas directly to calculate the inequalities. Technically, the polarizers he used were polarizing cubes which transmitted one polarization and reflected the orthogonal one, emulating a Stern-Gerlach device. Results. Bell's inequalities establish a theoretical curve of the number of correlations (++ or −−) between the two detectors in relation to the relative angle of the detectors formula_8. The shape of the curve is characteristic of the violation of Bell's inequalities. The measurements' matching the shape of the curve establishes, quantitatively and qualitatively, that Bell's inequalities have been violated. All three of Aspect's experiments unambiguously confirmed the violation, as predicted by quantum mechanics, thus undermining Einstein's local realistic outlook on quantum mechanics and local hidden variable scenarios. In addition to being confirmed, the violation was confirmed "in the exact way predicted by quantum mechanics", with a statistical agreement of up to 242 standard deviations. Given the technical quality of the experiment, the scrupulous avoidance of experimental artefacts, and the quasi-perfect statistical agreement, this experiment convinced the scientific community at large that quantum physics violates Bell's inequalities. Reception and limitations. After the results, some physicists legitimately tried to look for flaws in Aspect's experiment and to find out how to improve it to resist criticism. 
Some theoretical objections can be raised against the setup: The ideal experiment, which would negate any imaginable possibility of induced correlations, should: The conditions of the experiment also suffered from a detection loophole. After 1982, physicists began to look for applications of entanglement; this led to the development of quantum computing and quantum cryptography. For his work on this topic, Aspect received several awards including the 2010 Wolf Prize in Physics and the 2022 Nobel Prize in Physics, both shared with John Clauser and Anton Zeilinger for their Bell tests. Later experiments. The loopholes mentioned could only be closed starting in 1998. In the meantime, Aspect's experiment was reproduced, and the violation of Bell's inequalities was systematically confirmed, with a statistical certainty of up to 100 standard deviations. Other experiments were conducted to test the violations of Bell's inequalities with observables other than polarization, in order to approach the original spirit of the EPR paradox, in which Einstein imagined measuring two conjugate variables (such as position and momentum) on an EPR pair. An experiment introduced the conjugate variables time and energy and, once again, confirmed quantum mechanics. In 1998, the Geneva experiment tested the correlation between two detectors set 30 kilometres apart using the Swiss optical fibre telecommunication network. The distance gave more time to switch the angles of the polarizers, so a completely random switching was possible. Additionally, the two distant polarizers were entirely independent. The measurements were recorded on each side, and compared after the experiment by dating each measurement using an atomic clock. The violation of Bell's inequalities was once again verified under strict and practically ideal conditions. Whereas Aspect's experiment implied that a hypothetical coordination signal would have to travel twice as fast as the speed of light "c", Geneva's implied a speed of 10 million times "c". An experiment took place at the National Institute of Standards and Technology (NIST) in 2000 on trapped-ion entanglement using a very efficient correlation-based detection method. The reliability of detection proved to be sufficient for the experiment to violate Bell's inequalities on the whole, even though not all detected correlations violated them. In 2001, Antoine Suarez's team, which included Nicolas Gisin who had participated in the Geneva experiment, reproduced the experiment using mirrors or detectors in motion, allowing them to reverse the order of events across the frames of reference, in accordance with special relativity (this inversion is only possible for events without any causal relationship). The speeds are chosen so that when a photon is reflected or crosses the semi-transparent mirror, the other photon has already crossed or been reflected from the point of view of the frame of reference attached to the mirror. This is an "after-after" configuration, in which sound waves play the role of semi-transparent mirrors. In 2015 the first three significant-loophole-free Bell tests were published within three months by independent groups at Delft University of Technology, the University of Vienna and NIST. All three tests simultaneously addressed the detection loophole, the locality loophole, and the memory loophole. Implications. Prior to the Aspect experiments, Bell's theorem was mostly a niche topic. The publications by Aspect and collaborators prompted wider discussion of the subject. 
The fact that nature is found to violate Bell's inequality implies that one or more of the assumptions underlying that inequality must not hold true. Different interpretations of quantum mechanics provide different views on which assumptions ought to be rejected. Copenhagen-type interpretations generally take the violation of Bell inequalities as grounds to reject the assumption often called counterfactual definiteness. This is also the route taken by interpretations that descend from the Copenhagen tradition, such as consistent histories (often advertised as "Copenhagen done right"), as well as QBism. In contrast, the versions of the many-worlds interpretation all violate an implicit assumption by Bell that measurements have a single outcome. Unlike all of these, the Bohmian or "pilot wave" interpretation abandons the assumption of locality: instantaneous communication can exist at the level of the hidden variables, but it cannot be used to send signals. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
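The quantum-mechanical prediction described in the theoretical scheme above can be checked numerically. The following Python sketch is purely illustrative (it is not part of the original article or experiment): it evaluates the correlation coefficient implied by the quoted probabilities, E(α,β) = cos 2(α−β), and the CHSH quantity S at the angle settings stated to give maximal violation.

```python
import math

def correlation(alpha_deg, beta_deg):
    """E(alpha, beta) implied by P++ = P-- = cos^2(a-b)/2 and P+- = P-+ = sin^2(a-b)/2."""
    d = math.radians(alpha_deg - beta_deg)
    return math.cos(d) ** 2 - math.sin(d) ** 2  # equals cos(2(alpha - beta))

def chsh(a, b, a2, b2):
    """CHSH quantity S(a, b; a2, b2) = E(a,b) - E(a,b2) + E(a2,b) + E(a2,b2)."""
    return correlation(a, b) - correlation(a, b2) + correlation(a2, b) + correlation(a2, b2)

# Angles (degrees) with |a-b| = |a2-b| = |a2-b2| = 22.5 and |a-b2| = 67.5
a, a2, b, b2 = 0.0, 45.0, 22.5, 67.5
print(chsh(a, b, a2, b2))  # ~2.828 = 2*sqrt(2), exceeding the local-hidden-variable bound of 2
```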
[ { "math_id": 0, "text": "\\nu_1" }, { "math_id": 1, "text": "\\nu_2" }, { "math_id": 2, "text": "|\\nu_1,\\nu_2\\rangle = {1 \\over \\sqrt{2}} \\left( \\left| ++\\right\\rangle + \\left| --\\right\\rangle \\right)" }, { "math_id": 3, "text": "P_{++}(\\alpha,\\beta) = P_{--}(\\alpha,\\beta) = {1 \\over 2} \\cos^2(\\alpha-\\beta)" }, { "math_id": 4, "text": "P_{+-}(\\alpha,\\beta) = P_{-+}(\\alpha,\\beta) = {1 \\over 2} \\sin^2(\\alpha-\\beta)" }, { "math_id": 5, "text": "S(\\alpha,\\beta;\\alpha',\\beta')=E(\\alpha,\\beta)-E(\\alpha,\\beta')+E(\\alpha',\\beta)+E(\\alpha',\\beta')" }, { "math_id": 6, "text": "E(\\alpha,\\beta)=P_{++}(\\alpha,\\beta)+P_{--}(\\alpha,\\beta)-P_{+-}(\\alpha,\\beta)-P_{-+}(\\alpha,\\beta )" }, { "math_id": 7, "text": "|S(\\alpha,\\beta;\\alpha',\\beta')|<2" }, { "math_id": 8, "text": "(\\alpha - \\beta)" } ]
https://en.wikipedia.org/wiki?curid=58278312
58278355
József Solymosi
Hungarian-Canadian mathematician József Solymosi is a Hungarian-Canadian mathematician and a professor of mathematics at the University of British Columbia. His main research interests are arithmetic combinatorics, discrete geometry, graph theory, and combinatorial number theory. Education and career. Solymosi earned his master's degree in 1999 under the supervision of László Székely from the Eötvös Loránd University and his Ph.D. in 2001 at ETH Zürich under the supervision of Emo Welzl. His doctoral dissertation was "Ramsey-Type Results on Planar Geometric Objects". From 2001 to 2003 he was S. E. Warschawski Assistant Professor of Mathematics at the University of California, San Diego. He joined the faculty of the University of British Columbia in 2002. He was editor in chief of the "Electronic Journal of Combinatorics" from 2013 to 2015. Contributions. Solymosi was the first online contributor to the first Polymath Project, set by Timothy Gowers to find improvements to the Hales–Jewett theorem. One of his theorems states that if a finite set of points in the Euclidean plane has every pair of points at an integer distance from each other, then the set must have a diameter (largest distance) that is at least linear in the number of points. This result is connected to the Erdős–Anning theorem, according to which an infinite set of points with integer distances must lie on one line.[ID] In connection with the related Erdős–Ulam problem, on the existence of dense subsets of the plane for which all distances are rational numbers, Solymosi and de Zeeuw proved that every infinite rational-distance set must either be dense in the Zariski topology or have all but finitely many of its points on a single line or circle.[EU] With Terence Tao, Solymosi proved a bound of formula_0 on the number of incidences between formula_1 points and formula_2 affine subspaces of any finite-dimensional Euclidean space, whenever each pair of subspaces has at most one point of intersection. This generalizes the Szemerédi–Trotter theorem on points and lines in the Euclidean plane, and because of this the exponent of formula_3 cannot be improved. Their theorem solves (up to the formula_4 in the exponent) a conjecture of Toth, and was inspired by an analogue of the Szemerédi–Trotter theorem for lines in the complex plane.[HD] He has also contributed improved bounds for the Erdős–Szemerédi theorem, showing that every set of real numbers has either a large set of pairwise sums or a large set of pairwise products,[ME] and for the Erdős distinct distances problem, showing that every set of points in the plane has many different pairwise distances.[DD] Recognition. In 2006, Solymosi received a Sloan Research Fellowship and in 2008 he was awarded the André Aisenstadt Mathematics Prize. In 2012 he was named a doctor of the Hungarian Academy of Sciences. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(mn)^{2/3+\\varepsilon}" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "m" }, { "math_id": 3, "text": "2/3" }, { "math_id": 4, "text": "\\varepsilon" } ]
https://en.wikipedia.org/wiki?curid=58278355
58282
Thermal diffusivity
Rate at which heat spreads throughout a material In heat transfer analysis, thermal diffusivity is the thermal conductivity divided by density and specific heat capacity at constant pressure. It is a measure of the rate of heat transfer inside a material. It has units of m2/s. Thermal diffusivity is usually denoted by lowercase alpha (α), but a, h, κ (kappa), K, D, and formula_0 are also used. The formula is: formula_1 where k is the thermal conductivity (W/(m·K)), ρ is the density (kg/m3), and cp is the specific heat capacity (J/(kg·K)). Together, ρcp can be considered the volumetric heat capacity (J/(m3·K)). As seen in the heat equation, formula_2 one way to view thermal diffusivity is as the ratio of the time derivative of temperature to its curvature, quantifying the rate at which temperature concavity is "smoothed out". Thermal diffusivity is a contrasting measure to thermal effusivity. In a substance with high thermal diffusivity, heat moves rapidly through it because the substance conducts heat quickly relative to its volumetric heat capacity or 'thermal bulk'. Thermal diffusivity is often measured with the flash method. It involves heating a strip or cylindrical sample with a short energy pulse at one end and analyzing the temperature change (reduction in amplitude and phase shift of the pulse) a short distance away. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
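The defining relation lends itself to a one-line calculation. The following Python sketch is only an illustration; the copper property values are approximate handbook figures used as an example, not data from this article.

```python
def thermal_diffusivity(k, rho, cp):
    """Thermal diffusivity alpha = k / (rho * cp), returned in m^2/s."""
    return k / (rho * cp)

# Approximate room-temperature properties of copper (illustrative values only)
k_copper = 401.0     # thermal conductivity, W/(m*K)
rho_copper = 8960.0  # density, kg/m^3
cp_copper = 385.0    # specific heat capacity, J/(kg*K)

alpha = thermal_diffusivity(k_copper, rho_copper, cp_copper)
print(f"alpha = {alpha:.2e} m^2/s")  # on the order of 1e-4 m^2/s
```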
[ { "math_id": 0, "text": "D_T" }, { "math_id": 1, "text": "\\alpha = \\frac{ k }{ \\rho c_{p} }" }, { "math_id": 2, "text": "\\frac{\\partial T}{\\partial t} = \\alpha \\nabla^2 T, " } ]
https://en.wikipedia.org/wiki?curid=58282
58283
Prandtl number
Ratio of kinematic to thermal diffusivity The Prandtl number (Pr) or Prandtl group is a dimensionless number, named after the German physicist Ludwig Prandtl, defined as the ratio of momentum diffusivity to thermal diffusivity. The Prandtl number is given as: formula_0 where: formula_1 is the momentum diffusivity (kinematic viscosity), formula_2 (SI units: m2/s); formula_3 is the thermal diffusivity, formula_4 (SI units: m2/s); formula_5 is the dynamic viscosity (Pa·s); formula_6 is the thermal conductivity (W/(m·K)); formula_7 is the specific heat (J/(kg·K)); and formula_8 is the density (kg/m3). Note that whereas the Reynolds number and Grashof number are subscripted with a scale variable, the Prandtl number contains no such length scale and is dependent only on the fluid and the fluid state. The Prandtl number is often found in property tables alongside other properties such as viscosity and thermal conductivity. The mass transfer analog of the Prandtl number is the Schmidt number, and the ratio of the Prandtl number and the Schmidt number is the Lewis number. Experimental values. Typical values. For most gases over a wide range of temperature and pressure, Pr is approximately constant. Therefore, it can be used to determine the thermal conductivity of gases at high temperatures, where it is difficult to measure experimentally due to the formation of convection currents. Typical values for Pr are: Formula for the calculation of the Prandtl number of air and water. For air with a pressure of 1 bar, the Prandtl numbers in the temperature range between −100 °C and +500 °C can be calculated using the formula given below. The temperature is to be used in the unit degree Celsius. The deviations are a maximum of 0.1% from the literature values. formula_9 The Prandtl numbers for water (1 bar) can be determined in the temperature range between 0 °C and 90 °C using the formula given below. The temperature is to be used in the unit degree Celsius. The deviations are a maximum of 1% from the literature values. formula_10 Physical interpretation. Small values of the Prandtl number, Pr ≪ 1, mean that thermal diffusivity dominates. With large values, Pr ≫ 1, the momentum diffusivity dominates the behavior. For example, the listed value for liquid mercury indicates that the heat conduction is more significant compared to convection, so thermal diffusivity is dominant. However, engine oil, with its high viscosity and low heat conductivity, has a higher momentum diffusivity as compared to thermal diffusivity. The Prandtl numbers of gases are about 1, which indicates that both momentum and heat dissipate through the fluid at about the same rate. Heat diffuses very quickly in liquid metals (Pr ≪ 1) and very slowly in oils (Pr ≫ 1) relative to momentum. Consequently, the thermal boundary layer is much thicker for liquid metals and much thinner for oils relative to the velocity boundary layer. In heat transfer problems, the Prandtl number controls the relative thickness of the momentum and thermal boundary layers. When Pr is small, it means that the heat diffuses quickly compared to the velocity (momentum). This means that for liquid metals the thermal boundary layer is much thicker than the velocity boundary layer. In laminar boundary layers, the ratio of the thermal to momentum boundary layer thickness over a flat plate is well approximated by formula_11 where formula_12 is the thermal boundary layer thickness and formula_13 is the momentum boundary layer thickness. For incompressible flow over a flat plate, the two Nusselt number correlations are asymptotically correct: formula_14 formula_15 where formula_16 is the Reynolds number. These two asymptotic solutions can be blended together using the concept of a norm: formula_17 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
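The two empirical fits quoted above translate directly into code. The following Python sketch simply evaluates them; the printed values at 20 °C are a sanity check and should land near the familiar Pr ≈ 0.71 for air and Pr ≈ 7 for water.

```python
def prandtl_air(theta_c):
    """Empirical Pr for air at 1 bar; theta_c in degrees Celsius (about -100 to +500 C)."""
    return 1e9 / (1.1 * theta_c**3 - 1200 * theta_c**2 + 322000 * theta_c + 1.393e9)

def prandtl_water(theta_c):
    """Empirical Pr for water at 1 bar; theta_c in degrees Celsius (about 0 to 90 C)."""
    return 50000 / (theta_c**2 + 155 * theta_c + 3700)

print(round(prandtl_air(20.0), 3))    # ~0.715
print(round(prandtl_water(20.0), 2))  # ~6.94
```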
[ { "math_id": 0, "text": " \\mathrm{Pr} = \\frac{\\nu}{\\alpha} = \\frac{\\mbox{momentum diffusivity}}{\\mbox{thermal diffusivity}} = \\frac{\\mu / \\rho}{k / (c_p \\rho)} = \\frac{c_p \\mu}{k} " }, { "math_id": 1, "text": "\\nu" }, { "math_id": 2, "text": "\\nu = \\mu/\\rho" }, { "math_id": 3, "text": "\\alpha" }, { "math_id": 4, "text": "\\alpha = k/(\\rho c_p)" }, { "math_id": 5, "text": "\\mu" }, { "math_id": 6, "text": "k" }, { "math_id": 7, "text": "c_p" }, { "math_id": 8, "text": "\\rho" }, { "math_id": 9, "text": "\\mathrm{Pr}_\\text{air} = \\frac{10^9}{1.1 \\cdot \\vartheta^3-1200 \\cdot \\vartheta^2 + 322000 \\cdot \\vartheta + 1.393 \\cdot 10^9}" }, { "math_id": 10, "text": "\\mathrm{Pr}_\\text{water} = \\frac{50000}{\\vartheta^2+155\\cdot \\vartheta + 3700}" }, { "math_id": 11, "text": "\\frac{\\delta_t}{\\delta} = \\mathrm{Pr}^{-\\frac13}, \\quad 0.6 \\leq \\mathrm{Pr} \\leq 50," }, { "math_id": 12, "text": "\\delta_t" }, { "math_id": 13, "text": "\\delta" }, { "math_id": 14, "text": "\\mathrm{Nu}_x = 0.339 \\mathrm{Re}_x^{\\frac12} \\mathrm{Pr}^{\\frac13}, \\quad \\mathrm{Pr} \\to \\infty," }, { "math_id": 15, "text": "\\mathrm{Nu}_x = 0.565 \\mathrm{Re}_x^{\\frac12} \\mathrm{Pr}^{\\frac12}, \\quad \\mathrm{Pr} \\to 0," }, { "math_id": 16, "text": "\\mathrm{Re}" }, { "math_id": 17, "text": "\\mathrm{Nu}_x = \\frac{0.3387 \\mathrm{Re}_x^{\\frac12} \\mathrm{Pr}^{\\frac13}}{\\left( 1 + \\left( \\frac{0.0468}\\mathrm{Pr} \\right)^{\\frac23} \\right)^{\\frac14}}, \\quad \\mathrm{Re} \\mathrm{Pr} > 100." } ]
https://en.wikipedia.org/wiki?curid=58283
58283759
Relative Gain Array
The Relative Gain Array (RGA) is a classical, widely used method for determining the best input-output pairings for multivariable process control systems. It has many practical open-loop and closed-loop control applications and is relevant to analyzing many fundamental steady-state closed-loop system properties such as stability and robustness. Definition. Given a linear time-invariant (LTI) system represented by a nonsingular matrix formula_0, the relative gain array (RGA) is defined as formula_1 where formula_2 is the elementwise Hadamard product of the two matrices, and the transpose operator (no conjugate) is necessary even for complex formula_0. Each formula_3 element formula_4 gives a scale invariant (unit-invariant) measure of the dependence of output formula_5 on input formula_6. Properties. The following are some of the linear-algebra properties of the RGA: each row and each column of formula_7 sums to 1; for any nonsingular diagonal matrices formula_8 and formula_9, formula_10; for any permutation matrices formula_11 and formula_12, formula_13; and formula_14. The second property says that the RGA is invariant with respect to nonzero scalings of the rows and columns of formula_0, which is why the RGA is invariant with respect to the choice of units on different input and output variables. The third property says that the RGA is consistent with respect to permutations of the rows or columns of formula_0. Generalizations. The RGA is often generalized in practice to be used when formula_0 is singular, e.g., non-square, by replacing the inverse of formula_0 with its Moore–Penrose inverse (pseudoinverse). However, it has been shown that the Moore–Penrose pseudoinverse fails to preserve the critical scale-invariance property of the RGA (#2 above) and that the unit-consistent (UC) generalized inverse must therefore be used. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
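A minimal numerical sketch of the definition above, using NumPy; the 2×2 gain matrix is arbitrary and purely illustrative, and the printed sums check the row/column-sum property.

```python
import numpy as np

def rga(G):
    """Relative gain array: Hadamard product of G with the (non-conjugate) transpose of its inverse."""
    G = np.asarray(G)
    return G * np.linalg.inv(G).T  # elementwise product; plain transpose, no conjugation

# Arbitrary 2x2 steady-state gain matrix, for illustration only
G = np.array([[1.0, 0.5],
              [0.4, 2.0]])
R = rga(G)
print(R)                             # [[ 1.111 -0.111] [-0.111  1.111]]
print(R.sum(axis=0), R.sum(axis=1))  # each column and each row sums to 1
```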
[ { "math_id": 0, "text": "\\mathrm{G}" }, { "math_id": 1, "text": "\\mathrm{R} = \\Phi (\\mathrm{G}) = \\mathrm{G} \\circ {(\\mathrm{G}^{-1})}^T." }, { "math_id": 2, "text": "\\circ" }, { "math_id": 3, "text": "{i,j}" }, { "math_id": 4, "text": "\\mathrm{R}_{i,j}" }, { "math_id": 5, "text": "j" }, { "math_id": 6, "text": "i" }, { "math_id": 7, "text": "\\Phi (\\mathrm{G})" }, { "math_id": 8, "text": "\\mathrm{D}" }, { "math_id": 9, "text": "\\mathrm{E}" }, { "math_id": 10, "text": "\\Phi (\\mathrm{G}) = \\Phi (\\mathrm{D} \\mathrm{G} \\mathrm{E})" }, { "math_id": 11, "text": "\\mathrm{P}" }, { "math_id": 12, "text": "\\mathrm{Q}" }, { "math_id": 13, "text": "\\mathrm{P}\\Phi (\\mathrm{G})\\mathrm{Q} = \\Phi (\\mathrm{P} \\mathrm{G} \\mathrm{Q})" }, { "math_id": 14, "text": "\\Phi (\\mathrm{G}^{-1}) = \\Phi (\\mathrm{G})^T = \\Phi {(\\mathrm{G}^T)}" } ]
https://en.wikipedia.org/wiki?curid=58283759
58285
Nusselt number
Ratio of a fluid's rates of convective and conductive heat transfer In thermal fluid dynamics, the Nusselt number (Nu, after Wilhelm Nusselt) is the ratio of total heat transfer to conductive heat transfer at a boundary in a fluid. Total heat transfer combines conduction and convection. Convection includes both advection (fluid motion) and diffusion (conduction). The conductive component is measured under the same conditions as the convective but for a hypothetically motionless fluid. It is a dimensionless number, closely related to the fluid's Rayleigh number. A Nusselt number of order one represents heat transfer by pure conduction. A value between one and 10 is characteristic of slug flow or laminar flow. A larger Nusselt number corresponds to more active convection, with turbulent flow typically in the 100–1000 range. A similar non-dimensional property is the Biot number, which concerns thermal conductivity for a solid body rather than a fluid. The mass transfer analogue of the Nusselt number is the Sherwood number. Definition. The Nusselt number is the ratio of total heat transfer (convection + conduction) to conductive heat transfer across a boundary. The convection and conduction heat flows are parallel to each other and to the surface normal of the boundary surface, and are all perpendicular to the mean fluid flow in the simple case. formula_0 where "h" is the convective heat transfer coefficient of the flow, "L" is the characteristic length, and "k" is the thermal conductivity of the fluid. In contrast to the definition given above, known as "average Nusselt number", the local Nusselt number is defined by taking the length to be the distance from the surface boundary to the local point of interest. formula_1 The "mean", or "average", number is obtained by integrating the expression over the range of interest, such as: formula_2 Context. An understanding of convection boundary layers is necessary to understand convective heat transfer between a surface and a fluid flowing past it. A thermal boundary layer develops if the fluid free stream temperature and the surface temperatures differ. A temperature profile exists due to the energy exchange resulting from this temperature difference. The heat transfer rate can be written using Newton's law of cooling as formula_3, where "h" is the heat transfer coefficient and "A" is the heat transfer surface area. Because heat transfer at the surface is by conduction, the same quantity can be expressed in terms of the thermal conductivity "k": formula_4. These two terms are equal; thus formula_5. Rearranging, formula_6. Multiplying by a representative length "L" gives a dimensionless expression: formula_7. The right-hand side is now the ratio of the temperature gradient at the surface to the reference temperature gradient, while the left-hand side is similar to the Biot modulus. This becomes the ratio of conductive thermal resistance to the convective thermal resistance of the fluid, otherwise known as the Nusselt number, Nu. formula_8. Derivation. The Nusselt number may be obtained by a non-dimensional analysis of Fourier's law since it is equal to the dimensionless temperature gradient at the surface: formula_9, where "q" is the heat transfer rate, "k" is the constant thermal conductivity and "T" the fluid temperature. Indeed, if: formula_10 and formula_11 we arrive at formula_12 then we define formula_13 so the equation becomes formula_14 By integrating over the surface of the body: formula_15, where formula_16. Empirical correlations. 
Typically, for free convection, the average Nusselt number is expressed as a function of the Rayleigh number and the Prandtl number, written as: formula_17 Otherwise, for forced convection, the Nusselt number is generally a function of the Reynolds number and the Prandtl number, or formula_18. Correlations for a wide variety of geometries are available that express the Nusselt number in the aforementioned forms. See also Heat transfer coefficient#Convective_heat_transfer_correlations. Free convection. Free convection at a vertical wall. Cited as coming from Churchill and Chu: formula_19 Free convection from horizontal plates. If the characteristic length is defined as formula_20, where formula_21 is the surface area of the plate and formula_22 is its perimeter, then for the top surface of a hot object in a colder environment or the bottom surface of a cold object in a hotter environment formula_23 formula_24 And for the bottom surface of a hot object in a colder environment or the top surface of a cold object in a hotter environment formula_25 Free convection from enclosure heated from below. Cited as coming from Bejan: formula_26 This equation "holds when the horizontal layer is sufficiently wide so that the effect of the short vertical sides is minimal." It was empirically determined by Globe and Dropkin in 1959: "Tests were made in cylindrical containers having copper tops and bottoms and insulating walls." The containers used were around 5" in diameter and 2" high. Flat plate in laminar flow. The local Nusselt number for laminar flow over a flat plate, at a distance formula_27 downstream from the edge of the plate, is given by formula_28 The average Nusselt number for laminar flow over a flat plate, from the edge of the plate to a downstream distance formula_27, is given by formula_29 Sphere in convective flow. In some applications, such as the evaporation of spherical liquid droplets in air, the following correlation is used: formula_30 Forced convection in turbulent pipe flow. Gnielinski correlation. Gnielinski's correlation for turbulent flow in tubes: formula_31 where f is the Darcy friction factor that can either be obtained from the Moody chart or, for smooth tubes, from the correlation developed by Petukhov: formula_32 The Gnielinski correlation is valid for: formula_33 formula_34 Dittus–Boelter equation. The Dittus–Boelter equation (for turbulent flow) as introduced by W.H. McAdams is an explicit function for calculating the Nusselt number. It is easy to solve but is less accurate when there is a large temperature difference across the fluid. It is tailored to smooth tubes, so use for rough tubes (most commercial applications) is cautioned. The Dittus–Boelter equation is: formula_35 where: formula_36 is the inside diameter of the circular duct, formula_37 is the Prandtl number, and formula_38 for the fluid being heated and formula_39 for the fluid being cooled. The Dittus–Boelter equation is valid for formula_40 formula_41 formula_42 The Dittus–Boelter equation is a good approximation where temperature differences between bulk fluid and heat transfer surface are minimal, avoiding equation complexity and iterative solving. Taking water with a bulk fluid average temperature of , viscosity and a heat transfer surface temperature of (viscosity ), a viscosity correction factor for formula_43 can be obtained as 1.45. 
This increases to 3.57 with a heat transfer surface temperature of (viscosity ), making a significant difference to the Nusselt number and the heat transfer coefficient. Sieder–Tate correlation. The Sieder–Tate correlation for turbulent flow is an implicit function, as it analyzes the system as a nonlinear boundary value problem. The Sieder–Tate result can be more accurate as it takes into account the change in viscosity (formula_44 and formula_45) due to temperature change between the bulk fluid average temperature and the heat transfer surface temperature, respectively. The Sieder–Tate correlation is normally solved by an iterative process, as the viscosity factor will change as the Nusselt number changes. formula_46 where: formula_44 is the fluid viscosity at the bulk fluid temperature and formula_45 is the fluid viscosity at the heat-transfer boundary surface temperature. The Sieder–Tate correlation is valid for formula_47 formula_48 formula_42 Forced convection in fully developed laminar pipe flow. For fully developed internal laminar flow, the Nusselt numbers tend towards a constant value for long pipes. For internal flow: formula_49 where: "Dh" is the hydraulic diameter, "kf" is the thermal conductivity of the fluid, and "h" is the convective heat transfer coefficient. Convection with uniform temperature for circular tubes. From Incropera & DeWitt, formula_50 OEIS sequence gives this value as formula_51. Convection with uniform heat flux for circular tubes. For the case of constant surface heat flux, formula_52 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
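As a rough numerical sketch of the turbulent pipe-flow correlations quoted above, the following Python code evaluates the Dittus–Boelter and Gnielinski expressions at an arbitrary operating point chosen inside their stated validity ranges; it is an illustration only, not a design calculation.

```python
import math

def nu_dittus_boelter(re, pr, heating=True):
    """Dittus-Boelter correlation for turbulent flow in smooth tubes."""
    n = 0.4 if heating else 0.3
    return 0.023 * re**0.8 * pr**n

def nu_gnielinski(re, pr):
    """Gnielinski correlation, using Petukhov's friction factor for smooth tubes."""
    f = (0.79 * math.log(re) - 1.64) ** -2
    return (f / 8) * (re - 1000) * pr / (1 + 12.7 * math.sqrt(f / 8) * (pr ** (2 / 3) - 1))

re, pr = 5.0e4, 4.0  # arbitrary illustrative operating point
print(round(nu_dittus_boelter(re, pr)))  # ~230
print(round(nu_gnielinski(re, pr)))      # ~258
```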
[ { "math_id": 0, "text": "\\mathrm{Nu}_L = \\frac{\\mbox{Total heat transfer }}{\\mbox{Conductive heat transfer }} = \\frac{h}{k/L} = \\frac{hL}{k}" }, { "math_id": 1, "text": "\\mathrm{Nu}_x = \\frac{h_x x}{k}" }, { "math_id": 2, "text": "\\overline{\\mathrm{Nu}}=\\frac{\\frac{1}{L} \\int_0^L h_x\\ dx\\ L}{k}=\\frac{\\overline{h} L}{k}" }, { "math_id": 3, "text": "Q_y=hA\\left( T_s-T_\\infty \\right)" }, { "math_id": 4, "text": "Q_y=-kA\\frac{\\partial }{\\partial y}\\left. \\left( T-T_s \\right) \\right|_{y=0}" }, { "math_id": 5, "text": "-kA\\frac{\\partial }{\\partial y}\\left. \\left( T-T_s \\right) \\right|_{y=0}=hA\\left( T_s-T_\\infty \\right)" }, { "math_id": 6, "text": "\\frac{h}{k}=\\frac{\\left. \\frac{\\partial \\left( T_s-T \\right)}{\\partial y} \\right|_{y=0}}{\\left( T_s-T_\\infty \\right)}" }, { "math_id": 7, "text": "\\frac{hL}{k}=\\frac{\\left. \\frac{\\partial \\left( T_s-T \\right)}{\\partial y} \\right|_{y=0}}{\\frac{\\left( T_s-T_\\infty \\right)}{L}}" }, { "math_id": 8, "text": "\\mathrm{Nu} = \\frac{h}{k/L} = \\frac{hL}{k}" }, { "math_id": 9, "text": "q = -k A \\nabla T" }, { "math_id": 10, "text": "\\nabla' = L \\nabla " }, { "math_id": 11, "text": "T' = \\frac{T-T_h}{T_h-T_c}" }, { "math_id": 12, "text": "-\\nabla'T' = \\frac{L}{kA(T_h-T_c)}q=\\frac{hL}{k}" }, { "math_id": 13, "text": "\\mathrm{Nu}_L=\\frac{hL}{k}" }, { "math_id": 14, "text": "\\mathrm{Nu}_L=-\\nabla'T'" }, { "math_id": 15, "text": "\\overline{\\mathrm{Nu}}=-{{1} \\over {S'}} \\int_{S'}^{} \\mathrm{Nu} \\, \\mathrm{d}S'\\!" }, { "math_id": 16, "text": "S' = \\frac{S}{L^2}" }, { "math_id": 17, "text": "\\mathrm{Nu} = f(\\mathrm{Ra}, \\mathrm{Pr})" }, { "math_id": 18, "text": "\\mathrm{Nu} = f(\\mathrm{Re}, \\mathrm{Pr})" }, { "math_id": 19, "text": "\\overline{\\mathrm{Nu}}_L \\ = 0.68 + \\frac{0.663\\, \\mathrm{Ra}_L^{1/4}}{\\left[1 + (0.492/\\mathrm{Pr})^{9/16} \\, \\right]^{4/9} \\,} \\quad \\mathrm{Ra}_L \\le 10^8 " }, { "math_id": 20, "text": "L \\ = \\frac{A_s}{P}" }, { "math_id": 21, "text": "\\mathrm{A}_s" }, { "math_id": 22, "text": "P" }, { "math_id": 23, "text": "\\overline{\\mathrm{Nu}}_L \\ = 0.54\\, \\mathrm{Ra}_L^{1/4} \\, \\quad 10^4 \\le \\mathrm{Ra}_L \\le 10^7" }, { "math_id": 24, "text": "\\overline{\\mathrm{Nu}}_L \\ = 0.15\\, \\mathrm{Ra}_L^{1/3} \\, \\quad 10^7 \\le \\mathrm{Ra}_L \\le 10^{11}" }, { "math_id": 25, "text": "\\overline{\\mathrm{Nu}}_L \\ = 0.52\\, \\mathrm{Ra}_L^{1/5} \\, \\quad 10^5 \\le \\mathrm{Ra}_L \\le 10^{10}" }, { "math_id": 26, "text": "\\overline{\\mathrm{Nu}}_L \\ = 0.069\\, \\mathrm{Ra}_L^{1/3}Pr^{0.074} \\, \\quad 3 * 10^5 \\le \\mathrm{Ra}_L \\le 7 * 10^{9}" }, { "math_id": 27, "text": "x" }, { "math_id": 28, "text": "\\mathrm{Nu}_x\\ = 0.332\\, \\mathrm{Re}_x^{1/2}\\, \\mathrm{Pr}^{1/3}, (\\mathrm{Pr} > 0.6) " }, { "math_id": 29, "text": "\\overline{\\mathrm{Nu}}_x \\ = {2} \\cdot 0.332\\, \\mathrm{Re}_x^{1/2}\\, \\mathrm{Pr}^{1/3}\\ = 0.664\\, \\mathrm{Re}_x^{1/2}\\, \\mathrm{Pr}^{1/3}, (\\mathrm{Pr} > 0.6) " }, { "math_id": 30, "text": "\\mathrm{Nu}_D \\ = {2} + 0.4\\, \\mathrm{Re}_D^{1/2}\\, \\mathrm{Pr}^{1/3}\\, " }, { "math_id": 31, "text": "\\mathrm{Nu}_D = \\frac{ \\left( f/8 \\right) \\left( \\mathrm{Re}_D - 1000 \\right) \\mathrm{Pr} } {1 + 12.7(f/8)^{1/2} \\left( \\mathrm{Pr}^{2/3} - 1 \\right)}" }, { "math_id": 32, "text": "f= \\left( 0.79 \\ln \\left(\\mathrm{Re}_D \\right)-1.64 \\right)^{-2}" }, { "math_id": 33, "text": "0.5 \\le \\mathrm{Pr} \\le 2000" }, { "math_id": 34, "text": "3000 \\le \\mathrm{Re}_D \\le 5 \\times 10^{6}" 
}, { "math_id": 35, "text": "\\mathrm{Nu}_D = 0.023\\, \\mathrm{Re}_D^{4/5}\\, \\mathrm{Pr}^{n}" }, { "math_id": 36, "text": "D" }, { "math_id": 37, "text": "\\mathrm{Pr}" }, { "math_id": 38, "text": "n = 0.4" }, { "math_id": 39, "text": "n = 0.3" }, { "math_id": 40, "text": "0.6 \\le \\mathrm{Pr} \\le 160" }, { "math_id": 41, "text": "\\mathrm{Re}_D \\gtrsim 10\\,000" }, { "math_id": 42, "text": "\\frac{L}{D} \\gtrsim 10" }, { "math_id": 43, "text": "({\\mu} / {\\mu_s})" }, { "math_id": 44, "text": "\\mu" }, { "math_id": 45, "text": "\\mu_s" }, { "math_id": 46, "text": "\\mathrm{Nu}_D = 0.027\\,\\mathrm{Re}_D^{4/5}\\, \\mathrm{Pr}^{1/3}\\left(\\frac{\\mu}{\\mu_s}\\right)^{0.14}" }, { "math_id": 47, "text": "0.7 \\le \\mathrm{Pr} \\le 16\\,700" }, { "math_id": 48, "text": "\\mathrm{Re}_D \\ge 10\\,000" }, { "math_id": 49, "text": "\\mathrm{Nu} = \\frac{h D_h}{k_f}" }, { "math_id": 50, "text": "\\mathrm{Nu}_D = 3.66" }, { "math_id": 51, "text": "\\mathrm{Nu}_D = 3.6567934577632923619..." }, { "math_id": 52, "text": "\\mathrm{Nu}_D = 4.36" } ]
https://en.wikipedia.org/wiki?curid=58285
58287
Grashof number
Dimensionless quantity; ratio of a fluid's buoyancy to viscosity In fluid mechanics (especially fluid thermodynamics), the Grashof number (Gr, after Franz Grashof) is a dimensionless number which approximates the ratio of the buoyancy to viscous forces acting on a fluid. It frequently arises in the study of situations involving natural convection and is analogous to the Reynolds number (Re). Definition. Heat transfer. Free convection is caused by a change in density of a fluid due to a temperature change or gradient. Usually the density decreases due to an increase in temperature and causes the fluid to rise. This motion is caused by the buoyancy force. The major force that resists the motion is the viscous force. The Grashof number is a way to quantify the opposing forces. The Grashof number is: formula_0 for vertical flat plates formula_1 for pipes and bluff bodies where: g is the gravitational acceleration, β is the coefficient of volumetric thermal expansion, Ts is the surface temperature, T∞ is the bulk temperature, L is the vertical length, D is the diameter, and ν is the kinematic viscosity. The L and D subscripts indicate the length scale basis for the Grashof number. The transition to turbulent flow occurs in the range 10^8 < Gr"L" < 10^9 for natural convection from vertical flat plates. At higher Grashof numbers, the boundary layer is turbulent; at lower Grashof numbers, the boundary layer is laminar, that is, in the range 10^3 < Gr"L" < 10^6. Mass transfer. There is an analogous form of the Grashof number used in cases of natural convection mass transfer problems. In the case of mass transfer, natural convection is caused by concentration gradients rather than temperature gradients. formula_2 where formula_3, with Ca,s the concentration of species a at the surface, Ca,a the concentration of species a in the ambient fluid, and the remaining quantities as defined for the heat transfer case. Relationship to other dimensionless numbers. The Rayleigh number, shown below, is a dimensionless number that characterizes convection problems in heat transfer. A critical value exists for the Rayleigh number, above which fluid motion occurs. formula_4 The ratio of the Grashof number to the square of the Reynolds number may be used to determine if forced or free convection may be neglected for a system, or if there's a combination of the two. This characteristic ratio is known as the Richardson number (Ri). If the ratio is much less than one, then free convection may be ignored. If the ratio is much greater than one, forced convection may be ignored. Otherwise, the regime is combined forced and free convection. formula_5 formula_6 formula_7 Derivation. The first step to deriving the Grashof number is manipulating the volume expansion coefficient, formula_8 as follows. formula_9 The formula_10 in the equation above, which represents specific volume, is not the same as the formula_10 in the subsequent sections of this derivation, which will represent a velocity. This partial relation of the volume expansion coefficient, formula_8, with respect to fluid density, formula_11, given constant pressure, can be rewritten as formula_12 where: formula_13 is the bulk fluid density, formula_14 is the boundary layer density, and formula_15 is the temperature difference between the boundary layer and the bulk fluid. There are two different ways to find the Grashof number from this point. One involves the energy equation while the other incorporates the buoyant force due to the difference in density between the boundary layer and bulk fluid. Energy equation. This discussion involving the energy equation is with respect to rotationally symmetric flow. This analysis will take into consideration the effect of gravitational acceleration on flow and heat transfer. The mathematical equations to follow apply both to rotationally symmetric flow as well as two-dimensional planar flow. formula_16 where: formula_17 is the distance along the surface, formula_18 is the velocity component in that direction, formula_19 is the distance normal to the surface, formula_20 is the radius, and formula_22 is the gravitational acceleration. In this equation the superscript n is to differentiate rotationally symmetric flow from planar flow. The following characteristics of this equation hold true. 
This equation expands to the following with the addition of physical fluid properties: formula_23 From here we can further simplify the momentum equation by setting the bulk fluid velocity to 0 (formula_24). formula_25 This relation shows that the pressure gradient is simply a product of the bulk fluid density and the gravitational acceleration. The next step is to plug the pressure gradient into the momentum equation. formula_26 where the volume expansion coefficient to density relationship formula_27 found above and the kinematic viscosity relationship formula_28 were substituted into the momentum equation. formula_29 To find the Grashof number from this point, the preceding equation must be non-dimensionalized. This means that every variable in the equation should have no dimension and should instead be a ratio characteristic to the geometry and setup of the problem. This is done by dividing each variable by corresponding constant quantities. Lengths are divided by a characteristic length, formula_30. Velocities are divided by appropriate reference velocities, formula_31, which, considering the Reynolds number, gives formula_32. Temperatures are divided by the appropriate temperature difference, formula_33. These dimensionless parameters look like the following: formula_34, formula_35, formula_36, formula_37, and formula_38. The asterisks represent dimensionless parameters. Combining these dimensionless equations with the momentum equations gives the following simplified equation. formula_39 formula_40 where: formula_41 is the surface temperature, formula_42 is the bulk fluid temperature, and formula_30 is the characteristic length. The dimensionless parameter enclosed in the brackets in the preceding equation is known as the Grashof number: formula_43 Buckingham π theorem. Another form of dimensional analysis that will result in the Grashof number is known as the Buckingham π theorem. This method takes into account the buoyancy force per unit volume, formula_44 due to the density difference in the boundary layer and the bulk fluid. formula_45 This equation can be manipulated to give formula_46 The list of variables that are used in the Buckingham π method is listed below, along with their symbols and dimensions. With reference to the Buckingham π theorem there are 9 − 5 = 4 dimensionless groups. Choose L, formula_48 k, g and formula_47 as the reference variables. Thus the formula_49 groups are as follows: formula_50, formula_51, formula_52, formula_53. Solving these formula_54 groups gives: formula_55, formula_56, formula_57, formula_58. From the two groups formula_59 and formula_60, the product forms the Grashof number: formula_61 Taking formula_28 and formula_62, the preceding equation can be rendered as the same result as deriving the Grashof number from the energy equation. formula_63 In forced convection the Reynolds number governs the fluid flow. But in natural convection, the Grashof number is the dimensionless parameter that governs the fluid flow. Using the energy equation and the buoyant force combined with dimensional analysis provides two different ways to derive the Grashof number. Physical Reasoning. It is also possible to derive the Grashof number by a physical definition of the number as follows: formula_64 However, the above expression, especially the final part on the right-hand side, is slightly different from the Grashof number appearing in the literature. The following dimensionally correct scale in terms of dynamic viscosity can be used to arrive at the final form. 
formula_65 Writing the above scale into Gr gives: formula_66 Physical reasoning is helpful to grasp the meaning of the number. On the other hand, the following velocity definition can be used as a characteristic velocity value for making certain velocities nondimensional: formula_67 Effects of Grashof number on the flow of different fluids. Recent research has examined the effects of the Grashof number on the flow of different fluids driven by convection over various surfaces. Using the slope of the linear regression line through the data points, it is concluded that an increase in the value of the Grashof number or of any buoyancy-related parameter implies an increase in the wall temperature; this weakens the bonds within the fluid, decreases the strength of the internal friction, and makes gravity strong enough to make the specific weight appreciably different between the fluid layers immediately adjacent to the wall. The effects of the buoyancy parameter are highly significant in the laminar flow within the boundary layer formed on a vertically moving cylinder. This is only achievable when the prescribed surface temperature (PST) and prescribed wall heat flux (WHF) are considered. It can be concluded that the buoyancy parameter has a negligible positive effect on the local Nusselt number; this is only true when the magnitude of the Prandtl number is small or the prescribed wall heat flux (WHF) is considered. The Sherwood number, Bejan number, entropy generation, Stanton number and pressure gradient are increasing functions of the buoyancy-related parameter, while the concentration profiles, frictional force, and motile microorganism profiles are decreasing functions. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
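A short numerical sketch of the vertical-plate definition given at the start of this article, together with the related Rayleigh and Richardson numbers; the air-like property values and the assumed Reynolds number are illustrative only.

```python
def grashof(g, beta, t_s, t_inf, length, nu):
    """Grashof number g * beta * (Ts - Tinf) * L^3 / nu^2 for a vertical flat plate."""
    return g * beta * (t_s - t_inf) * length**3 / nu**2

# Illustrative values, roughly representative of air near room temperature
g = 9.81          # gravitational acceleration, m/s^2
beta = 1 / 300.0  # volumetric expansion coefficient, 1/K (ideal-gas estimate at ~300 K)
nu = 1.5e-5       # kinematic viscosity, m^2/s (approximate)
pr = 0.71         # Prandtl number of air (approximate)

gr = grashof(g, beta, t_s=320.0, t_inf=300.0, length=0.5, nu=nu)
ra = gr * pr            # Rayleigh number
ri = gr / (1.0e4) ** 2  # Richardson number for an assumed Re = 1e4
print(f"Gr = {gr:.2e}, Ra = {ra:.2e}, Ri = {ri:.2f}")  # Gr ~ 3.6e8, near the transition range
```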
[ { "math_id": 0, "text": " \\mathrm{Gr}_L = \\frac{g \\beta (T_s - T_\\infty ) L^3}{\\nu ^2}\\, " }, { "math_id": 1, "text": " \\mathrm{Gr}_D = \\frac{g \\beta (T_s - T_\\infty ) D^3}{\\nu ^2}\\, " }, { "math_id": 2, "text": " \\mathrm{Gr}_c = \\frac{g \\beta^* (C_{a,s} - C_{a,a} ) L^3}{\\nu^2}" }, { "math_id": 3, "text": " \\beta^* = -\\frac{1}{\\rho} \\left ( \\frac{\\partial \\rho}{\\partial C_a} \\right )_{T,p}" }, { "math_id": 4, "text": "\\mathrm{Ra}_{x} = \\mathrm{Gr}_{x}\\mathrm{Pr}" }, { "math_id": 5, "text": "\\mathrm{Ri} = \\frac{\\mathrm{Gr}}{\\mathrm{Re}^2} \\gg 1 \\implies \\text{ignore forced convection}" }, { "math_id": 6, "text": "\\mathrm{Ri} = \\frac{\\mathrm{Gr}}{\\mathrm{Re}^2} \\approx 1 \\implies \\text{combined forced and free convection}" }, { "math_id": 7, "text": "\\mathrm{Ri} = \\frac{\\mathrm{Gr}}{\\mathrm{Re}^2} \\ll 1 \\implies \\text{ignore free convection}" }, { "math_id": 8, "text": "\\mathrm{\\beta}" }, { "math_id": 9, "text": "\\beta = \\frac{1}{v}\\left(\\frac{\\partial v}{\\partial T}\\right)_p =\\frac{-1}{\\rho}\\left(\\frac{\\partial\\rho}{\\partial T}\\right)_p" }, { "math_id": 10, "text": "v" }, { "math_id": 11, "text": "\\mathrm{\\rho}" }, { "math_id": 12, "text": "\\rho=\\rho_0 (1-\\beta \\Delta T)" }, { "math_id": 13, "text": "\\rho_0" }, { "math_id": 14, "text": "\\rho" }, { "math_id": 15, "text": "\\Delta T = (T - T_0)" }, { "math_id": 16, "text": "\\frac{\\partial}{\\partial s}(\\rho u r_0^{n})+{\\frac{\\partial}{\\partial y}}(\\rho v r_0^{n})=0" }, { "math_id": 17, "text": "s" }, { "math_id": 18, "text": "u" }, { "math_id": 19, "text": "y" }, { "math_id": 20, "text": "r_0" }, { "math_id": 21, "text": "n" }, { "math_id": 22, "text": "g" }, { "math_id": 23, "text": "\\rho\\left(u \\frac{\\partial u}{\\partial s} + v \\frac{\\partial u}{\\partial y}\\right) = \\frac{\\partial}{\\partial y}\\left(\\mu \\frac{\\partial u}{\\partial y}\\right) - \\frac{d p}{d s} + \\rho g." 
}, { "math_id": 24, "text": "u = 0" }, { "math_id": 25, "text": "\\frac{d p}{d s}=\\rho_0 g" }, { "math_id": 26, "text": "\\left(u \\frac{\\partial u}{\\partial s}+v \\frac{\\partial u}{\\partial y}\\right)=\\nu \\left(\\frac{\\partial^2 u}{\\partial y^2}\\right)+g\\frac{\\rho-\\rho_0}{\\rho}=\\nu \\left(\\frac{\\partial^2 u}{\\partial y^2}\\right)-\\frac{\\rho_0}{\\rho} g \\beta (T-T_0)" }, { "math_id": 27, "text": "\\rho-\\rho_0 = - \\rho_0 \\beta (T - T_0)" }, { "math_id": 28, "text": "\\nu = \\frac{\\mu}{\\rho}" }, { "math_id": 29, "text": " u\\left(\\frac{\\partial u}{\\partial s}\\right)+v \\left(\\frac{\\partial v}{\\partial y}\\right)=\\nu \\left(\\frac{\\partial^2 u}{\\partial y^2}\\right)-\\frac{\\rho_0}{\\rho}g \\beta(T - T_0)" }, { "math_id": 30, "text": "L_c" }, { "math_id": 31, "text": "V" }, { "math_id": 32, "text": "V=\\frac{\\mathrm{Re}_L \\nu}{L_c}" }, { "math_id": 33, "text": "(T_s - T_0)" }, { "math_id": 34, "text": "s^*=\\frac{s}{L_c}" }, { "math_id": 35, "text": "y^* =\\frac{y}{L_c}" }, { "math_id": 36, "text": "u^*=\\frac{u}{V}" }, { "math_id": 37, "text": "v^* = \\frac{v}{V}" }, { "math_id": 38, "text": "T^*=\\frac{(T-T_0)}{(T_s - T_0)}" }, { "math_id": 39, "text": "u^* \\frac{\\partial u^*}{\\partial s^*} + v^* \\frac{\\partial u^*}{\\partial y^*} =-\\left[ \\frac{\\rho_0 g \\beta(T_s - T_0)L_c^{3}}{\\rho\\nu^2 \\mathrm{Re}_L^{2}} \\right] T^*+\\frac{1}{\\mathrm{Re}_L} \\frac{\\partial^2 u^*}{\\partial {y^*}^2} " }, { "math_id": 40, "text": "=-\\left(\\frac{\\rho_0}{\\rho}\\right)\\left[\\frac{g \\beta(T_s - T_0)L_c^{3}}{\\nu^2} \\right] \\frac{T^*}{\\mathrm{Re}_L^{2}}+\\frac{1}{\\mathrm{Re}_L} \\frac{\\partial^2 u^*}{\\partial {y^*}^2}" }, { "math_id": 41, "text": "T_s" }, { "math_id": 42, "text": "T_0" }, { "math_id": 43, "text": "\\mathrm{Gr}=\\frac{g \\beta(T_s-T_0)L_c^{3}}{\\nu^2}." }, { "math_id": 44, "text": "F_b" }, { "math_id": 45, "text": "F_b = (\\rho - \\rho_0) g" }, { "math_id": 46, "text": "F_b = -\\beta g \\rho_0 \\Delta T." }, { "math_id": 47, "text": "\\beta" }, { "math_id": 48, "text": "\\mu," }, { "math_id": 49, "text": "\\pi" }, { "math_id": 50, "text": "\\pi_1 = L^a \\mu^b k^c \\beta^d g^e c_p" }, { "math_id": 51, "text": "\\pi_2 = L^f \\mu^g k^h \\beta^i g^j \\rho" }, { "math_id": 52, "text": " \\pi_3 = L^k \\mu^l k^m \\beta^n g^o \\Delta T" }, { "math_id": 53, "text": " \\pi_4 = L^q \\mu^r k^s \\beta^t g^u h" }, { "math_id": 54, "text": "\\pi " }, { "math_id": 55, "text": " \\pi_1 = \\frac{\\mu(c_p)}{k} = \\mathrm{Pr}" }, { "math_id": 56, "text": " \\pi_2 =\\frac{l^3 g \\rho^2}{\\mu^2}" }, { "math_id": 57, "text": " \\pi_3 =\\beta \\Delta T" }, { "math_id": 58, "text": " \\pi_4 =\\frac{h L}{k} = \\mathrm{Nu}" }, { "math_id": 59, "text": "\\pi_2" }, { "math_id": 60, "text": "\\pi_3," }, { "math_id": 61, "text": "\\pi_2 \\pi_3=\\frac{\\beta g \\rho^2 \\Delta T L^3}{\\mu^2} = \\mathrm{Gr}." }, { "math_id": 62, "text": "\\Delta T = (T_s - T_0)" }, { "math_id": 63, "text": "\\mathrm{Gr} = \\frac{\\beta g \\Delta T L^3}{\\nu^2}" }, { "math_id": 64, "text": "\\mathrm{Gr} = \\frac{\\mathrm{Buoyancy~Force}}{\\mathrm{Friction~Force}} \n=\\frac{mg}{\\tau A}=\\frac{L^3 \\rho \\beta (\\Delta T) g }{\\mu (V/L) L^2}\n=\\frac{L^2 \\beta (\\Delta T) g}{\\nu V}" }, { "math_id": 65, "text": "\\mathrm{\\mu} = \\rho V L" }, { "math_id": 66, "text": "\\mathrm{Gr} = \\frac{L^3 \\beta (\\Delta T) g}{\\nu^2 }" }, { "math_id": 67, "text": "\\mathrm{V} \n=\\frac{L^2 \\beta (\\Delta T) g}{\\nu Gr}" } ]
https://en.wikipedia.org/wiki?curid=58287
582887
6 Hebe
Large main-belt asteroid 6 Hebe is a large main-belt asteroid, containing around 0.5% of the mass of the belt. However, due to its apparently high bulk density (greater than that of the Moon), Hebe does not rank among the top twenty asteroids by volume. This high bulk density suggests an extremely solid body that has not been impacted by collisions, which is not typical of asteroids of its size – they tend to be loosely-bound rubble piles. In brightness, Hebe is the fifth-brightest object in the asteroid belt after Vesta, Ceres, Iris, and Pallas. It has a mean opposition magnitude of +8.3, about equal to the mean brightness of Saturn's moon Titan, and can reach +7.5 at an opposition near perihelion. Hebe may be the parent body of the H chondrite meteorites, which account for about 40% of all meteorites striking Earth. History. Hebe was discovered on 1 July 1847 by Karl Ludwig Hencke; it was the sixth asteroid to be discovered. It was the second and final asteroid discovery by Hencke, after 5 Astraea. The name "Hebe", goddess of youth, was proposed by Carl Friedrich Gauss at Hencke's request. Gauss chose a wineglass as its symbol. It is in the pipeline for Unicode 17.0 as U+1CEC0 𜻀. Potential as major meteorite source. Hebe was once thought to be the probable parent body of the H chondrite meteorites and the IIE iron meteorites. This would imply that it is the source of about 40% of all meteorites striking Earth. Evidence for this connection includes the following: However, observations by the VLT in 2017 indicate that the depressions caused by impacts on 6 Hebe are only 20% the volume of the nearby H-chondrite asteroid families, suggesting that Hebe is not the most likely or primary source of H-chondrite meteorites. Physical characteristics. Lightcurve analysis suggests that Hebe has a rather angular shape, which may be due to several large impact craters. Hebe rotates in a prograde direction, with the north pole pointing towards ecliptic coordinates (β, λ) = (45°, 339°) with a 10° uncertainty. This gives an axial tilt of 42°. It has a bright surface and, if its identification as the parent body of the H chondrites is correct, a surface composition of silicate chondritic rocks mixed with pieces of iron–nickel. A likely scenario for the formation of the surface metal is as follows: Orbit. On 5 March 1977 Hebe occulted Kaffaljidhma (γ Ceti), a moderately bright 3rd-magnitude star. Between 1977 and 2021, 6 Hebe has been observed to occult fourteen stars. Possible moon. As a result of the aforementioned 1977 occultation, a small moon around Hebe was reported by Paul D. Maley. It was nicknamed "Jebe" (see heebie-jeebies). This was the first modern-day suggestion that asteroids have satellites. It was 17 years later when the first asteroid moon was formally discovered (Dactyl, the satellite of 243 Ida). The discovery of Hebe's moon was never confirmed. Detailed observations by many telescopes including the Very Large Telescope and the Hubble Space Telescope have consistently failed to detect any satellites around the asteroid, casting doubt on such a moon existing. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\nu_6\\,\\!" } ]
https://en.wikipedia.org/wiki?curid=582887
5828941
Rigid analytic space
An analogue of a complex analytic space over a nonarchimedean field &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; Tate m’a écrit de son côté sur ses histoires de courbes elliptiques, et pour me demander si j’avais des idées sur une définition globale des variétés analytiques sur des corps complets. Je dois avouer que je n’ai pas du tout compris pourquoi ses résultats suggéreraient l’existence d’une telle définition, et suis encore sceptique. Alexander Grothendieck in a 1959 August 18 letter to Jean-Pierre Serre, expressing skepticism about the existence of John Tate's theory of global analytic varieties over complete fields In mathematics, a rigid analytic space is an analogue of a complex analytic space over a nonarchimedean field. Such spaces were introduced by John Tate in 1962, as an outgrowth of his work on uniformizing "p"-adic elliptic curves with bad reduction using the multiplicative group. In contrast to the classical theory of "p"-adic analytic manifolds, rigid analytic spaces admit meaningful notions of analytic continuation and connectedness. Definitions. The basic rigid analytic object is the "n"-dimensional unit polydisc, whose ring of functions is the Tate algebra formula_0, made of power series in "n" variables whose coefficients approach zero in some complete nonarchimedean field "k". The Tate algebra is the completion of the polynomial ring in "n" variables under the Gauss norm (taking the supremum of coefficients), and the polydisc plays a role analogous to that of affine "n"-space in algebraic geometry. Points on the polydisc are defined to be maximal ideals in the Tate algebra, and if "k" is algebraically closed, these correspond to points in formula_1 whose coordinates have norm at most one. An affinoid algebra is a "k"-Banach algebra that is isomorphic to a quotient of the Tate algebra by an ideal. An affinoid is then the subset of the unit polydisc on which the elements of this ideal vanish, i.e., it is the set of maximal ideals containing the ideal in question. The topology on affinoids is subtle, using notions of "affinoid subdomains" (which satisfy a universality property with respect to maps of affinoid algebras) and "admissible open sets" (which satisfy a finiteness condition for covers by affinoid subdomains). In fact, the admissible opens in an affinoid do not in general endow it with the structure of a topological space, but they do form a Grothendieck topology (called the "G"-topology), and this allows one to define good notions of sheaves and gluing of spaces. A rigid analytic space over "k" is a pair formula_2 describing a locally ringed "G"-topologized space with a sheaf of "k"-algebras, such that there is a covering by open subspaces isomorphic to affinoids. This is analogous to the notion of manifolds being coverable by open subsets isomorphic to euclidean space, or schemes being coverable by affines. Schemes over "k" can be analytified functorially, much like varieties over the complex numbers can be viewed as complex analytic spaces, and there is an analogous formal GAGA theorem. The analytification functor respects finite limits. Other formulations. Around 1970, Michel Raynaud provided an interpretation of certain rigid analytic spaces as formal models, i.e., as generic fibers of formal schemes over the valuation ring "R" of "k". 
In particular, he showed that the category of quasi-compact quasi-separated rigid spaces over "k" is equivalent to the localization of the category of quasi-compact admissible formal schemes over "R" with respect to admissible formal blow-ups. Here, a formal scheme is admissible if it is coverable by formal spectra of topologically finitely presented "R" algebras whose local rings are "R"-flat. Formal models suffer from a problem of uniqueness, since blow-ups allow more than one formal scheme to describe the same rigid space. Huber worked out a theory of "adic spaces" to resolve this, by taking a limit over all blow-ups. These spaces are quasi-compact, quasi-separated, and functorial in the rigid space, but lack a lot of nice topological properties. Vladimir Berkovich reformulated much of the theory of rigid analytic spaces in the late 1980s, using a generalization of the notion of Gelfand spectrum for commutative unital "C*"-algebras. The Berkovich spectrum of a Banach "k"-algebra "A" is the set of multiplicative semi-norms on "A" that are bounded with respect to the given norm on "k", and it has a topology induced by evaluating these semi-norms on elements of "A". Since the topology is pulled back from the real line, Berkovich spectra have many nice properties, such as compactness, path-connectedness, and metrizability. Many ring-theoretic properties are reflected in the topology of spectra, e.g., if "A" is Dedekind, then its spectrum is contractible. However, even very basic spaces tend to be unwieldy – the projective line over C"p" is a compactification of the inductive limit of affine Bruhat–Tits buildings for "PGL"2("F"), as "F" varies over finite extensions of Q"p", when the buildings are given a suitably coarse topology.
[ { "math_id": 0, "text": "T_n" }, { "math_id": 1, "text": "k^n" }, { "math_id": 2, "text": "(X, \\mathcal{O}_X)" } ]
https://en.wikipedia.org/wiki?curid=5828941
58299531
Edward B. Saff
American mathematician Edward Barry Saff (born 2 January 1944 in New York City) is an American mathematician, specializing in complex analysis, approximation theory, numerical analysis, and potential theory. Education and career. Saff received in 1964 his bachelor's degree from the Georgia Institute of Technology and in 1968 his PhD from the University of Maryland, College Park under Joseph L. Walsh with thesis "Interpolation and Functions of Class H (k, a, 2)". As a postdoc he was a Fulbright Fellow at Imperial College London from 1968 to 1969. At the University of South Florida he was from 1969 to 1971 an assistant professor, from 1971 to 1976 an associate professor, from 1976 to 1986 a full professor, and from 1986 to 2001 a distinguished research professor. At Vanderbilt University he is, since 2001, a professor and director of the Center for Constructive Approximation and was from 2004 to 2007 the Executive Dean of the College of Arts and Sciences. His research deals with approximation of complex functions by polynomials and rational functions, approximate solutions of differential equations, Padé approximants, geometry of polynomials, special functions, Hardy spaces, conformal mappings (including numerical analysis), and potential theory (minima of energy under boundary value constraints or external fields). He is the author or coauthor of over 240 research articles, the coauthor of 9 books, and the coeditor of 11 volumes. He was the coeditor, with Theodore J. Rivlin, of a volume of Joseph L. Walsh's "Selected Works" published in 2000. Since 2007 he has been an ISI Highly Cited Researcher. For the academic year 1978–1979 Saff was a Guggenheim Fellow at the University of Oxford. In 2012 he was elected a Fellow of the American Mathematical Society. He was elected in 2013 a Foreign Member of the Bulgarian Academy of Sciences and made in 1987 an Honorary Professor of the Zhejiang Normal University in China.
[ { "math_id": 0, "text": "R" } ]
https://en.wikipedia.org/wiki?curid=58299531
58301902
V-topology
In mathematics, especially in algebraic geometry, the v-topology (also known as the universally subtrusive topology) is a Grothendieck topology whose covers are characterized by lifting maps from valuation rings. This topology was introduced by and studied further by , who introduced the name "v"-topology, where "v" stands for valuation. Definition. A universally subtrusive map is a map "f": "X" → "Y" of quasi-compact, quasi-separated schemes such that for any map "v": Spec ("V") → "Y", where "V" is a valuation ring, there is an extension (of valuation rings) formula_0 and a map Spec "W" → "X" lifting "v". Examples. Examples of "v"-covers include faithfully flat maps and proper surjective maps. In particular, any Zariski covering is a "v"-covering. Moreover, universal homeomorphisms, such as formula_1, the normalisation of the cusp, and the Frobenius in positive characteristic, are "v"-coverings. In fact, the perfection formula_2 of a scheme is a v-covering. Voevodsky's h topology. See h-topology, relation to the v-topology Arc topology. have introduced the "arc"-topology, which is defined similarly, except that only valuation rings of rank ≤ 1 are considered. A variant of this topology, related to the arc-topology in the same way that the cdh topology is related to the h-topology and called the "cdarc"-topology, was later introduced by Elmanto, Hoyois, Iwasa and Kelly (2020). show that the Amitsur complex of an arc covering of perfect rings is an exact complex. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "V \\subset W" }, { "math_id": 1, "text": "X_{red} \\to X" }, { "math_id": 2, "text": "X_{perf} \\to X" } ]
https://en.wikipedia.org/wiki?curid=58301902
58317599
Yanosuke Otsuka
Japanese geologist and professor Yanosuke Otsuka () (11 July 1903 – 7 August 1950) was a Japanese geologist and professor. Yanosuke Otsuka was born in Nihonbashi, Tokyo on 11 July 1903. He went to the Junior High School attached to Tokyo Higher Normal School (), and after that to Shizuoka High School (). For his undergraduate studies, he entered the Department of Geology, Faculty of Science, Imperial University of Tokyo, where he graduated in 1929. While he was student, he learned the methods of historical geology from Yoshiaki Ozawa, topography from Taro Tsujimura (), and Cenozoic biological stratigraphy from Shigeyasu Tokunaga. After graduation, he entered the Earthquake Research Institute as an assistant in 1930, becoming an associate professor in 1939 and a professor in 1943. He made significant contributions in characterizing the surface faults in the circum-Pacific area, effects of tsunamis, tectonics of crustal movements, taxonomy of molluscs, paleoclimatology, mapping of Cenozoic strata, and the Tertiary history of the Japanese islands. He died prematurely from pulmonary tuberculosis on 7 August 1950, at the age of 47, because effective antimicrobial therapy was not yet available in Japan at the time. In biology, he is known for the Otsuka similarity coefficient (also known as Otsuka-Ochiai or Ochiai coefficient), which can be represented as: formula_0 Here, formula_1 and formula_2 are sets, and formula_3 is the number of elements in formula_1. If sets are represented as bit vectors, the Otsuka similarity coefficient can be seen to be the same as the cosine similarity. In a recent book, the Otsuka similarity coefficient is misattributed to another Japanese researcher with the family name Otsuka. The confusion arises because in 1957 Akira Ochiai attributes the coefficient only to Otsuka (no first name mentioned) by citing an article by Ikuso Hamai (), who in turn cites the original 1936 article by Yanosuke Otsuka. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
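As an illustrative sketch (not part of the original article; the example sets and names are made up), the coefficient can be computed directly from two finite sets, and the same value recovered as the cosine similarity of the corresponding bit vectors:

```python
# Otsuka-Ochiai coefficient K = |A ∩ B| / sqrt(|A| * |B|) for two finite sets.
import math

def otsuka_ochiai(a: set, b: set) -> float:
    if not a or not b:
        return 0.0
    return len(a & b) / math.sqrt(len(a) * len(b))

A = {"site1", "site2", "site3"}
B = {"site2", "site3", "site4", "site5"}
print(otsuka_ochiai(A, B))                   # 2 / sqrt(3 * 4) ≈ 0.577

# The same value as cosine similarity of the sets' bit-vector representations.
universe = sorted(A | B)
va = [1 if x in A else 0 for x in universe]
vb = [1 if x in B else 0 for x in universe]
dot = sum(p * q for p, q in zip(va, vb))
print(dot / math.sqrt(sum(va) * sum(vb)))    # ≈ 0.577, identical
```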
[ { "math_id": 0, "text": "K =\\frac{|A \\cap B|}{\\sqrt{|A| \\times |B|}}" }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "B" }, { "math_id": 3, "text": "|A|" } ]
https://en.wikipedia.org/wiki?curid=58317599
58320
Potential flow
Velocity field as the gradient of a scalar function In fluid dynamics, potential flow or irrotational flow refers to a description of a fluid flow with no vorticity in it. Such a description typically arises in the limit of vanishing viscosity, i.e., for an inviscid fluid and with no vorticity present in the flow. Potential flow describes the velocity field as the gradient of a scalar function: the velocity potential. As a result, a potential flow is characterized by an irrotational velocity field, which is a valid approximation for several applications. The irrotationality of a potential flow is due to the curl of the gradient of a scalar always being equal to zero. In the case of an incompressible flow the velocity potential satisfies Laplace's equation, and potential theory is applicable. However, potential flows also have been used to describe compressible flows and Hele-Shaw flows. The potential flow approach occurs in the modeling of both stationary as well as nonstationary flows. Applications of potential flow include: the outer flow field for aerofoils, water waves, electroosmotic flow, and groundwater flow. For flows (or parts thereof) with strong vorticity effects, the potential flow approximation is not applicable. In flow regions where vorticity is known to be important, such as wakes and boundary layers, potential flow theory is not able to provide reasonable predictions of the flow. Fortunately, there are often large regions of a flow where the assumption of irrotationality is valid, which is why potential flow is used for various applications. For instance in: flow around aircraft, groundwater flow, acoustics, water waves, and electroosmotic flow. Description and characteristics. In potential or irrotational flow, the vorticity vector field is zero, i.e., formula_0, where formula_1 is the velocity field and formula_2 is the vorticity field. Like any vector field having zero curl, the velocity field can be expressed as the gradient of a certain scalar, say formula_3, which is called the velocity potential, since the curl of the gradient is always zero. We therefore have formula_4 The velocity potential is not uniquely defined since one can add to it an arbitrary function of time, say formula_5, without affecting the relevant physical quantity which is formula_6. The non-uniqueness is usually removed by suitably selecting appropriate initial or boundary conditions satisfied by formula_7, and as such the procedure may vary from one problem to another. In potential flow, the circulation formula_8 around any simply-connected contour formula_9 is zero. This can be shown using the Stokes theorem, formula_10 where formula_11 is the line element on the contour and formula_12 is the area element of any surface bounded by the contour. In multiply-connected space (say, around a contour enclosing a solid body in two dimensions or around a contour enclosing a torus in three dimensions) or in the presence of concentrated vortices (say, in the so-called irrotational vortices or point vortices, or in smoke rings), the circulation formula_8 need not be zero. In the former case, Stokes theorem cannot be applied and in the latter case, formula_13 is non-zero within the region bounded by the contour. Around a contour encircling an infinitely long solid cylinder with which the contour loops formula_14 times, we have formula_15 where formula_16 is a cyclic constant. This example belongs to a doubly-connected space.
In an formula_17-tuply connected space, there are formula_18 such cyclic constants, namely, formula_19 Incompressible flow. In the case of an incompressible flow — for instance of a liquid, or a gas at low Mach numbers; but not for sound waves — the velocity v has zero divergence: formula_20 Substituting here formula_21 shows that formula_7 satisfies the Laplace equation formula_22 where ∇² = ∇ ⋅ ∇ is the Laplace operator (sometimes also written Δ). Since solutions of the Laplace equation are harmonic functions, every harmonic function represents a potential flow solution. As evident, in the incompressible case, the velocity field is determined completely from its kinematics: the assumptions of irrotationality and zero divergence of flow. Dynamics, in connection with the momentum equations, only have to be applied afterwards, if one is interested in computing the pressure field: for instance for flow around airfoils through the use of Bernoulli's principle. In incompressible flows, contrary to common misconception, the potential flow indeed satisfies the full Navier–Stokes equations, not just the Euler equations, because the viscous term formula_23 is identically zero. It is the inability of the potential flow to satisfy the required boundary conditions, especially near solid boundaries, that makes it invalid in representing the required flow field. If the potential flow satisfies the necessary conditions, then it is the required solution of the incompressible Navier–Stokes equations. In two dimensions, with the help of the harmonic function formula_7 and its conjugate harmonic function formula_24 (stream function), incompressible potential flow reduces to a very simple system that is analyzed using complex analysis (see below). Compressible flow. Steady flow. Potential flow theory can also be used to model irrotational compressible flow. The derivation of the governing equation for formula_7 from Euler's equation is quite straightforward. The continuity and the (potential flow) momentum equations for steady flows are given by formula_25 where the last equation follows from the fact that entropy is constant for a fluid particle and that the square of the sound speed is formula_26. Eliminating formula_27 from the two governing equations results in formula_28 The incompressible version emerges in the limit formula_29. Substituting here formula_21 results in formula_30 where formula_31 is expressed as a function of the velocity magnitude formula_32. For a polytropic gas, formula_33, where formula_34 is the specific heat ratio and formula_35 is the stagnation enthalpy. In two dimensions, the equation simplifies to formula_36 Validity: As it stands, the equation is valid for any inviscid potential flows, irrespective of whether the flow is subsonic or supersonic (e.g. Prandtl–Meyer flow). However, in supersonic and also in transonic flows, shock waves can occur, which can introduce entropy and vorticity into the flow, making the flow rotational. Nevertheless, there are two cases for which potential flow prevails even in the presence of shock waves, which are explained from the (not necessarily potential) momentum equation written in the following form formula_37 where formula_38 is the specific enthalpy, formula_13 is the vorticity field, formula_39 is the temperature and formula_40 is the specific entropy.
Since we have a potential flow in front of the leading shock wave, Bernoulli's equation shows that formula_41 is constant, which is also constant across the shock wave (Rankine–Hugoniot conditions) and therefore we can write formula_42 1) When the shock wave is of constant intensity, the entropy discontinuity across the shock wave is also constant, i.e., formula_43 and therefore vorticity production is zero. Shock waves at the pointed leading edge of a two-dimensional wedge or three-dimensional cone (Taylor–Maccoll flow) have constant intensity. 2) For weak shock waves, the entropy jump across the shock wave is a third-order quantity in terms of shock wave strength and therefore formula_44 can be neglected. Shock waves in slender bodies lie nearly parallel to the body and they are weak. Nearly parallel flows: When the flow is predominantly unidirectional with small deviations, such as in flow past slender bodies, the full equation can be further simplified. Let formula_45 be the mainstream and consider small deviations from this velocity field. The corresponding velocity potential can be written as formula_46 where formula_47 characterizes the small departure from the uniform flow and satisfies the linearized version of the full equation. This is given by formula_48 where formula_49 is the constant Mach number corresponding to the uniform flow. This equation is valid provided formula_50 is not close to unity. When formula_51 is small (transonic flow), we have the following nonlinear equation formula_52 where formula_53 is the critical value of the Landau derivative formula_54 and formula_55 is the specific volume. The transonic flow is completely characterized by the single parameter formula_53, which for a polytropic gas takes the value formula_56. Under hodograph transformation, the transonic equation in two dimensions becomes the Euler–Tricomi equation. Unsteady flow. The continuity and the (potential flow) momentum equations for unsteady flows are given by formula_57 The first integral of the (potential flow) momentum equation is given by formula_58 where formula_5 is an arbitrary function. Without loss of generality, we can set formula_59 since formula_7 is not uniquely defined. Combining these equations, we obtain formula_60 Substituting here formula_21 results in formula_61 Nearly parallel flows: As before, for nearly parallel flows, we can write (after introducing a rescaled time formula_62) formula_63 provided the constant Mach number formula_50 is not close to unity. When formula_51 is small (transonic flow), we have the following nonlinear equation formula_64 Sound waves: In sound waves, the velocity magnitude formula_65 (or the Mach number) is very small, although the unsteady term is now comparable to the other leading terms in the equation. Thus neglecting all quadratic and higher-order terms and noting that in the same approximation, formula_66 is a constant (for example, in a polytropic gas formula_67), we have formula_68 which is a linear wave equation for the velocity potential φ. Again the oscillatory part of the velocity vector v is related to the velocity potential by v = ∇"φ", while as before Δ is the Laplace operator, and c is the average speed of sound in the homogeneous medium. Note that also the oscillatory parts of the pressure p and density ρ each individually satisfy the wave equation, in this approximation. Applicability and limitations. Potential flow does not include all the characteristics of flows that are encountered in the real world.
Potential flow theory cannot be applied for viscous internal flows, except for flows between closely spaced plates. Richard Feynman considered potential flow to be so unphysical that the only fluid to obey the assumptions was "dry water" (quoting John von Neumann). Incompressible potential flow also makes a number of invalid predictions, such as d'Alembert's paradox, which states that the drag on any object moving through an infinite fluid otherwise at rest is zero. More precisely, potential flow cannot account for the behaviour of flows that include a boundary layer. Nevertheless, understanding potential flow is important in many branches of fluid mechanics. In particular, simple potential flows (called elementary flows) such as the free vortex and the point source possess ready analytical solutions. These solutions can be superposed to create more complex flows satisfying a variety of boundary conditions. These flows correspond closely to real-life flows over the whole of fluid mechanics; in addition, many valuable insights arise when considering the deviation (often slight) between an observed flow and the corresponding potential flow. Potential flow finds many applications in fields such as aircraft design. For instance, in computational fluid dynamics, one technique is to couple a potential flow solution outside the boundary layer to a solution of the boundary layer equations inside the boundary layer. The absence of boundary layer effects means that any streamline can be replaced by a solid boundary with no change in the flow field, a technique used in many aerodynamic design approaches. Another technique would be the use of Riabouchinsky solids. Analysis for two-dimensional incompressible flow. Potential flow in two dimensions is simple to analyze using conformal mapping, by the use of transformations of the complex plane. However, use of complex numbers is not required, as for example in the classical analysis of fluid flow past a cylinder. It is not possible to solve a potential flow using complex numbers in three dimensions. The basic idea is to use a holomorphic (also called analytic) or meromorphic function f, which maps the physical domain ("x", "y") to the transformed domain ("φ", "ψ"). While x, y, φ and ψ are all real valued, it is convenient to define the complex quantities formula_69 Now, if we write the mapping f as formula_70 then, because f is a holomorphic or meromorphic function, it has to satisfy the Cauchy–Riemann equations formula_71 The velocity components ("u", "v"), in the ("x", "y") directions respectively, can be obtained directly from f by differentiating with respect to z. That is formula_72 So the velocity field v = ("u", "v") is specified by formula_73 Both φ and ψ then satisfy Laplace's equation: formula_74 So φ can be identified as the velocity potential and ψ is called the stream function. Lines of constant ψ are known as streamlines and lines of constant φ are known as equipotential lines (see equipotential surface). Streamlines and equipotential lines are orthogonal to each other, since formula_75 Thus the flow occurs along the lines of constant ψ and at right angles to the lines of constant φ. Δ"ψ" = 0 is also satisfied, this relation being equivalent to ∇ × v = 0. So the flow is irrotational. The automatic condition then gives the incompressibility constraint ∇ · v = 0. Examples of two-dimensional incompressible flows. Any differentiable function may be used for f.
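A minimal numerical sketch (not part of the original article; the potential, constant and sample point below are illustrative assumptions) of how the velocity components follow from a complex potential via "u" − "iv" = d"f"/d"z":

```python
# For an analytic complex potential f(z), df/dz = u - i*v gives the velocity.
A = 1.0
f = lambda z: A * z**2                      # example potential w = A z^2

def velocity(z, h=1e-6):
    """Approximate df/dz with a central difference and return (u, v)."""
    dfdz = (f(z + h) - f(z - h)) / (2 * h)
    return dfdz.real, -dfdz.imag            # u = Re(df/dz), v = -Im(df/dz)

u, v = velocity(1.0 + 2.0j)
print(u, v)                                  # ≈ (2*A*x, -2*A*y) = (2.0, -4.0)
```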
The examples that follow use a variety of elementary functions; special functions may also be used. Note that multi-valued functions such as the natural logarithm may be used, but attention must be confined to a single Riemann surface. Power laws. In case the following power-law conformal map is applied, from "z" = "x" + "iy" to "w" = "φ" + "iψ": formula_76 then, writing z in polar coordinates as "z" = "x" + "iy" = "re"^"iθ", we have formula_77 In the figures to the right examples are given for several values of n. The black line is the boundary of the flow, while the darker blue lines are streamlines, and the lighter blue lines are equi-potential lines. Some interesting powers n are: 1/2: this corresponds with flow around a semi-infinite plate, 2/3: flow around a right corner, 1: a trivial case of uniform flow, 2: flow through a corner, or near a stagnation point, and −1: flow due to a source doublet. The constant A is a scaling parameter: its absolute value determines the scale, while its argument arg("A") introduces a rotation (if non-zero). "n" = 1: uniform flow. If "w" = "Az"^1, that is, a power law with "n" = 1, the streamlines (i.e. lines of constant ψ) are a system of straight lines parallel to the x-axis. This is easiest to see by writing in terms of real and imaginary components: formula_78 thus giving "φ" = "Ax" and "ψ" = "Ay". This flow may be interpreted as uniform flow parallel to the x-axis. "n" = 2. If "n" = 2, then "w" = "Az"^2 and the streamlines corresponding to a particular value of ψ are those points satisfying formula_79 which is a system of rectangular hyperbolae. This may be seen by again rewriting in terms of real and imaginary components. Noting that sin 2"θ" = 2 sin "θ" cos "θ" and rewriting sin "θ" and cos "θ" it is seen (on simplifying) that the streamlines are given by formula_80 The velocity field is given by ∇"φ", or formula_81 In fluid dynamics, the flowfield near the origin corresponds to a stagnation point. Note that the fluid at the origin is at rest (this follows on differentiation of "f"("z") = "z"^2 at "z" = 0). The "ψ" = 0 streamline is particularly interesting: it has two (or four) branches, following the coordinate axes, i.e. "x" = 0 and "y" = 0. As no fluid flows across the x-axis, it (the x-axis) may be treated as a solid boundary. It is thus possible to ignore the flow in the lower half-plane where "y" &lt; 0 and to focus on the flow in the upper half-plane. With this interpretation, the flow is that of a vertically directed jet impinging on a horizontal flat plate. The flow may also be interpreted as flow into a 90 degree corner if the regions specified by (say) "x", "y" &lt; 0 are ignored. "n" = 3. If "n" = 3, the resulting flow is a sort of hexagonal version of the "n" = 2 case considered above. Streamlines are given by "ψ" = 3"x"^2"y" − "y"^3, and the flow in this case may be interpreted as flow into a 60° corner. "n" = −1: doublet. If "n" = −1, the streamlines are given by formula_82 This is more easily interpreted in terms of real and imaginary components: formula_83 Thus the streamlines are circles that are tangent to the x-axis at the origin. The circles in the upper half-plane thus flow clockwise, those in the lower half-plane flow anticlockwise. Note that the velocity components are proportional to "r"^−2; and their values at the origin are infinite.
This flow pattern is usually referred to as a doublet, or dipole, and can be interpreted as the combination of a source-sink pair of infinite strength kept an infinitesimally small distance apart. The velocity field is given by formula_84 or in polar coordinates: formula_85 "n" = −2: quadrupole. If "n" = −2, the streamlines are given by formula_86 This is the flow field associated with a quadrupole. Line source and sink. A line source or sink of strength formula_87 (formula_88 for source and formula_89 for sink) is given by the potential formula_90 where formula_87 in fact is the volume flux per unit length across a surface enclosing the source or sink. The velocity field in polar coordinates is formula_91 i.e., a purely radial flow. Line vortex. A line vortex of strength formula_8 is given by formula_92 where formula_8 is the circulation around any simple closed contour enclosing the vortex. The velocity field in polar coordinates is formula_93 i.e., a purely azimuthal flow. Analysis for three-dimensional incompressible flows. For three-dimensional flows, a complex potential cannot be obtained. Point source and sink. The velocity potential of a point source or sink of strength formula_87 (formula_88 for source and formula_89 for sink) in spherical polar coordinates is given by formula_94 where formula_87 in fact is the volume flux across a closed surface enclosing the source or sink. The velocity field in spherical polar coordinates is formula_95 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\boldsymbol\\omega \\equiv \\nabla\\times\\mathbf v=0" }, { "math_id": 1, "text": "\\mathbf v(\\mathbf x,t)" }, { "math_id": 2, "text": "\\boldsymbol\\omega(\\mathbf x,t)" }, { "math_id": 3, "text": "\\varphi(\\mathbf x,t)" }, { "math_id": 4, "text": " \\mathbf{v} = \\nabla \\varphi." }, { "math_id": 5, "text": "f(t)" }, { "math_id": 6, "text": "\\mathbf v" }, { "math_id": 7, "text": "\\varphi" }, { "math_id": 8, "text": "\\Gamma" }, { "math_id": 9, "text": "C" }, { "math_id": 10, "text": "\\Gamma \\equiv \\oint_C \\mathbf v\\cdot d\\mathbf l = \\int \\boldsymbol\\omega\\cdot d\\mathbf f=0" }, { "math_id": 11, "text": "d\\mathbf l" }, { "math_id": 12, "text": "d\\mathbf f" }, { "math_id": 13, "text": "\\boldsymbol\\omega" }, { "math_id": 14, "text": "N" }, { "math_id": 15, "text": "\\Gamma = N \\kappa" }, { "math_id": 16, "text": "\\kappa" }, { "math_id": 17, "text": "n" }, { "math_id": 18, "text": "n-1" }, { "math_id": 19, "text": "\\kappa_1,\\kappa_2,\\dots,\\kappa_{n-1}." }, { "math_id": 20, "text": "\\nabla \\cdot \\mathbf{v} =0 \\,," }, { "math_id": 21, "text": "\\mathbf v=\\nabla\\varphi" }, { "math_id": 22, "text": "\\nabla^2 \\varphi = 0 \\,," }, { "math_id": 23, "text": "\\mu\\nabla^2\\mathbf v = \\mu\\nabla(\\nabla\\cdot\\mathbf v)-\\mu\\nabla\\times\\boldsymbol\\omega=0" }, { "math_id": 24, "text": "\\psi" }, { "math_id": 25, "text": "\\rho \\nabla\\cdot\\mathbf v + \\mathbf v\\cdot\\nabla \\rho = 0, \\quad (\\mathbf v \\cdot\\nabla)\\mathbf v= -\\frac{1}{\\rho}\\nabla p = -\\frac{c^2}{\\rho}\\nabla \\rho" }, { "math_id": 26, "text": "c^2=(\\partial p/\\partial\\rho)_s" }, { "math_id": 27, "text": "\\nabla\\rho" }, { "math_id": 28, "text": "c^2\\nabla\\cdot\\mathbf v - \\mathbf v\\cdot (\\mathbf v \\cdot \\nabla)\\mathbf v=0." }, { "math_id": 29, "text": "c\\to\\infty" }, { "math_id": 30, "text": "(c^2-\\varphi_x^2)\\varphi_{xx}+(c^2-\\varphi_y^2)\\varphi_{yy}+(c^2-\\varphi_z^2)\\varphi_{zz}-2(\\varphi_x\\varphi_y\\varphi_{xy}+\\varphi_y\\varphi_z\\varphi_{yz}+\\varphi_z\\varphi_x\\phi_{zx})=0" }, { "math_id": 31, "text": "c=c(v)" }, { "math_id": 32, "text": "v^2=(\\nabla\\phi)^2" }, { "math_id": 33, "text": "c^2 = (\\gamma-1)(h_0-v^2/2)" }, { "math_id": 34, "text": "\\gamma" }, { "math_id": 35, "text": "h_0" }, { "math_id": 36, "text": "(c^2-\\varphi_x^2)\\varphi_{xx}+(c^2-\\varphi_y^2)\\varphi_{yy}-2\\varphi_x\\varphi_y\\varphi_{xy}=0." 
}, { "math_id": 37, "text": "\\nabla (h+v^2/2) - \\mathbf v\\times\\boldsymbol\\omega = T \\nabla s" }, { "math_id": 38, "text": "h" }, { "math_id": 39, "text": "T" }, { "math_id": 40, "text": "s" }, { "math_id": 41, "text": "h+v^2/2" }, { "math_id": 42, "text": "\\mathbf v\\times\\boldsymbol\\omega = -T \\nabla s" }, { "math_id": 43, "text": "\\nabla s=0" }, { "math_id": 44, "text": "\\nabla s" }, { "math_id": 45, "text": "U\\mathbf{e}_x" }, { "math_id": 46, "text": "\\varphi = x U + \\phi" }, { "math_id": 47, "text": "\\phi" }, { "math_id": 48, "text": "(1-M^2) \\frac{\\partial^2\\phi}{\\partial x^2} + \\frac{\\partial^2\\phi}{\\partial y^2} + \\frac{\\partial^2\\phi}{\\partial z^2} =0" }, { "math_id": 49, "text": "M=U/c_\\infty" }, { "math_id": 50, "text": "M" }, { "math_id": 51, "text": "|M-1|" }, { "math_id": 52, "text": "2\\alpha_*\\frac{\\partial\\phi}{\\partial x} \\frac{\\partial^2\\phi}{\\partial x^2} = \\frac{\\partial^2\\phi}{\\partial y^2} + \\frac{\\partial^2\\phi}{\\partial z^2}" }, { "math_id": 53, "text": "\\alpha_*" }, { "math_id": 54, "text": "\\alpha = (c^4/2\\upsilon^3)(\\partial^2 \\upsilon/\\partial p^2)_s" }, { "math_id": 55, "text": "\\upsilon=1/\\rho" }, { "math_id": 56, "text": "\\alpha_*=\\alpha=(\\gamma+1)/2" }, { "math_id": 57, "text": "\\frac{\\partial\\rho}{\\partial t} + \\rho \\nabla\\cdot\\mathbf v + \\mathbf v\\cdot\\nabla \\rho = 0, \\quad \\frac{\\partial\\mathbf v}{\\partial t}+ (\\mathbf v \\cdot\\nabla)\\mathbf v= -\\frac{1}{\\rho}\\nabla p =-\\frac{c^2}{\\rho}\\nabla \\rho=-\\nabla h." }, { "math_id": 58, "text": "\\frac{\\partial\\varphi}{\\partial t} + \\frac{v^2}{2} + h = f(t), \\quad \\Rightarrow \\quad \\frac{\\partial h}{\\partial t} = -\\frac{\\partial^2\\varphi}{\\partial t^2} - \\frac{1}{2}\\frac{\\partial v^2}{\\partial t} + \\frac{df}{dt}" }, { "math_id": 59, "text": "f(t)=0" }, { "math_id": 60, "text": "\\frac{\\partial^2\\varphi}{\\partial t^2} + \\frac{1}{2} \\frac{\\partial v^2}{\\partial t}=c^2\\nabla\\cdot\\mathbf v - \\mathbf v\\cdot (\\mathbf v \\cdot \\nabla)\\mathbf v." }, { "math_id": 61, "text": "\\varphi_{tt} + \\frac{1}{2} (\\varphi_x^2+ \\varphi_y^2+ \\varphi_z^2)_t= (c^2-\\varphi_x^2)\\varphi_{xx}+(c^2-\\varphi_y^2)\\varphi_{yy}+(c^2-\\varphi_z^2)\\varphi_{zz}-2(\\varphi_x\\varphi_y\\varphi_{xy}+\\varphi_y\\varphi_z\\varphi_{yz}+\\varphi_z\\varphi_x\\phi_{zx})." }, { "math_id": 62, "text": "\\tau=c_\\infty t" }, { "math_id": 63, "text": "\\frac{\\partial^2\\phi}{\\partial \\tau^2} + M \\frac{\\partial^2\\phi}{\\partial x\\partial\\tau}= (1-M^2) \\frac{\\partial^2\\phi}{\\partial x^2} + \\frac{\\partial^2\\phi}{\\partial y^2} + \\frac{\\partial^2\\phi}{\\partial z^2}" }, { "math_id": 64, "text": "\\frac{\\partial^2\\phi}{\\partial \\tau^2} + \\frac{\\partial^2\\phi}{\\partial x\\partial\\tau} = -2\\alpha_*\\frac{\\partial\\phi}{\\partial x} \\frac{\\partial^2\\phi}{\\partial x^2} + \\frac{\\partial^2\\phi}{\\partial y^2} + \\frac{\\partial^2\\phi}{\\partial z^2}." 
}, { "math_id": 65, "text": "v" }, { "math_id": 66, "text": "c" }, { "math_id": 67, "text": "c^2=(\\gamma-1)h_0" }, { "math_id": 68, "text": "\\frac{\\partial^2 \\varphi}{\\partial t^2} = c^2 \\nabla^2 \\varphi," }, { "math_id": 69, "text": "\\begin{align}\n z &= x + iy \\,, \\text{ and } &\n w &= \\varphi + i\\psi \\,.\n\\end{align}" }, { "math_id": 70, "text": "\\begin{align}\n f(x + iy) &= \\varphi + i\\psi \\,, \\text{ or } &\n f(z) &= w \\,.\n\\end{align}" }, { "math_id": 71, "text": "\\begin{align}\n \\frac{\\partial\\varphi}{\\partial x} &= \\frac{\\partial\\psi}{\\partial y} \\,, &\n \\frac{\\partial\\varphi}{\\partial y} &= -\\frac{\\partial\\psi}{\\partial x} \\,.\n\\end{align}" }, { "math_id": 72, "text": "\\frac{df}{dz} = u - iv" }, { "math_id": 73, "text": "\\begin{align}\n u &= \\frac{\\partial\\varphi}{\\partial x} = \\frac{\\partial\\psi}{\\partial y}, &\n v &= \\frac{\\partial\\varphi}{\\partial y} = -\\frac{\\partial\\psi}{\\partial x} \\,.\n\\end{align}" }, { "math_id": 74, "text": "\\begin{align}\n \\Delta\\varphi &= \\frac{\\partial^2\\varphi}{\\partial x^2} + \\frac{\\partial^2\\varphi}{\\partial y^2} = 0 \\,,\\text{ and } &\n \\Delta\\psi &= \\frac{\\partial^2\\psi}{\\partial x^2} + \\frac{\\partial^2\\psi}{\\partial y^2} = 0 \\,.\n\\end{align}" }, { "math_id": 75, "text": "\n \\nabla \\varphi \\cdot \\nabla \\psi =\n \\frac{\\partial\\varphi}{\\partial x} \\frac{\\partial\\psi}{\\partial x} + \\frac{\\partial\\varphi}{\\partial y} \\frac{\\partial\\psi}{\\partial y} =\n \\frac{\\partial \\psi}{\\partial y} \\frac{\\partial \\psi}{\\partial x} - \\frac{\\partial \\psi}{\\partial x} \\frac{\\partial \\psi}{\\partial y} =\n 0 \\,.\n" }, { "math_id": 76, "text": "w=Az^n \\,," }, { "math_id": 77, "text": "\\varphi=Ar^n\\cos n\\theta \\qquad \\text{and} \\qquad \\psi=Ar^n\\sin n\\theta \\,." }, { "math_id": 78, "text": "f(x+iy) = A\\, (x+iy) = Ax + i Ay " }, { "math_id": 79, "text": "\\psi=Ar^2\\sin 2\\theta \\,," }, { "math_id": 80, "text": "\\psi=2Axy \\,." }, { "math_id": 81, "text": "\\begin{pmatrix} u \\\\ v \\end{pmatrix} = \\begin{pmatrix} \\frac{\\partial \\varphi}{\\partial x} \\\\[2px] \\frac{\\partial \\varphi}{\\partial y} \n\\end{pmatrix} = \\begin{pmatrix} + {\\partial \\psi \\over \\partial y} \\\\[2px] - {\\partial \\psi \\over \\partial x} \\end{pmatrix} = \\begin{pmatrix} +2Ax \\\\[2px] -2Ay \\end{pmatrix} \\,." }, { "math_id": 82, "text": "\\psi = -\\frac{A}{r}\\sin\\theta." }, { "math_id": 83, "text": "\\begin{align}\n \\psi = \\frac{-A y}{r^2} &= \\frac{-A y}{x^2 + y^2} \\,, \\\\\n x^2 + y^2 + \\frac{A y}{\\psi} &= 0 \\,, \\\\\n x^2 + \\left(y+\\frac{A}{2\\psi}\\right)^2 &= \\left(\\frac{A}{2\\psi}\\right)^2 \\,.\n\\end{align}" }, { "math_id": 84, "text": "(u,v)=\\left( \\frac{\\partial \\psi}{\\partial y}, - \\frac{\\partial \\psi}{\\partial x} \\right) = \\left(A\\frac{y^2-x^2}{\\left(x^2+y^2\\right)^2},-A\\frac{2xy}{\\left(x^2+y^2\\right)^2}\\right) \\,." }, { "math_id": 85, "text": "(u_r, u_\\theta)=\\left( \\frac{1}{r} \\frac{\\partial \\psi}{\\partial \\theta}, - \\frac{\\partial \\psi}{\\partial r} \\right) = \\left(-\\frac{A}{r^2}\\cos\\theta, -\\frac{A}{r^2}\\sin\\theta\\right) \\,." }, { "math_id": 86, "text": "\\psi=-\\frac{A}{r^2}\\sin 2 \\theta \\,." 
}, { "math_id": 87, "text": "Q" }, { "math_id": 88, "text": "Q>0" }, { "math_id": 89, "text": "Q<0" }, { "math_id": 90, "text": "w = \\frac{Q}{2\\pi} \\ln z" }, { "math_id": 91, "text": "u_r = \\frac{Q}{2\\pi r},\\quad u_\\theta=0" }, { "math_id": 92, "text": "w=\\frac{\\Gamma}{2\\pi i}\\ln z" }, { "math_id": 93, "text": "u_r = 0,\\quad u_\\theta=\\frac{\\Gamma}{2\\pi r}" }, { "math_id": 94, "text": "\\phi = -\\frac{Q}{4\\pi r}" }, { "math_id": 95, "text": "u_r = \\frac{Q}{4\\pi r^2}, \\quad u_\\theta=0, \\quad u_\\phi = 0." } ]
https://en.wikipedia.org/wiki?curid=58320
58321041
Rice index
Statistical measure of cohesion in a voting group The Rice index is a number between 0 and 1, which indicates the degree of agreement within a voting body. History. It is named for Stuart A. Rice (1889–1969), Chairman of the United States Central Statistical Board, president of the American Statistical Association in 1933 and Assistant Director of the Office of Statistical Standards in the Bureau of the Budget from 1940 to 1955. Usage. A result of 0 indicates a stalemate, while a 1 indicates a perfect consensus. The formula is often used in the social sciences, and is the ratio of the difference between majority and minority to the sum of majority and minority. formula_0 Yes = Number of yes votes, No = Number of votes against. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
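A minimal illustration (not from the article; the vote counts are made up) of the computation:

```python
def rice_index(yes: int, no: int) -> float:
    return abs(yes - no) / (yes + no)

print(rice_index(30, 30))   # 0.0 -> evenly split voting body
print(rice_index(45, 15))   # 0.5 -> moderate cohesion
print(rice_index(60, 0))    # 1.0 -> perfect consensus
```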
[ { "math_id": 0, "text": "RI = \\frac{|\\text{yes} - \\text{no}|}{\\text{yes} + \\text{no}}" } ]
https://en.wikipedia.org/wiki?curid=58321041
58325116
Titu's lemma
Mathematical inequality In mathematics, the following inequality is known as Titu's lemma, Bergström's inequality, Engel's form or Sedrakyan's inequality, respectively, referring to the article "About the applications of one useful inequality" of Nairi Sedrakyan published in 1997, to the book "Problem-solving strategies" of Arthur Engel published in 1998 and to the book "Mathematical Olympiad Treasures" of Titu Andreescu published in 2003. It is a direct consequence of the Cauchy–Bunyakovsky–Schwarz inequality. Nevertheless, in his article (1997) Sedrakyan noticed that, written in this form, this inequality can be used as a proof technique and that it has very useful new applications. In the book "Algebraic Inequalities" (Sedrakyan) several generalizations of this inequality are provided. Statement of the inequality. For any real numbers formula_0 and positive reals formula_1 we have formula_2 (Nairi Sedrakyan (1997), Arthur Engel (1998), Titu Andreescu (2003)) Probabilistic statement. Similarly to the Cauchy–Schwarz inequality, one can generalize Sedrakyan's inequality to random variables. In this formulation let formula_3 be a real random variable, and let formula_4 be a positive random variable. "X" and "Y" need not be independent, but we assume formula_5 and formula_6 are both defined. Then formula_7 Direct applications. Example 1. Nesbitt's inequality. For positive real numbers formula_8 formula_9 Example 2. International Mathematical Olympiad (IMO) 1995. For positive real numbers formula_10, where formula_11 we have that formula_12 Example 3. For positive real numbers formula_13 we have that formula_14 Example 4. For positive real numbers formula_10 we have that formula_15 Proofs. Example 1. Proof: Use formula_16 formula_17 and formula_18 to conclude: formula_19 Example 2. We have that formula_20 Example 3. We have formula_21 so that formula_22 Example 4. We have that formula_23
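As a quick numerical sanity check (illustrative only, not a proof and not part of the original text), the statement of the inequality can be verified for random inputs:

```python
import random

for _ in range(5):
    a = [random.uniform(-5, 5) for _ in range(6)]      # arbitrary real a_i
    b = [random.uniform(0.1, 5) for _ in range(6)]     # positive b_i
    lhs = sum(ai**2 / bi for ai, bi in zip(a, b))
    rhs = sum(a)**2 / sum(b)
    assert lhs >= rhs - 1e-12                          # Sedrakyan's inequality
    print(f"{lhs:.4f} >= {rhs:.4f}")
```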
[ { "math_id": 0, "text": "a_1, a_2, a_3, \\ldots, a_n" }, { "math_id": 1, "text": "b_1, b_2, b_3,\\ldots, b_n," }, { "math_id": 2, "text": "\\frac{a^2_1}{b_1} + \\frac{a^2_2}{b_2} + \\cdots + \\frac{a^2_n}{b_n} \\geq \\frac{\\left(a_1 + a_2 + \\cdots + a_n\\right)^2}{b_1 + b_2 + \\cdots + b_n}." }, { "math_id": 3, "text": "X" }, { "math_id": 4, "text": "Y" }, { "math_id": 5, "text": "E[|X|]" }, { "math_id": 6, "text": "E[Y]" }, { "math_id": 7, "text": "\\operatorname{E}[X^2/Y] \\ge \\operatorname{E}[|X|]^2 / \\operatorname{E}[Y] \\ge \\operatorname{E}[X]^2 / \\operatorname{E}[Y]." }, { "math_id": 8, "text": "a, b, c:" }, { "math_id": 9, "text": "\\frac{a}{b+c} + \\frac{b}{a+c} + \\frac{c}{a+b} \\geq \\frac{3}{2}." }, { "math_id": 10, "text": " a,b,c " }, { "math_id": 11, "text": " abc=1 " }, { "math_id": 12, "text": " \\frac{1}{a^3(b+c)}+\\frac{1}{b^3(a+c)}+\\frac{1}{c^3(a+b)} \\geq \\frac{3}{2}. " }, { "math_id": 13, "text": " a,b " }, { "math_id": 14, "text": " 8(a^4+b^4) \\geq (a+b)^4. " }, { "math_id": 15, "text": " \\frac{1}{a+b}+\\frac{1}{b+c}+\\frac{1}{a+c} \\geq \\frac{9}{2(a+b+c)}. " }, { "math_id": 16, "text": "n = 3," }, { "math_id": 17, "text": "\\left(a_1, a_2, a_3\\right) := (a, b, c)," }, { "math_id": 18, "text": "\\left(b_1, b_2, b_3\\right) := (a(b + c), b(c + a), c(a + b))" }, { "math_id": 19, "text": "\\frac{a^2}{a(b + c)} + \\frac{b^2}{b(c + a)} + \\frac{c^2}{c(a + b)} \\geq \\frac{(a + b + c)^2}{a(b + c) + b(c + a) + c(a + b)} = \\frac{a^2 + b^2 + c^2 + 2(ab + bc + ca)}{2(ab + bc + ca)} = \\frac{a^2 + b^2 + c^2}{2(ab + bc + ca)} + 1 \\geq \\frac{1}{2} (1) + 1 = \\frac{3}{2}. \\blacksquare" }, { "math_id": 20, "text": "\\frac{\\Big(\\frac{1}{a}\\Big)^2}{a(b+c)} + \\frac{\\Big(\\frac{1}{b}\\Big)^2}{b(a+c)} + \\frac{\\Big(\\frac{1}{c}\\Big)^2}{c(a+b)} \\geq \\frac{\\Big(\\frac{1}{a}+\\frac{1}{b}+\\frac{1}{c}\\Big)^2}{2(ab+bc+ac)} = \\frac{ab+bc+ac}{2a^2b^2c^2} \\geq \\frac{3 \\sqrt[3]{a^2 b^2 c^2}}{2a^2b^2c^2} = \\frac{3}{2}." }, { "math_id": 21, "text": "\\frac{a^2}{1} + \\frac{b^2}{1} \\geq \\frac{(a + b)^2}{2}" }, { "math_id": 22, "text": "a^4 + b^4 = \\frac{\\left(a^2\\right)^2}{1} + \\frac{\\left(b^2\\right)^2}{1} \\geq \\frac{\\left(a^2 + b^2\\right)^2}{2} \\geq \\frac{\\left(\\frac{(a+b)^2}{2}\\right)^2}{2} = \\frac{(a + b)^4}{8}." }, { "math_id": 23, "text": "\\frac{1}{a+b} + \\frac{1}{b+c} + \\frac{1}{a+c} \\geq \\frac{(1+1+1)^2}{2(a+b+c)} = \\frac{9}{2(a+b+c)}." } ]
https://en.wikipedia.org/wiki?curid=58325116
58338458
Marília Chaves Peixoto
Brazilian mathematician Marília Chaves Peixoto (24 February 1921 – 5 January 1961) was a Brazilian mathematician and engineer who worked in dynamical systems. Peixoto was the first Brazilian woman to receive a doctorate in mathematics and the first Brazilian woman to join the Brazilian Academy of Sciences. Early life and education. Marília Magalhães Chaves was born on 24 February 1921, to Tullio de Saboia Chaves and Zillah da Costa Magalhaes, in Santana do Livramento. She had two siblings, Lúcia de Magalhaes Chaves and Livio de Magalhaes Chaves. In an interview from 1952, she would describe her educational upbringing, stating that despite the high school in Santana do Livramento not accepting girls as students, the priests helped her enroll as a private student and take tests in the school. She later moved to Rio de Janeiro, studying at the Colégio Andrews, a secular school that allowed boys and girls to study. She graduated as an outstanding student. She studied superior courses and studied for the entrance exam of the Escola Nacional de Engenharia (National School of Engineering) at the Federal University of Rio de Janeiro. She achieved third place in the entrance exam rankings and was accepted. In 1939 she enrolled at the Escola Nacional de Engenharia, working alongside Leopoldo Nachbin and Maurício Peixoto (whom she would later marry). Career. Peixoto graduated from the Federal University of Rio de Janeiro in 1943 with a degree in engineering, having also studied mathematics at the university and acted as a monitor for the university's National Faculty of Philosophy. In 1948, she received a doctorate in mathematics, and began teaching at the Escola Politécnica da UFRJ. In 1949, Peixoto published "On the inequalities formula_0" in Annals of the Brazilian Academy of Sciences. Following her work on convex functions, Peixoto was appointed an associate member of the Brazilian Academy of Sciences on 12 June 1951. She was the first Brazilian woman to join the organization, and the second woman after Marie Curie, a foreign associate of the academy. Peixoto married Maurício Peixoto in 1946. In 1949 they traveled to Chicago for the International Congress of Mathematicians of 1950. Despite being a participant in her own right, in the list of members she was listed as Maurício's spouse ("Mrs. Peixoto") rather than her full title. In 1955, Peixoto published Cálculo vetorial (Vector calculus), a book aimed at students that is more focused on applications of vector calculus than proofs of results. Marília and Maurício jointly published "Structural Stability in the plane with enlarged boundary conditions" in 1959, one of several papers which led to Peixoto's theorem. She died on 5 January 1961, aged 39. She received a number of posthumous honors, including: Personal life. Peixoto had two children, Marta and Ricardo, with Maurício Peixoto. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "y'' \\ge G(x,y,y',y'')" } ]
https://en.wikipedia.org/wiki?curid=58338458
5834526
Abel equation
Equation for function that computes iterated values The Abel equation, named after Niels Henrik Abel, is a type of functional equation of the form formula_0 or formula_1. The forms are equivalent when α is invertible. h or α control the iteration of f. Equivalence. The second equation can be written formula_2 Taking "x" = "α"−1("y"), the equation can be written formula_3 For a known function "f"("x"), a problem is to solve the functional equation for the function "α"−1 ≡ "h", possibly satisfying additional requirements, such as "α"−1(0) = 1. The change of variables "s"^"α"("x") = Ψ("x"), for a real parameter s, brings Abel's equation into the celebrated Schröder's equation, Ψ("f"("x")) = "s" Ψ("x"). The further change "F"("x") = exp("s"^"α"("x")) into Böttcher's equation, "F"("f"("x")) = "F"("x")^"s". The Abel equation is a special case of (and easily generalizes to) the translation equation, formula_4 e.g., for formula_5, formula_6. (Observe "ω"("x",0) = "x".) The Abel function "α"("x") further provides the canonical coordinate for Lie advective flows (one parameter Lie groups). History. Initially, the equation was reported in the more general form. Even in the case of a single variable, the equation is non-trivial, and admits special analysis. In the case of a linear transfer function, the solution is expressible compactly. Special cases. The equation of tetration is a special case of Abel's equation, with "f" = exp. In the case of an integer argument, the equation encodes a recurrent procedure, e.g., formula_7 and so on, formula_8 Solutions. The Abel equation has at least one solution on formula_9 if and only if for all formula_10 and all formula_11, formula_12, where formula_13 is the function f iterated n times. Analytic solutions (Fatou coordinates) can be approximated by asymptotic expansion of a function defined by power series in the sectors around a parabolic fixed point. The analytic solution is unique up to a constant.
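As a hedged illustration (not part of the original article; the choice of f and α is an assumed toy example), for "f"("x") = 2"x" the Abel function "α"("x") = log2("x") satisfies the equation, and inverting it reconstructs continuous iterates:

```python
import math

f = lambda x: 2 * x
alpha = lambda x: math.log2(x)          # Abel function: alpha(f(x)) = alpha(x) + 1
alpha_inv = lambda y: 2.0 ** y

x = 3.7
print(alpha(f(x)), alpha(x) + 1)        # equal, so Abel's equation holds here

def iterate(x, t):
    """t-th (possibly fractional) iterate of f reconstructed from alpha."""
    return alpha_inv(alpha(x) + t)

print(iterate(x, 2), f(f(x)))           # both 14.8
print(iterate(x, 0.5))                  # half-iterate: multiplication by sqrt(2)
```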
[ { "math_id": 0, "text": "f(h(x)) = h(x + 1)" }, { "math_id": 1, "text": "\\alpha(f(x)) = \\alpha(x)+1" }, { "math_id": 2, "text": "\\alpha^{-1}(\\alpha(f(x))) = \\alpha^{-1}(\\alpha(x)+1)\\, ." }, { "math_id": 3, "text": "f(\\alpha^{-1}(y)) = \\alpha^{-1}(y+1)\\, ." }, { "math_id": 4, "text": "\\omega( \\omega(x,u),v)=\\omega(x,u+v) ~," }, { "math_id": 5, "text": "\\omega(x,1) = f(x)" }, { "math_id": 6, "text": "\\omega(x,u) = \\alpha^{-1}(\\alpha(x)+u)" }, { "math_id": 7, "text": "\\alpha(f(f(x)))=\\alpha(x)+2 ~," }, { "math_id": 8, "text": "\\alpha(f_n(x))=\\alpha(x)+n ~." }, { "math_id": 9, "text": "E" }, { "math_id": 10, "text": "x \\in E" }, { "math_id": 11, "text": "n \\in \\mathbb{N}" }, { "math_id": 12, "text": "f^{n}(x) \\neq x" }, { "math_id": 13, "text": " f^{n} = f \\circ f \\circ ... \\circ f" } ]
https://en.wikipedia.org/wiki?curid=5834526
5834541
Schröder's equation
Equation for fixed point of functional composition Schröder's equation, named after Ernst Schröder, is a functional equation with one independent variable: given the function "h", find the function Ψ such that formula_0 Schröder's equation is an eigenvalue equation for the composition operator "C""h" that sends a function "f" to "f"("h"(.)). If a is a fixed point of h, meaning "h"("a") = "a", then either Ψ("a") = 0 (or ∞) or "s" = 1. Thus, provided that Ψ("a") is finite and Ψ′("a") does not vanish or diverge, the eigenvalue s is given by "s" = "h"′("a"). Functional significance. For "a" = 0, if "h" is analytic on the unit disk, fixes 0, and 0 &lt; |"h"′(0)| &lt; 1, then Gabriel Koenigs showed in 1884 that there is an analytic (non-trivial) Ψ satisfying Schröder's equation. This is one of the first steps in a long line of theorems fruitful for understanding composition operators on analytic function spaces, cf. Koenigs function. Equations such as Schröder's are suitable for encoding self-similarity, and have thus been extensively utilized in studies of nonlinear dynamics (often referred to colloquially as chaos theory). It is also used in studies of turbulence, as well as the renormalization group. An equivalent transpose form of Schröder's equation for the inverse Φ = Ψ−1 of Schröder's conjugacy function is "h"(Φ("y")) = Φ("sy"). The change of variables α("x") = log(Ψ("x"))/log("s") (the Abel function) further converts Schröder's equation to the older Abel equation, α("h"("x")) = α("x") + 1. Similarly, the change of variables Ψ("x") = log(φ("x")) converts Schröder's equation to Böttcher's equation, φ("h"("x")) = (φ("x"))^"s". Moreover, for the velocity, β("x") = Ψ/Ψ′, "Julia's equation", β("f"("x")) = "f"′("x")β("x"), holds. The "n"-th power of a solution of Schröder's equation provides a solution of Schröder's equation with eigenvalue "s"^"n", instead. In the same vein, for an invertible solution Ψ("x") of Schröder's equation, the (non-invertible) function Ψ("x") "k"(log Ψ("x")) is also a solution, for "any" periodic function "k"("x") with period log("s"). All solutions of Schröder's equation are related in this manner. Solutions. Schröder's equation was solved analytically if a is an attracting (but not superattracting) fixed point, that is 0 &lt; |"h"′("a")| &lt; 1, by Gabriel Koenigs (1884). In the case of a superattracting fixed point, |"h"′("a")| = 0, Schröder's equation is unwieldy, and had best be transformed to Böttcher's equation. There are a good number of particular solutions dating back to Schröder's original 1870 paper. The series expansion around a fixed point and the relevant convergence properties of the solution for the resulting orbit and its analyticity properties are cogently summarized by Szekeres. Several of the solutions are furnished in terms of asymptotic series, cf. Carleman matrix. Applications. It is used to analyse discrete dynamical systems by finding a new coordinate system in which the system (orbit) generated by "h"("x") looks simpler, a mere dilation. More specifically, a system for which a discrete unit time step amounts to "x" → "h"("x"), can have its smooth orbit (or flow) reconstructed from the solution of the above Schröder's equation, its conjugacy equation. That is, "h"("x") = Ψ−1("s" Ψ("x")) ≡ "h"1("x"). In general, "all of its functional iterates" (its "regular iteration group", see iterated function) are provided by the orbit formula_1 for t real — not necessarily positive or integer. (Thus a full continuous group.)
The set of "h""n"("x"), i.e., of all positive integer iterates of "h"("x") (semigroup) is called the "splinter" (or Picard sequence) of "h"("x"). However, "all iterates" (fractional, infinitesimal, or negative) of "h"("x") are likewise specified through the coordinate transformation "Ψ"("x") determined to solve Schröder's equation: a holographic continuous interpolation of the initial discrete recursion "x" → "h"("x") has been constructed; in effect, the entire orbit. For instance, the functional square root is "h"1/2("x") = Ψ−1("s"1/2 Ψ("x")), so that "h"1/2("h"1/2("x")) = "h"("x"), and so on. For example, special cases of the logistic map such as the chaotic case "h"("x") = 4"x"(1 − "x") were already worked out by Schröder in his original article (p. 306), Ψ("x") = (arcsin √"x")^2, "s" = 4, and hence "h""t"("x") = sin^2(2^"t" arcsin √"x"). In fact, this solution is seen to result as motion dictated by a sequence of switchback potentials, "V"("x") ∝ "x"("x" − 1) ("nπ" + arcsin √"x")^2, a generic feature of continuous iterates effected by Schröder's equation. A nonchaotic case he also illustrated with his method, "h"("x") = 2"x"(1 − "x"), yields Ψ("x") = −ln(1 − 2"x"), and hence "h"t("x") = −½((1 − 2"x")^(2^"t") − 1). Likewise, for the Beverton–Holt model, "h"("x") = "x"/(2 − "x"), one readily finds Ψ("x") = "x"/(1 − "x"), so that formula_2
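A small numerical check (illustrative only, not part of the original article) of the chaotic logistic-map case described above:

```python
import math

h = lambda x: 4 * x * (1 - x)
psi = lambda x: math.asin(math.sqrt(x)) ** 2     # Schroeder conjugacy, s = 4
s = 4.0

x = 0.3
print(psi(h(x)), s * psi(x))                      # equal: psi(h(x)) = s*psi(x)

def h_t(x, t):
    """Continuous iterate h_t(x) = sin^2(2^t * arcsin(sqrt(x)))."""
    return math.sin(2.0 ** t * math.asin(math.sqrt(x))) ** 2

print(h_t(x, 1), h(x))            # both 0.84
print(h_t(x, 2), h(h(x)))         # both ≈ 0.5376
print(h_t(x, 0.5))                # a functional square root of h, evaluated at x
```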
[ { "math_id": 0, "text": "\\forall x\\;\\;\\;\\Psi\\big(h(x)\\big) = s \\Psi(x)." }, { "math_id": 1, "text": "h_t(x) = \\Psi^{-1}\\big(s^t \\Psi(x)\\big)," }, { "math_id": 2, "text": "h_t(x)= \\Psi^{-1}\\big(2^{-t} \\Psi(x)\\big) = \\frac{x}{2^t + x(1 - 2^t)}." } ]
https://en.wikipedia.org/wiki?curid=5834541
58350478
Kakutani's theorem (measure theory)
In measure theory, a branch of mathematics, Kakutani's theorem is a fundamental result on the equivalence or mutual singularity of countable product measures. It gives an "if and only if" characterisation of when two such measures are equivalent, and hence it is extremely useful when trying to establish change-of-measure formulae for measures on function spaces. The result is due to the Japanese mathematician Shizuo Kakutani. Kakutani's theorem can be used, for example, to determine whether a translate of a Gaussian measure formula_0 is equivalent to formula_0 (only when the translation vector lies in the Cameron–Martin space of formula_0), or whether a dilation of formula_0 is equivalent to formula_0 (only when the absolute value of the dilation factor is 1, which is part of the Feldman–Hájek theorem). Statement of the theorem. For each formula_1, let formula_2 and formula_3 be measures on the real line formula_4, and let formula_5 and formula_6 be the corresponding product measures on formula_7. Suppose also that, for each formula_1, formula_8 and formula_9 are equivalent (i.e. have the same null sets). Then either formula_0 and formula_10 are equivalent, or else they are mutually singular. Furthermore, equivalence holds precisely when the infinite product formula_11 has a nonzero limit; or, equivalently, when the infinite series formula_12 converges.
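As a hedged numerical sketch (not part of the original article; the Gaussian family and all numbers are an assumed illustration), the criterion can be seen for shifted Gaussian product measures, where each factor integral has the closed form exp(−"a""n"2/8):

```python
import math
from scipy.integrate import quad
from scipy.stats import norm

def factor_integral(a):
    """int sqrt(d mu_n / d nu_n) d nu_n with mu_n = N(a, 1) and nu_n = N(0, 1)."""
    integrand = lambda x: math.sqrt(norm.pdf(x, loc=a) / norm.pdf(x)) * norm.pdf(x)
    return quad(integrand, -12, 12)[0]

print(factor_integral(0.5), math.exp(-0.5**2 / 8))    # both ≈ 0.9692

# Partial products of the infinite product for two choices of shifts a_n:
for label, a_n in (("a_n = 1/n", lambda n: 1.0 / n),
                   ("a_n = 1/sqrt(n)", lambda n: 1.0 / math.sqrt(n))):
    for N in (100, 10_000, 1_000_000):
        log_prod = sum(-a_n(n) ** 2 / 8 for n in range(1, N + 1))
        print(label, N, math.exp(log_prod))
# The 1/n shifts give a nonzero limit (equivalent measures), while the
# 1/sqrt(n) shifts make the partial products decrease toward 0 (mutual singularity).
```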
[ { "math_id": 0, "text": "\\mu" }, { "math_id": 1, "text": "n \\in \\mathbb{N}" }, { "math_id": 2, "text": "\\mu_{n}" }, { "math_id": 3, "text": "\\nu_{n}" }, { "math_id": 4, "text": "\\mathbb{R}" }, { "math_id": 5, "text": "\\mu = \\bigotimes_{n \\in \\mathbb{N}} \\mu_n" }, { "math_id": 6, "text": "\\nu = \\bigotimes_{n \\in \\mathbb{N}} \\nu_n" }, { "math_id": 7, "text": "\\mathbb{R}^\\infty" }, { "math_id": 8, "text": "\\mu_n" }, { "math_id": 9, "text": "\\nu_n" }, { "math_id": 10, "text": "\\nu" }, { "math_id": 11, "text": "\\prod_{n \\in \\mathbb{N}} \\int_{\\mathbb{R}} \\sqrt{ \\frac{\\mathrm{d} \\mu_n} {\\mathrm{d} \\nu_n} } \\, \\mathrm{d} \\nu_n" }, { "math_id": 12, "text": "\\sum_{n \\in \\mathbb{N}} \\log \\int_{\\mathbb{R}} \\sqrt{ \\frac{\\mathrm{d} \\mu_n}{\\mathrm{d} \\nu_n} } \\, \\mathrm{d} \\nu_n" } ]
https://en.wikipedia.org/wiki?curid=58350478
5835550
Area-to-area Lee model
The Lee model for area-to-area mode is a radio propagation model that operates around 900 MHz. The model is built in two different modes and includes an adjustment factor that can be tuned to make it more flexible for different regions of propagation. Applicable to/under conditions. This model is suitable for use with collected measurement data. The model predicts the behaviour of all links that have their ends in specific areas. Coverage. Frequency: 900 MHz band Mathematical formulation. The model. The Lee model is formally expressed as: formula_0 where, "L" = The median path loss. Unit: decibel (dB) "L"0 = The reference path loss along 1 km. Unit: decibel (dB) formula_1 = The slope of the path loss curve. Unit: decibels per decade "d" = The distance on which the path loss is to be calculated. "F"A = Adjustment factor. Calculation of reference path loss. The reference path loss is usually computed along a 1 km or 1 mile link. Any other suitable length of path can be chosen based on the application. formula_2 where, "G"B = Base station antenna gain. Unit: decibel with respect to isotropic antenna (dBi) formula_3 = Wavelength. Unit: meter (m). "G"M = Mobile station antenna gain. Unit: decibel with respect to isotropic antenna (dBi). Calculation of adjustment factors. The adjustment factor is calculated as: formula_4 where, "F"BH = Base station antenna height correction factor. "F"BG = Base station antenna gain correction factor. "F"MH = Mobile station antenna height correction factor. "F"MG = Mobile station antenna gain correction factor. "F"F = Frequency correction factor Base-station antenna height correction factor. formula_5 where, "h"B = Base-station antenna height. Unit: meter (m). or formula_6 where, "h"B = Base-station antenna height. Unit: foot (ft). Base-station antenna gain correction factor. formula_7 where, "G"B = Base-station antenna gain. Unit: decibel with respect to half wave dipole antenna (dBd). Mobile-station antenna height correction factor. formula_8 where, "h"M = Mobile-station antenna height. Unit: meter (m). Mobile-antenna gain correction factor. formula_9 where, "G"M = Mobile-station antenna gain. Unit: decibel with respect to half wave dipole antenna (dBd). Frequency correction factor. formula_10 where, "f" = Frequency. Unit: megahertz (MHz)
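The formulas above translate directly into code. The Python sketch below is a literal, illustrative transcription (it is not from any reference implementation; the parameter names, the choice of metric units, and the sample numbers are assumptions made here). It takes d in kilometres so that L0 is the 1 km reference loss and the slope is in dB per decade; the gain arguments are passed straight into the correction factors in whatever convention (dBd or linear ratio) the reference data uses, since the article leaves that convention implicit.

import math

def lee_area_to_area_loss(d_km, l0_db, gamma_db_per_decade,
                          h_base_m, g_base, h_mobile_m, g_mobile,
                          freq_mhz, n=2.5):
    # Median path loss L = L0 + gamma*log10(d) - 10*log10(F_A), d in km.
    f_bh = (h_base_m / 30.48) ** 2                                  # base-station height factor (height in m)
    f_bg = g_base / 4.0                                             # base-station gain factor
    f_mh = (h_mobile_m / 3.0) if h_mobile_m > 3 else (h_mobile_m / 3.0) ** 2
    f_mg = g_mobile                                                 # mobile gain factor
    f_f = (freq_mhz / 900.0) ** (-n)                                # frequency factor, 2 < n < 3
    f_a = f_bh * f_bg * f_mh * f_mg * f_f
    return l0_db + gamma_db_per_decade * math.log10(d_km) - 10.0 * math.log10(f_a)

# 5 km link with all correction factors equal to one (illustrative reference values)
print(lee_area_to_area_loss(5.0, 124.0, 30.5, 30.48, 4.0, 3.0, 1.0, 900.0))   # about 145.3 dB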
[ { "math_id": 0, "text": "L \\; = \\; L_0 \\; + \\; \\gamma \\log d \\; - 10 \\log {F_A}" }, { "math_id": 1, "text": "\\gamma\\;" }, { "math_id": 2, "text": "L_0 \\; = \\; G_B \\; + \\; G_M \\; + \\; 20 \\; (\\log \\lambda \\; - \\; \\log d) \\; - \\; 22 " }, { "math_id": 3, "text": "\\lambda" }, { "math_id": 4, "text": "F_A \\; = \\; F_{BH} \\; F_{BG} \\; F_{MH} \\; F_{MG} \\; F_{F} " }, { "math_id": 5, "text": "F_{BH} \\; = \\; \\left( \\; \\frac{h_B}{30.48} \\; \\right)^2" }, { "math_id": 6, "text": "F_{BH} \\; = \\; \\left( \\; \\frac{h_B}{100} \\; \\right)^2" }, { "math_id": 7, "text": "F_{BG} \\; = \\; \\frac{G_B}{4}" }, { "math_id": 8, "text": "F_{MH}\\;=\\;\\begin{cases}\\;\\;\\frac{h_M} {3} \\;\\;\\;\\;\\mbox{ if, } h_M > 3 \\\\ \\Big( \\frac {h_M}{3} \\Big)^2 \\mbox{ if, }h_M \\le 3 \\end{cases}" }, { "math_id": 9, "text": "F_{MG} \\; = \\; G_M" }, { "math_id": 10, "text": "F_F\\;=\\;\\big( \\frac{f}{900} \\big)^{-n} \\mbox{ for } 2 < n < 3" } ]
https://en.wikipedia.org/wiki?curid=5835550
583598
Oxygen cycle
Biogeochemical cycle of oxygen Oxygen cycle refers to the movement of oxygen through the atmosphere (air), biosphere (plants and animals) and the lithosphere (the Earth’s crust). The oxygen cycle demonstrates how free oxygen is made available in each of these regions, as well as how it is used. The oxygen cycle is the biogeochemical cycle of oxygen atoms between different oxidation states in ions, oxides, and molecules through redox reactions within and between the spheres/reservoirs of the planet Earth. The word oxygen in the literature typically refers to the most common oxygen allotrope, elemental/diatomic oxygen (O2), as it is a common product or reactant of many biogeochemical redox reactions within the cycle. Processes within the oxygen cycle are considered to be biological or geological and are evaluated as either a source (O2 production) or sink (O2 consumption). Oxygen is one of the most common elements on Earth and represents a large portion of each main reservoir. By far the largest reservoir of Earth's oxygen is within the silicate and oxide minerals of the crust and mantle (99.5% by weight). The Earth's atmosphere, hydrosphere, and biosphere together hold less than 0.05% of the Earth's total mass of oxygen. Besides O2, additional oxygen atoms are present in various forms spread throughout the surface reservoirs in the molecules of biomass, H2O, CO2, HNO3, NO, NO2, CO, H2O2, O3, SO2, H2SO4, MgO, CaO, Al2O3, SiO2, and PO4. Atmosphere. The atmosphere is 21% oxygen by volume, which equates to a total of roughly 34 × 10^18 mol of oxygen. Other oxygen-containing molecules in the atmosphere include ozone (O3), carbon dioxide (CO2), water vapor (H2O), and sulphur and nitrogen oxides (SO2, NO, N2O, etc.). Biosphere. The biosphere is 22% oxygen by volume, present mainly as a component of organic molecules (CxHxNxOx) and water. Hydrosphere. The hydrosphere is 33% oxygen by volume, present mainly as a component of water molecules, with dissolved molecules including free oxygen and carbonic acid (HxCO3). Lithosphere. The lithosphere is 46.6% oxygen by volume, present mainly as silica minerals (SiO2) and other oxide minerals. Sources and sinks. While there are many abiotic sources and sinks for O2, the presence of the profuse concentration of free oxygen in modern Earth's atmosphere and ocean is attributed to O2 production from the biological process of oxygenic photosynthesis in conjunction with a biological sink known as the biological pump and a geologic process of carbon burial involving plate tectonics. Biology is the main driver of O2 flux on modern Earth, and the evolution of oxygenic photosynthesis by bacteria, which is discussed as part of the Great Oxygenation Event, is thought to be directly responsible for the conditions permitting the development and existence of all complex eukaryotic metabolism. Biological production. The main source of atmospheric free oxygen is photosynthesis, which produces sugars and free oxygen from carbon dioxide and water: formula_0 Photosynthesizing organisms include the plant life of the land areas, as well as the phytoplankton of the oceans. The tiny marine cyanobacterium "Prochlorococcus" was discovered in 1986 and accounts for up to half of the photosynthesis of the open oceans. Abiotic production. An additional source of atmospheric free oxygen comes from photolysis, whereby high-energy ultraviolet radiation breaks down atmospheric water and nitrous oxide into component atoms.
The free hydrogen and nitrogen atoms escape into space, leaving O2 in the atmosphere: formula_1 formula_2 Biological consumption. The main way free oxygen is lost from the atmosphere is via respiration and decay, mechanisms in which animal life and bacteria consume oxygen and release carbon dioxide. Capacities and fluxes. The following tables offer estimates of oxygen cycle reservoir capacities and fluxes. These numbers are based primarily on estimates from (Walker, J. C. G.). More recent research indicates that ocean life (marine primary production) is actually responsible for more than half the total oxygen production on Earth. Table 2: Annual gain and loss of atmospheric oxygen (units of 10^10 kg O2 per year) Ozone. The presence of atmospheric oxygen has led to the formation of ozone (O3) and the ozone layer within the stratosphere: formula_3 formula_4 The ozone layer is extremely important to modern life as it absorbs harmful ultraviolet radiation: formula_5
[ { "math_id": 0, "text": "\\mathrm{6 \\ CO_2 + 6H_2O + energy \\longrightarrow C_6H_{12}O_6 + 6 \\ O_2}" }, { "math_id": 1, "text": "\\mathrm{2 \\ H_2O + energy \\longrightarrow 4 \\ H + O_2}" }, { "math_id": 2, "text": "\\mathrm{2 \\ N_2O + energy \\longrightarrow 4 \\ N + O_2}" }, { "math_id": 3, "text": "\\mathrm{O_2 + uv~light \\longrightarrow 2~O}\\qquad(\\lambda \\lesssim 200~\\text{nm})" }, { "math_id": 4, "text": "\\mathrm{O + O_2 \\longrightarrow O_3}" }, { "math_id": 5, "text": "\\mathrm{O_3 + uv~light \\longrightarrow O_2 + O}\\qquad(\\lambda \\lesssim 300~\\text{nm})" } ]
https://en.wikipedia.org/wiki?curid=583598
583637
Simplicial approximation theorem
Continuous mappings can be approximated by ones that are piecewise simple In mathematics, the simplicial approximation theorem is a foundational result for algebraic topology, guaranteeing that continuous mappings can be (by a slight deformation) approximated by ones that are piecewise of the simplest kind. It applies to mappings between spaces that are built up from simplices—that is, finite simplicial complexes. The general continuous mapping between such spaces can be represented approximately by the type of mapping that is ("affine"-) linear on each simplex into another simplex, at the cost (i) of sufficient barycentric subdivision of the simplices of the domain, and (ii) replacement of the actual mapping by a homotopic one. This theorem was first proved by L.E.J. Brouwer, by use of the Lebesgue covering theorem (a result based on compactness). It served to put the homology theory of the time—the first decade of the twentieth century—on a rigorous basis, since it showed that the topological effect (on homology groups) of continuous mappings could in a given case be expressed in a finitary way. This must be seen against the background of a realisation at the time that continuity was in general compatible with the pathological, in some other areas. This initiated, one could say, the era of combinatorial topology. There is a further simplicial approximation theorem for homotopies, stating that a homotopy between continuous mappings can likewise be approximated by a combinatorial version. Formal statement of the theorem. Let formula_0 and formula_1 be two simplicial complexes. A simplicial mapping formula_2 is called a simplicial approximation of a continuous function formula_3 if for every point formula_4, formula_5 belongs to the minimal closed simplex of formula_1 containing the point formula_6. If formula_7 is a simplicial approximation to a continuous map formula_8, then the geometric realization of formula_7, formula_9 is necessarily homotopic to formula_8. The simplicial approximation theorem states that given any continuous map formula_3 there exists a natural number formula_10 such that for all formula_11 there exists a simplicial approximation formula_12 to formula_8 (where formula_13 denotes the barycentric subdivision of formula_0, and formula_14 denotes the result of applying barycentric subdivision formula_15 times.), in other words, if formula_16 and formula_17 are simplicial complexes and formula_18 is a continuous function, then there is a subdivision formula_19 of formula_16 and a simplicial map formula_20 which is homotopic to formula_21. Moreover, if formula_22 is a positive continuous map, then there are subdivisions formula_23 of formula_24 and a simplicial map formula_25 such that formula_26 is formula_27-homotopic to formula_21; that is, there is a homotopy formula_28 from formula_21 to formula_26 such that formula_29 for all formula_30. So, we may consider the simplicial approximation theorem as a piecewise linear analog of Whitney approximation theorem.
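The combinatorial notion used throughout the statement, a simplicial mapping formula_2, can be checked mechanically once the complexes are given abstractly. The following Python sketch is purely illustrative (the representation of complexes as sets of frozensets and all names are choices made here, not something prescribed by the theorem); it tests whether a vertex map sends every simplex of one complex to a simplex of the other, which is the condition a candidate approximation must satisfy before one asks whether it approximates a given continuous map.

def is_simplicial_map(K, L, vertex_map):
    # K and L are abstract simplicial complexes: collections of frozensets of vertices,
    # assumed closed under taking faces. Return True if the vertex map is simplicial.
    L = set(L)
    for simplex in K:
        image = frozenset(vertex_map[v] for v in simplex)
        if image not in L:
            return False
    return True

# boundary of a triangle mapped onto a single edge by collapsing one vertex
K = {frozenset(s) for s in [(0,), (1,), (2,), (0, 1), (1, 2), (0, 2)]}
L = {frozenset(s) for s in [("a",), ("b",), ("a", "b")]}
print(is_simplicial_map(K, L, {0: "a", 1: "b", 2: "a"}))   # True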
[ { "math_id": 0, "text": " K " }, { "math_id": 1, "text": " L " }, { "math_id": 2, "text": " f : K \\to L " }, { "math_id": 3, "text": " F : |K| \\to |L| " }, { "math_id": 4, "text": " x \\in |K| " }, { "math_id": 5, "text": " |f|(x) " }, { "math_id": 6, "text": " F(x) " }, { "math_id": 7, "text": " f " }, { "math_id": 8, "text": " F " }, { "math_id": 9, "text": " |f| " }, { "math_id": 10, "text": " n_0 " }, { "math_id": 11, "text": " n \\ge n_0 " }, { "math_id": 12, "text": " f : \\mathrm{Bd}^n K \\to L " }, { "math_id": 13, "text": " \\mathrm{Bd}\\; K " }, { "math_id": 14, "text": " \\mathrm{Bd}^n K " }, { "math_id": 15, "text": " n " }, { "math_id": 16, "text": "K" }, { "math_id": 17, "text": "L" }, { "math_id": 18, "text": "f:|K|\\to |L|" }, { "math_id": 19, "text": "K'" }, { "math_id": 20, "text": "g:K'\\to L" }, { "math_id": 21, "text": "f" }, { "math_id": 22, "text": "\\epsilon:|L|\\to\\Bbb R" }, { "math_id": 23, "text": "K',L'" }, { "math_id": 24, "text": "K,L" }, { "math_id": 25, "text": "g:K'\\to L'" }, { "math_id": 26, "text": "g" }, { "math_id": 27, "text": "\\epsilon" }, { "math_id": 28, "text": "H:|K|\\times[0,1]\\to |L|" }, { "math_id": 29, "text": "\\mathrm{diam}(H(x\\times[0,1]))<\\epsilon(f(x))" }, { "math_id": 30, "text": "x\\in |K|" } ]
https://en.wikipedia.org/wiki?curid=583637
583651
Barycentric subdivision
In mathematics, the barycentric subdivision is a standard way to subdivide a given simplex into smaller ones. Its extension to simplicial complexes is a canonical method to refine them. Therefore, the barycentric subdivision is an important tool in algebraic topology. Motivation. The barycentric subdivision is an operation on simplicial complexes. In algebraic topology it is sometimes useful to replace the original spaces with simplicial complexes via triangulations: the substitution allows one to assign combinatorial invariants such as the Euler characteristic to the spaces. One can ask if there is an analogous way to replace the continuous functions defined on the topological spaces by functions that are linear on the simplices and which are homotopic to the original maps (see also simplicial approximation). In general, such an assignment requires a refinement of the given complex, meaning that one replaces bigger simplices by a union of smaller simplices. A standard way to effectuate such a refinement is the barycentric subdivision. Moreover, barycentric subdivision induces maps on homology groups and is helpful for computational concerns, see Excision and Mayer–Vietoris sequence. Definition. Subdivision of simplicial complexes. Let formula_0 be a geometric simplicial complex. A complex formula_1 is said to be a subdivision of formula_2 if every simplex of formula_1 is contained in a simplex of formula_2, and every simplex of formula_2 is a finite union of simplices of formula_1. These conditions imply that formula_2 and formula_1 are equal as sets and as topological spaces; only the simplicial structure changes. Barycentric subdivision of a simplex. For a simplex formula_3 spanned by points formula_4, the barycenter is defined to be the point formula_5. To define the subdivision, we will consider a simplex as a simplicial complex that contains only one simplex of maximal dimension, namely the simplex itself. The barycentric subdivision of a simplex can be defined inductively by its dimension. For points, i.e. simplices of dimension 0, the barycentric subdivision is defined as the point itself. Suppose then for a simplex formula_3 of dimension formula_6 that its faces formula_7 of dimension formula_8 are already divided. Therefore, there exist simplices formula_9 covering formula_10. The barycentric subdivision is then defined to be the geometric simplicial complex whose maximal simplices of dimension formula_6 are each the convex hull of formula_11 for one pair formula_12 with formula_13, so there will be formula_14 simplices covering formula_3. One can generalize the subdivision to simplicial complexes whose simplices are not all contained in a single simplex of maximal dimension, i.e. simplicial complexes that do not correspond geometrically to one simplex. This can be done by effectuating the steps described above simultaneously for every simplex of maximal dimension. The induction will then be based on the formula_6-th skeleton of the simplicial complex. It allows effectuating the subdivision more than once. Barycentric subdivision of a convex polytope. The operation of barycentric subdivision can be applied to any convex polytope of any dimension, producing another convex polytope of the same dimension. In this version of barycentric subdivision, it is not necessary for the polytope to form a simplicial complex: it can have faces that are not simplices. This is the dual operation to omnitruncation. The vertices of the barycentric subdivision correspond to the faces of all dimensions of the original polytope.
Two vertices are adjacent in the barycentric subdivision when they correspond to two faces of different dimensions with the lower-dimensional face included in the higher-dimensional face. The facets of the barycentric subdivision are simplices, corresponding to the flags of the original polytope. For instance, the barycentric subdivision of a cube, or of a regular octahedron, is the disdyakis dodecahedron. The degree-6, degree-4, and degree-8 vertices of the disdyakis dodecahedron correspond to the vertices, edges, and square facets of the cube, respectively. Properties. Mesh. Let formula_15 be a simplex and define formula_16. One way to measure the mesh of a geometric simplicial complex is to take the maximal diameter of the simplices contained in the complex. Let formula_17 be an formula_6-dimensional simplex that comes from the covering of formula_3 obtained by the barycentric subdivision. Then, the following estimate holds: formula_18. Therefore, by applying barycentric subdivision sufficiently often, the largest edge can be made as small as desired. Homology. For some statements in homology theory one wishes to replace simplicial complexes by a subdivision. On the level of simplicial homology groups one requires a map from the homology group of the original simplicial complex to the groups of the subdivided complex. Indeed, it can be shown that for any subdivision formula_19 of a finite simplicial complex formula_20 there is a unique sequence of maps between the homology groups formula_21 such that for each formula_22 in formula_23 the maps fulfill formula_24 and such that the maps assemble into a map of chain complexes. Moreover, the induced map is an isomorphism: subdivision does not change the homology of the complex. To compute the singular homology groups of a topological space formula_25 one considers continuous functions formula_26 where formula_27 denotes the formula_6-dimensional standard simplex. In an analogous way as described for simplicial homology groups, barycentric subdivision can be interpreted as an endomorphism of singular chain complexes. Here again, there exists a subdivision operator formula_28 sending a chain formula_29 to a linear combination formula_30 where the sum runs over all simplices formula_31 that appear in the covering of formula_3 by barycentric subdivision, and formula_32 for all such formula_31. This map also induces an endomorphism of chain complexes. Applications. The barycentric subdivision can be applied to whole simplicial complexes as in the simplicial approximation theorem or it can be used to subdivide geometric simplices. Therefore it is crucial for statements in singular homology theory, see Mayer–Vietoris sequence and excision. Simplicial approximation. Let formula_20, formula_33 be abstract simplicial complexes over the vertex sets formula_34, formula_35. A simplicial map is a function formula_36 which maps each simplex in formula_20 onto a simplex in formula_33. By affine-linear extension on the simplices, formula_37 induces a map between the geometric realizations of the complexes. Each point in a geometric complex lies in the interior of exactly one simplex, its "support." Consider now a "continuous" map formula_38. A simplicial map formula_39 is said to be a "simplicial approximation" of formula_40 if and only if each formula_41 is mapped by formula_42 into the support of formula_43 in formula_33.
If such an approximation exists, one can construct a homotopy formula_44 transforming formula_37 into formula_42 by defining it on each simplex; there it always exists, because simplices are contractible. The simplicial approximation theorem guarantees for every continuous function formula_38 the existence of a simplicial approximation, at least after refinement of formula_20, for instance by replacing formula_20 by its iterated barycentric subdivision. The theorem plays an important role for certain statements in algebraic topology, in order to reduce the behavior of continuous maps to that of simplicial maps, as for instance in "Lefschetz's fixed-point theorem." Lefschetz's fixed-point theorem. The "Lefschetz number" is a useful tool to find out whether a continuous function admits fixed points. It is computed as follows: Suppose that formula_25 and formula_45 are topological spaces that admit finite triangulations. A continuous map formula_46 induces homomorphisms formula_47 between its simplicial homology groups with coefficients in a field formula_48. These are linear maps between formula_49-vector spaces, so their traces formula_50 can be determined and their alternating sum formula_51 is called the "Lefschetz number" of formula_40. If formula_52, this number is the Euler characteristic of formula_25. The fixed-point theorem states that whenever formula_53, formula_40 has a fixed point. In the proof this is first shown only for simplicial maps and then generalized to arbitrary continuous functions via the approximation theorem. Brouwer's fixed-point theorem is a special case of this statement. Let formula_54 be an endomorphism of the unit ball. For formula_55 all its homology groups formula_56 vanish, and formula_57 is always the identity, so formula_58, and therefore formula_40 has a fixed point. Mayer–Vietoris sequence. The Mayer–Vietoris sequence is often used to compute singular homology groups and gives rise to inductive arguments in topology. The related statement can be formulated as follows: Let formula_59 be an open cover of the topological space formula_25. There is an exact sequence formula_60 formula_61 where we consider singular homology groups, formula_62 are embeddings and formula_63 denotes the direct sum of abelian groups. For the construction of singular homology groups one considers continuous maps defined on the standard simplex formula_29. An obstacle in the proof of the theorem is posed by maps formula_64 whose image is contained neither in formula_65 nor in formula_66. This can be fixed using the subdivision operator: by considering the images of such maps as the sum of images of smaller simplices lying in formula_65 or formula_66, one can show that the inclusion formula_67 induces an isomorphism on homology, which is needed to compare the homology groups. Excision. Excision can be used to determine relative homology groups. It allows one, in certain cases, to forget about subsets of a topological space when computing its homology groups, and therefore simplifies the computation: Let formula_25 be a topological space and let formula_68 be subsets, where formula_69 is closed such that formula_70. Then the inclusion formula_71 induces an isomorphism formula_72 for all formula_73 Again, in singular homology, maps formula_29 may appear such that their image is not part of the subsets mentioned in the theorem. Analogously those can be understood as a sum of images of smaller simplices obtained by the barycentric subdivision.
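The diameter estimate formula_18 from the Mesh section above can be observed directly in coordinates. The Python sketch below is illustrative only (the flag-by-permutation construction and all names are choices made here); it performs one barycentric subdivision of a geometric simplex given by its vertex coordinates and checks, for a triangle, that every piece has diameter at most n/(n+1) times the original and that the number of pieces equals formula_14.

import itertools
import numpy as np

def barycentric_subdivision(vertices):
    # One barycentric subdivision of the simplex spanned by `vertices` (shape (n+1, dim)).
    # Each maximal simplex of the subdivision corresponds to an ordering of the original
    # vertices; its k-th vertex is the barycenter of the face spanned by the first k+1
    # vertices in that ordering.
    vertices = np.asarray(vertices, dtype=float)
    simplices = []
    for perm in itertools.permutations(range(len(vertices))):
        new_vertices = [vertices[list(perm[:k + 1])].mean(axis=0) for k in range(len(vertices))]
        simplices.append(np.array(new_vertices))
    return simplices

def diameter(simplex):
    return max(np.linalg.norm(p - q) for p in simplex for q in simplex)

triangle = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # n = 2
pieces = barycentric_subdivision(triangle)
print(len(pieces))                                          # 6 = (n + 1)!
print(max(diameter(s) for s in pieces) <= (2.0 / 3.0) * diameter(triangle))  # True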
[ { "math_id": 0, "text": "\\mathcal{S}\\subset \\mathbb{R}^n" }, { "math_id": 1, "text": "\\mathcal{S'}" }, { "math_id": 2, "text": "\\mathcal{S}" }, { "math_id": 3, "text": "\\Delta" }, { "math_id": 4, "text": "p_0,...,p_n" }, { "math_id": 5, "text": "b_{\\Delta}= \\frac{1}{n+1}(p_0+p_1+...+\np_n)" }, { "math_id": 6, "text": "n" }, { "math_id": 7, "text": "\\Delta _i" }, { "math_id": 8, "text": "n-1" }, { "math_id": 9, "text": " \\Delta _{i,1}, \\; \\Delta _{i,2}..., \\Delta _{i, n!} " }, { "math_id": 10, "text": "\\Delta_i" }, { "math_id": 11, "text": " \\Delta_{i,j} \\cup b_{\\Delta} " }, { "math_id": 12, "text": "i,j" }, { "math_id": 13, "text": "i \\in {0,...,n}, \\; j\\in {1,...,n!}" }, { "math_id": 14, "text": "(n + 1)!" }, { "math_id": 15, "text": " \\Delta \\subset \\mathbb{R}^n " }, { "math_id": 16, "text": "\\operatorname{diam}(\\Delta) = \\operatorname{max} \\Bigl\\{ \\|a-b\\|_{\\mathbb{R}^n} \\; \\Big|\\; a, b \\in \\Delta \\Bigr\\}" }, { "math_id": 17, "text": "\\Delta'" }, { "math_id": 18, "text": "\\operatorname{diam}(\\Delta')\\leq \\left(\\frac{n}{n+1}\\right)\\; \\operatorname{diam}(\\Delta) " }, { "math_id": 19, "text": "\\mathcal{K'}" }, { "math_id": 20, "text": "\\mathcal{K}" }, { "math_id": 21, "text": "\\lambda_n: C_n(\\mathcal{K})\\rightarrow C_n(\\mathcal{K'}) " }, { "math_id": 22, "text": "\\Delta " }, { "math_id": 23, "text": "\\mathcal{K} " }, { "math_id": 24, "text": "\\lambda(\\Delta)\\subset \\Delta " }, { "math_id": 25, "text": "X" }, { "math_id": 26, "text": "\\sigma: \\Delta^n \\rightarrow X" }, { "math_id": 27, "text": "\\Delta^n" }, { "math_id": 28, "text": "\\lambda_n: C_n(X)\\rightarrow C_n(X) " }, { "math_id": 29, "text": "\\sigma: \\Delta \\rightarrow X" }, { "math_id": 30, "text": "\\sum \\varepsilon_{B_{\\Delta}} \\sigma\\vert_{B_{\\Delta}}" }, { "math_id": 31, "text": "B_{\\Delta}" }, { "math_id": 32, "text": "\\varepsilon_{B_{\\Delta}}\\in \\{1, -1\\}" }, { "math_id": 33, "text": "\\mathcal{L}" }, { "math_id": 34, "text": "V_K" }, { "math_id": 35, "text": "V_L" }, { "math_id": 36, "text": "f:V_K \\rightarrow V_L" }, { "math_id": 37, "text": "f " }, { "math_id": 38, "text": "f:\\mathcal{K}\\rightarrow \\mathcal{L} " }, { "math_id": 39, "text": "g:\\mathcal{K}\\rightarrow \\mathcal{L} " }, { "math_id": 40, "text": "f" }, { "math_id": 41, "text": "x \\in \\mathcal{K}" }, { "math_id": 42, "text": "g" }, { "math_id": 43, "text": "f(x)" }, { "math_id": 44, "text": "H" }, { "math_id": 45, "text": "Y" }, { "math_id": 46, "text": "f: X\\rightarrow Y" }, { "math_id": 47, "text": "f_i: H_i(X,K)\\rightarrow H_i(Y,K)" }, { "math_id": 48, "text": "K" }, { "math_id": 49, "text": "K " }, { "math_id": 50, "text": "tr_i" }, { "math_id": 51, "text": "L_K(f)= \\sum_i(-1)^itr_i(f) \\in K" }, { "math_id": 52, "text": "f = id" }, { "math_id": 53, "text": "L_K(f)\\neq 0" }, { "math_id": 54, "text": "f:\\mathbb{D}^n \\rightarrow \\mathbb{D}^n" }, { "math_id": 55, "text": "k \\geq 1" }, { "math_id": 56, "text": "H_k(\\mathbb{D}^n)" }, { "math_id": 57, "text": "f_0" }, { "math_id": 58, "text": "L_K(f) = tr_0(f) = 1 \\neq 0" }, { "math_id": 59, "text": "X = A \\cup B" }, { "math_id": 60, "text": "\\cdots\\to H_{n+1}(X)\\,\\xrightarrow{\\partial_*}\\,H_{n}(A\\cap B)\\,\\xrightarrow{(i_*,j_*)}\\,H_{n}(A)\\oplus H_{n}(B) \\, \\xrightarrow{k_* - l_*}\\, H_{n}(X)\\, \\xrightarrow{\\partial_*}\\, H_{n-1} (A\\cap B)\\to \\cdots " }, { "math_id": 61, "text": " \n\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\n\\cdots \\to H_0(A)\\oplus 
H_0(B)\\,\\xrightarrow{k_* - l_*}\\,H_0(X)\\to 0.\n" }, { "math_id": 62, "text": "i: A \\cap B \\hookrightarrow A, \\; j: A \\cap B \\hookrightarrow B, \\; k: A \\hookrightarrow X, \\; l: B \\hookrightarrow X" }, { "math_id": 63, "text": "\\oplus" }, { "math_id": 64, "text": "\\sigma" }, { "math_id": 65, "text": "A" }, { "math_id": 66, "text": "B" }, { "math_id": 67, "text": "C_n(A)\\oplus C_n(B)\\hookrightarrow C_n(X)" }, { "math_id": 68, "text": "Z \\subset A \\subset X" }, { "math_id": 69, "text": "Z" }, { "math_id": 70, "text": "Z \\subset A^{\\circ}" }, { "math_id": 71, "text": "i:(X \\setminus Z, A \\setminus Z) \\hookrightarrow (X, A)" }, { "math_id": 72, "text": "H_k(X \\setminus Z, A \\setminus Z) \\rightarrow H_k(X,A)" }, { "math_id": 73, "text": "k \\geq 0." } ]
https://en.wikipedia.org/wiki?curid=583651
5837036
Potential vorticity
Simplified approach for understanding fluid motions in a rotating system In fluid mechanics, potential vorticity (PV) is a quantity which is proportional to the dot product of vorticity and stratification. This quantity, following a parcel of air or water, can only be changed by diabatic or frictional processes. It is a useful concept for understanding the generation of vorticity in cyclogenesis (the birth and development of a cyclone), especially along the polar front, and in analyzing flow in the ocean. Potential vorticity (PV) is seen as one of the important theoretical successes of modern meteorology. It is a simplified approach for understanding fluid motions in a rotating system such as the Earth's atmosphere and ocean. Its development traces back to the circulation theorem by Bjerknes in 1898, which is a specialized form of Kelvin's circulation theorem. Starting from Hoskins et al., 1985, PV has been more commonly used in operational weather diagnosis such as tracing dynamics of air parcels and inverting for the full flow field. Even after detailed numerical weather forecasts on finer scales were made possible by increases in computational power, the PV view is still used in academia and routine weather forecasts, shedding light on the synoptic scale features for forecasters and researchers. Baroclinic instability requires the presence of a potential vorticity gradient along which waves amplify during cyclogenesis. Bjerknes circulation theorem. Vilhelm Bjerknes generalized Helmholtz's vorticity equation (1858) and Kelvin's circulation theorem (1869) to inviscid, geostrophic, and baroclinic fluids, i.e., fluids of varying density in a rotational frame which has a constant angular speed. If we define circulation as the integral of the tangent component of velocity around a closed fluid loop and take the integral of a closed chain of fluid parcels, we obtain formula_0 (1) where formula_1 is the time derivative in the rotational frame (not inertial frame), formula_2 is the relative circulation, formula_3 is projection of the area surrounded by the fluid loop on the equatorial plane, formula_4 is density, formula_5 is pressure, and formula_6 is the frame's angular speed. With Stokes' theorem, the first term on the right-hand-side can be rewritten as formula_7(2) which states that the rate of the change of the circulation is governed by the variation of density in pressure coordinates and the equatorial projection of its area, corresponding to the first and second terms on the right hand side. The first term is also called the "solenoid term". Under the condition of a barotropic fluid with a constant projection area formula_3, the Bjerknes circulation theorem reduces to Kelvin's theorem. However, in the context of atmospheric dynamics, such conditions are not a good approximation: if the fluid circuit moves from the equatorial region to the extratropics, formula_3 is not conserved. Furthermore, the complex geometry of the material circuit approach is not ideal for making an argument about fluid motions. Rossby's shallow water PV. Carl Rossby proposed in 1939 that, instead of the full three-dimensional vorticity vector, the local vertical component of the absolute vorticity formula_8 is the most important component for large-scale atmospheric flow, and that the large-scale structure of a two-dimensional non-divergent barotropic flow can be modeled by assuming that formula_8 is conserved. 
His later paper in 1940 relaxed this theory from 2D flow to quasi-2D shallow water equations on a beta plane. In this system, the atmosphere is separated into several incompressible layers stacked upon each other, and the vertical velocity can be deduced from integrating the convergence of horizontal flow. For a one-layer shallow water system without external forces or diabatic heating, Rossby showed that formula_9, (3) where formula_10 is the relative vorticity, formula_11 is the layer depth, and formula_12 is the Coriolis parameter. The conserved quantity, in parentheses in equation (3), was later named the "shallow water potential vorticity". For an atmosphere with multiple layers, with each layer having constant potential temperature, the above equation takes the form formula_13 (4) in which formula_14 is the relative vorticity on an isentropic surface—a surface of constant potential temperature, and formula_15 is a measure of the weight of unit cross-section of an individual air column inside the layer. Interpretation. Equation (3) is the atmospheric equivalent to the conservation of angular momentum. For example, a spinning ice skater with her arms spread out laterally can accelerate her rate of spin by contracting her arms. Similarly, when a vortex of air is broadened, it in turn spins more slowly. When the air converges horizontally, the air speed increases to maintain potential vorticity, and the vertical extent increases to conserve mass. On the other hand, divergence causes the vortex to spread, slowing down the rate of spin. Ertel's potential vorticity. Hans Ertel generalized Rossby's work via an independent paper published in 1942. By identifying a conserved quantity following the motion of an air parcel, it can be proved that a certain quantity called the Ertel potential vorticity is also conserved for an idealized continuous fluid. We look at the momentum equation and the mass continuity equation of an idealized compressible fluid in Cartesian coordinates: formula_16 (5) formula_17 (6) where formula_18 is the geopotential height. Writing the absolute vorticity as formula_19, formula_20 as formula_21, and then taking the curl of the full momentum equation (5), we have formula_22 (7) Consider formula_23 to be a hydrodynamical invariant, that is, formula_24 equals zero following the fluid motion in question. Taking the scalar product of equation (7) with formula_25, and noting that formula_26, we have formula_27 (8) The second term on the left-hand side of equation (8) is equal to formula_28, in which the second term is zero. From the triple vector product formula, we have formula_29 (9) where the second row is due to the fact that formula_30 is conserved following the motion, formula_31. Substituting equation (9) into equation (8) above, formula_32 (10) Combining the first, second, and fourth terms in equation (10) yields formula_33. Dividing by formula_34 and using a variant form of the mass continuity equation, formula_35, equation (10) gives formula_36 (11) If the invariant formula_37 is only a function of pressure formula_38 and density formula_34, then its gradient is perpendicular to the cross product of formula_39 and formula_40, which means that the right-hand side of equation (11) is equal to zero. Specifically for the atmosphere, potential temperature is chosen as the invariant for frictionless and adiabatic motions.
Therefore, the conservation law of Ertel's potential vorticity is given by formula_41 (12) where the potential vorticity is defined as formula_42 (13) where formula_4 is the fluid density, formula_43 is the absolute vorticity and formula_44 is the gradient of potential temperature. It can be shown through a combination of the first law of thermodynamics and momentum conservation that the potential vorticity can only be changed by diabatic heating (such as latent heat released from condensation) or frictional processes. If the atmosphere is stably stratified so that the potential temperature formula_45 increases monotonically with height, formula_45 can be used as a vertical coordinate instead of formula_46. In the formula_47 coordinate system, "density" is defined as formula_48. Then, if we start the derivation from the horizontal momentum equation in isentropic coordinates, Ertel PV takes a much simpler form formula_49 (14) where formula_50 is the local vertical vector of unit length and formula_51 is the 3-dimensional gradient operator in isentropic coordinates. It can be seen that this form of potential vorticity is just the continuous form of Rossby's isentropic multi-layer PV in equation (4). Interpretation. The Ertel PV conservation theorem, equation (12), states that for a dry atmosphere, if an air parcel conserves its potential temperature, its potential vorticity is also conserved following its full three-dimensional motions. In other words, in adiabatic motion, air parcels conserve Ertel PV on an isentropic surface. Remarkably, this quantity can serve as a Lagrangian tracer that links the wind and temperature fields. Using the Ertel PV conservation theorem has led to various advances in understanding the general circulation. One of them was the "tropopause folding" process described in Reed et al. (1950). In the upper troposphere and stratosphere, air parcels follow adiabatic movements during a synoptic period of time. In the extratropical region, isentropic surfaces in the stratosphere can penetrate into the tropopause, and thus air parcels can move between stratosphere and troposphere, although the strong gradient of PV near the tropopause usually prevents this motion. However, in frontal regions near jet streaks, which are concentrated regions within a jet stream where the wind speeds are the strongest, the PV contours can extend substantially downward into the troposphere, similarly to the isentropic surfaces. Therefore, stratospheric air can be advected, following both constant PV and isentropic surfaces, downwards deep into the troposphere. The use of PV maps has also proved accurate in distinguishing air parcels of recent stratospheric origin even under sub-synoptic-scale disturbances. (An illustration can be found in Holton, 2004, figure 6.4.) The Ertel PV also acts as a flow tracer in the ocean, and can be used to explain how a range of mountains, such as the Andes, can make the upper westerly winds swerve towards the equator and back. Maps depicting Ertel PV are commonly used in meteorological analysis, in which the potential vorticity unit (PVU) is defined as formula_52. Quasi-geostrophic PV. One of the simplest but nevertheless insightful balancing conditions is in the form of the quasi-geostrophic equations.
This approximation basically states that for three-dimensional atmospheric motions that are nearly hydrostatic and geostrophic, their geostrophic part can be determined approximately by the pressure field, whereas the ageostrophic part governs the evolution of the geostrophic flow. The potential vorticity in the quasi-geostrophic limit (QGPV) was first formulated by Charney and Stern in 1960. Similar to Chapter 6.3 in Holton 2004, we start from horizontal momentum (15), mass continuity (16), hydrostatic (17), and thermodynamic (18) equations on a beta plane, while assuming that the flow is inviscid and hydrostatic, formula_53 (15) formula_54 (16) formula_55 (17) formula_56 (18) where formula_57 represents the geostrophic evolution, formula_58, formula_59 is the diabatic heating term in formula_60, formula_61 is the geopotential height, formula_62 is the geostrophic component of horizontal velocity, formula_63 is the ageostrophic velocity, and formula_64 is the horizontal gradient operator in (x, y, p) coordinates. With some manipulation (see Quasi-geostrophic equations or Holton 2004, Chapter 6 for details), one can arrive at a conservation law formula_65 (19) where formula_66 is the spatially averaged dry static stability. Assuming that the flow is adiabatic, which means formula_67, we have the conservation of QGPV. The conserved quantity formula_68 takes the form formula_69 (20) which is the QGPV, and it is also known as the pseudo-potential vorticity. Apart from the diabatic heating term on the right-hand side of equation (19), it can also be shown that QGPV can be changed by frictional forces. The Ertel PV reduces to the QGPV if one expands the Ertel PV to leading order and assumes that the evolution equation is quasi-geostrophic, i.e., formula_70. Because of this, one should also note that the Ertel PV is conserved following an air parcel on an isentropic surface and is therefore a good Lagrangian tracer, whereas the QGPV is conserved following the large-scale geostrophic flow. QGPV has been widely used in depicting large-scale atmospheric flow structures, as discussed in the section on the PV invertibility principle. PV invertibility principle. Apart from being a Lagrangian tracer, the potential vorticity also gives dynamical implications via the invertibility principle. For a 2-dimensional ideal fluid, the vorticity distribution controls the stream function by a Laplace operator, formula_71 (21) where formula_10 is the relative vorticity, and formula_72 is the streamfunction. Hence, from the knowledge of the vorticity field, the operator can be inverted and the stream function can be calculated. In this particular case (equation 21), vorticity gives all the information needed to deduce motions, or the streamfunction, thus one can think in terms of vorticity to understand the dynamics of the fluid. A similar principle was originally introduced for the potential vorticity in a three-dimensional fluid in the 1940s by Kleinschmidt, and was developed by Charney and Stern in their quasi-geostrophic theory. Despite the theoretical elegance of Ertel's potential vorticity, early applications of Ertel PV were limited to tracer studies using special isentropic maps. It is generally insufficient to deduce other variables from the knowledge of Ertel PV fields only, since it is a product of wind (formula_73) and temperature fields (formula_45 and formula_74).
However, large-scale atmospheric motions are inherently quasi-static; wind and mass fields are adjusted and balanced against each other (e.g., gradient balance, geostrophic balance). Therefore, other assumptions can be made to form a closure and deduce the complete structure of the flow in question: (1) introduce balancing conditions of a certain form. These conditions must be physically realizable and stable without instabilities such as static instability. Also, the space and time scales of the motion must be compatible with the assumed balance; (2) specify a certain reference state, such as distribution of temperature, potential temperature, or geopotential height; (3) assert proper boundary conditions and invert the PV field globally. The first and second assumptions are expressed explicitly in the derivation of quasi-geostrophic PV. Leading-order geostrophic balance is used as the balancing condition. The second-order terms such as ageostrophic winds, perturbations of potential temperature and perturbations of geostrophic height should have consistent magnitude, i.e., of the order of the Rossby number. The reference state is zonally averaged potential temperature and geopotential height. The third assumption is apparent even for 2-dimensional vorticity inversion because inverting the Laplace operator in equation (21), which is a second-order elliptic operator, requires knowledge of the boundary conditions. For example, in equation (20), invertibility implies that given the knowledge of formula_68, the Laplace-like operator can be inverted to yield the geopotential height formula_75. formula_75 is also proportional to the QG streamfunction formula_76 under the quasi-geostrophic assumption. The geostrophic wind field can then be readily deduced from formula_76. Lastly, the temperature field formula_45 is given by substituting formula_75 into the hydrostatic equation (17).
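The two-dimensional invertibility statement in equation (21) can be made concrete on a doubly periodic domain, where the Laplace operator is diagonal in Fourier space. The Python sketch below is an illustration only (it is not taken from any of the cited works; the domain size, resolution, and the particular vorticity field are arbitrary choices made here): it prescribes a relative vorticity field, inverts it spectrally for the streamfunction, and checks the result against the analytic answer.

import numpy as np

# doubly periodic square domain of side 2*pi, n x n grid (illustrative choices)
n = 128
L = 2.0 * np.pi
x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x)

# prescribe a relative vorticity field and invert zeta = Laplacian(psi) spectrally
zeta = np.sin(3.0 * X) * np.cos(2.0 * Y)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
KX, KY = np.meshgrid(k, k)
k2 = KX ** 2 + KY ** 2
k2[0, 0] = 1.0                                  # avoid dividing by zero; the mean of psi is arbitrary
psi_hat = np.fft.fft2(zeta) / (-k2)
psi_hat[0, 0] = 0.0
psi = np.real(np.fft.ifft2(psi_hat))

# analytic check: the Laplacian of sin(3x)cos(2y) is -(9 + 4) sin(3x)cos(2y), so psi = -zeta/13
print(np.max(np.abs(psi + zeta / 13.0)))        # close to machine precision

The velocities then follow from derivatives of the streamfunction, so the flow is recovered from the vorticity alone; the three-dimensional PV inversions discussed above add balancing conditions, a reference state, and boundary information, but follow the same pattern.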
[ { "math_id": 0, "text": "\\frac{DC}{Dt} = - \\oint \\frac{1}{\\rho}\\nabla p \\cdot \\mathrm{d}\\mathbf{r} - 2\\Omega\\frac{DA_e}{Dt}," }, { "math_id": 1, "text": "\\frac{D}{Dt}" }, { "math_id": 2, "text": "C" }, { "math_id": 3, "text": "A_e" }, { "math_id": 4, "text": "\\rho" }, { "math_id": 5, "text": "p" }, { "math_id": 6, "text": "\\Omega" }, { "math_id": 7, "text": "\\frac{DC}{Dt} = \\int_A \\frac{\\nabla \\rho \\times \\nabla p}{\\rho^2} \\cdot \\mathrm{d}\\mathbf{A} - 2\\Omega\\frac{DA_e}{Dt}," }, { "math_id": 8, "text": "\\zeta_a" }, { "math_id": 9, "text": "\\frac{D}{Dt}\\left(\\frac{\\zeta + f}{h}\\right) = 0" }, { "math_id": 10, "text": "\\zeta" }, { "math_id": 11, "text": "h" }, { "math_id": 12, "text": "f" }, { "math_id": 13, "text": "\\frac{D}{Dt}\\left(\\frac{\\zeta_\\theta + f}{\\Delta}\\right) = 0," }, { "math_id": 14, "text": "\\zeta_\\theta" }, { "math_id": 15, "text": "\\Delta = - \\delta p / g" }, { "math_id": 16, "text": "\\frac{\\partial \\mathbf{v}}{\\partial t} + \\frac{1}{2}\\nabla \\mathbf{v^2} - \\mathbf{v}\\times(\\nabla\\times\\mathbf{v}) + 2\\mathbf{\\Omega} \\times \\mathbf{v} = -\\nabla\\Phi - \\frac{1}{\\rho} \\nabla p," }, { "math_id": 17, "text": "\\frac{D \\rho}{D t} = - \\rho\\nabla\\cdot\\mathbf{v}," }, { "math_id": 18, "text": "\\Phi" }, { "math_id": 19, "text": "\\mathbf{\\zeta_a} = \\nabla\\times\\mathbf{v} + 2\\mathbf{\\Omega}" }, { "math_id": 20, "text": "1/\\rho" }, { "math_id": 21, "text": "\\alpha" }, { "math_id": 22, "text": "\\frac{\\partial}{\\partial t}\\nabla\\times\\mathbf{v} - \\nabla\\times(\\mathbf{v}\\times\\mathbf{\\zeta_a}) = \\nabla p\\times\\nabla\\alpha." }, { "math_id": 23, "text": "\\psi = \\psi(\\mathbf{r}, t)" }, { "math_id": 24, "text": "\\frac{D\\psi}{Dt}" }, { "math_id": 25, "text": "\\nabla\\psi" }, { "math_id": 26, "text": "\\frac{\\partial}{\\partial t}\\mathbf{\\zeta_a} = \\frac{\\partial}{\\partial t}\\nabla\\times\\mathbf{v}" }, { "math_id": 27, "text": "\\nabla\\psi\\cdot\\frac{\\partial}{\\partial t}\\mathbf{\\zeta_a} - \\nabla\\psi\\cdot[\\nabla\\times(\\mathbf{v}\\times\\mathbf{\\zeta_a})] = \\nabla\\psi\\cdot(\\nabla p\\times\\nabla\\alpha)." }, { "math_id": 28, "text": "\\nabla\\cdot[\\nabla\\psi\\times(\\mathbf{v}\\times\\mathbf{\\zeta_a})] - (\\mathbf{v}\\times\\zeta_a)\\cdot(\\nabla\\times\\nabla\\psi)" }, { "math_id": 29, "text": "\\begin{align} \\nabla\\psi\\times(\\mathbf{v}\\times\\mathbf{\\zeta_a}) & = \\mathbf{v}(\\mathbf{\\zeta_a}\\cdot\\nabla\\psi) - \\mathbf{\\zeta_a}(\\mathbf{v}\\cdot\\nabla\\psi)\\\\\n& = \\mathbf{v}(\\mathbf{\\zeta_a}\\cdot\\nabla\\psi) + \\mathbf{\\zeta_a}\\frac{\\partial\\psi}{\\partial t},\n \\end{align}\n\n" }, { "math_id": 30, "text": "\\psi" }, { "math_id": 31, "text": "\\frac{D\\psi}{Dt} = 0" }, { "math_id": 32, "text": "\\nabla\\psi\\cdot\\frac{\\partial}{\\partial t}\\mathbf{\\zeta_a} + \\mathbf{v}\\cdot\\nabla(\\mathbf{\\zeta_a}\\cdot\\nabla\\psi) + (\\mathbf{\\zeta_a}\\cdot\\nabla\\psi)\\nabla\\cdot\\mathbf{v} + \\mathbf{\\zeta_a}\\cdot\\frac{\\partial}{\\partial t}\\nabla\\psi= \\nabla\\psi\\cdot(\\nabla p\\times\\nabla\\alpha)." 
}, { "math_id": 33, "text": "\\frac{D}{Dt}(\\mathbf{\\zeta_a}\\cdot\\nabla\\psi)" }, { "math_id": 34, "text": "\\rho" }, { "math_id": 35, "text": "\\frac{1}{\\rho}\\nabla\\cdot\\mathbf{v} = -\\frac{1}{\\rho^2}\\frac{D\\rho}{Dt} = \\frac{D\\alpha}{Dt}" }, { "math_id": 36, "text": "\\alpha\\frac{D}{Dt}(\\mathbf{\\zeta_a}\\cdot\\nabla\\psi) + (\\mathbf{\\zeta_a}\\cdot\\nabla\\psi)\\frac{D\\alpha}{Dt} = \\alpha \\nabla\\psi\\cdot(\\nabla p\\times\\nabla\\alpha)" }, { "math_id": 37, "text": "\\psi" }, { "math_id": 38, "text": "p" }, { "math_id": 39, "text": "\\nabla p" }, { "math_id": 40, "text": "\\nabla \\rho " }, { "math_id": 41, "text": "\\frac{D}{Dt}PV = 0." }, { "math_id": 42, "text": "PV = \\frac{\\mathbf{\\zeta_a}\\cdot\\nabla\\theta}{\\rho}," }, { "math_id": 43, "text": "\\mathbf{\\zeta_a}" }, { "math_id": 44, "text": "\\nabla \\theta" }, { "math_id": 45, "text": "\\theta" }, { "math_id": 46, "text": "z" }, { "math_id": 47, "text": "(x, y, \\theta)" }, { "math_id": 48, "text": "\\sigma\\equiv -g^{-1}\\partial p/\\partial\\theta" }, { "math_id": 49, "text": "PV_\\theta = \\frac{f + \\mathbf{k}\\cdot(\\nabla_\\theta\\times\\mathbf{v})}{\\sigma}" }, { "math_id": 50, "text": "\\mathbf{k}" }, { "math_id": 51, "text": "\\nabla_\\theta" }, { "math_id": 52, "text": "{10^{-6} \\cdot \\mathrm{K} \\cdot \\mathrm{m}^2 \\over \\mathrm{kg} \\cdot \\mathrm{s}} \\equiv 1\\ \\mathrm{PVU}" }, { "math_id": 53, "text": "\\frac{D_g\\mathbf{u_g}}{Dt} + f_0\\mathbf{k}\\times\\mathbf{u_a} + \\beta y\\mathbf{k}\\times\\mathbf{u_g}= 0" }, { "math_id": 54, "text": "\\nabla_{hp}\\cdot\\mathbf{u_a} + \\frac{\\partial\\omega}{\\partial p} = 0\n" }, { "math_id": 55, "text": "\\frac{\\partial\\Phi}{\\partial p} = - \\frac{R\\pi}{p}\\theta\n" }, { "math_id": 56, "text": "\\frac{D_g\\theta}{Dt} + \\omega\\frac{d\\theta_0}{d p} = \\frac{J}{c_p\\pi}\n" }, { "math_id": 57, "text": "\\frac{D_g}{Dt} = \\frac{\\partial}{\\partial t} + u_g\\frac{\\partial}{\\partial x} + v_g\\frac{\\partial}{\\partial y}" }, { "math_id": 58, "text": "\\pi = (p/ps)^{R/c_p}\n" }, { "math_id": 59, "text": "J\n" }, { "math_id": 60, "text": "Js^{-1}kg^{-1}\n" }, { "math_id": 61, "text": "\\Phi\n" }, { "math_id": 62, "text": "\\mathbf{u_g}\n" }, { "math_id": 63, "text": "\\mathbf{u_a}\n" }, { "math_id": 64, "text": "\\nabla_{hp}\n" }, { "math_id": 65, "text": "\\frac{D_g}{Dt}q = -f_0\\frac{\\partial}{\\partial p}\\left(\\sigma\\frac{\\kappa J}{p}\\right)," }, { "math_id": 66, "text": "\\sigma = -\\frac{R\\pi}{p}\\frac{d\\theta_0}{dp}\n" }, { "math_id": 67, "text": "J = 0\n" }, { "math_id": 68, "text": "q\n" }, { "math_id": 69, "text": "{q = {{{1 \\over f_o}{\\nabla^2 \\Phi}}+{f}+{{\\partial \\over \\partial p}\\left({{f_o \\over \\sigma}{\\partial \\Phi \\over \\partial p}}\\right)}}}," }, { "math_id": 70, "text": "\\frac{D}{Dt}\\approx\\frac{D_g}{Dt}" }, { "math_id": 71, "text": "{\\zeta = {{{\\nabla^2 \\Psi}}}}," }, { "math_id": 72, "text": "\\Psi" }, { "math_id": 73, "text": "\\zeta_a" }, { "math_id": 74, "text": "\\rho\n" }, { "math_id": 75, "text": "\\Phi" }, { "math_id": 76, "text": "\\Psi" } ]
https://en.wikipedia.org/wiki?curid=5837036