12558801
Run average
Baseball statistic In baseball statistics, run average (RA) refers to measures of the rate at which runs are allowed or scored. For "pitchers," the run average is the number of runs—earned or unearned—allowed per nine innings. It is calculated using this formula: formula_0 where Run average for pitchers differs from the more commonly used earned run average (ERA) by adding unearned runs to the numerator. This measure is also known as total run average (TRA) or runs allowed average. For "batters," the run average is the number of runs scored per at bat. Run average for pitchers. Although presentations of pitching statistics generally feature the ERA rather than the RA, the latter statistic is notable for both historical and analytical reasons. For early leagues or leagues for which statistics must be calculated from box scores, such as the Negro leagues, data on earned runs may be unavailable and RA may be the only statistic available. The analytical case for RA appeared as early as 1976, when sportswriter Leonard Koppett proposed that RA would be a better measure of pitcher performance than ERA. Subsequently, sabermetrician Bill James wrote, "I think that the distinction between earned runs and unearned runs is silly and artificial, a distinction having no meaning except in the eyes of some guy up in the press box." In baseball, defense—that is, preventing the opponent from scoring runs—is the joint responsibility of the pitcher and the fielders. ERA attempts to adjust for some of the influence of the fielders on a pitcher's runs allowed by removing runs that are scored because of fielding errors—that is, unearned runs. However, removing unearned runs doesn't adequately adjust for the effects of defensive support, because it makes no adjustment for other important aspects of fielding, such as proficiency at turning double plays, throwing out base stealers, and fielding range. Errors are the only aspect of fielding that ERA adjusts for, and are generally regarded as a small part of fielding in modern baseball. Another problem with ERA is the inconsistency with which official scorers call plays as errors. The rules give scorers considerable discretion regarding the plays that can be called as errors. Researcher Craig R. Wright found large differences between teams in the rate at which their scorers called errors, and even found some evidence of home team bias—that is, calling errors to favor the statistics of players for the home team. While ERA doesn't charge the pitcher for the runs that result from errors, it may tend to over correct for the influence of fielding. Even though unearned runs would not have scored without an error, in most cases the pitcher also contributes to the scoring of the unearned run—either by allowing the opposing player to reach base via a walk or hit, or by allowing a subsequent batter a hit that advances and scores the runner. During the early days of baseball history, this over correction for fielding errors caused pitchers on bad teams to be overrated in terms of ERA. Removing unearned runs in calculating ERA may be useful if they are unrelated to pitcher performance, but Wright concludes that fielding errors are somewhat dependent on a pitcher's style. Because errors occur most often on ground balls, pitchers with high strikeout rates who give up fly balls are likely to give up fewer unearned runs than groundball control-type pitchers. 
For example, Ron Guidry—a flyball power pitcher—and Tommy John—a groundball control pitcher—were teammates on the Yankees from 1979 to 1982, supported by the same defense. During that period, 13.7% of John's runs allowed were unearned, compared to 9.8% of Guidry's. Wright concludes that this difference is attributable to their pitching styles, and thus that unearned runs are partially attributable to the pitcher. Adjusted RA+. Similar to adjusted ERA+, it is possible to adjust RA for ballpark effects and compare it to the league average. The formula for this adjustment is: formula_1 where lgRA is the league's average run average. Values of RA+ above 100 indicate better-than-average pitching performance. Unlike unadjusted RA, which can never be lower than unadjusted ERA, a pitcher's adjusted RA+ can be either higher or lower than his adjusted ERA+. Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mathrm{RA} = 9 \\cdot \\frac{\\mathrm{R}}{\\mathrm{IP}}" }, { "math_id": 1, "text": "\\mathrm{RA+} = 100 \\cdot \\frac{\\mathrm{lgRA}}{\\mathrm{RA}}" } ]
https://en.wikipedia.org/wiki?curid=12558801
12559506
Walter Weldon
English chemist, journalist, and publisher of needlework patterns (1832–1885) Walter Weldon FRS FRSE (31 October 1832 – 20 September 1885) was a 19th-century English industrial chemist and journalist. He was President of the Society of Chemical Industry from 1883-84. Life. He was born in Loughborough on 31 October 1832, the son of Reuben Weldon and his wife, Esther Fowke. Weldon was brother to Ernest James Weldon, founder of Weldon & Wilkinson Ltd. In 1854 he began work as a journalist in London with "The Dial" (which was afterwards incorporated in "The Morning Star"), and in 1860 he started a monthly magazine, "Weldon's Register of Facts and Occurrences relating to Literature, the Sciences and the Arts", which was later discontinued. In the 1860s he turned to industrial chemistry, described below. However, he is remembered for his pattern work. His publications in the late 1800s were through Weldon & Company, a pattern company who produced hundreds of patterns and projects for numerous types of Victorian needlework. In about 1885, Weldon & Company started to publish monthly 14-page needlework newsletters, each covering one needlework technique. These were affordable, at 2 pence each. In 1888, the company began to collect these newsletters in groups of 12, publishing them a series of books entitled "Weldon's Practical Needlework", each volume consisting of the various newsletters (one year of publications) bound together with a cloth cover and costing 2s. 6d. "Weldon's Ladies' Journal" (1875–1954) supplied dressmaking patterns, and was a blueprint for subsequent 'home weeklies'. In 1877 he was elected a Fellow of the Royal Society of Edinburgh. His proposers were Alexander Crum Brown, Sir James Dewar, John Hutton Balfour and Sir Andrew Douglas Maclagan. In 1882 he was further elected a Fellow of the Royal Society of London. Weldon was interested in parapsychology, and was a spiritualist and a member of the Society for Psychical Research. Family. Weldon married Anne Cotton in 1854. Their second son was Walter Frank Raphael Weldon, an English evolutionary zoologist and biometrician. Chemistry. Weldon was a successful chemist and developed the Weldon process to produce chlorine by boiling hydrochloric acid with manganese dioxide. MnO2 was expensive, and Weldon created a process for its recycling by treating the manganese chloride produced with milk of lime and blowing air through the mixture to form a precipitate known as Weldon mud which was used to generate more chlorine. Manganese dioxide reacts with hydrochloric acid to form chlorine and Manganese chloride: formula_0 This was put into operation about 1869, and by 1875 it was being used by almost every chlorine manufacturer throughout Europe. He continued to work at the production of chlorine in connection with the processes of creating various sodium salts and became a leading authority on the subject. None of his later proposals met with equal success. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mathrm{MnO_2 + 4 \\ HCl \\longrightarrow MnCl_2 + Cl_2 + 2 \\ H_2O}" } ]
https://en.wikipedia.org/wiki?curid=12559506
12559595
Midhinge
In statistics, the midhinge is the average of the first and third quartiles and is thus a measure of location. Equivalently, it is the 25% trimmed mid-range or 25% midsummary; it is an L-estimator. formula_0 The midhinge is related to the interquartile range (IQR), the difference of the third and first quartiles (i.e. formula_1), which is a measure of statistical dispersion. The two are complementary in the sense that if one knows the midhinge and the IQR, one can find the first and third quartiles. The use of the term "hinge" for the lower or upper quartiles derives from John Tukey's work on exploratory data analysis in the late 1970s, and "midhinge" is a fairly modern term dating from around that time. The midhinge is slightly simpler to calculate than the trimean (formula_2), which originated in the same context and equals the average of the median (formula_3) and the midhinge. formula_4 References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\operatorname{MH}(X) = \\overline{Q_{1, 3}(X)} = \\frac{Q_1(X) + Q_3(X)}{2} = \\frac{P_{25}(X) + P_{75}(X)}{2} = M_{25}(X)" }, { "math_id": 1, "text": "IQR = Q_3 - Q_1" }, { "math_id": 2, "text": "TM" }, { "math_id": 3, "text": "\\tilde{X} = Q_2 = P_{50}" }, { "math_id": 4, "text": "\\operatorname{MH}(X) = 2 \\operatorname{TM}(X) - \\operatorname{med}(X) = 2 \\frac{Q_1 + 2Q_2 + Q_3}{4} - Q_2" } ]
https://en.wikipedia.org/wiki?curid=12559595
1256073
Phasor
Complex number representing a particular sine wave In physics and engineering, a phasor (a portmanteau of phase vector) is a complex number representing a sinusoidal function whose amplitude (A) and initial phase (θ) are time-invariant and whose angular frequency (ω) is fixed. It is related to a more general concept called analytic representation, which decomposes a sinusoid into the product of a complex constant and a factor depending on time and frequency. The complex constant, which depends on amplitude and phase, is known as a phasor, or complex amplitude, and (in older texts) sinor or even complexor. A common application is in the steady-state analysis of an electrical network powered by time-varying current where all signals are assumed to be sinusoidal with a common frequency. Phasor representation allows the analyst to represent the amplitude and phase of the signal using a single complex number. The only difference in their analytic representations is the complex amplitude (phasor). A linear combination of such functions can be represented as a linear combination of phasors (known as phasor arithmetic or phasor algebra) and the time/frequency dependent factor that they all have in common. The origin of the term phasor rightfully suggests that a (diagrammatic) calculus somewhat similar to that possible for vectors is possible for phasors as well. An important additional feature of the phasor transform is that differentiation and integration of sinusoidal signals (having constant amplitude, period and phase) correspond to simple algebraic operations on the phasors; the phasor transform thus allows the analysis (calculation) of the AC steady state of RLC circuits by solving simple algebraic equations (albeit with complex coefficients) in the phasor domain instead of solving differential equations (with real coefficients) in the time domain. The originator of the phasor transform was Charles Proteus Steinmetz, working at General Electric in the late 19th century. He got his inspiration from Oliver Heaviside. Heaviside's operational calculus was modified so that the variable p becomes jω. The complex number j has a simple meaning: phase shift. Glossing over some mathematical details, the phasor transform can also be seen as a particular case of the Laplace transform (limited to a single frequency), which, in contrast to phasor representation, can be used to (simultaneously) derive the transient response of an RLC circuit. However, the Laplace transform is mathematically more difficult to apply and the effort may be unjustified if only steady-state analysis is required. Notation. Phasor notation (also known as angle notation) is a mathematical notation used in electronics engineering and electrical engineering. A vector whose polar coordinates are magnitude formula_0 and angle formula_1 is written formula_2 formula_3 can represent either the vector formula_4 or the complex number formula_5, according to Euler's formula with formula_6, both of which have magnitudes of 1. The angle may be stated in degrees with an implied conversion from degrees to radians. For example, formula_7 would be assumed to be formula_8 which is the vector formula_9 or the number formula_10 Definition. A real-valued sinusoid with constant amplitude, frequency, and phase has the form: formula_11 where only parameter formula_12 is time-variant.
The inclusion of an imaginary component: formula_13 gives it, in accordance with Euler's formula, the factoring property described in the lead paragraph: formula_14 whose real part is the original sinusoid. The benefit of the complex representation is that linear operations with other complex representations produce a complex result whose real part reflects the same linear operations with the real parts of the other complex sinusoids. Furthermore, all the mathematics can be done with just the phasors formula_15 and the common factor formula_16 is reinserted prior to taking the real part of the result. The function formula_17 is an "analytic representation" of formula_18 Figure 2 depicts it as a rotating vector in the complex plane. It is sometimes convenient to refer to the entire function as a "phasor", as we do in the next section. Arithmetic. Multiplication by a constant (scalar). Multiplication of the phasor formula_19 by a complex constant, formula_20, produces another phasor. That means its only effect is to change the amplitude and phase of the underlying sinusoid: formula_21 In electronics, formula_20 would represent an impedance, which is independent of time. In particular it is "not" the shorthand notation for another phasor. Multiplying a phasor current by an impedance produces a phasor voltage. But the product of two phasors (or squaring a phasor) would represent the product of two sinusoids, which is a non-linear operation that produces new frequency components. Phasor notation can only represent systems with one frequency, such as a linear system stimulated by a sinusoid. Addition. The sum of multiple phasors produces another phasor. That is because the sum of sinusoids with the same frequency is also a sinusoid with that frequency: formula_22 where: formula_23 and, if we take formula_24, then formula_25 is: formula_26 if formula_27, where formula_28 is the signum function; formula_29 if formula_30; or formula_31 if formula_32. Alternatively, via the law of cosines on the complex plane (or the trigonometric identity for angle differences): formula_33 where formula_34 A key point is that "A"3 and "θ"3 do not depend on "ω" or "t", which is what makes phasor notation possible. The time and frequency dependence can be suppressed and re-inserted into the outcome as long as the only operations used in between are ones that produce another phasor. In angle notation, the operation shown above is written: formula_35 Another way to view addition is that two vectors with coordinates ["A"1 cos("ωt" + "θ"1), "A"1 sin("ωt" + "θ"1)] and ["A"2 cos("ωt" + "θ"2), "A"2 sin("ωt" + "θ"2)] are added vectorially to produce a resultant vector with coordinates ["A"3 cos("ωt" + "θ"3), "A"3 sin("ωt" + "θ"3)] (see animation). In physics, this sort of addition occurs when sinusoids interfere with each other, constructively or destructively. The static vector concept provides useful insight into questions like this: "What phase difference would be required between three identical sinusoids for perfect cancellation?" In this case, simply imagine taking three vectors of equal length and placing them head to tail such that the last head matches up with the first tail. Clearly, the shape which satisfies these conditions is an equilateral triangle, so the angle from each phasor to the next is 120° (2π⁄3 radians), or one third of a wavelength, λ⁄3. So the phase difference between each wave must also be 120°, as is the case in three-phase power.
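As a quick numerical check of this cancellation argument, the following Python sketch sums three unit phasors spaced 120° apart, and also sums the corresponding time-domain sinusoids at an arbitrary instant (the frequency and the sampled time are made up):

```python
import cmath, math

# Three unit-amplitude phasors spaced 120 deg (2*pi/3 rad) apart sum to zero,
# so the corresponding sinusoids cancel at every instant.
phasors = [cmath.rect(1.0, k * 2 * math.pi / 3) for k in range(3)]
print(abs(sum(phasors)) < 1e-12)  # True: perfect cancellation

# The same conclusion in the time domain, at an arbitrary instant:
t, w = 0.37, 2 * math.pi * 50     # arbitrary time and angular frequency
s = sum(math.cos(w * t + k * 2 * math.pi / 3) for k in range(3))
print(abs(s) < 1e-12)             # True
```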
In other words, what this shows is that: formula_36 In the example of three waves, the phase difference between the first and the last wave was 240°, while for two waves destructive interference happens at 180°. In the limit of many waves, the phasors must form a circle for destructive interference, so that the first phasor is nearly parallel with the last. This means that for many sources, destructive interference happens when the first and last wave differ by 360 degrees, a full wavelength formula_37. This is why in single slit diffraction, the minima occur when light from the far edge travels a full wavelength further than the light from the near edge. As the single vector rotates in an anti-clockwise direction, its tip at point A will rotate one complete revolution of 360° or 2π radians representing one complete cycle. If the length of its moving tip is transferred at different angular intervals in time to a graph as shown above, a sinusoidal waveform would be drawn starting at the left with zero time. Each position along the horizontal axis indicates the time that has elapsed since zero time, "t" = 0. When the vector is horizontal the tip of the vector represents the angles at 0°, 180°, and at 360°. Likewise, when the tip of the vector is vertical it represents the positive peak value, (+"A"max) at 90° or π⁄2 and the negative peak value, (−"A"max) at 270° or 3π⁄2. Then the time axis of the waveform represents the angle either in degrees or radians through which the phasor has moved. So we can say that a phasor represents a scaled voltage or current value of a rotating vector which is "frozen" at some point in time, (t) and in our example above, this is at an angle of 30°. Sometimes when we are analysing alternating waveforms we may need to know the position of the phasor, representing the alternating quantity at some particular instant in time, especially when we want to compare two different waveforms on the same axis. For example, voltage and current. We have assumed in the waveform above that the waveform starts at time "t" = 0 with a corresponding phase angle in either degrees or radians. But if a second waveform starts to the left or to the right of this zero point, or if we want to represent in phasor notation the relationship between the two waveforms, then we will need to take into account this phase difference, Φ, of the waveform. Consider the diagram below from the previous Phase Difference tutorial. Differentiation and integration. The time derivative or integral of a phasor produces another phasor. For example: formula_38 Therefore, in phasor representation, the time derivative of a sinusoid becomes just multiplication by the constant formula_39. Similarly, integrating a phasor corresponds to multiplication by formula_40 The time-dependent factor, formula_41 is unaffected. When we solve a linear differential equation with phasor arithmetic, we are merely factoring formula_16 out of all terms of the equation, and reinserting it into the answer. For example, consider the following differential equation for the voltage across the capacitor in an RC circuit: formula_42 When the voltage source in this circuit is sinusoidal: formula_43 we may substitute formula_44 formula_45 where phasor formula_46, and phasor formula_47 is the unknown quantity to be determined.
In the phasor shorthand notation, the differential equation reduces to: formula_48 <templatestyles src="Math_proof/styles.css" />Derivation Since this must hold for all formula_12, specifically: formula_49 it follows that: It is also readily seen that: formula_50 Substituting these into Eq.1 and Eq.2, multiplying Eq.2 by formula_51 and adding both equations gives: formula_52 Solving for the phasor capacitor voltage gives: formula_53 As we have seen, the factor multiplying formula_54 represents differences of the amplitude and phase of formula_55 relative to formula_56 and formula_57 In polar coordinate form, the first term of the last expression is: formula_58 where formula_59. Therefore: formula_60 Ratio of phasors. A quantity called complex impedance is the ratio of two phasors, which is not a phasor, because it does not correspond to a sinusoidally varying function. Applications. Circuit laws. With phasors, the techniques for solving DC circuits can be applied to solve linear AC circuits. In an AC circuit we have real power (P) which is a representation of the average power into the circuit and reactive power ("Q") which indicates power flowing back and forth. We can also define the complex power "S" = "P" + "jQ" and the apparent power which is the magnitude of S. The power law for an AC circuit expressed in phasors is then "S" = "VI"* (where "I"* is the complex conjugate of "I", and the magnitudes of the voltage and current phasors "V" and of "I" are the RMS values of the voltage and current, respectively). Given this we can apply the techniques of analysis of resistive circuits with phasors to analyze single frequency linear AC circuits containing resistors, capacitors, and inductors. Multiple frequency linear AC circuits and AC circuits with different waveforms can be analyzed to find voltages and currents by transforming all waveforms to sine wave components (using Fourier series) with magnitude and phase then analyzing each frequency separately, as allowed by the superposition theorem. This solution method applies only to inputs that are sinusoidal and for solutions that are in steady state, i.e., after all transients have died out. The concept is frequently involved in representing an electrical impedance. In this case, the phase angle is the phase difference between the voltage applied to the impedance and the current driven through it. Power engineering. In analysis of three phase AC power systems, usually a set of phasors is defined as the three complex cube roots of unity, graphically represented as unit magnitudes at angles of 0, 120 and 240 degrees. By treating polyphase AC circuit quantities as phasors, balanced circuits can be simplified and unbalanced circuits can be treated as an algebraic combination of symmetrical components. This approach greatly simplifies the work required in electrical calculations of voltage drop, power flow, and short-circuit currents. In the context of power systems analysis, the phase angle is often given in degrees, and the magnitude in RMS value rather than the peak amplitude of the sinusoid. The technique of synchrophasors uses digital instruments to measure the phasors representing transmission system voltages at widespread points in a transmission network. Differences among the phasors indicate power flow and system stability. Telecommunications: analog modulations. The rotating frame picture using phasor can be a powerful tool to understand analog modulations such as amplitude modulation (and its variants) and frequency modulation. 
formula_61 where the term in brackets is viewed as a rotating vector in the complex plane. The phasor has length formula_0, rotates anti-clockwise at a rate of formula_62 revolutions per second, and at time formula_63 makes an angle of formula_1 with respect to the positive real axis. The waveform formula_64 can then be viewed as a projection of this vector onto the real axis. A modulated waveform is represented by this phasor (the carrier) and two additional phasors (the modulation phasors). If the modulating signal is a single tone of the form formula_65, where formula_66 is the modulation depth and formula_67 is the frequency of the modulating signal, then for amplitude modulation the two modulation phasors are given by, formula_68 formula_69 The two modulation phasors are phased such that their vector sum is always in phase with the carrier phasor. An alternative representation is two phasors counter rotating around the end of the carrier phasor at a rate formula_67 relative to the carrier phasor. That is, formula_70 formula_71 Frequency modulation is a similar representation except that the modulating phasors are not in phase with the carrier. In this case the vector sum of the modulating phasors is shifted 90° from the carrier phase. Strictly, frequency modulation representation requires additional small modulation phasors at formula_72 etc, but for most practical purposes these are ignored because their effect is very small. Footnotes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
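The RC example derived earlier can also be checked numerically. The Python sketch below uses arbitrary illustrative component values (not taken from the article) to evaluate the phasor solution formula_53, i.e. V_c = V_s / (1 + jωRC), and confirms the amplitude ratio 1/√(1 + (ωRC)²) and phase lag arctan(ωRC) stated at the end of that derivation.

```python
import cmath, math

# Arbitrary illustrative component values and source (not from the article)
R, C = 1_000.0, 1e-6              # ohms, farads
w = 2 * math.pi * 50.0            # angular frequency of a 50 Hz source
Vs = cmath.rect(10.0, 0.0)        # source phasor: amplitude 10 V, phase 0

# Phasor-domain solution from the derivation: Vc = Vs / (1 + j*w*R*C)
Vc = Vs / (1 + 1j * w * R * C)

# Amplitude ratio and phase lag, which should match
# 1/sqrt(1 + (wRC)^2) and arctan(wRC) respectively.
print(round(abs(Vc) / abs(Vs), 4), round(-cmath.phase(Vc), 4))
print(round(1 / math.sqrt(1 + (w * R * C) ** 2), 4), round(math.atan(w * R * C), 4))
```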
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "\\theta" }, { "math_id": 2, "text": "A \\angle \\theta." }, { "math_id": 3, "text": "1 \\angle \\theta" }, { "math_id": 4, "text": "(\\cos \\theta,\\, \\sin \\theta)" }, { "math_id": 5, "text": "\\cos \\theta + i \\sin \\theta = e^{i\\theta}" }, { "math_id": 6, "text": "i^2 = -1" }, { "math_id": 7, "text": "1 \\angle 90" }, { "math_id": 8, "text": "1 \\angle 90^\\circ," }, { "math_id": 9, "text": "(0,\\, 1)" }, { "math_id": 10, "text": "e^{i\\pi/2} = i." }, { "math_id": 11, "text": "A\\cos(\\omega t + \\theta)," }, { "math_id": 12, "text": "t" }, { "math_id": 13, "text": "i \\cdot A\\sin(\\omega t + \\theta)" }, { "math_id": 14, "text": "A\\cos(\\omega t + \\theta) + i\\cdot A\\sin(\\omega t + \\theta) = A e^{i(\\omega t + \\theta)} = A e^{i \\theta} \\cdot e^{i\\omega t}," }, { "math_id": 15, "text": "A e^{i \\theta}," }, { "math_id": 16, "text": "e^{i\\omega t}" }, { "math_id": 17, "text": "Ae^{i(\\omega t + \\theta)}" }, { "math_id": 18, "text": "A\\cos(\\omega t + \\theta)." }, { "math_id": 19, "text": "A e^{i\\theta} e^{i\\omega t}" }, { "math_id": 20, "text": "B e^{i\\phi}" }, { "math_id": 21, "text": "\\begin{align}\n &\\operatorname{Re}\\left( \\left(A e^{i\\theta} \\cdot B e^{i\\phi}\\right) \\cdot e^{i\\omega t} \\right) \\\\\n ={} &\\operatorname{Re}\\left( \\left(AB e^{i(\\theta + \\phi)}\\right) \\cdot e^{i\\omega t} \\right) \\\\\n ={} &AB \\cos(\\omega t + (\\theta + \\phi)).\n\\end{align}" }, { "math_id": 22, "text": "\\begin{align}\n &A_1\\cos(\\omega t + \\theta_1) + A_2\\cos(\\omega t + \\theta_2) \\\\[3pt]\n ={} &\\operatorname{Re}\\left( A_1 e^{i\\theta_1}e^{i\\omega t} \\right) + \\operatorname{Re}\\left( A_2 e^{i\\theta_2}e^{i\\omega t} \\right) \\\\[3pt]\n ={} &\\operatorname{Re}\\left( A_1 e^{i\\theta_1}e^{i\\omega t} + A_2 e^{i\\theta_2} e^{i\\omega t} \\right) \\\\[3pt]\n ={} &\\operatorname{Re}\\left( \\left(A_1 e^{i\\theta_1} + A_2 e^{i\\theta_2}\\right) e^{i\\omega t} \\right) \\\\[3pt]\n ={} &\\operatorname{Re}\\left( \\left(A_3 e^{i\\theta_3}\\right) e^{i\\omega t} \\right) \\\\[3pt]\n ={} &A_3 \\cos(\\omega t + \\theta_3),\n\\end{align}" }, { "math_id": 23, "text": "A_3^2 = (A_1 \\cos\\theta_1 + A_2 \\cos \\theta_2)^2 + (A_1 \\sin\\theta_1 + A_2 \\sin\\theta_2)^2," }, { "math_id": 24, "text": " \\theta_3 \\in \\left[-\\frac{\\pi}{2}, \\frac{3\\pi}{2}\\right]" }, { "math_id": 25, "text": "\\theta_3" }, { "math_id": 26, "text": "\\sgn(A_1 \\sin(\\theta_1) + A_2 \\sin(\\theta_2)) \\cdot \\frac{\\pi}{2}," }, { "math_id": 27, "text": "A_1 \\cos\\theta_1 + A_2 \\cos\\theta_2 = 0," }, { "math_id": 28, "text": "\\sgn" }, { "math_id": 29, "text": "\\arctan\\left(\\frac{A_1 \\sin\\theta_1 + A_2 \\sin\\theta_2}{A_1 \\cos\\theta_1 + A_2 \\cos\\theta_2}\\right)," }, { "math_id": 30, "text": "A_1 \\cos\\theta_1 + A_2 \\cos\\theta_2 > 0" }, { "math_id": 31, "text": "\\pi + \\arctan\\left(\\frac{A_1 \\sin\\theta_1 + A_2 \\sin\\theta_2}{A_1 \\cos\\theta_1 + A_2 \\cos\\theta_2}\\right)," }, { "math_id": 32, "text": "A_1 \\cos\\theta_1 + A_2 \\cos\\theta_2 < 0" }, { "math_id": 33, "text": "\n A_3^2 = A_1^2 + A_2^2 - 2 A_1 A_2 \\cos(180^\\circ - \\Delta\\theta)\n = A_1^2 + A_2^2 + 2 A_1 A_2 \\cos(\\Delta\\theta),\n" }, { "math_id": 34, "text": "\\Delta\\theta = \\theta_1 - \\theta_2." }, { "math_id": 35, "text": "A_1 \\angle \\theta_1 + A_2 \\angle \\theta_2 = A_3 \\angle \\theta_3." 
}, { "math_id": 36, "text": "\\cos(\\omega t) + \\cos\\left(\\omega t + \\frac{2\\pi}{3}\\right) + \\cos\\left(\\omega t - \\frac{2\\pi}{3}\\right) = 0." }, { "math_id": 37, "text": "\\lambda" }, { "math_id": 38, "text": "\\begin{align}\n &\\operatorname{Re}\\left( \\frac{\\mathrm{d}}{\\mathrm{d}t} \\mathord\\left(A e^{i\\theta} \\cdot e^{i\\omega t}\\right) \\right) \\\\\n ={} &\\operatorname{Re}\\left( A e^{i\\theta} \\cdot i\\omega e^{i\\omega t} \\right) \\\\\n ={} &\\operatorname{Re}\\left( A e^{i\\theta} \\cdot e^{i\\pi/2} \\omega e^{i\\omega t} \\right) \\\\\n ={} &\\operatorname{Re}\\left( \\omega A e^{i(\\theta + \\pi/2)} \\cdot e^{i\\omega t} \\right) \\\\\n ={} &\\omega A \\cdot \\cos\\left(\\omega t + \\theta + \\frac{\\pi}{2}\\right).\n\\end{align}" }, { "math_id": 39, "text": "i \\omega = e^{i\\pi/2} \\cdot \\omega" }, { "math_id": 40, "text": "\\frac{1}{i\\omega} = \\frac{e^{-i\\pi/2}}{\\omega}." }, { "math_id": 41, "text": "e^{i\\omega t}," }, { "math_id": 42, "text": "\\frac{\\mathrm{d}\\, v_\\text{C}(t)}{\\mathrm{d}t} + \\frac{1}{RC}v_\\text{C}(t) = \\frac{1}{RC} v_\\text{S}(t)." }, { "math_id": 43, "text": "v_\\text{S}(t) = V_\\text{P} \\cdot \\cos(\\omega t + \\theta)," }, { "math_id": 44, "text": "v_\\text{S}(t) = \\operatorname{Re}\\left( V_\\text{s} \\cdot e^{i \\omega t} \\right)." }, { "math_id": 45, "text": "v_\\text{C}(t) = \\operatorname{Re}\\left(V_\\text{c} \\cdot e^{i \\omega t} \\right)," }, { "math_id": 46, "text": "V_\\text{s} = V_\\text{P} e^{i\\theta}," }, { "math_id": 47, "text": "V_\\text{c}" }, { "math_id": 48, "text": "i \\omega V_\\text{c} + \\frac{1}{RC} V_\\text{c} = \\frac{1}{RC}V_\\text{s}." }, { "math_id": 49, "text": "t - \\frac{\\pi}{2\\omega}," }, { "math_id": 50, "text": "\\begin{align}\n \\frac{\\mathrm{d}}{\\mathrm{d}t} \\operatorname{Re}\\left( V_\\text{c} \\cdot e^{i\\omega t} \\right)\n &= \\operatorname{Re}\\left(\\frac{\\mathrm{d}}{\\mathrm{d}t} \\mathord\\left( V_\\text{c} \\cdot e^{i\\omega t} \\right)\\right)\n = \\operatorname{Re}\\left(i\\omega V_\\text{c} \\cdot e^{i\\omega t} \\right) \\\\\n \\frac{\\mathrm{d}}{\\mathrm{d}t} \\operatorname{Im}\\left( V_\\text{c} \\cdot e^{i\\omega t} \\right)\n &= \\operatorname{Im}\\left(\\frac{\\mathrm{d}}{\\mathrm{d}t} \\mathord\\left( V_\\text{c} \\cdot e^{i\\omega t} \\right) \\right)\n = \\operatorname{Im}\\left(i\\omega V_\\text{c} \\cdot e^{i\\omega t} \\right).\n\\end{align}" }, { "math_id": 51, "text": "i," }, { "math_id": 52, "text": "\\begin{align}\n i\\omega V_\\text{c} \\cdot e^{i\\omega t} + \\frac{1}{RC}V_\\text{c} \\cdot e^{i\\omega t} &= \\frac{1}{RC}V_\\text{s} \\cdot e^{i\\omega t} \\\\\n \\left(i\\omega V_\\text{c} + \\frac{1}{RC}V_\\text{c} \\right) \\!\\cdot e^{i\\omega t} &= \\left(\\frac{1}{RC}V_\\text{s}\\right) \\cdot e^{i \\omega t} \\\\\n i\\omega V_\\text{c} + \\frac{1}{RC}V_\\text{c} &= \\frac{1}{RC}V_\\text{s}.\n\\end{align}" }, { "math_id": 53, "text": "V_\\text{c} = \\frac{1}{1 + i \\omega RC} \\cdot V_\\text{s} = \\frac{1 - i\\omega R C}{1 + (\\omega RC)^2} \\cdot V_\\text{P} e^{i\\theta}." }, { "math_id": 54, "text": "V_\\text{s}" }, { "math_id": 55, "text": "v_\\text{C}(t)" }, { "math_id": 56, "text": "V_\\text{P}" }, { "math_id": 57, "text": "\\theta." 
}, { "math_id": 58, "text": "\\frac{1 - i\\omega R C}{1 + (\\omega RC)^2}=\\frac{1}{\\sqrt{1 + (\\omega RC)^2}}\\cdot e^{-i \\phi(\\omega)}," }, { "math_id": 59, "text": "\\phi(\\omega) = \\arctan(\\omega RC)" }, { "math_id": 60, "text": "v_\\text{C}(t) =\\operatorname{Re}\\left(V_\\text{c} \\cdot e^{i \\omega t} \\right)= \\frac{1}{\\sqrt{1 + (\\omega RC)^2}}\\cdot V_\\text{P} \\cos(\\omega t + \\theta - \\phi(\\omega))." }, { "math_id": 61, "text": "x(t) = \\operatorname{Re}\\left( A e^{i \\theta} \\cdot e^{i 2\\pi f_0 t} \\right)," }, { "math_id": 62, "text": "f_0" }, { "math_id": 63, "text": "t = 0" }, { "math_id": 64, "text": "x(t)" }, { "math_id": 65, "text": "Am \\cos{2\\pi f_m t} " }, { "math_id": 66, "text": "m" }, { "math_id": 67, "text": "f_m" }, { "math_id": 68, "text": "{1 \\over 2} Am e^{i \\theta} \\cdot e^{i 2\\pi (f_0+f_m) t}," }, { "math_id": 69, "text": "{1 \\over 2} Am e^{i \\theta} \\cdot e^{i 2\\pi (f_0-f_m) t}." }, { "math_id": 70, "text": "{1 \\over 2} Am e^{i \\theta} \\cdot e^{i 2\\pi f_m t}," }, { "math_id": 71, "text": "{1 \\over 2} Am e^{i \\theta} \\cdot e^{-i 2\\pi f_m t}." }, { "math_id": 72, "text": "2f_m, 3f_m" } ]
https://en.wikipedia.org/wiki?curid=1256073
1256105
Direct integral
In mathematics and functional analysis, a direct integral or Hilbert integral is a generalization of the concept of direct sum. The theory is most developed for direct integrals of Hilbert spaces and direct integrals of von Neumann algebras. The concept was introduced in 1949 by John von Neumann in one of the papers in the series "On Rings of Operators". One of von Neumann's goals in this paper was to reduce the classification of (what are now called) von Neumann algebras on separable Hilbert spaces to the classification of so-called factors. Factors are analogous to full matrix algebras over a field, and von Neumann wanted to prove a continuous analogue of the Artin–Wedderburn theorem classifying semi-simple rings. Results on direct integrals can be viewed as generalizations of results about finite-dimensional C*-algebras of matrices; in this case the results are easy to prove directly. The infinite-dimensional case is complicated by measure-theoretic technicalities. Direct integral theory was also used by George Mackey in his analysis of systems of imprimitivity and his general theory of induced representations of locally compact separable groups. Direct integrals of Hilbert spaces. The simplest example of a direct integral are the "L"2 spaces associated to a (σ-finite) countably additive measure μ on a measurable space "X". Somewhat more generally one can consider a separable Hilbert space "H" and the space of square-integrable "H"-valued functions formula_0 Terminological note: The terminology adopted by the literature on the subject is followed here, according to which a measurable space "X" is referred to as a "Borel space" and the elements of the distinguished σ-algebra of "X" as Borel sets, regardless of whether or not the underlying σ-algebra comes from a topological space (in most examples it does). A Borel space is "standard" if and only if it is isomorphic to the underlying Borel space of a Polish space; all Polish spaces of a given cardinality are isomorphic to each other (as Borel spaces). Given a countably additive measure μ on "X", a measurable set is one that differs from a Borel set by a null set. The measure μ on "X" is a "standard" measure if and only if there is a null set "E" such that its complement "X" − "E" is a standard Borel space. All measures considered here are σ-finite. Definition. Let "X" be a Borel space equipped with a countably additive measure μ. A "measurable family of Hilbert spaces" on ("X", μ) is a family {"H""x"}"x"∈ "X", which is locally equivalent to a trivial family in the following sense: There is a countable partition formula_1 by measurable subsets of "X" such that formula_2 where H"n" is the canonical "n"-dimensional Hilbert space, that is formula_3 In the above, formula_4 is the space of square summable sequences; all separable Hilbert spaces are isomorphic to formula_5 A "cross-section" of {"H""x"}"x"∈ "X" is a family {"s""x"}"x" ∈ "X" such that "s""x" ∈ "H""x" for all "x" ∈ "X". A cross-section is measurable if and only if its restriction to each partition element "X""n" is measurable. We will identify measurable cross-sections "s", "t" that are equal almost everywhere. Given a measurable family of Hilbert spaces, the direct integral formula_6 consists of equivalence classes (with respect to almost everywhere equality) of measurable square integrable cross-sections of {"H""x"}"x"∈ "X". 
This is a Hilbert space under the inner product formula_7 Given the local nature of our definition, many definitions applicable to single Hilbert spaces apply to measurable families of Hilbert spaces as well. Remark. This definition is apparently more restrictive than the one given by von Neumann and discussed in Dixmier's classic treatise on von Neumann algebras. In the more general definition, the Hilbert space "fibers" "H""x" are allowed to vary from point to point without having a local triviality requirement (local in a measure-theoretic sense). One of the main theorems of the von Neumann theory is to show that in fact the more general definition is equivalent to the simpler one given here. Note that the direct integral of a measurable family of Hilbert spaces depends only on the measure class of the measure μ; more precisely: Theorem. Suppose μ, ν are σ-finite countably additive measures on "X" that have the same sets of measure 0. Then the mapping formula_8 is a unitary operator formula_9 Example. The simplest example occurs when "X" is a countable set and μ is a discrete measure. Thus, when "X" = N and μ is counting measure on N, then any sequence {"H""k"} of separable Hilbert spaces can be considered as a measurable family. Moreover, formula_10 Decomposable operators. For the example of a discrete measure on a countable set, any bounded linear operator "T" on formula_11 is given by an infinite matrix formula_12 For this example, of a discrete measure on a countable set, "decomposable operators" are defined as the operators that are block diagonal, having zero for all non-diagonal entries. Decomposable operators can be characterized as those which commute with diagonal matrices: formula_13 The above example motivates the general definition: A family of bounded operators {"T""x"}"x"∈ "X" with "T""x" ∈ L("H""x") is said to be "strongly measurable" if and only if its restriction to each "X""n" is strongly measurable. This makes sense because "H""x" is constant on "X""n". Measurable families of operators with an essentially bounded norm, that is formula_14 define bounded linear operators formula_15 acting in a pointwise fashion, that is formula_16 Such operators are said to be "decomposable". Examples of decomposable operators are those defined by scalar-valued (i.e. C-valued) measurable functions λ on "X". In fact, Theorem. The mapping formula_17 given by formula_18 is an involutive algebraic isomorphism onto its image. This allows "L"∞μ("X") to be identified with the image of φ. Theorem Decomposable operators are precisely those that are in the operator commutant of the abelian algebra "L"∞μ("X"). Decomposition of Abelian von Neumann algebras. The spectral theorem has many variants. A particularly powerful version is as follows: Theorem. For any Abelian von Neumann algebra A on a separable Hilbert space "H", there is a standard Borel space "X" and a measure μ on "X" such that it is unitarily equivalent as an operator algebra to "L"∞μ("X") acting on a direct integral of Hilbert spaces formula_19 To assert A is unitarily equivalent to "L"∞μ("X") as an operator algebra means that there is a unitary formula_20 such that "U" A "U"* is the algebra of diagonal operators "L"∞μ("X"). Note that this asserts more than just the algebraic equivalence of A with the algebra of diagonal operators. This version of the spectral theorem does not explicitly state how the underlying standard Borel space "X" is obtained. There is a uniqueness result for the above decomposition. Theorem. 
If the Abelian von Neumann algebra A is unitarily equivalent to both "L"∞μ("X") and "L"∞ν("Y") acting on the direct integral spaces formula_21 and μ, ν are standard measures, then there is a Borel isomorphism formula_22 where "E", "F" are null sets such that formula_23 The isomorphism φ is a measure class isomorphism, in that φ and its inverse preserve sets of measure 0. The previous two theorems provide a complete classification of Abelian von Neumann algebras on separable Hilbert spaces. This classification takes into account the realization of the von Neumann algebra as an algebra of operators. If one considers the underlying von Neumann algebra independently of its realization (as a von Neumann algebra), then its structure is determined by very simple measure-theoretic invariants. Direct integrals of von Neumann algebras. Let {"H""x"}"x" ∈ "X" be a measurable family of Hilbert spaces. A family of von Neumann algebras {"A""x"}"x" ∈ "X" with formula_24 is measurable if and only if there is a countable set "D" of measurable operator families that pointwise generate {"A""x"} "x" ∈ "X" as a von Neumann algebra in the following sense: For almost all "x" ∈ "X", formula_25 where W*("S") denotes the von Neumann algebra generated by the set "S". If {"A""x"}"x" ∈ "X" is a measurable family of von Neumann algebras, the direct integral of von Neumann algebras formula_26 consists of all operators of the form formula_27 for "T""x" ∈ "A""x". One of the main theorems of von Neumann and Murray in their original series of papers is a proof of the decomposition theorem: Any von Neumann algebra is a direct integral of factors. Precisely stated, Theorem. If {"A""x"}"x" ∈ "X" is a measurable family of von Neumann algebras and μ is standard, then the family of operator commutants is also measurable and formula_28 Central decomposition. Suppose "A" is a von Neumann algebra. Let Z("A") be the center of "A". The center is the set of operators in "A" that commute with all operators "A": formula_29 Then Z("A") is an Abelian von Neumann algebra. Example. The center of L("H") is 1-dimensional. In general, if "A" is a von Neumann algebra, if the center is 1-dimensional we say "A" is a factor. When "A" is a von Neumann algebra whose center contains a sequence of minimal pairwise orthogonal non-zero projections {"E""i"}"i" ∈ N such that formula_30 then "A" "E""i" is a von Neumann algebra on the range "H""i" of "E""i". It is easy to see "A" "E""i" is a factor. Thus, in this special case formula_31 represents "A" as a direct sum of factors. This is a special case of the central decomposition theorem of von Neumann. In general, the structure theorem of Abelian von Neumann algebras represents Z(A) as an algebra of scalar diagonal operators. In any such representation, all the operators in A are decomposable operators. This can be used to prove the basic result of von Neumann: any von Neumann algebra admits a decomposition into factors. Theorem. Suppose formula_32 is a direct integral decomposition of "H" and A is a von Neumann algebra on "H" so that Z(A) is represented by the algebra of scalar diagonal operators "L"∞μ("X") where "X" is a standard Borel space. Then formula_33 where for almost all "x" ∈ "X", "A""x" is a von Neumann algebra that is a "factor". Measurable families of representations. If "A" is a separable C*-algebra, the above results can be applied to measurable families of non-degenerate *-representations of "A". In the case that "A" has a unit, non-degeneracy is equivalent to unit-preserving. 
By the general correspondence that exists between strongly continuous unitary representations of a locally compact group "G" and non-degenerate *-representations of the groups C*-algebra C*("G"), the theory for C*-algebras immediately provides a decomposition theory for representations of separable locally compact groups. Theorem. Let "A" be a separable C*-algebra and π a non-degenerate involutive representation of "A" on a separable Hilbert space "H". Let W*(π) be the von Neumann algebra generated by the operators π("a") for "a" ∈ "A". Then corresponding to any central decomposition of W*(π) over a standard measure space ("X", μ) (which, as stated, is unique in a measure theoretic sense), there is a measurable family of factor representations formula_34 of "A" such that formula_35 Moreover, there is a subset "N" of "X" with μ measure zero, such that π"x", π"y" are disjoint whenever "x", "y" ∈ "X" − "N", where representations are said to be "disjoint" if and only if there are no intertwining operators between them. One can show that the direct integral can be indexed on the so-called "quasi-spectrum" "Q" of "A", consisting of quasi-equivalence classes of factor representations of "A". Thus, there is a standard measure μ on "Q" and a measurable family of factor representations indexed on "Q" such that π"x" belongs to the class of "x". This decomposition is essentially unique. This result is fundamental in the theory of group representations. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
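For the discrete-measure example above, where the direct integral reduces to a direct sum of finite-dimensional spaces, the characterization of decomposable operators can be checked concretely. The following numpy sketch (finite dimensions only, purely illustrative) builds a block-diagonal operator and a scalar-diagonal operator, verifies that they commute, and contrasts this with a generic operator that does not:

```python
import numpy as np

rng = np.random.default_rng(0)
dims = [2, 3]                                  # two "fibers": C^2 and C^3

# A decomposable operator: block diagonal, one block per fiber.
blocks = [rng.standard_normal((d, d)) for d in dims]
T = np.block([[blocks[0], np.zeros((2, 3))],
              [np.zeros((3, 2)), blocks[1]]])

# A "scalar diagonal" operator: a constant lambda_x times the identity on each fiber.
D = np.diag([1.5] * 2 + [-0.7] * 3)

# A generic (non-decomposable) operator for contrast.
S = rng.standard_normal((5, 5))

print(np.allclose(T @ D, D @ T))   # True: the decomposable operator commutes with D
print(np.allclose(S @ D, D @ S))   # False (generically): S is not decomposable
```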
[ { "math_id": 0, "text": " L^2_\\mu(X, H). " }, { "math_id": 1, "text": " \\{X_n\\}_{1 \\leq n \\leq \\omega} " }, { "math_id": 2, "text": " H_x = \\mathbf{H}_n \\quad x \\in X_n " }, { "math_id": 3, "text": " \\mathbf{H}_n = \\left\\{ \\begin{matrix} \\mathbb{C}^n & \\mbox{ if } n < \\omega \\\\ \\ell^2 & \\mbox{ if } n = \\omega \\end{matrix}\\right. " }, { "math_id": 4, "text": "\\ell^2" }, { "math_id": 5, "text": "\\ell^2." }, { "math_id": 6, "text": " \\int^\\oplus_X H_x \\, \\mathrm{d} \\mu(x) " }, { "math_id": 7, "text": " \\langle s | t \\rangle = \\int_X \\langle s(x) | t(x) \\rangle \\, \\mathrm{d} \\mu(x) " }, { "math_id": 8, "text": " s \\mapsto \\left(\\frac{\\mathrm{d} \\mu}{\\mathrm{d} \\nu}\\right)^{1/2} s " }, { "math_id": 9, "text": " \\int^\\oplus_X H_x \\, \\mathrm{d} \\mu(x) \\rightarrow \\int^\\oplus_X H_x \\, \\mathrm{d} \\nu(x). " }, { "math_id": 10, "text": " \\int^\\oplus_X H_x \\, \\mathrm{d} \\mu(x) \\cong \\bigoplus_{k \\in \\mathbb{N}} H_k " }, { "math_id": 11, "text": " H = \\bigoplus_{k \\in \\mathbb{N}} H_k " }, { "math_id": 12, "text": " \\begin{bmatrix} T_{1 1} & T_{1 2} & \\cdots & T_{1 n} & \\cdots \\\\ T_{2 1} & T_{2 2} & \\cdots & T_{2 n} & \\cdots \\\\\n\\vdots & \\vdots & \\ddots & \\vdots & \\cdots \\\\\nT_{n 1} & T_{n 2} & \\cdots & T_{n n} & \\cdots \\\\\n\\vdots & \\vdots & \\cdots & \\vdots & \\ddots\n\\end{bmatrix}. " }, { "math_id": 13, "text": " \\begin{bmatrix} \\lambda_{1} & 0 & \\cdots & 0 & \\cdots \\\\ 0 & \\lambda_{2} & \\cdots & 0 & \\cdots \\\\\n\\vdots & \\vdots & \\ddots & \\vdots & \\cdots \\\\\n0 & 0 & \\cdots & \\lambda_{n} & \\cdots \\\\\n\\vdots & \\vdots & \\cdots & \\vdots & \\ddots\n\\end{bmatrix}. " }, { "math_id": 14, "text": " \\operatorname{ess-sup}_{x \\in X} \\|T_x\\| < \\infty " }, { "math_id": 15, "text": " \\int^\\oplus_X \\ T_x d \\mu(x) \\in \\operatorname{L}\\bigg(\\int^\\oplus_X H_x \\ d \\mu(x)\\bigg) " }, { "math_id": 16, "text": " \\bigg[\\int^\\oplus_X \\ T_x d \\mu(x) \\bigg] \\bigg(\\int^\\oplus_X \\ s_x d \\mu(x) \\bigg) = \\int^\\oplus_X \\ T_x(s_x) d \\mu(x). " }, { "math_id": 17, "text": " \\phi: L^\\infty_\\mu(X) \\rightarrow \\operatorname{L}\\bigg(\\int^\\oplus_X H_x \\ d \\mu(x)\\bigg) " }, { "math_id": 18, "text": " \\lambda \\mapsto \\int^\\oplus_X \\ \\lambda_x d \\mu(x) " }, { "math_id": 19, "text": " \\int_X^\\oplus H_x d \\mu(x). \\quad" }, { "math_id": 20, "text": " U: H \\rightarrow \\int_X^\\oplus H_x d\\mu(x) " }, { "math_id": 21, "text": " \\int_X^\\oplus H_x d \\mu(x), \\quad \\int_Y^\\oplus K_y d \\nu(y) " }, { "math_id": 22, "text": "\\varphi: X - E \\rightarrow Y - F " }, { "math_id": 23, "text": " K_{\\phi(x)} = H_x \\quad \\mbox{almost everywhere} " }, { "math_id": 24, "text": " A_x \\subseteq \\operatorname{L}(H_x) " }, { "math_id": 25, "text": " \\operatorname{W^*}(\\{S_x: S \\in D\\}) = A_x " }, { "math_id": 26, "text": " \\int_X^\\oplus A_x d\\mu(x) " }, { "math_id": 27, "text": " \\int_X^\\oplus T_x d\\mu(x) " }, { "math_id": 28, "text": " \\bigg[\\int_X^\\oplus A_x d\\mu(x)\\bigg]' = \\int_X^\\oplus A'_x d\\mu(x). 
" }, { "math_id": 29, "text": " \\mathbf{Z}(A) = A \\cap A' " }, { "math_id": 30, "text": " 1 = \\sum_{i \\in \\mathbb{N}} E_i " }, { "math_id": 31, "text": " A = \\bigoplus_{i \\in \\mathbb{N}} A E_i " }, { "math_id": 32, "text": " H = \\int_X^\\oplus H_x d \\mu(x) " }, { "math_id": 33, "text": " \\mathbf{A} = \\int^\\oplus_X A_x d \\mu(x) " }, { "math_id": 34, "text": " \\{\\pi_x\\}_{x \\in X} " }, { "math_id": 35, "text": " \\pi(a) = \\int_X^\\oplus \\pi_x(a) d \\mu(x), \\quad \\forall a \\in A. " } ]
https://en.wikipedia.org/wiki?curid=1256105
12561401
Aluminium smelting
Process of extracting aluminium from its oxide alumina Aluminium smelting is the process of extracting aluminium from its oxide, alumina, generally by the Hall-Héroult process. Alumina is extracted from the ore bauxite by means of the Bayer process at an alumina refinery. This is an electrolytic process, so an aluminium smelter uses huge amounts of electric power; smelters tend to be located close to large power stations, often hydro-electric ones, in order to hold down costs and reduce the overall carbon footprint. Smelters are often located near ports, since many smelters use imported alumina. Layout of an aluminium smelter. The Hall-Héroult electrolysis process is the major production route for primary aluminium. An electrolytic cell is made of a steel shell with a series of insulating linings of refractory materials. The cell consists of a brick-lined outer steel shell as a container and support. Inside the shell, cathode blocks are cemented together by ramming paste. The top lining is in contact with the molten metal and acts as the cathode. The molten electrolyte is maintained at high temperature inside the cell. The prebaked anode is also made of carbon in the form of large sintered blocks suspended in the electrolyte. A single Soderberg electrode or a number of prebaked carbon blocks are used as anode, while the principal formulation and the fundamental reactions occurring on their surface are the same. An aluminium smelter consists of a large number of cells (pots) in which the electrolysis takes place. A typical smelter contains anywhere from 300 to 720 pots, each of which produces about a ton of aluminium a day, though the largest proposed smelters are up to five times that capacity. Smelting is run as a batch process, with the aluminium deposited at the bottom of the pots and periodically siphoned off. Particularly in Australia these smelters are used to control electrical network demand, and as a result power is supplied to the smelter at a very low price. However power must not be interrupted for more than 4–5 hours, since the pots have to be repaired at significant cost if the liquid metal solidifies. Principle. Aluminium is produced by electrolytic reduction of aluminium oxide dissolved in molten cryolite. &lt;chem&gt;Al^3+ + 3e- -&gt; Al&lt;/chem&gt; At the same time the carbon electrode is oxidised, initially to carbon monoxide &lt;chem&gt;C + 1/2O2 -&gt; CO&lt;/chem&gt; Although the formation of carbon monoxide (CO) is thermodynamically favoured at the reaction temperature, the presence of considerable overvoltage (difference between reversible and polarization potentials) changes the thermodynamic equilibrium and a mixture of CO and CO2 is produced. Thus the idealised overall reactions may be written as formula_0 By increasing the current density up to 1 A/cm2, the proportion of CO2 increases and carbon consumption decreases. As three electrons are needed to produce each atom of aluminium, the process consumes a large amount of electricity. For this reason aluminium smelters are sited close to sources of inexpensive electricity, such as hydroelectric. Cell components. Electrolyte: The electrolyte is a molten bath of cryolite (Na3AlF6) and dissolved alumina. Cryolite is a good solvent for alumina with low melting point, satisfactory viscosity, and low vapour pressure. Its density is also lower than that of liquid aluminium (2 vs 2.3 g/cm3), which allows natural separation of the product from the salt at the bottom of the cell. 
The cryolite ratio (NaF/AlF3) in pure cryolite is 3, with a melting temperature of 1010 °C, and it forms a eutectic with 11% alumina at 960 °C. In industrial cells the cryolite ratio is kept between 2 and 3 to decrease its melting temperature to 940–980 °C. Cathode: Carbon cathodes are essentially made of anthracite, graphite and petroleum coke, which are calcined at around 1200 °C and crushed and sieved prior to being used in cathode manufacturing. Aggregates are mixed with coal-tar pitch, formed, and baked. Carbon purity is not as stringent as for anode, because metal contamination from cathode is not significant. Carbon cathode must have adequate strength, good electrical conductivity and high resistance to wear and sodium penetration. Anthracite cathodes have higher wear resistance and slower creep with lower amplitude [15] than graphitic and graphitized petroleum coke cathodes. Instead, dense cathodes with more graphitic order have higher electrical conductivity, lower energy consumption [14], and lower swelling due to sodium penetration. Swelling results in early and non-uniform deterioration of cathode blocks. Anode: Carbon anodes have a specific situation in aluminium smelting and depending on the type of anode, aluminium smelting is divided in two different technologies; “Soderberg” and “prebaked” anodes. Anodes are also made of petroleum coke, mixed with coal-tar-pitch, followed by forming and baking at elevated temperatures. The quality of anode affects technological, economical and environmental aspects of aluminium production. Energy efficiency is related to the nature of anode materials, as well as the porosity of baked anodes. Around 10% of cell power is consumed to overcome the electrical resistance of prebaked anode (50–60 μΩm). Carbon is consumed more than theoretical value due to a low current efficiency and non-electrolytic consumption. Inhomogeneous anode quality due to the variation in raw materials and production parameters also affects its performance and the cell stability. Prebaked consumable carbon anodes are divided into graphitized and coke types. For manufacturing of the graphitized anodes, anthracite and petroleum coke are calcined and classified. They are then mixed with coal-tar pitch and pressed. The pressed green anode is then baked at 1200 °C and graphitized. Coke anodes are made of calcined petroleum coke, recycled anode butts, and coal-tar pitch (binder). The anodes are manufactured by mixing aggregates with coal tar pitch to form a paste with a doughy consistency. This material is most often vibro-compacted but in some plants pressed. The green anode is then sintered at 1100–1200 °C for 300–400 hours, without graphitization, to increase its strength through decomposition and carbonization of the binder. Higher baking temperatures increase the mechanical properties and thermal conductivity, and decrease the air and CO2 reactivity. The specific electrical resistance of the coke-type anodes is higher than that of the graphitized ones, but they have higher compressive strength and lower porosity. Soderberg electrodes (in-situ baking), used for the first time in 1923 in Norway, are composed of a steel shell and a carbonaceous mass which is baked by the heat being escaped from the electrolysis cell. Soderberg Carbon-based materials such as coke and anthracite are crushed, heat-treated, and classified. These aggregates are mixed with pitch or oil as binder, briquetted and loaded into the shell. 
Temperature increases from the bottom to the top of the column and in-situ baking takes place as the anode is lowered into the bath. Significant amounts of hydrocarbons are emitted during baking, which is a disadvantage of this type of electrode. Most of the modern smelters use prebaked anodes since the process control is easier and a slightly better energy efficiency is achieved, compared to Soderberg anodes. Environmental issues of aluminium smelters. The process produces a quantity of fluoride waste: perfluorocarbons and hydrogen fluoride as gases, and sodium and aluminium fluorides and unused cryolite as particulates. This can be as small as 0.5 kg per tonne of aluminium in the best plants in 2007, up to 4 kg per tonne of aluminium in older designs in 1974. Unless carefully controlled, hydrogen fluorides tend to be very toxic to vegetation around the plants. The Soderberg process, which bakes the anthracite/pitch mix as the anode is consumed, produces significant emissions of polycyclic aromatic hydrocarbons as the pitch is consumed in the smelter. The linings of the pots end up contaminated with cyanide-forming materials; Alcoa has a process for converting spent linings into aluminium fluoride for reuse and synthetic sand usable for building purposes and inert waste. Inert anodes. Inert anodes are non-carbon-based alternatives to traditional anodes used during aluminium reduction. These anodes do not chemically react with the electrolyte, and are therefore not consumed during the reduction process. Because the anode does not contain carbon, carbon dioxide is not produced. Through a review of literature, Haraldsson et al. found that inert anodes reduced the greenhouse gas emissions of the aluminium smelting process by approximately 2 tonnes CO2eq/tonne Al. Types of anodes. Ceramic anode materials include Ni-Fe, Sn, and Ni-Li based oxides. These anodes show promise as they are extremely stable during the reduction process at normal operating temperatures (~1000 °C), ensuring that the Al is not contaminated. The stability of these anodes also allows them to be used with a range of electrolytes. However, ceramic anodes suffer from poor electrical conductivity and low mechanical strength. Alternatively, metal anodes boast high mechanical strength and conductivity but tend to corrode easily during the reduction process. Some material systems that are used in inert metal anodes include Al-Cu, Ni-Cu, and Fe-Ni-Cu systems. Additional additives such as Sn, Ag, V, Nb, Ir, Ru can be included in these systems to form non-reactive oxides on the anode surface, but this significantly increases the cost and embodied energy of the anode. Cermet anodes are the combination of a metal and ceramic anode, and aim to take advantage of the desirable properties of both: the electrical conductivity and toughness of the metal and the stability of the ceramic. These anodes often consist of a combination of the above metal and ceramic materials. In industry, Alcoa and Rio Tinto have formed a joint venture, Elysis, to commercialize inert anode technology developed by Alcoa. The inert anode is a cermet material, a metallic dispersion of copper alloy in a ceramic matrix of nickel ferrite. Unfortunately, as the number of anode components increases, the structure of the anode becomes more unstable. As a result, cermet anodes also suffer from corrosion issues during reduction. Energy use. Aluminium smelting is highly energy intensive, and in some countries is economical only if there are inexpensive sources of electricity.
In some countries, smelters are given exemptions from energy policies such as renewable energy targets. To reduce the energy cost of the smelting process, alternative electrolytes such as Na3AlF6 that can operate at a lower temperature are being investigated. However, changing the electrolyte changes the kinetics of the oxygen liberated from the Al2O3 ore. This change in bubble formation can alter the rate at which the anode reacts with oxygen or the electrolyte, effectively changing the efficiency of the reduction process. Inert anodes, used in tandem with vertical electrode cells, can also reduce the energy cost of aluminum reduction by up to 30% by lowering the voltage needed for reduction to occur. Applying these two technologies at the same time allows the anode-cathode distance to be minimized, which decreases resistive losses. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
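For a rough sense of scale, the link between cell voltage and energy use can be sketched from Faraday's law. The figures below are an illustrative back-of-the-envelope estimate only: the physical constants are standard, but the cell voltage and current efficiency are assumed typical values, not numbers taken from this article.

    # Illustrative estimate: electrolysis energy per kilogram of aluminium is
    # (charge needed per kg) x (cell voltage) / (current efficiency), so cutting
    # the cell voltage cuts the smelting energy almost proportionally.
    FARADAY = 96485.0          # coulombs per mole of electrons
    MOLAR_MASS_AL = 26.98      # grams per mole
    ELECTRONS_PER_ATOM = 3     # Al3+ + 3e- -> Al

    def smelting_energy_kwh_per_kg(cell_voltage, current_efficiency=0.95):
        charge_per_kg = ELECTRONS_PER_ATOM * FARADAY * 1000.0 / MOLAR_MASS_AL
        joules = cell_voltage * charge_per_kg / current_efficiency
        return joules / 3.6e6  # convert joules to kilowatt-hours

    print(smelting_energy_kwh_per_kg(4.2))  # roughly 13 kWh/kg for an assumed conventional cell
    print(smelting_energy_kwh_per_kg(3.0))  # a ~30% lower voltage gives roughly 30% less energy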
[ { "math_id": 0, "text": "\\begin{cases}\n\\ce{Al2O3 + 3/2C <=> 2Al + 3/2CO2} &: \\Delta G^\\circ = (264460 + 3.75 T\\log T - 92.52 T)\\ \\ce{cal} \\\\\n\\ce{Al2O3 + 3C <=> 2Al + 3CO} &: \\Delta G^\\circ = (325660 + 3.75 T\\log T - 155.07 T)\\ \\ce{cal}\n\\end{cases}" } ]
https://en.wikipedia.org/wiki?curid=12561401
12562469
Bondareva–Shapley theorem
The Bondareva–Shapley theorem, in game theory, describes a necessary and sufficient condition for the non-emptiness of the core of a cooperative game in characteristic function form. Specifically, the game's core is non-empty if and only if the game is "balanced". The Bondareva–Shapley theorem implies that market games and convex games have non-empty cores. The theorem was formulated independently by Olga Bondareva and Lloyd Shapley in the 1960s. Theorem. Let the pair formula_0 be a cooperative game in characteristic function form, where formula_1 is the set of players and where the "value function" formula_2 is defined on formula_3's power set (the set of all subsets of formula_3). The core of formula_4 is non-empty if and only if for every function formula_5 where formula_6 the following condition holds: formula_7
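By linear programming duality, the balancedness condition above is equivalent to the feasibility of the core constraints, so non-emptiness of the core can be checked numerically. The following is a minimal illustrative sketch in Python (the three-player game used as data is a hypothetical example chosen here, not one from the literature): it minimizes the total payoff subject to the coalition constraints and compares the optimum with v(N).

    from itertools import combinations
    from scipy.optimize import linprog

    # Hypothetical 3-player game: v maps coalitions (frozensets) to their worth.
    players = [1, 2, 3]
    v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
         frozenset({1, 2}): 4, frozenset({1, 3}): 4, frozenset({2, 3}): 4,
         frozenset({1, 2, 3}): 6}

    def core_is_nonempty(players, v):
        """The core is non-empty iff min sum_i x_i subject to
        sum_{i in S} x_i >= v(S) for every coalition S is at most v(N);
        this linear program is the dual of the balancedness condition."""
        coalitions = [frozenset(c) for r in range(1, len(players) + 1)
                      for c in combinations(players, r)]
        # linprog solves min c.x with A_ub x <= b_ub, so flip the inequality signs.
        A_ub = [[-1.0 if p in S else 0.0 for p in players] for S in coalitions]
        b_ub = [-float(v[S]) for S in coalitions]
        res = linprog(c=[1.0] * len(players), A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * len(players))
        return res.status == 0 and res.fun <= v[frozenset(players)] + 1e-9

    print(core_is_nonempty(players, v))  # True: for example x = (2, 2, 2) lies in the core

If instead each two-player coalition were worth 5, the three pairs with weight 1/2 each would form a balanced collection with total worth 7.5 &gt; v(N) = 6, so by the theorem the core would be empty and the function would return False.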
[ { "math_id": 0, "text": "\\langle N, v\\rangle" }, { "math_id": 1, "text": " N" }, { "math_id": 2, "text": " v: 2^N \\to \\mathbb{R} " }, { "math_id": 3, "text": "N" }, { "math_id": 4, "text": "\\langle N, v \\rangle " }, { "math_id": 5, "text": "\\alpha : 2^N \\setminus \\{\\emptyset\\} \\to [0,1]" }, { "math_id": 6, "text": "\\forall i \\in N : \\sum_{S \\in 2^N : \\; i \\in S} \\alpha (S) = 1" }, { "math_id": 7, "text": "\\sum_{S \\in 2^N\\setminus\\{\\emptyset\\}} \\alpha (S) v (S) \\leq v (N)." } ]
https://en.wikipedia.org/wiki?curid=12562469
1256751
Zeno machine
Hypothetical computational model In mathematics and computer science, Zeno machines (abbreviated ZM, and also called accelerated Turing machine, ATM) are a hypothetical computational model related to Turing machines that are capable of carrying out computations involving a countably infinite number of algorithmic steps. These machines are ruled out in most models of computation. The idea of Zeno machines was first discussed by Hermann Weyl in 1927; the name refers to Zeno's paradoxes, attributed to the ancient Greek philosopher Zeno of Elea. Zeno machines play a crucial role in some theories. The theory of the Omega Point devised by physicist Frank J. Tipler, for instance, can only be valid if Zeno machines are possible. Definition. A Zeno machine is a Turing machine that can take an infinite number of steps, and then continue to take more steps. This can be thought of as a supertask where formula_0 units of time are taken to perform the formula_1-th step; thus, the first step takes 0.5 units of time, the second takes 0.25, the third 0.125 and so on, so that after one unit of time, a countably infinite number of steps will have been performed. Infinite time Turing machines. A more formal model of the Zeno machine is the infinite time Turing machine. Defined first in unpublished work by Jeffrey Kidder and expanded upon by Joel Hamkins and Andy Lewis, in "Infinite Time Turing Machines", the infinite time Turing machine is an extension of the classical Turing machine model, to include transfinite time; that is, time beyond all finite time. A classical Turing machine has a status at step formula_2 (in the start state, with an empty tape, read head at cell 0) and a procedure for getting from one status to the successive status. In this way the status of a Turing machine is defined for all steps corresponding to a natural number. An ITTM maintains these properties, but also defines the status of the machine at limit ordinals, that is, ordinals that are neither formula_2 nor the successor of any ordinal. The status of a Turing machine consists of 3 parts: the current state, the location of the read-write head, and the contents of the tape. Just as a classical Turing machine has a labeled start state, which is the state at the start of a program, an ITTM has a labeled "limit" state which is the state for the machine at any limit ordinal. This is the case even if the machine has no other way to access this state, for example no node transitions to it. The location of the read-write head is set to zero at any limit step. Lastly, the state of the tape is determined by the limit supremum of previous tape states. For some machine formula_5, a cell formula_6, and a limit ordinal formula_7, formula_8 That is, the formula_6th cell at time formula_7 is the limit supremum of that same cell as the machine approaches formula_7. This can be thought of as the limit if it converges, or formula_3 otherwise. Computability. Zeno machines have been proposed as a model of computation more powerful than classical Turing machines, based on their ability to solve the halting problem for classical Turing machines. Cristian Calude and Ludwig Staiger present the following pseudocode algorithm as a solution to the halting problem when run on a Zeno machine. 
begin program
    write 0 on the first position of the output tape;
    begin loop
        simulate 1 successive step of the given Turing machine on the given input;
        if the Turing machine has halted then write 1 on the first position of the output tape and break out of loop;
    end loop
end program

By inspecting the first position of the output tape after formula_3 unit of time has elapsed, we can determine whether the given Turing machine halts. In contrast, Oron Shagrir argues that the state of a Zeno machine is only defined on the interval formula_9, and so it is impossible to inspect the tape at time formula_3. Furthermore, since classical Turing machines don't have any timing information, the addition of timing information, whether accelerating or not, does not itself add any computational power. Infinite time Turing machines, however, are capable of implementing the given algorithm, halting at time formula_4 with the correct solution, since they do define their state for transfinite steps. All formula_10 sets are decidable by infinite time Turing machines, and formula_11 sets are semidecidable. Zeno machines cannot solve their own halting problem. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{1}{2^n}" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "0" }, { "math_id": 3, "text": "1" }, { "math_id": 4, "text": "\\omega" }, { "math_id": 5, "text": "T" }, { "math_id": 6, "text": "k" }, { "math_id": 7, "text": "\\lambda" }, { "math_id": 8, "text": "\nT(\\lambda)_k = \\limsup_{n\\rightarrow \\lambda}T(n)_k\n" }, { "math_id": 9, "text": "[0,1)" }, { "math_id": 10, "text": "\\Pi_1^1" }, { "math_id": 11, "text": "\\Delta_2^1" } ]
https://en.wikipedia.org/wiki?curid=1256751
1257367
Theodorus of Cyrene
5th century BC Greek mathematician Theodorus of Cyrene was an ancient Greek mathematician who lived during the 5th century BC. The only first-hand accounts of him that survive are in three of Plato's dialogues: the "Theaetetus", the "Sophist", and the "Statesman". In the former dialogue, he posits a mathematical construction now known as the Spiral of Theodorus. Life. Little is known of Theodorus' biography beyond what can be inferred from Plato's dialogues. He was born in the northern African colony of Cyrene, and apparently taught both there and in Athens. He complains of old age in the "Theaetetus", whose dramatic date of 399 BC suggests that his period of flourishing occurred in the mid-5th century. The text also associates him with the sophist Protagoras, with whom he claims to have studied before turning to geometry. A dubious tradition repeated among ancient biographers like Diogenes Laërtius held that Plato later studied with him in Cyrene, Libya. This eminent mathematician Theodorus was, along with Alcibiades and many others of Socrates' companions (many of whom would be associated with the Thirty Tyrants), accused of distributing the mysteries at a symposium, according to Plutarch, who himself was priest of the temple at Delphi. Work in mathematics. Theodorus' work is known through a sole theorem, which is delivered in the literary context of the "Theaetetus" and has been argued alternately to be historically accurate or fictional. In the text, his student Theaetetus attributes to him the theorem that the square roots of the non-square numbers up to 17 are irrational: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Theodorus here was drawing some figures for us in illustration of roots, showing that squares containing three square feet and five square feet are not commensurable in length with the unit of the foot, and so, selecting each one in its turn up to the square containing seventeen square feet and at that he stopped. (The square containing "two" square units is not mentioned, perhaps because the incommensurability of its side with the unit was already known.) Theodorus's method of proof is not known. It is not even known whether, in the quoted passage, "up to" (μέχρι) means that seventeen is included. If seventeen is excluded, then Theodorus's proof may have relied merely on considering whether numbers are even or odd. Indeed, Hardy and Wright and Knorr suggest proofs that rely ultimately on the following theorem: If formula_0 is soluble in integers, and formula_1 is odd, then formula_1 must be congruent to 1 "modulo" 8 (since formula_2 and formula_3 can be assumed odd, so their squares are congruent to 1 "modulo" 8). That one cannot prove the irrationality of the square root of 17 by considerations restricted to the arithmetic of the even and the odd has been shown in one system of the arithmetic of the even and the odd, but it is an open problem in a stronger natural axiom system for the arithmetic of the even and the odd. A possibility suggested earlier by Zeuthen is that Theodorus applied the so-called Euclidean algorithm, formulated in Proposition X.2 of the "Elements" as a test for incommensurability. In modern terms, the theorem is that a real number with an "infinite" continued fraction expansion is irrational. Irrational square roots have periodic expansions. The period of the square root of 19 has length 6, which is greater than the period of the square root of any smaller number. 
The period of √17 has length one (so does √18; but the irrationality of √18 follows from that of √2). The so-called Spiral of Theodorus is composed of contiguous right triangles with hypotenuse lengths equal to √2, √3, √4, …, √17; additional triangles cause the diagram to overlap. Philip J. Davis interpolated the vertices of the spiral to get a continuous curve. He discusses the history of attempts to determine Theodorus' method in his book "Spirals: From Theodorus to Chaos", and makes brief references to the matter in his fictional "Thomas Gray" series. That Theaetetus established a more general theory of irrationals, whereby square roots of non-square numbers are irrational, is suggested in the eponymous Platonic dialogue as well as in commentary on, and scholia to, the "Elements". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
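The claim that additional triangles would make the spiral overlap can be checked with a short numerical computation. The sketch below (Python, written purely as an illustration of the construction, not taken from the cited sources) sums the angles of the successive right triangles and shows that the total winding angle after the triangle with hypotenuse √17 is still just under one full turn, while the next triangle would carry it past 2π.

    import math

    # In the Spiral of Theodorus the k-th triangle has legs 1 and sqrt(k),
    # hypotenuse sqrt(k + 1), and contributes an angle arctan(1/sqrt(k)) at the centre.
    def winding_angle(hypotenuse_squared):
        """Total angle swept by the triangles with hypotenuses sqrt(2) ... sqrt(hypotenuse_squared)."""
        return sum(math.atan(1 / math.sqrt(k)) for k in range(1, hypotenuse_squared))

    print(math.degrees(winding_angle(17)))  # about 351.2 degrees: just short of a full turn
    print(math.degrees(winding_angle(18)))  # about 364.8 degrees: one more triangle overlaps the first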
[ { "math_id": 0, "text": "x^2=ny^2" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "y" } ]
https://en.wikipedia.org/wiki?curid=1257367
12576332
Bolza surface
In mathematics, a Riemann surface In mathematics, the Bolza surface, alternatively, complex algebraic Bolza curve (introduced by Oskar Bolza (1887)), is a compact Riemann surface of genus formula_0 with the highest possible order of the conformal automorphism group in this genus, namely formula_1 of order 48 (the general linear group of formula_2 matrices over the finite field formula_3). The full automorphism group (including reflections) is the semi-direct product formula_4 of order 96. An affine model for the Bolza surface can be obtained as the locus of the equation formula_5 in formula_6. The Bolza surface is the smooth completion of the affine curve. Of all genus formula_0 hyperbolic surfaces, the Bolza surface maximizes the length of the systole. As a hyperelliptic Riemann surface, it arises as the ramified double cover of the Riemann sphere, with ramification locus at the six vertices of a regular octahedron inscribed in the sphere, as can be readily seen from the equation above. The Bolza surface has attracted the attention of physicists, as it provides a relatively simple model for quantum chaos; in this context, it is usually referred to as the Hadamard–Gutzwiller model. The spectral theory of the Laplace–Beltrami operator acting on functions on the Bolza surface is of interest to both mathematicians and physicists, since the surface is conjectured to maximize the first positive eigenvalue of the Laplacian among all compact, closed Riemann surfaces of genus formula_0 with constant negative curvature. Triangle surface. The Bolza surface is conformally equivalent to a formula_7 triangle surface – see Schwarz triangle. More specifically, the Fuchsian group defining the Bolza surface is a subgroup of the group generated by reflections in the sides of a hyperbolic triangle with angles formula_8. The group of orientation-preserving isometries is a subgroup of the index-two subgroup of the group of reflections, which consists of products of an even number of reflections, which has an abstract presentation in terms of generators formula_9 and relations formula_10 as well as formula_11. The Fuchsian group formula_12 defining the Bolza surface is also a subgroup of the (3,3,4) triangle group, which is a subgroup of index 2 in the formula_7 triangle group. The formula_7 group does not have a realization in terms of a quaternion algebra, but the formula_13 group does. Under the action of formula_12 on the Poincaré disk, the fundamental domain of the Bolza surface is a regular octagon with angles formula_14 and corners at formula_15 where formula_16. Opposite sides of the octagon are identified under the action of the Fuchsian group. Its generators are the matrices formula_17 where formula_18 and formula_19, along with their inverses. The generators satisfy the relation formula_20 These generators are connected to the length spectrum, which gives all of the possible lengths of geodesic loops. The shortest such length is called the "systole" of the surface. The systole of the Bolza surface is formula_21 The formula_22 element formula_23 of the length spectrum for the Bolza surface is given by formula_24 where formula_25 runs through the positive integers (but omitting 4, 24, 48, 72, 140, and various higher values) and where formula_26 is the unique odd integer that minimizes formula_27 It is possible to obtain an equivalent closed form of the systole directly from the triangle group. Formulae exist to calculate the side lengths of a (2,3,8) triangle explicitly. 
The systole is equal to four times the length of the side of medial length in a (2,3,8) triangle, that is, formula_28 The geodesic lengths formula_23 also appear in the Fenchel–Nielsen coordinates of the surface. A set of Fenchel-Nielsen coordinates for a surface of genus 2 consists of three pairs, each pair being a length and twist. Perhaps the simplest such set of coordinates for the Bolza surface is formula_29, where formula_30. There is also a "symmetric" set of coordinates formula_31, where all three of the lengths are the systole formula_32 and all three of the twists are given by formula_33 Symmetries of the surface. The fundamental domain of the Bolza surface is a regular octagon in the Poincaré disk; the four symmetric actions that generate the (full) symmetry group are: These are shown by the bold lines in the adjacent figure. They satisfy the following set of relations: formula_34 where formula_35 is the trivial (identity) action. One may use this set of relations in GAP to retrieve information about the representation theory of the group. In particular, there are four 1-dimensional, two 2-dimensional, four 3-dimensional, and three 4-dimensional irreducible representations, and formula_36 as expected. Spectral theory. Here, spectral theory refers to the spectrum of the Laplacian, formula_37. The first eigenspace (that is, the eigenspace corresponding to the first positive eigenvalue) of the Bolza surface is three-dimensional, and the second, four-dimensional. It is thought that investigating perturbations of the nodal lines of functions in the first eigenspace in Teichmüller space will yield the conjectured result in the introduction. This conjecture is based on extensive numerical computations of eigenvalues of the surface and other surfaces of genus 2. In particular, the spectrum of the Bolza surface is known to a very high accuracy. The following table gives the first ten positive eigenvalues of the Bolza surface. The spectral determinant and Casimir energy formula_38 of the Bolza surface are formula_39 and formula_40 respectively, where all decimal places are believed to be correct. It is conjectured that the spectral determinant is maximized in genus 2 for the Bolza surface. Quaternion algebra. Following MacLachlan and Reid, the quaternion algebra can be taken to be the algebra over formula_41 generated as an associative algebra by generators "i,j" and relations formula_42 with an appropriate choice of an order.
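The closed-form constants quoted in the systole and Fenchel–Nielsen discussion above can be cross-checked numerically. The short sketch below (Python, written only as an illustration for this summary) confirms that the two expressions given for the systole agree and evaluates the second length and the symmetric twist.

    import math

    # Two closed forms quoted above for the systole of the Bolza surface.
    systole_a = 2 * math.acosh(1 + math.sqrt(2))
    systole_b = 4 * math.acosh((1 / math.sin(math.pi / 8)) / 2)

    # Second Fenchel-Nielsen length and the twist of the "symmetric" coordinates.
    ell_2 = 2 * math.acosh(3 + 2 * math.sqrt(2))
    twist = math.acosh(math.sqrt(2 / 7 * (3 + math.sqrt(2)))) / math.acosh(1 + math.sqrt(2))

    print(systole_a, systole_b)  # both approximately 3.05714
    print(ell_2)                 # approximately 4.8969
    print(twist)                 # approximately 0.321281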
[ { "math_id": 0, "text": "2" }, { "math_id": 1, "text": "GL_2(3)" }, { "math_id": 2, "text": "2\\times 2" }, { "math_id": 3, "text": "\\mathbb{F}_3" }, { "math_id": 4, "text": "GL_{2}(3)\\rtimes\\mathbb{Z}_{2}" }, { "math_id": 5, "text": "y^2=x^5-x" }, { "math_id": 6, "text": "\\mathbb C^2" }, { "math_id": 7, "text": "(2,3,8)" }, { "math_id": 8, "text": "\\tfrac{\\pi}{2}, \\tfrac{\\pi}{3}, \\tfrac{\\pi}{8}" }, { "math_id": 9, "text": "s_2, s_3, s_8" }, { "math_id": 10, "text": "s_2{}^2=s_3{}^3=s_8{}^8=1" }, { "math_id": 11, "text": "s_2 s_3 = s_8" }, { "math_id": 12, "text": "\\Gamma" }, { "math_id": 13, "text": "(3,3,4)" }, { "math_id": 14, "text": "\\tfrac{\\pi}{4}" }, { "math_id": 15, "text": "p_k=2^{-1/4}e^{i\\left(\\tfrac{\\pi}{8}+\\tfrac{k\\pi}{4}\\right)}," }, { "math_id": 16, "text": "k=0,\\ldots, 7" }, { "math_id": 17, "text": "g_k=\\begin{pmatrix}1+\\sqrt{2} & (2+\\sqrt{2})\\alpha e^{\\tfrac{ik\\pi}{4}}\\\\(2+\\sqrt{2})\\alpha e^{ -\\tfrac{ik\\pi}{4}} & 1+\\sqrt{2}\\end{pmatrix}," }, { "math_id": 18, "text": "\\alpha=\\sqrt{\\sqrt{2}-1}" }, { "math_id": 19, "text": "k=0,\\ldots, 3" }, { "math_id": 20, "text": "g_0 g_1^{-1} g_2 g_3^{-1} g_0^{-1} g_1 g_2^{-1} g_3=1." }, { "math_id": 21, "text": "\\ell_1=2\\operatorname{\\rm arcosh}(1+\\sqrt{2})\\approx 3.05714." }, { "math_id": 22, "text": "n^\\text{th}" }, { "math_id": 23, "text": "\\ell_n" }, { "math_id": 24, "text": "\\ell_n=2\\operatorname{\\rm arcosh}(m+n\\sqrt{2})," }, { "math_id": 25, "text": "n" }, { "math_id": 26, "text": "m" }, { "math_id": 27, "text": "\\vert m-n\\sqrt{2}\\vert." }, { "math_id": 28, "text": "\\ell_1=4\\operatorname{\\rm arcosh}\\left(\\tfrac{\\csc\\left(\\tfrac{\\pi}{8}\\right)}{2}\\right)\\approx 3.05714." }, { "math_id": 29, "text": "(\\ell_2,\\tfrac{1}{2};\\; \\ell_1,0;\\; \\ell_1,0)" }, { "math_id": 30, "text": "\\ell_2=2\\operatorname{\\rm arcosh}(3+2\\sqrt{2})\\approx 4.8969" }, { "math_id": 31, "text": "(\\ell_1,t;\\; \\ell_1,t;\\; \\ell_1,t)" }, { "math_id": 32, "text": "\\ell_1" }, { "math_id": 33, "text": "t=\\frac{\\operatorname{\\rm arcosh}\\left(\\sqrt{\\tfrac{2}{7}(3+\\sqrt{2})}\\right)}{\\operatorname{\\rm arcosh}(1+\\sqrt{2})}\\approx 0.321281." }, { "math_id": 34, "text": " \\langle R,\\,S,\\,T,\\,U\\mid R^8=S^2=T^2=U^3=RSRS=STST=RTR^3 T=e, \\,UR=R^7 U^2,\\,U^2 R=STU,\\,US=SU^2,\\, UT=RSU \\rangle," }, { "math_id": 35, "text": "e" }, { "math_id": 36, "text": "4(1^2)+2(2^2)+4(3^2)+3(4^2)=96" }, { "math_id": 37, "text": "\\Delta" }, { "math_id": 38, "text": "\\zeta(-1/2)" }, { "math_id": 39, "text": "\\det{}_{\\zeta}(\\Delta)\\approx 4.72273280444557" }, { "math_id": 40, "text": "\\zeta_\\Delta(-1/2)\\approx -0.65000636917383" }, { "math_id": 41, "text": "\\mathbb{Q}(\\sqrt{2})" }, { "math_id": 42, "text": "i^2=-3,\\;j^2=\\sqrt{2},\\;ij=-ji," } ]
https://en.wikipedia.org/wiki?curid=12576332
12578002
AsciiMath
Mathematical markup language AsciiMath is a client-side mathematical markup language for displaying mathematical expressions in web browsers. Using the JavaScript script ASCIIMathML.js, AsciiMath notation is converted to MathML at the time the page is loaded by the browser, natively in Mozilla Firefox, Safari, and via a plug-in in IE7. The simplified markup language supports a subset of the LaTeX language instructions, as well as a less verbose syntax (which, for example, replaces "\times" with "xx" or "times" to produce the "×" symbol). The resulting MathML mathematics can be styled by applying CSS to class "mstyle". The script ASCIIMathML.js is freely available under the MIT License. The latest version also includes support for SVG graphics, natively in Mozilla Firefox and via a plug-in in IE7. As of May 2009, a new version is available. This new version still contains the original ASCIIMathML and LaTeXMathML as developed by Peter Jipsen, but the ASCIIsvg part has been extended with linear-logarithmic, logarithmic-linear, logarithmic-logarithmic and polar graphs, pie charts, normal and stacked bar charts, functions such as integration and differentiation, and a series of event-trapping functions, buttons and sliders, in order to create interactive lecture material and exams online in web pages. ASCIIMathML.js has been integrated into MathJax, starting with MathJax v2.0. Example. The well-known quadratic formula formula_0 looks like this in AsciiMath: x=(-b +- sqrt(b^2 - 4ac))/(2a) References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "x=\\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}" } ]
https://en.wikipedia.org/wiki?curid=12578002
12578506
Dissipative particle dynamics
Dissipative particle dynamics (DPD) is an off-lattice mesoscopic simulation technique which involves a set of particles moving in continuous space and discrete time. Particles represent whole molecules or fluid regions, rather than single atoms, and atomistic details are not considered relevant to the processes addressed. The particles' internal degrees of freedom are integrated out and replaced by simplified pairwise dissipative and random forces, so as to conserve momentum locally and ensure correct hydrodynamic behaviour. The main advantage of this method is that it gives access to longer time and length scales than are possible using conventional MD simulations. Simulations of polymeric fluids in volumes up to 100 nm in linear dimension for tens of microseconds are now common. DPD was initially devised by Hoogerbrugge and Koelman to avoid the lattice artifacts of the so-called lattice gas automata and to tackle hydrodynamic time and space scales beyond those available with molecular dynamics (MD). It was subsequently reformulated and slightly modified by P. Español to ensure the proper thermal equilibrium state. A series of newer DPD algorithms with reduced computational complexity and better control of transport properties have also been presented; these randomly choose a particle pair to which the DPD thermostat is applied, thus reducing the computational complexity. Equations. The total non-bonded force acting on a DPD particle "i" is given by a sum over all particles "j" that lie within a fixed cut-off distance, of three pairwise-additive forces: formula_0 where the first term in the above equation is a conservative force, the second a dissipative force and the third a random force. The conservative force acts to give beads a chemical identity, while the dissipative and random forces together form a thermostat that keeps the mean temperature of the system constant. A key property of all of the non-bonded forces is that they conserve momentum locally, so that hydrodynamic modes of the fluid emerge even for small particle numbers. Local momentum conservation requires that the random force between two interacting beads be antisymmetric. Each pair of interacting particles therefore requires only a single random force calculation. This distinguishes DPD from Brownian dynamics in which each particle experiences a random force independently of all other particles. Beads can be connected into ‘molecules’ by tying them together with soft (often Hookean) springs. The most common applications of DPD keep the particle number, volume and temperature constant, and so take place in the NVT ensemble. Alternatively, the pressure instead of the volume is held constant, so that the simulation is in the NPT ensemble. Parallelization. In principle, simulations of very large systems, approaching a cubic micron for milliseconds, are possible using a parallel implementation of DPD running on multiple processors in a Beowulf-style cluster. Because the non-bonded forces are short-ranged in DPD, it is possible to parallelize a DPD code very efficiently using a spatial domain decomposition technique. In this scheme, the total simulation space is divided into a number of cuboidal regions each of which is assigned to a distinct processor in the cluster. Each processor is responsible for integrating the equations of motion of all beads whose centres of mass lie within its region of space. Only beads lying near the boundaries of each processor's space require communication between processors. 
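Returning to the pairwise force decomposition given in the Equations section above, the sketch below (Python with NumPy) evaluates the three contributions for a single bead pair. The specific functional forms used here, a linearly decaying conservative force and the Groot–Warren choice of weight functions with the fluctuation–dissipation relation sigma^2 = 2*gamma*kBT, are common in the DPD literature but are an assumption of this illustration; the text above does not commit to any particular form.

    import numpy as np

    def dpd_pair_force(r_i, r_j, v_i, v_j, a=25.0, gamma=4.5, kBT=1.0, r_c=1.0,
                       dt=0.01, rng=np.random.default_rng()):
        """Conservative, dissipative and random DPD forces on bead i due to bead j
        (Groot-Warren style weight functions, assumed here for illustration)."""
        r_ij = r_i - r_j
        r = np.linalg.norm(r_ij)
        if r >= r_c or r == 0.0:
            return np.zeros(3)               # forces vanish beyond the cut-off
        e_ij = r_ij / r                      # unit vector pointing from j to i
        w = 1.0 - r / r_c                    # weight function, zero at the cut-off
        sigma = np.sqrt(2.0 * gamma * kBT)   # fluctuation-dissipation relation
        theta = rng.standard_normal()        # one random number per pair; applying -F
                                             # to bead j then conserves momentum
        f_c = a * w * e_ij                                      # conservative (soft repulsion)
        f_d = -gamma * w**2 * np.dot(e_ij, v_i - v_j) * e_ij    # dissipative (pairwise friction)
        f_r = sigma * w * theta * e_ij / np.sqrt(dt)            # random (thermostat)
        return f_c + f_d + f_r

    # Example: two beads approaching each other head-on along the x axis.
    f_on_i = dpd_pair_force(np.array([0.3, 0.0, 0.0]), np.array([0.0, 0.0, 0.0]),
                            np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
    print(f_on_i)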
In order to ensure that the simulation is efficient, the crucial requirement is that the number of particle-particle interactions that require inter-processor communication be much smaller than the number of particle-particle interactions within the bulk of each processor's region of space. Roughly speaking, this means that the volume of space assigned to each processor should be sufficiently large that its surface area (multiplied by a distance comparable to the force cut-off distance) is much less than its volume. Applications. A wide variety of complex hydrodynamic phenomena have been simulated using DPD; the list here is necessarily incomplete. The goal of these simulations is often to relate the macroscopic non-Newtonian flow properties of the fluid to its microscopic structure. Such DPD applications range from modeling the rheological properties of concrete to simulating liposome formation in biophysics to other recent three-phase phenomena such as dynamic wetting. The DPD method has also found popularity in modeling heterogeneous multi-phase flows containing deformable objects such as blood cells and polymer micelles. Further reading. The full trace of the developments of various important aspects of the DPD methodology since it was first proposed in the early 1990s can be found in "Dissipative Particle Dynamics: Introduction, Methodology and Complex Fluid Applications – A Review". The state-of-the-art in DPD was captured in a CECAM workshop in 2008. Innovations to the technique presented there include DPD with energy conservation; non-central frictional forces that allow the fluid viscosity to be tuned; an algorithm for preventing bond crossing between polymers; and the automated calibration of DPD interaction parameters from atomistic molecular dynamics. Recently, examples of automated calibration and parameterization have been shown against experimental observables. Additionally, datasets for the purpose of interaction potential calibration and parameterisation have been explored. Swope "et al." have provided a detailed analysis of literature data and an experimental dataset based on critical micelle concentration (CMC) and micellar mean aggregation number (Nagg). Examples of micellar simulations using DPD have been well documented previously. Available packages. Some available simulation packages that can (also) perform DPD simulations are:
[ { "math_id": 0, "text": " f_i =\\sum_{j \\ne i}(F^C_{ij} + F^D_{ij} + F^R_{ij}) " } ]
https://en.wikipedia.org/wiki?curid=12578506
1258255
CARINE
CARINE (Computer Aided Reasoning Engine) is a first-order classical logic automated theorem prover. It was initially built for the study of the enhancement effects of the strategies delayed clause-construction (DCC) and attribute sequences (ATS) in a depth-first search based algorithm. CARINE's main search algorithm is semi-linear resolution (SLR), which is based on an iterative-deepening depth-first search (also known as depth-first iterative-deepening (DFID)) and used in theorem provers like THEO. SLR employs DCC to achieve a high inference rate, and ATS to reduce the search space. Delayed Clause Construction (DCC). Delayed Clause Construction is a stalling strategy that enhances a theorem prover's performance by reducing the work to construct clauses to a minimum. Instead of constructing every conclusion (clause) of an applied inference rule, the information needed to construct such a clause is temporarily stored until the theorem prover decides to either discard the clause or construct it. If the theorem prover decides to keep the clause, it will be constructed and stored in memory, otherwise the information to construct the clause is erased. Storing the information from which an inferred clause can be constructed requires almost no additional CPU operations. However, constructing a clause "may" consume a lot of time. Some theorem provers spend 30%–40% of their total execution time constructing and deleting clauses. With DCC this wasted time can be salvaged. DCC is useful when too many intermediate clauses (especially first-order clauses) are being constructed and discarded in a short period of time because the operations performed to construct such short-lived clauses are avoided. DCC may not be very effective on theorems with only propositional clauses. How does DCC work? After every application of an inference rule, certain variables may have to be substituted by terms (e.g. "x" → "f"("a")) and thus a substitution set is formed. Instead of constructing the resulting clause and discarding the substitution set, the theorem prover simply maintains the substitution set along with some other information, such as which clauses were involved in the inference rule and which inference rule was applied, and continues the derivation without constructing the resulting clause of the inference rule. This procedure keeps going along a derivation until the theorem prover reaches a point where it decides, based on certain criteria and heuristics, whether to construct the final clause in the derivation (and probably some other clause(s) along the path) or discard the whole derivation, i.e., delete from memory the maintained substitution sets and whatever information is stored with them. Attribute sequences (ATS). (An informal definition of) a clause in theorem proving is a statement that can result in a true or false answer depending on the evaluation of its literals. A clause is represented as a disjunction (i.e., OR), conjunction (i.e., AND), set, or multi-set (similar to a set but can contain identical elements) of literals. An example of a clause as a disjunction of literals is: formula_0 where the symbols formula_1 and formula_2 are, respectively, logical or and logical not. The above example states that if "Y" is wealthy AND smart AND beautiful then "X" loves "Y". It does not say who "X" and "Y" are though. 
Note that the above representation comes from the logical statement: For all "Y", "X" belonging to the domain of human beings: formula_3 By using some transformation rules of formal logic we produce the disjunction of literals of the example given above. "X" and "Y" are variables. formula_2"wealthy", formula_2"smart", formula_2"beautiful", "loves" are literals. Suppose we substitute the variable "X" for the constant John and the variable "Y" for the constant Jane then the above clause will become: formula_4 A clause attribute is a characteristic of a clause. Some examples of clause attributes are: the clause formula_5 has: An attribute sequence is a sequence of k "n"-tuples of clause attributes that represent a projection of a set of derivations of length k. k and n are strictly positive integers. The set of derivations form the domain and the attribute sequences form the codomain of the mapping between derivations and attribute sequences. &lt;(2,2),(2,1),(1,1)&gt; is an attribute sequence where "k" = 3 and "n" = 2. It corresponds to some derivation, say, &lt;(B1,B2),(R1,B3),(R2,B4)&gt; where B1, B2, R1, B3, R2, and B4 are clauses. The attribute here is assumed to be the length of a clause. The first pair (2,2) corresponds to the pair (B1,B2) from the derivation. It indicates that the length of B1 is 2 and the length of B2 is also 2. The second pair (2,1) corresponds to the pair (R1,B3) and it indicates that the length of R1 is 2 and the length of B3 is 1. The last pair (1,1) corresponds to the pair (R2,B4) and it indicates that the length of R2 is 1 and the length of B4 is 1. Note: An "n"-tuple of clause attributes is similar (but not the same) to the "feature vector" named by Stephan Schulz, PhD (see E equational theorem prover). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
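As a small illustration of the mapping from derivations to attribute sequences (Python, written for this description; the literal contents of the clauses below are invented placeholders whose only purpose is to reproduce the lengths in the example, and CARINE itself is implemented quite differently), the derivation &lt;(B1,B2),(R1,B3),(R2,B4)&gt; is projected onto &lt;(2,2),(2,1),(1,1)&gt; by taking clause length as the single attribute:

    # Clauses are represented here simply as lists of literal strings; the
    # attribute used is the clause length (its number of literals).  The
    # literals themselves are made-up placeholders.
    B1 = ["~wealthy(Y)", "loves(X,Y)"]
    B2 = ["~smart(Y)", "~beautiful(Y)"]
    R1 = ["~smart(Y)", "loves(X,Y)"]
    B3 = ["smart(Jane)"]
    R2 = ["loves(John,Jane)"]
    B4 = ["~loves(John,Jane)"]

    def attribute_sequence(derivation, attribute=len):
        """Project a derivation, given as a list of clause pairs, onto the
        sequence of n-tuples of clause attributes (here n = 1 attribute)."""
        return [tuple(attribute(clause) for clause in pair) for pair in derivation]

    derivation = [(B1, B2), (R1, B3), (R2, B4)]
    print(attribute_sequence(derivation))   # [(2, 2), (2, 1), (1, 1)]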
[ { "math_id": 0, "text": "\\lnot \\operatorname{wealthy}(Y) \\lor \\lnot \\operatorname{smart}(Y) \\lor \\lnot \\operatorname{beautiful}(Y) \\lor \\operatorname{loves}(X, Y)" }, { "math_id": 1, "text": "\\lor" }, { "math_id": 2, "text": "\\lnot" }, { "math_id": 3, "text": "\\operatorname{wealthy}(Y) \\land \\operatorname{smart}(Y) \\land \\operatorname{beautiful}(Y) \\implies \\operatorname{loves}(X,Y)" }, { "math_id": 4, "text": "\\lnot \\operatorname{wealthy}(\\text{Jane}) \\lor \\lnot \\operatorname{smart}(\\text{Jane}) \\lor \\lnot \\operatorname{beautiful}(\\text{Jane}) \\lor \\operatorname{loves}(\\text{John},\\text{Jane})" }, { "math_id": 5, "text": "C = \\lnot P(x) \\lor Q(a,b,f(x))" }, { "math_id": 6, "text": "\\lnot P(x)" } ]
https://en.wikipedia.org/wiki?curid=1258255
12583075
Hannan–Quinn information criterion
In statistics, the Hannan–Quinn information criterion (HQC) is a criterion for model selection. It is an alternative to the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). It is given as formula_0 where "formula_1" is the maximized log-likelihood, "k" is the number of parameters, and "n" is the number of observations. Burnham &amp; Anderson (2002, p. 287) say that HQC, "while often cited, seems to have seen little use in practice". They also note that HQC, like BIC, but unlike AIC, is not an estimator of Kullback–Leibler divergence. Claeskens &amp; Hjort (2008, ch. 4) note that HQC, like BIC, but unlike AIC, is not asymptotically efficient; however, it misses the optimal estimation rate by a very small formula_2 factor. They further point out that whatever method is being used for fine-tuning the criterion will be more important in practice than the term formula_2, since this latter number is small even for very large formula_3; however, the formula_2 term ensures that, unlike AIC, HQC is strongly consistent. It follows from the law of the iterated logarithm that any strongly consistent method must miss efficiency by at least a formula_2 factor, so in this sense HQC is asymptotically very well-behaved. Van der Pas and Grünwald prove that model selection based on a modified Bayesian estimator, the so-called switch distribution, in many cases behaves asymptotically like HQC, while retaining the advantages of Bayesian methods such as the use of priors etc.
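A minimal sketch of the criterion in Python, shown next to the standard textbook forms of AIC and BIC for comparison (the numerical inputs in the example are hypothetical):

    import math

    def hqc(log_likelihood_max, k, n):
        """Hannan-Quinn information criterion: -2*L_max + 2*k*ln(ln(n))."""
        return -2.0 * log_likelihood_max + 2.0 * k * math.log(math.log(n))

    def aic(log_likelihood_max, k):
        """Akaike information criterion, for comparison."""
        return -2.0 * log_likelihood_max + 2.0 * k

    def bic(log_likelihood_max, k, n):
        """Bayesian information criterion, for comparison."""
        return -2.0 * log_likelihood_max + k * math.log(n)

    # Hypothetical fitted model: maximized log-likelihood -120.5, k = 4 parameters,
    # n = 200 observations; lower criterion values indicate a preferred model.
    print(hqc(-120.5, 4, 200))              # penalty 2*4*ln(ln(200)) is about 13.3
    print(aic(-120.5, 4), bic(-120.5, 4, 200))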
[ { "math_id": 0, "text": " \\mathrm{HQC} = -2 L_{max} + 2 k \\ln(\\ln(n)), \\ " }, { "math_id": 1, "text": "L_{max}" }, { "math_id": 2, "text": "\\ln(\\ln(n))" }, { "math_id": 3, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=12583075
12589048
Tychonoff plank
Topological space in mathematics In topology, the Tychonoff plank is a topological space defined using ordinal spaces that is a counterexample to several plausible-sounding conjectures. It is defined as the topological product of the two ordinal spaces formula_0 and formula_1, where formula_2 is the first infinite ordinal and formula_3 the first uncountable ordinal. The deleted Tychonoff plank is obtained by deleting the point formula_4. Properties. The Tychonoff plank is a compact Hausdorff space and is therefore a normal space. However, the deleted Tychonoff plank is non-normal. Therefore the Tychonoff plank is not completely normal. This shows that a subspace of a normal space need not be normal. The Tychonoff plank is not perfectly normal because it is not a Gδ space: the singleton formula_5 is closed but not a Gδ set. The Stone–Čech compactification of the deleted Tychonoff plank is the Tychonoff plank. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "[0,\\omega_1]" }, { "math_id": 1, "text": "[0,\\omega]" }, { "math_id": 2, "text": "\\omega" }, { "math_id": 3, "text": "\\omega_1" }, { "math_id": 4, "text": "\\infty = (\\omega_1,\\omega)" }, { "math_id": 5, "text": "\\{\\infty\\}" } ]
https://en.wikipedia.org/wiki?curid=12589048
12589161
Neural cryptography
Branch of cryptography Neural cryptography is a branch of cryptography dedicated to analyzing the application of stochastic algorithms, especially artificial neural network algorithms, for use in encryption and cryptanalysis. Definition. Artificial neural networks are well known for their ability to selectively explore the solution space of a given problem. This feature finds a natural niche of application in the field of cryptanalysis. At the same time, neural networks offer a new approach to attack ciphering algorithms based on the principle that any function could be reproduced by a neural network, which is a powerful proven computational tool that can be used to find the inverse function of any cryptographic algorithm. The ideas of mutual learning, self learning, and stochastic behavior of neural networks and similar algorithms can be used for different aspects of cryptography, such as public-key cryptography, solving the key distribution problem using neural network mutual synchronization, hashing, or the generation of pseudo-random numbers. Another idea is the ability of a neural network to separate space into non-linear pieces using "bias". It gives different probabilities of activating the neural network or not. This is very useful in the case of cryptanalysis. Two names are used to designate the same domain of research: Neuro-Cryptography and Neural Cryptography. The first known work on this topic can be traced back to 1995, in an IT master's thesis. Applications. In 1995, Sebastien Dourlens applied neural networks to cryptanalyze DES by allowing the networks to learn how to invert the S-tables of DES. The bias in DES studied by Adi Shamir through differential cryptanalysis is highlighted. The experiment shows about 50% of the key bits can be found, allowing the complete key to be found in a short time. Hardware applications with multiple micro-controllers have been proposed due to the easy implementation of multilayer neural networks in hardware. One example of a public-key protocol is given by Khalil Shihab. He describes the decryption scheme and the public key creation that are based on a backpropagation neural network. The encryption scheme and the private key creation process are based on Boolean algebra. This technique has the advantage of small time and memory complexities. A disadvantage is the property of backpropagation algorithms: because of huge training sets, the learning phase of a neural network is very long. Therefore, the use of this protocol is only theoretical so far. Neural key exchange protocol. The protocol most used in practice for key exchange between two parties A and B is the Diffie–Hellman key exchange protocol. Neural key exchange, which is based on the synchronization of two tree parity machines, should be a secure replacement for this method. Synchronizing these two machines is similar to synchronizing two chaotic oscillators in chaos communications. Tree parity machine. The tree parity machine is a special type of multi-layer feedforward neural network. It consists of one output neuron, K hidden neurons and K×N input neurons. 
Inputs to the network take three values: formula_0 The weights between input and hidden neurons take the values: formula_1 The output value of each hidden neuron is calculated as a sum of all multiplications of input neurons and these weights: formula_2 Signum is a simple function which returns −1, 0 or 1: formula_3 If the scalar product is 0, the output of the hidden neuron is mapped to −1 in order to ensure a binary output value. The output of the neural network is then computed as the multiplication of all values produced by hidden elements: formula_4 Output of the tree parity machine is binary. Protocol. Each party (A and B) uses its own tree parity machine. Synchronization of the tree parity machines is achieved in these steps: (1) each party initializes the weights of its machine randomly; (2) a public random input vector is generated; (3) each party computes the output of its own machine; (4) the two outputs are exchanged; (5) if the outputs agree, both parties update their weights using one of the learning rules given below, otherwise no update takes place; (6) steps 2–5 are repeated until full synchronization is reached. After the full synchronization is achieved (the weights wij of both tree parity machines are the same), A and B can use their weights as keys. This method is known as bidirectional learning. One of the following learning rules can be used for the synchronization: formula_5 formula_6 formula_7 where formula_8 if formula_9 and formula_10 otherwise, and formula_11 is a function that keeps formula_12 in the range formula_13. Attacks and security of this protocol. In every attack it is assumed that the attacker E can eavesdrop on messages between the parties A and B, but does not have an opportunity to change them. Brute force. To mount a brute force attack, an attacker has to test all possible keys (all possible values of the weights wij). With K hidden neurons, K×N input neurons and a weight bound L, this gives (2L+1)^(K×N) possibilities. For example, the configuration K = 3, L = 3 and N = 100 gives us 3×10^253 key possibilities, making the attack impossible with today's computer power. Learning with own tree parity machine. One of the basic attacks can be mounted by an attacker who owns the same tree parity machine as the parties A and B. He wants to synchronize his tree parity machine with these two parties. In each step there are three situations possible: (1) the outputs of A and B differ, so neither party updates its weights and the attacker learns nothing; (2) the outputs of A and B agree and the attacker's output agrees with them, so the attacker can apply the same update as the parties; (3) the outputs of A and B agree but the attacker's output differs, so the attacker cannot perform a consistent update and falls behind. It has been proven that the synchronization of the two parties is faster than the learning of an attacker. It can be improved by increasing the synaptic depth L of the neural network. That gives this protocol enough security, and an attacker can find out the key only with small probability. Other attacks. For conventional cryptographic systems, we can improve the security of the protocol by increasing the key length. In the case of neural cryptography, we improve it by increasing the synaptic depth L of the neural networks. Changing this parameter increases the cost of a successful attack exponentially, while the effort for the users grows polynomially. Therefore, breaking the security of neural key exchange belongs to the complexity class NP. Alexander Klimov, Anton Mityaguine, and Adi Shamir say that the original neural synchronization scheme can be broken by at least three different attacks: geometric, probabilistic analysis, and using genetic algorithms. Even though this particular implementation is insecure, the ideas behind chaotic synchronization could potentially lead to a secure implementation. Permutation parity machine. The permutation parity machine is a binary variant of the tree parity machine. It consists of one input layer, one hidden layer and one output layer. The number of neurons in the output layer depends on the number of hidden units K. 
Each hidden neuron has N binary input neurons: formula_14 The weights between input and hidden neurons are also binary: formula_15 The output value of each hidden neuron is calculated as a sum of all exclusive disjunctions (exclusive or) of input neurons and these weights: formula_16 (⊕ means XOR). The function formula_17 is a threshold function, which returns 0 or 1: formula_18 The output of a neural network with two or more hidden neurons can be computed as the exclusive or of the values produced by the hidden elements: formula_19 Other configurations of the output layer for K&gt;2 are also possible. This machine has proven to be robust enough against some attacks, so it could be used as a cryptographic means, but it has been shown to be vulnerable to a probabilistic attack. Security against quantum computers. A quantum computer is a device that uses quantum-mechanical phenomena for computation. In this device the data are stored as qubits (quantum binary digits). This gives a quantum computer, in comparison with a conventional computer, the opportunity to solve certain complicated problems in a short time, e.g. the discrete logarithm problem or factorization. Algorithms that are not based on any of these number theory problems are being sought because of this property. The neural key exchange protocol is not based on any number theory. It is based on the difference between unidirectional and bidirectional synchronization of neural networks. Therefore, something like the neural key exchange protocol could give rise to potentially faster key exchange schemes.
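The synchronization procedure for tree parity machines described above follows directly from the formulas given there. The sketch below (Python with NumPy; the small parameter values and all helper names are choices made for this illustration, not part of the protocol) implements the output computation and the Hebbian learning rule, and runs the bidirectional learning loop until the two weight matrices coincide:

    import numpy as np

    K, N, L = 3, 4, 3                        # hidden units, inputs per unit, weight bound
    rng = np.random.default_rng(0)

    def tpm_output(w, x):
        """sigma_i = sgn(sum_j w_ij * x_ij), with sgn(0) mapped to -1; tau = prod_i sigma_i."""
        sigma = np.sign(np.sum(w * x, axis=1)).astype(int)
        sigma[sigma == 0] = -1
        return sigma, int(np.prod(sigma))

    def hebbian_update(w, x, sigma, tau, tau_other):
        """Update w_i to g(w_i + sigma_i * x_i) only for hidden units with sigma_i == tau,
        and only when the two parties' outputs agree; g clips each weight to [-L, L]."""
        if tau == tau_other:
            for i in range(K):
                if sigma[i] == tau:
                    w[i] = np.clip(w[i] + sigma[i] * x[i], -L, L)
        return w

    w_a = rng.integers(-L, L + 1, size=(K, N))    # party A's secret weights
    w_b = rng.integers(-L, L + 1, size=(K, N))    # party B's secret weights

    steps = 0
    while not np.array_equal(w_a, w_b):
        x = rng.choice([-1, 0, 1], size=(K, N))   # public random input vector
        sig_a, tau_a = tpm_output(w_a, x)
        sig_b, tau_b = tpm_output(w_b, x)
        w_a = hebbian_update(w_a, x, sig_a, tau_a, tau_b)
        w_b = hebbian_update(w_b, x, sig_b, tau_b, tau_a)
        steps += 1

    print("synchronized after", steps, "exchanged outputs; shared key:", w_a.flatten())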
[ { "math_id": 0, "text": "x_{ij} \\in \\left\\{ -1,0,+1 \\right\\}" }, { "math_id": 1, "text": "w_{ij} \\in \\left\\{-L,...,0,...,+L \\right\\}" }, { "math_id": 2, "text": "\\sigma_i=\\sgn(\\sum_{j=1}^{N}w_{ij}x_{ij})" }, { "math_id": 3, "text": "\\sgn (x) = \\begin{cases}\n-1 & \\text{if } x < 0, \\\\\n0 & \\text{if } x = 0, \\\\\n1 & \\text{if } x > 0. \\end{cases}" }, { "math_id": 4, "text": "\\tau=\\prod_{i=1}^{K}\\sigma_i" }, { "math_id": 5, "text": "w_i^+=g(w_i+\\sigma_ix_i\\Theta(\\sigma_i\\tau)\\Theta(\\tau^A\\tau^B))" }, { "math_id": 6, "text": "w_i^+=g(w_i-\\sigma_ix_i\\Theta(\\sigma_i\\tau)\\Theta(\\tau^A\\tau^B))" }, { "math_id": 7, "text": "w_i^+=g(w_i+x_i\\Theta(\\sigma_i\\tau)\\Theta(\\tau^A\\tau^B))" }, { "math_id": 8, "text": "\\Theta(a,b)=0" }, { "math_id": 9, "text": "a \\ne b" }, { "math_id": 10, "text": "\\Theta(a,b)=1" }, { "math_id": 11, "text": "g(x)" }, { "math_id": 12, "text": "w_i" }, { "math_id": 13, "text": "\\{-L, -L+1,...,0,...,L-1,L\\}" }, { "math_id": 14, "text": "x_{ij} \\in \\left\\{ 0,1 \\right\\}" }, { "math_id": 15, "text": "w_{ij} \\in \\left\\{0,1 \\right\\}" }, { "math_id": 16, "text": "\\sigma_i=\\theta_N(\\sum_{j=1}^{N}w_{ij}\\oplus x_{ij})" }, { "math_id": 17, "text": "\\theta_N(x)" }, { "math_id": 18, "text": "\\theta_N(x) = \\begin{cases}\n0 & \\text{if } x \\leq N/2, \\\\\n1 & \\text{if } x > N/2. \\end{cases}" }, { "math_id": 19, "text": "\\tau=\\bigoplus_{i=1}^{K}\\sigma_i" } ]
https://en.wikipedia.org/wiki?curid=12589161
12590908
Direct linear transformation
Direct linear transformation (DLT) is an algorithm which solves a set of variables from a set of similarity relations: formula_0   for formula_1 where formula_2 and formula_3 are known vectors, formula_4 denotes equality up to an unknown scalar multiplication, and formula_5 is a matrix (or linear transformation) which contains the unknowns to be solved. This type of relation appears frequently in projective geometry. Practical examples include the relation between 3D points in a scene and their projection onto the image plane of a pinhole camera, and homographies. Introduction. An ordinary system of linear equations formula_6   for formula_1 can be solved, for example, by rewriting it as a matrix equation formula_7 where matrices formula_8 and formula_9 contain the vectors formula_2 and formula_3 in their respective columns. Given that there exists a unique solution, it is given by formula_10 Solutions can also be described in the case that the equations are over- or under-determined. What makes the direct linear transformation problem distinct from the above standard case is the fact that the left and right sides of the defining equation can differ by an unknown multiplicative factor which is dependent on "k". As a consequence, formula_5 cannot be computed as in the standard case. Instead, the similarity relations are rewritten as proper linear homogeneous equations which then can be solved by a standard method. The combination of rewriting the similarity equations as homogeneous linear equations and solving them by standard methods is referred to as a direct linear transformation algorithm or DLT algorithm. DLT is attributed to Ivan Sutherland. Example. Suppose that formula_11. Let formula_12 and formula_13 be two known vectors, and we want to find the formula_14 matrix formula_5 such that formula_15 where formula_16 is the unknown scalar factor related to equation "k". To get rid of the unknown scalars and obtain homogeneous equations, define the anti-symmetric matrix formula_17 and multiply both sides of the equation with formula_18 from the left formula_19 Since formula_20 the following homogeneous equations, which no longer contain the unknown scalars, are at hand formula_21 In order to solve formula_5 from this set of equations, consider the elements of the vectors formula_2 and formula_3 and matrix formula_5: formula_22,   formula_23,   and   formula_24 and the above homogeneous equation becomes formula_25   for formula_26 This can also be written in the matrix form: formula_27   for formula_1 where formula_28 and formula_29 both are 6-dimensional vectors defined as formula_30   and   formula_31 So far, we have 1 equation and 6 unknowns. A set of homogeneous equations can be written in the matrix form formula_32 where formula_33 is a formula_34 matrix which holds the known vectors formula_28 in its rows. The unknown formula_29 can be determined, for example, by a singular value decomposition of formula_33; formula_29 is a right singular vector of formula_33 corresponding to a singular value that equals zero. Once formula_29 has been determined, the elements of matrix formula_5 can be rearranged from vector formula_35. Notice that the scaling of formula_29 or formula_5 is not important (except that it must be non-zero) since the defining equations already allow for unknown scaling. In practice, the vectors formula_2 and formula_3 may contain noise, which means that the similarity equations are only approximately valid. 
As a consequence, there may not be a vector formula_29 which solves the homogeneous equation formula_32 exactly. In these cases, a total least squares solution can be used by choosing formula_29 as a right singular vector corresponding to the smallest singular value of formula_36 More general cases. The above example has formula_37 and formula_38, but the general strategy for rewriting the similarity relations into homogeneous linear equations can be generalized to arbitrary dimensions for both formula_2 and formula_39 If formula_37 and formula_40 the previous expressions can still lead to an equation formula_41   for   formula_1 where formula_5 now is formula_42 Each "k" provides one equation in the formula_43 unknown elements of formula_5 and together these equations can be written formula_44 for the known formula_45 matrix formula_33 and unknown "2q"-dimensional vector formula_46 This vector can be found in a similar way as before. In the most general case formula_47 and formula_40. The main difference compared to previously is that the matrix formula_48 now is formula_49 and anti-symmetric. When formula_50 the space of such matrices is no longer one-dimensional, it is of dimension formula_51 This means that each value of "k" provides "M" homogeneous equations of the type formula_52   for   formula_53   and for formula_1 where formula_54 is a "M"-dimensional basis of the space of formula_49 anti-symmetric matrices. Example "p" = 3. In the case that "p" = 3 the following three matrices formula_54 can be chosen formula_55,   formula_56,   formula_57 In this particular case, the homogeneous linear equations can be written as formula_58   for   formula_1 where formula_59 is the matrix representation of the vector cross product. Notice that this last equation is vector valued; the left hand side is the zero element in formula_60. Each value of "k" provides three homogeneous linear equations in the unknown elements of formula_5. However, since formula_59 has rank = 2, at most two equations are linearly independent. In practice, therefore, it is common to only use two of the three matrices formula_54, for example, for "m"=1, 2. However, the linear dependency between the equations is dependent on formula_2, which means that in unlucky cases it would have been better to choose, for example, "m"=2,3. As a consequence, if the number of equations is not a concern, it may be better to use all three equations when the matrix formula_33 is constructed. The linear dependence between the resulting homogeneous linear equations is a general concern for the case "p" &gt; 2 and has to be dealt with either by reducing the set of anti-symmetric matrices formula_54 or by allowing formula_33 to become larger than necessary for determining formula_46 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
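A numerical version of the worked 2×3 example above, using the total least squares solution via the singular value decomposition (Python with NumPy; the synthetic test data are generated here purely for illustration):

    import numpy as np

    def dlt_2x3(xs, ys):
        """Estimate the 2x3 matrix A from correspondences x_k ~ A y_k (equality up to scale).
        Each row of B is b_k = (x2 y1, -x1 y1, x2 y2, -x1 y2, x2 y3, -x1 y3); the solution a
        is the right singular vector belonging to the smallest singular value of B."""
        B = np.array([[x[1]*y[0], -x[0]*y[0], x[1]*y[1], -x[0]*y[1], x[1]*y[2], -x[0]*y[2]]
                      for x, y in zip(xs, ys)])
        _, _, vt = np.linalg.svd(B)
        a = vt[-1]                      # ordering (a11, a21, a12, a22, a13, a23)
        return a.reshape(3, 2).T        # rearrange the vector a back into the 2x3 matrix A

    # Synthetic test: choose a ground-truth A and project a few y_k with random scale factors.
    rng = np.random.default_rng(1)
    A_true = rng.standard_normal((2, 3))
    ys = rng.standard_normal((8, 3))
    xs = [rng.uniform(0.5, 2.0) * (A_true @ y) for y in ys]

    A_est = dlt_2x3(xs, ys)
    # A is recovered only up to an overall (possibly negative) scale; normalize before comparing.
    print(A_est / np.linalg.norm(A_est))
    print(A_true / np.linalg.norm(A_true))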
[ { "math_id": 0, "text": " \\mathbf{x}_{k} \\propto \\mathbf{A} \\, \\mathbf{y}_{k} " }, { "math_id": 1, "text": " \\, k = 1, \\ldots, N " }, { "math_id": 2, "text": " \\mathbf{x}_{k} " }, { "math_id": 3, "text": " \\mathbf{y}_{k} " }, { "math_id": 4, "text": " \\, \\propto " }, { "math_id": 5, "text": " \\mathbf{A} " }, { "math_id": 6, "text": " \\mathbf{x}_{k} = \\mathbf{A} \\, \\mathbf{y}_{k} " }, { "math_id": 7, "text": " \\mathbf{X} = \\mathbf{A} \\, \\mathbf{Y} " }, { "math_id": 8, "text": " \\mathbf{X} " }, { "math_id": 9, "text": " \\mathbf{Y} " }, { "math_id": 10, "text": " \\mathbf{A} = \\mathbf{X} \\, \\mathbf{Y}^{T} \\, (\\mathbf{Y} \\, \\mathbf{Y}^{T})^{-1} ." }, { "math_id": 11, "text": " k \\in \\{1, ..., N\\} " }, { "math_id": 12, "text": " \\mathbf{x}_{k} = (x_{1k}, x_{2k}) \\in \\mathbb{R}^{2} " }, { "math_id": 13, "text": " \\mathbf{y}_{k} = (y_{1k}, y_{2k}, y_{3k}) \\in \\mathbb{R}^{3} " }, { "math_id": 14, "text": " 2 \\times 3 " }, { "math_id": 15, "text": " \\alpha_{k} \\, \\mathbf{x}_{k} = \\mathbf{A} \\, \\mathbf{y}_{k} " }, { "math_id": 16, "text": " \\alpha_{k} \\neq 0 " }, { "math_id": 17, "text": " \\mathbf{H} = \\begin{pmatrix} 0 & -1 \\\\ 1 & 0 \\end{pmatrix} " }, { "math_id": 18, "text": " \\mathbf{x}_{k}^{T} \\, \\mathbf{H} " }, { "math_id": 19, "text": " \\begin{align}\n(\\mathbf{x}_{k}^{T} \\, \\mathbf{H}) \\, \\alpha_{k} \\, \\mathbf{x}_{k} &= (\\mathbf{x}_{k}^{T} \\, \\mathbf{H}) \\, \\mathbf{A} \\, \\mathbf{y}_{k} \\\\\n\\alpha_{k} \\, \\mathbf{x}_{k}^{T} \\, \\mathbf{H} \\, \\mathbf{x}_{k} &= \\mathbf{x}_{k}^{T} \\, \\mathbf{H} \\, \\mathbf{A} \\, \\mathbf{y}_{k}\n\\end{align}\n" }, { "math_id": 20, "text": " \\mathbf{x}_{k}^{T} \\, \\mathbf{H} \\, \\mathbf{x}_{k} = 0, " }, { "math_id": 21, "text": " \\mathbf{x}_{k}^{T} \\, \\mathbf{H} \\, \\mathbf{A} \\, \\mathbf{y}_{k} = 0" }, { "math_id": 22, "text": " \\mathbf{x}_{k} = \\begin{pmatrix} x_{1k} \\\\ x_{2k} \\end{pmatrix} " }, { "math_id": 23, "text": " \\mathbf{y}_{k} = \\begin{pmatrix} y_{1k} \\\\ y_{2k} \\\\ y_{3k} \\end{pmatrix} " }, { "math_id": 24, "text": " \\mathbf{A} = \\begin{pmatrix} a_{11} & a_{12} & a_{13}\\\\ a_{21} & a_{22} & a_{23} \\end{pmatrix} " }, { "math_id": 25, "text": " 0 = a_{11} \\, x_{2k} \\, y_{1k} - a_{21} \\, x_{1k} \\, y_{1k} + a_{12} \\, x_{2k} \\, y_{2k} - a_{22} \\, x_{1k} \\, y_{2k} + a_{13} \\, x_{2k} \\, y_{3k} - a_{23} \\, x_{1k} \\, y_{3k} " }, { "math_id": 26, "text": " \\, k = 1, \\ldots, N. " }, { "math_id": 27, "text": " 0 = \\mathbf{b}_{k}^{T} \\, \\mathbf{a} " }, { "math_id": 28, "text": " \\mathbf{b}_{k} " }, { "math_id": 29, "text": " \\mathbf{a} " }, { "math_id": 30, "text": " \\mathbf{b}_{k} = \\begin{pmatrix} x_{2k} \\, y_{1k} \\\\ -x_{1k} \\, y_{1k} \\\\ x_{2k} \\, y_{2k} \\\\ -x_{1k} \\, y_{2k} \\\\ x_{2k} \\, y_{3k} \\\\ -x_{1k} \\, y_{3k} \\end{pmatrix} " }, { "math_id": 31, "text": " \\mathbf{a} = \\begin{pmatrix} a_{11} \\\\ a_{21} \\\\ a_{12} \\\\ a_{22} \\\\ a_{13} \\\\ a_{23} \\end{pmatrix}. " }, { "math_id": 32, "text": " \\mathbf{0} = \\mathbf{B} \\, \\mathbf{a} " }, { "math_id": 33, "text": " \\mathbf{B} " }, { "math_id": 34, "text": " N \\times 6 " }, { "math_id": 35, "text": "\\mathbf{a}" }, { "math_id": 36, "text": " \\mathbf{B}. " }, { "math_id": 37, "text": " \\mathbf{x}_{k} \\in \\mathbb{R}^{2} " }, { "math_id": 38, "text": " \\mathbf{y}_{k} \\in \\mathbb{R}^{3} " }, { "math_id": 39, "text": " \\mathbf{y}_{k}. 
" }, { "math_id": 40, "text": " \\mathbf{y}_{k} \\in \\mathbb{R}^{q} " }, { "math_id": 41, "text": " 0 = \\mathbf{x}_{k}^{T} \\, \\mathbf{H} \\, \\mathbf{A} \\, \\mathbf{y}_{k} " }, { "math_id": 42, "text": " 2 \\times q. " }, { "math_id": 43, "text": " 2q " }, { "math_id": 44, "text": " \\mathbf{B} \\, \\mathbf{a} = \\mathbf{0} " }, { "math_id": 45, "text": " N \\times 2 \\, q " }, { "math_id": 46, "text": " \\mathbf{a}. " }, { "math_id": 47, "text": " \\mathbf{x}_{k} \\in \\mathbb{R}^{p} " }, { "math_id": 48, "text": " \\mathbf{H} " }, { "math_id": 49, "text": " p \\times p " }, { "math_id": 50, "text": " p > 2 " }, { "math_id": 51, "text": " M = \\frac{p\\,(p-1)}{2}. " }, { "math_id": 52, "text": " 0 = \\mathbf{x}_{k}^{T} \\, \\mathbf{H}_{m} \\, \\mathbf{A} \\, \\mathbf{y}_{k} " }, { "math_id": 53, "text": " \\, m = 1, \\ldots, M " }, { "math_id": 54, "text": " \\mathbf{H}_{m} " }, { "math_id": 55, "text": " \\mathbf{H}_{1} = \\begin{pmatrix} 0 & 0 & 0 \\\\ 0 & 0 & -1 \\\\ 0 & 1 & 0 \\end{pmatrix} " }, { "math_id": 56, "text": " \\mathbf{H}_{2} = \\begin{pmatrix} 0 & 0 & 1 \\\\ 0 & 0 & 0 \\\\ -1 & 0 & 0 \\end{pmatrix} " }, { "math_id": 57, "text": " \\mathbf{H}_{3} = \\begin{pmatrix} 0 & -1 & 0 \\\\ 1 & 0 & 0 \\\\ 0 & 0 & 0 \\end{pmatrix} ." }, { "math_id": 58, "text": " \\mathbf{0} = [\\mathbf{x}_{k}]_{\\times} \\, \\mathbf{A} \\, \\mathbf{y}_{k} " }, { "math_id": 59, "text": " [\\mathbf{x}_{k}]_{\\times} " }, { "math_id": 60, "text": " \\mathbb{R}^{3} " } ]
https://en.wikipedia.org/wiki?curid=12590908
12591223
Filter press
An industrial filter press is a tool used in separation processes, specifically to separate solids and liquids. The machine stacks many filter elements and allows the filter to be easily opened to remove the filtered solids, and allows easy cleaning or replacement of the filter media. Filter presses cannot be operated in a continuous process but can offer very high performance, particularly when low residual liquid in the solid is desired. Among other uses, filter presses are utilised in marble factories to separate water from mud so that the water can be reused during the marble cutting process. Concept behind filter press technology. Generally, the slurry that will be separated is injected into the centre of the press and each chamber of the press is filled. Optimal filling time will ensure the last chamber of the press is loaded before the mud in the first chamber begins to cake. As the chambers fill, pressure inside the system will increase due to the formation of thick sludge. Then, the liquid is strained through filter cloths by force using pressurized air, but the use of water could be more cost-efficient in certain cases, such as if water is re-used from a previous process. History. The first form of filter press was invented in the United Kingdom in 1853, used in obtaining seed oil through the use of pressure cells. However, there were many disadvantages associated with them, such as a high labour requirement and a discontinuous process. Major developments in filter press technology started in the middle of the 20th century. In Japan in 1958, Kenichiro Kurita and Seiichi Suwa succeeded in developing the world's first automatic horizontal-type filter press to improve the cake removal efficiency and moisture absorption. Nine years later, Kurita Company began developing flexible diaphragms to decrease moisture in filter cakes. The device enables optimisation of the automatic filtration cycle, cake compression, cake discharge and filter-cloth washing, leading to increased opportunities for various industrial applications. A detailed historical review, dating back to when the Shang Dynasty used presses to extract tea from camellia leaves and oil from the hips in 1600 BC, was compiled by K. McGrew. Types of filter presses. There are four basic types of filter presses: plate and frame filter presses, recessed plate filter presses, membrane filter presses and (fully) automatic filter presses. Plate and frame filter press. A plate and frame filter press is the most fundamental design, and may be referred to as a "membrane plate filter." This type of filter press consists of many alternating plates and frames assembled with the support of a pair of rails, with filter membranes inserted between each plate-frame pair. The stack is compressed with sufficient force to provide a liquid-tight seal between each plate and frame; the filter membrane may have an integrated seal around the edge, or the filter material itself may act as a gasket when compressed. As the slurry is pumped through the membranes, the filter cake accumulates and becomes thicker. The filter resistance increases as well, and the process is stopped when the pressure differential reaches a point where the plates are considered full enough. To remove the filter cake and clear the filters, the stack of plates and frames is separated and the cake either falls off or is scraped from the membranes to be collected in a tray below.
The filter membranes are then cleaned using wash liquid and the stack is re-compressed, ready to start the next cycle. An early example of this is the Dehne filter press, developed by A L G Dehne (1832–1906) of Halle, Germany, and commonly used in the late 19th and early 20th century for extracting sugar from sugar beet and from sugar cane, and for drying ore slurries. Its great disadvantage was the amount of labor involved in its operation. (Fully) Automatic filter press. An automatic filter press has the same concept as the manual plate and frame filter, except that the whole process is fully automated. It consists of larger plate and frame filter presses with mechanical "plate shifters". The function of the plate shifter is to move the plates and allow rapid discharge of the filter cakes accumulated in between the plates. It also contains a diaphragm compressor in the filter plates which aids in optimizing the operating condition by further drying the filter cakes. Fully automatic filter presses provide a high degree of automation while providing uninterrupted operation at the same time. The option of a simultaneous filter plate opening system, for example, helps to realise a particularly fast cake release, reducing the cycle time to a minimum. The result is a high-speed filter press that allows increased production per unit area of filter. For this reason, these machines are used in applications with highly filterable products where high filtration speeds are required. These include, e.g. mining concentrates and residues. There are different systems for fully automatic operation. These include, e.g. vibration/shaking devices, the spreader clamp/spreader cloth version or scraping devices. A fully automatic filter press can operate unmanned around the clock. Recessed plate filter press. A recessed plate filter press does not use frames and instead has a recess in each plate with sloping edges in which the filter cloths lie; the filter cake builds up in the recess directly between two plates, and when the plates are separated the sloping edges allow the cake to fall out with minimal effort. To simplify construction and usage the plates typically have a hole through the centre, passing through the filter cloth and around which it is sealed, so that the slurry flows through the centre of each plate down the stack rather than inward from the edge of each plate. Although easier to clean, there are disadvantages to this method, such as longer cloth changing time, inability to accommodate filter media that cannot conform to the curved recess such as paper, and the possibility of forming an uneven cake. Membrane filter press. Membrane filter presses have a great influence on the dryness of the solid by using an inflatable membrane in the filter plates to compress remaining liquid from the filter cake before the plates are opened. Compared to conventional filtration processes, they achieve the lowest residual moisture values in the filter cake. This makes the membrane filter press a powerful and widely used system. Depending on the degree of dewatering, different dry matter contents (dry matter content – percentage by weight of dry material in the filter cake) can be achieved in the filter cake by squeezing with membrane plates. The range of achievable dry matter contents extends from 30 to over 80 percent.
Membrane filter presses not only offer the advantage of an extremely high degree of dewatering; they also reduce the filtration cycle time by more than 50 percent on average, depending on the suspension. This results in faster cycle and turnaround times, which lead to an increase in productivity. The membrane inflation medium consists either of compressed air or a liquid medium (e.g. water). Applications. Filter presses are used in a huge variety of different applications, from dewatering of mineral mining slurries to blood plasma purification. At the same time, filter press technology is widely established for ultrafine coal dewatering as well as filtrate recovery in coal preparation plants. According to G. Prat, the "filter press is proven to be the most effective and reliable technique to meet today's requirement". One example is the pilot-scale plate filter press, which is specialized for dewatering coal slurries. In 2013 the Society for Mining, Metallurgy and Exploration published an article highlighting this specific application. It was mentioned that the use of the filter press is very beneficial to plant operations, since it yields dewatered ultraclean coal as a product, as well as improving the quality of the removed water so that it is available for equipment cleaning. Other industrial uses for automatic membrane filter presses include municipal waste sludge dewatering, ready mix concrete water recovery, metal concentrate recovery, and large-scale fly ash pond dewatering. Many specialized applications are associated with the different types of filter press that are currently used in various industries. The plate filter press is extensively used in sugaring operations such as the production of maple syrup in Canada, since it offers very high efficiency and reliability. According to M. Isselhardt, "appearance can affect the value of maple syrup and customer's perception of quality". This makes the raw syrup filtration process crucial in achieving a high-quality product with an appealing appearance, which again suggests how highly valued filter press methods are in industry. Assessment of important characteristics. Here are some typical filter press calculations used for sludge handling operations in wastewater treatment: Solids loading rate. S = (B × 8.34 lb/gal × s) / A, where S is the solids loading rate in lb/(h·ft2), B is the biosolids flow in gal/h, s is the % solids divided by 100, and A is the plate area in ft2. Net filter yield. formula_0 where S is the solids loading rate, P is the filtration period (the filter run time) and TCT is the total cycle time. Flow rate of filtrate. formula_1 where u is the filtrate flux, A is the filter area, V is the filtrate volume, ΔP is the pressure difference across the cake and filter medium, μ is the viscosity of the filtrate, and Rc and Rf are the resistances of the cake and of the filter medium respectively. These are the most important factors that affect the rate of filtration. As filtrate passes through the filter plate, solids are deposited and the cake thickness increases, which also increases Rc, while Rf is assumed to be constant. The flow resistance of the cake and filter medium can be studied by calculating the filtration flow rate through them. If the flow rate is constant, the relationship between pressure and time can be obtained. The filtration must be operated with an increasing pressure difference to cope with the increase in flow resistance resulting from pore clogging. The filtration rate is mainly affected by the viscosity of the filtrate as well as the resistance of the filter plate and cake. Optimum time cycle. A high filtration rate can be obtained by keeping the cake thin.
However, a conventional filter press is a batch system and the process must be stopped to discharge the filter cake and reassemble the press, which is time-consuming. In practice, the maximum filtration rate is obtained when the filtration time is greater than the time taken to discharge the cake and reassemble the press, to allow for the cloth's resistance. Properties of the filter cake affect the filtration rate, and it is desirable for the particle size to be as large as possible, which can be achieved by using a coagulant, in order to prevent pore blockage. From experimental work, the flow rate of liquid through the filter medium is proportional to the pressure difference. As the cake layer forms, the pressure applied to the system increases and the flow rate of filtrate decreases. If the solid is the desired product, the purity of the solid can be increased by cake washing and air drying. Samples of filter cake can be taken from different locations and weighed to determine the moisture content using an overall material balance. Possible heuristics to be used during design of the process. The selection of filter press type depends on whether the liquid phase or the solid phase is the valuable product. If extracting the liquid phase is desired, then the filter press is among the most appropriate methods to be used. Materials. Nowadays, filter plates are made from polymers or steel coated with polymer. They give a good drainage surface for the filter cloths. Plate sizes range from 10 by 10 cm to 2.4 by 2.4 m, with frame thicknesses from 0.3 to 20 cm. Filter medium. Typical cloth areas can range from 1 m2 or less on laboratory scale to 1000 m2 in a production environment, even though plates can provide filter areas up to 2000 m2. Normally, a plate and frame filter press can form a cake up to 50 mm thick; however, this can be pushed up to 200 mm in extreme cases. A recessed plate press can form a cake up to 32 mm thick. In the early days of press use in the municipal waste biosolids treatment industry, cake sticking to the cloth was problematic and many treatment plants adopted less effective centrifuge or belt filter press technologies. Since then, there have been great enhancements in fabric quality and manufacturing technology that have made this issue obsolete. Unlike in the US, in Asia automatic membrane filter technology is the most common method of dewatering municipal waste biosolids. Moisture is typically 10-15% lower and less polymer is required, which saves on trucking and overall disposal cost. Operating condition. The operating pressure is commonly up to 7 bar for metal frames. Improvements in the technology make it possible to remove a large amount of moisture at 16 bar of pressure and to operate at 30 bar. However, the pressure is limited to 4-5 bar for wood or plastic frames. If the concentration of solids in the feed tank increases until the solid particles attach to each other, it is possible to install moving blades in the filter press to reduce the resistance to flow of liquid through the slurry. For the process prior to cake discharge, air blowing is used for cakes that have a permeability of 10^−11 to 10^−15 m2. Pre-treatment. Pre-treatment of the slurries before filtration is required if the solid suspension has settled down. Coagulation as pre-treatment can improve the performance of the filter press because it increases the porosity of the filter cake, leading to faster filtration. Varying the temperature, concentration and pH can control the size of the flocs.
Moreover, if the filter cake is impermeable and restricts the flow of filtrate, a filter aid chemical can be added in the pre-treatment process to increase the porosity of the cake, reduce the cake resistance and obtain a thicker cake. However, filter aids need to be removable from the filter cake, either by physical or chemical treatment. A common filter aid is Kieselguhr, which gives a voidage of 0.85. In terms of cake handling, a batch filter press requires a large discharge tray to contain the large amount of cake, and the system is more expensive compared to a continuous filter press with the same output. Washing. There are two possible methods of washing that are employed: "simple washing" and "thorough washing". For simple washing, the wash liquor flows through the same channel as the slurry with high velocity, causing erosion of the cakes near the point of entry. Thus the channels formed are constantly enlarged and therefore uneven cleaning is normally obtained. A better technique is thorough washing, in which the wash liquor is introduced through a different channel behind the filter cloth, via so-called washing plates. It flows through the whole thickness of the cakes, first in the opposite direction and then in the same direction as the filtrate. The wash liquor is normally discharged through the same channel as the filtrate. After washing, the cakes can be easily removed by supplying compressed air to remove the excess liquid. Waste. Since filter presses are widely used in many industries, they also produce many different types of waste. Harmful wastes, such as toxic chemicals from dye industries or pathogens from waste streams, might accumulate in the waste cakes; hence the requirements for treating those wastes differ. Therefore, before the waste stream is discharged into the environment, post-treatment is an important disinfection stage. It prevents health risks to the local population and the workers dealing with the waste (filter cakes), as well as negative impacts on the ecosystem. Since a filter press produces a large amount of waste, if it is to be disposed of by land reclamation it is recommended to use areas that are already drastically altered, such as mining areas, where development and fixation of vegetation are not possible. Another method is incineration, which destroys the organic pollutants and decreases the mass of the waste. It is usually done in a closed device by using a controlled flame. Advantages and disadvantages compared to other competitive methods. There has been much debate about whether filter presses can compete with modern equipment now and in the future, since the filter press is one of the oldest machine-driven dewatering devices. Efficiency improvements are possible in many applications where modern filter presses have the best characteristics for the job; however, despite the fact that many mechanical improvements have been made, filter presses still operate on the same concept as when they were first invented. A lack of progress in efficiency improvements, as well as a lack of research on overcoming the issues associated with filter presses, suggests a possibility of performance inadequacy. At the same time, many other types of filter could do the same or a better job than filter presses. In certain cases, it is crucial to compare characteristics and performances. Batch filter press versus a continuous vacuum belt filter.
Filter presses offer a wide range of applications; one of their main advantages is the ability to provide a large filter area in a relatively small footprint. The available surface area is one of the most important parameters in any filtering process, since it determines the filter flow rate and capacity. A standard size filter press offers a filter area of 216 m2, whereas a standard belt filter only offers approximately 15 m2. High-solids slurries: continuous pressure operation. Filter presses are commonly used to dewater high-solids slurries in metal processing plants. One pressure filtration technology that can deliver this is the rotary pressure filter, which provides continuous production in a single unit, with filtration driven by pressure. However, in cases where the solids concentration of the slurry is too high (50%+), it is better to handle these slurries using vacuum filtration, such as a continuous indexing vacuum belt filter, since a high concentration of solids in the slurry increases the pressure, and if the pressure is too high the equipment might be damaged or operate less efficiently. Current development. In the future, market demands on the modern filtration industry will call for finer and higher degrees of separation, particularly for the purposes of material recycling, energy saving, and green technology. In order to meet increasing demands for a higher degree of dewatering of difficult-to-filter materials, super-high-pressure filters are required. Therefore, the trend of increasing the operating pressure of automatic filter presses will continue to develop in the future. Conventional filter press mechanisms usually use mechanical compression and air for de-liquoring; however, their efficiency in producing a low-moisture cake is limited. An alternative method has been introduced that uses steam instead of air for cake dewatering. The steam dewatering technique can be a competitive method since it produces a low-moisture cake. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
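The wastewater-treatment calculations given in the assessment section above can be collected in a short computational sketch. The following Python functions simply mirror those formulas; the argument names are illustrative, and the units follow the US customary units used in the text (gal/h, ft2, lb).

```python
def solids_loading_rate(biosolids_gal_per_h, percent_solids, plate_area_ft2):
    """Solids loading rate S = (B * 8.34 lb/gal * s) / A, in lb/(h*ft^2)."""
    s = percent_solids / 100.0            # % solids expressed as a fraction
    return biosolids_gal_per_h * 8.34 * s / plate_area_ft2

def net_filter_yield(solids_loading, filtration_period_h, total_cycle_time_h):
    """Net filter yield NFY = (S * P) / TCT."""
    return solids_loading * filtration_period_h / total_cycle_time_h

def filtrate_flux(delta_p, viscosity, cake_resistance, medium_resistance):
    """Filtrate flux u = dP / (mu * (Rc + Rf)); multiply by the area A to get dV/dt."""
    return delta_p / (viscosity * (cake_resistance + medium_resistance))

# Example: 500 gal/h of biosolids at 4% solids over 250 ft^2 of plate area,
# filtering for 2 h out of a 3 h total cycle (illustrative numbers only).
S = solids_loading_rate(500, 4, 250)
print(S, net_filter_yield(S, 2, 3))
```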
[ { "math_id": 0, "text": "NFY=\\frac{S\\times P}{TCT}" }, { "math_id": 1, "text": "u=\\frac{1}{A}\\frac{dV}{dt}=\\frac{\\Delta P}{\\mu\\times(R_c+R_f)}" } ]
https://en.wikipedia.org/wiki?curid=12591223
1259318
Helmholtz coil
Two circular coil device which creates a homogeneous magnetic field A Helmholtz coil is a device for producing a region of nearly uniform magnetic field, named after the German physicist Hermann von Helmholtz. It consists of two electromagnets on the same axis, carrying an equal electric current in the same direction. Besides creating magnetic fields, Helmholtz coils are also used in scientific apparatus to cancel external magnetic fields, such as the Earth's magnetic field. When the two electromagnets of a Helmholtz coil carry equal electric currents in opposite directions, the arrangement is known as an anti-Helmholtz coil, which creates a region of nearly uniform magnetic field gradient and is used for creating magnetic traps for atomic physics experiments. Description. A Helmholtz pair consists of two identical circular magnetic coils that are placed symmetrically along a common axis, one on each side of the experimental area, and separated by a distance formula_0 equal to the radius formula_1 of the coil. Each coil carries an equal electric current in the same direction. Setting formula_2, which is what defines a Helmholtz pair, minimizes the nonuniformity of the field at the center of the coils, in the sense of setting formula_3 (meaning that the first nonzero derivative is formula_4 as explained below), but leaves about 7% variation in field strength between the center and the planes of the coils. A slightly larger value of formula_0 reduces the difference in field between the center and the planes of the coils, at the expense of worsening the field's uniformity in the region near the center, as measured by formula_5. When the coils of a Helmholtz pair carry equal electric currents in opposite directions, they create a region of nearly uniform magnetic field gradient. This is known as an anti-Helmholtz coil, and is used for creating magnetic traps for atomic physics experiments. In some applications, a Helmholtz coil is used to cancel out the Earth's magnetic field, producing a region with a magnetic field intensity much closer to zero. Mathematics. The calculation of the exact magnetic field at any point in space is mathematically complex and involves the study of Bessel functions. Things are simpler along the axis of the coil-pair, and it is convenient to think about the Taylor series expansion of the field strength as a function of formula_6, the distance from the central point of the coil-pair along the axis. By symmetry, the odd-order terms in the expansion are zero. By arranging the coils so that the origin formula_7 is an inflection point for the field strength due to each coil separately, one can guarantee that the order formula_8 term is also zero, and hence the leading non-constant term is of order formula_9. The inflection point for a simple coil is located along the coil axis at a distance formula_10 from its centre. Thus the locations for the two coils are formula_11. The calculation detailed below gives the exact value of the magnetic field at the center point. If the radius is "R", the number of turns in each coil is "n" and the current through the coils is "I", then the magnetic field B at the midpoint between the coils will be given by formula_12 where formula_13 is the permeability of free space (formula_14). Derivation.
Start with the formula for the on-axis field due to a single wire loop, which is itself derived from the Biot–Savart law: formula_15 Here formula_16 is the permeability constant (formula_17), formula_18 is the coil current in amperes, formula_19 is the coil radius in meters, formula_20 is the on-axis distance from the coil to the field point in meters, and formula_21 is the distance-dependent, dimensionless coefficient. The Helmholtz coils consist of "n" turns of wire, so the equivalent current in a one-turn coil is "n" times the current "I" in the "n"-turn coil. Substituting "nI" for "I" in the above formula gives the field for an "n"-turn coil: formula_22 For formula_23, the distance coefficient formula_24 can be expanded in a Taylor series as: formula_25 In a Helmholtz pair, the two coils are located at formula_11, so the B-field strength at any formula_6 would be: formula_26 The points near the center (halfway between the two coils) have formula_23, and the Taylor series of formula_27 is: formula_28. In an anti-Helmholtz pair, the B-field strength at any formula_6 would be: formula_29 The points near the center (halfway between the two coils) have formula_23, and the Taylor series of formula_30 is: formula_31. Time-varying magnetic field. Most Helmholtz coils use DC (direct) current to produce a static magnetic field. Many applications and experiments require a time-varying magnetic field. These applications include magnetic field susceptibility tests, scientific experiments, and biomedical studies (the interaction between magnetic field and living tissue). The required magnetic fields are usually either pulsed or continuous sine waves. The magnetic field frequency range can be anywhere from near DC (0 Hz) to many kilohertz or even megahertz (MHz). An AC Helmholtz coil driver is needed to generate the required time-varying magnetic field. The waveform amplifier driver must be able to output high AC current to produce the magnetic field. Driver voltage and current. formula_32 Use this equation, which follows from the field formula in the mathematics section, to calculate the coil current for a desired magnetic field B, where formula_13 is the permeability of free space (formula_17), formula_18 is the coil current in amperes, formula_19 is the coil radius in meters, and n is the number of turns in each coil. Then calculate the required Helmholtz coil driver amplifier voltage: formula_33 where ω = 2πf is the angular frequency of the drive current, L1 and L2 are the inductances of the two coils, and R1 and R2 are their resistances. High-frequency series resonant. Generating a static magnetic field is relatively easy; the strength of the field is proportional to the current. Generating a high-frequency magnetic field is more challenging. The coils are inductors, and their impedance increases proportionally with frequency. To provide the same field intensity at twice the frequency requires twice the voltage across the coil. Instead of directly driving the coil with a high voltage, a series resonant circuit may be used to provide the high voltage. A series capacitor is added in series with the coils. The capacitance is chosen to resonate the coil at the desired frequency. Only the coils' parasitic resistance remains. This method only works at frequencies close to the resonant frequency; to generate the field at other frequencies requires different capacitors. The Helmholtz coil resonant frequency, formula_34, and capacitor value, C, are given below. formula_35 formula_36 Maxwell coils. To improve the uniformity of the field in the space inside the coils, additional coils can be added around the outside.
James Clerk Maxwell showed in 1873 that a third larger-diameter coil located midway between the two Helmholtz coils with the coil distance increased from coil radius formula_1 to formula_37 can reduce the variance of the field on the axis to zero up to the sixth derivative of position. This is sometimes called a Maxwell coil.
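The center-field expression and the series-resonance relations above translate directly into a few lines of code. The following Python sketch is illustrative only; the coil parameters in the example are arbitrary, and real coils would also need their measured inductances and resistances for the driver calculations.

```python
import math

MU0 = 4 * math.pi * 1e-7   # permeability of free space, T*m/A

def center_field(n_turns, current_A, radius_m):
    """Field at the midpoint of a Helmholtz pair: B = (4/5)**1.5 * mu0 * n * I / R."""
    return (4 / 5) ** 1.5 * MU0 * n_turns * current_A / radius_m

def required_current(target_field_T, n_turns, radius_m):
    """Coil current for a desired field: I = (5/4)**1.5 * B * R / (mu0 * n)."""
    return (5 / 4) ** 1.5 * target_field_T * radius_m / (MU0 * n_turns)

def series_resonant_capacitor(f0_hz, L1_H, L2_H):
    """Capacitance that resonates the coil pair at f0: C = 1 / ((2*pi*f0)**2 * (L1 + L2))."""
    return 1.0 / ((2 * math.pi * f0_hz) ** 2 * (L1_H + L2_H))

# Two 100-turn coils of radius 0.15 m carrying 1 A give roughly 0.6 mT at the center.
print(center_field(100, 1.0, 0.15))
print(required_current(1e-3, 100, 0.15))   # current needed for 1 mT
```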
[ { "math_id": 0, "text": "h" }, { "math_id": 1, "text": "R" }, { "math_id": 2, "text": "h=R" }, { "math_id": 3, "text": "\\partial^{2}B/\\partial x^{2} = 0" }, { "math_id": 4, "text": "\\partial^{4}B/\\partial x^{4}" }, { "math_id": 5, "text": "\\partial^{2}B/\\partial x^{2}" }, { "math_id": 6, "text": "x" }, { "math_id": 7, "text": "x=0" }, { "math_id": 8, "text": "x^2" }, { "math_id": 9, "text": "x^4" }, { "math_id": 10, "text": "R/2" }, { "math_id": 11, "text": "x=\\pm R/2" }, { "math_id": 12, "text": " B = {\\left ( \\frac{4}{5} \\right )}^{3/2} \\frac{\\mu_0 n I}{R}," }, { "math_id": 13, "text": "\\mu_0" }, { "math_id": 14, "text": "4\\pi \\times 10^{-7} \\text{ T}\\cdot\\text{m/A}" }, { "math_id": 15, "text": " B_1(x) = \\frac{\\mu_0 I R^2}{2(R^2+x^2)^{3/2}}=\\xi(x) \\frac{\\mu_0 I}{2R}." }, { "math_id": 16, "text": "\\mu_0\\;" }, { "math_id": 17, "text": " 4\\pi \\times 10^{-7} \\text{ T}\\cdot\\text{m/A} = 1.257 \\times 10^{-6} \\text{ T}\\cdot\\text{m/A}," }, { "math_id": 18, "text": "I\\;" }, { "math_id": 19, "text": "R\\;" }, { "math_id": 20, "text": "x\\;" }, { "math_id": 21, "text": "\\xi(x)=[1+(x/R)^2]^{-3/2}\\;" }, { "math_id": 22, "text": " B_1(x) = \\xi(x)\\frac{\\mu_0 n I}{2R}." }, { "math_id": 23, "text": "x\\ll R" }, { "math_id": 24, "text": "\\xi(x)=[1+(x/R)^2)]^{-3/2}\\;" }, { "math_id": 25, "text": "\\xi(x)=1-\\frac{3}{2}(x/R)^2+\\mathcal{O}((x/R)^4)." }, { "math_id": 26, "text": "\n\\begin{align}\nB(x) &= \\frac{\\mu_0 n I}{2R}\\left[\\xi(x-R/2)+\\xi(x+R/2)\\right] \\\\\n&=\\frac{\\mu_0 n I}{2R}\\left([1+(x/R-1/2)^2]^{-3/2}+[1+(x/R+1/2)^2]^{-3/2}\\right) \\\\\n\\end{align}" }, { "math_id": 27, "text": "\\xi(x-R/2)+\\xi(x+R/2)" }, { "math_id": 28, "text": "(16 \\sqrt 5)/25-(x/R)^4(2304 \\sqrt 5)/3125+\\mathcal O ((x/R)^6)\\approx 1.43-1.65(x/R)^4+\\mathcal O ((x/R)^6)" }, { "math_id": 29, "text": "\n\\begin{align}\nB(x) &= \\frac{\\mu_0 n I}{2R}\\left[\\xi(x-R/2)-\\xi(x+R/2)\\right] \\\\\n&=\\frac{\\mu_0 n I}{2R}\\left([1+(x/R-1/2)^2]^{-3/2}-[1+(x/R+1/2)^2]^{-3/2}\\right) \\\\\n\\end{align}" }, { "math_id": 30, "text": "\\xi(x-R/2)-\\xi(x+R/2)" }, { "math_id": 31, "text": "(x/R)(96 \\sqrt 5)/125-(x/R)^3(512 \\sqrt 5)/625+\\mathcal O ((x/R)^5)\\approx 1.72(x/R)-1.83(x/R)^3+\\mathcal O ((x/R)^5)" }, { "math_id": 32, "text": "I=\\left ( \\frac{5}{4} \\right )^{3/2}\\left ( \\frac{BR}{\\mu_0n} \\right )" }, { "math_id": 33, "text": "V=I\\sqrt{\\bigl[\\omega\\bigl(L_1+L_2\\bigr)\\bigr]^2+\\bigl(R_1+R_2\\bigr)^2}" }, { "math_id": 34, "text": "f_0" }, { "math_id": 35, "text": "f_0=\\frac{1}{2\\pi\\sqrt{\\left (L_1+L_2\\right )C}}" }, { "math_id": 36, "text": "C=\\frac{1}{\\left ( 2\\pi f_0\\right )^2\\left ( L_1 +L_2\\right )}" }, { "math_id": 37, "text": "\\sqrt{3}R" } ]
https://en.wikipedia.org/wiki?curid=1259318
12599267
Square root of 5
Positive real number which when multiplied by itself gives 5 The square root of 5 is the positive real number that, when multiplied by itself, gives the prime number 5. It is more precisely called the principal square root of 5, to distinguish it from the negative number with the same property. This number appears in the fractional expression for the golden ratio. It can be denoted in surd form as: formula_1 It is an irrational algebraic number. The first sixty significant digits of its decimal expansion are: (sequence in the OEIS). which can be rounded down to 2.236 to within 99.99% accuracy. The approximation 161/72 (≈ 2.23611) for the square root of five can be used. Despite having a denominator of only 72, it differs from the correct value by less than 1/10,000 (approx. 4.3 × 10^−5). As of January 2022, the numerical value in decimal of the square root of 5 has been computed to at least 2,250,000,000,000 digits. Rational approximations. The square root of 5 can be expressed as the continued fraction formula_2 (sequence in the OEIS) The successive partial evaluations of the continued fraction, which are called its "convergents", approach formula_0: formula_3 Their numerators are 2, 9, 38, 161, … (sequence in the OEIS), and their denominators are 1, 4, 17, 72, … (sequence in the OEIS). Each of these is a best rational approximation of formula_0; in other words, it is closer to formula_0 than any rational number with a smaller denominator. The convergents, expressed as "x"/"y", satisfy alternately the Pell's equations formula_4 When formula_0 is approximated with the Babylonian method, starting with "x"0 = 2 and using "x""n"+1 = ("x""n" + 5/"x""n")/2, the "n"th approximant "x""n" is equal to the 2"n"th convergent of the continued fraction: formula_5 The Babylonian method is equivalent to Newton's method for root finding applied to the polynomial formula_6. The Newton's method update, formula_7, is equal to formula_8 when formula_9. The method therefore converges quadratically. Relation to the golden ratio and Fibonacci numbers. The golden ratio φ is the arithmetic mean of 1 and formula_0. The algebraic relationship between formula_0, the golden ratio and the conjugate of the golden ratio (Φ = 1 − "φ") is expressed in the following formulae: formula_11 formula_0 then naturally figures in the closed form expression for the Fibonacci numbers, a formula which is usually written in terms of the golden ratio: formula_12 The quotient of formula_0 and "φ" (or the product of formula_0 and Φ), and its reciprocal, provide an interesting pattern of continued fractions and are related to the ratios between the Fibonacci numbers and the Lucas numbers: formula_13 The series of convergents to these values feature the series of Fibonacci numbers and the series of Lucas numbers as numerators and denominators, and vice versa, respectively: formula_14 In fact, the limit of the quotient of the formula_15 Lucas number formula_16 and the formula_15 Fibonacci number formula_17 is directly equal to the square root of formula_18: formula_19 Geometry. Geometrically, formula_0 corresponds to the diagonal of a rectangle whose sides are of length 1 and 2, as is evident from the Pythagorean theorem. Such a rectangle can be obtained by halving a square, or by placing two equal squares side by side. This can be used to subdivide a square grid into a tilted square grid with five times as many squares, forming the basis for a subdivision surface.
Together with the algebraic relationship between formula_0 and "φ", this forms the basis for the geometrical construction of a golden rectangle from a square, and for the construction of a regular pentagon given its side (since the diagonal-to-side ratio in a regular pentagon is "φ"). Since two adjacent faces of a cube would unfold into a 1:2 rectangle, the ratio between the length of the cube's edge and the shortest distance from one of its vertices to the opposite one, when traversing the cube "surface", is formula_0. By contrast, the shortest distance when traversing through the "inside" of the cube corresponds to the length of the cube diagonal, which is the square root of three times the edge. A rectangle with side proportions 1:formula_0 is called a "root-five rectangle" and is part of the series of root rectangles, a subset of dynamic rectangles, which are based on formula_20 (= 1), formula_21, formula_22, formula_23 (= 2), formula_0... and successively constructed using the diagonal of the previous root rectangle, starting from a square. A root-5 rectangle is particularly notable in that it can be split into a square and two equal golden rectangles (of dimensions ), or into two golden rectangles of different sizes (of dimensions and ). It can also be decomposed as the union of two equal golden rectangles (of dimensions ) whose intersection forms a square. All this can be seen as the geometric interpretation of the algebraic relationships between formula_0, "φ" and Φ mentioned above. The root-5 rectangle can be constructed from a 1:2 rectangle (the root-4 rectangle), or directly from a square in a manner similar to the one for the golden rectangle shown in the illustration, but extending the arc of length formula_10 to both sides. Trigonometry. Like formula_21 and formula_22, the square root of 5 appears extensively in the formulae for exact trigonometric constants, including in the sines and cosines of every angle whose measure in degrees is divisible by 3 but not by 15. The simplest of these are formula_24 As such, the computation of its value is important for generating trigonometric tables. Since formula_0 is geometrically linked to half-square rectangles and to pentagons, it also appears frequently in formulae for the geometric properties of figures derived from them, such as in the formula for the volume of a dodecahedron. Diophantine approximations. Hurwitz's theorem in Diophantine approximations states that every irrational number "x" can be approximated by infinitely many rational numbers "m"/"n" in lowest terms in such a way that formula_25 and that formula_0 is best possible, in the sense that for any larger constant than formula_0, there are some irrational numbers "x" for which only finitely many such approximations exist. Closely related to this is the theorem that of any three consecutive convergents "p""i"/"q""i", "p""i"+1/"q""i"+1, "p""i"+2/"q""i"+2 of a number "α", at least one of the three inequalities holds: formula_26 And the formula_0 in the denominator is the best bound possible since the convergents of the golden ratio make the difference on the left-hand side arbitrarily close to the value on the right-hand side. In particular, one cannot obtain a tighter bound by considering sequences of four or more consecutive convergents. Algebra. The ring formula_27 contains numbers of the form formula_28, where "a" and "b" are integers and formula_29 is the imaginary number formula_30. This ring is a frequently cited example of an integral domain that is not a unique factorization domain.
The number 6 has two inequivalent factorizations within this ring: formula_31 On the other hand, the real quadratic integer ring formula_32, adjoining the Golden ratio formula_33, was shown to be Euclidean, and hence a unique factorization domain, by Dedekind. The field formula_34 like any other quadratic field, is an abelian extension of the rational numbers. The Kronecker–Weber theorem therefore guarantees that the square root of five can be written as a rational linear combination of roots of unity: formula_35 Identities of Ramanujan. The square root of 5 appears in various identities discovered by Srinivasa Ramanujan involving continued fractions. For example, this case of the Rogers–Ramanujan continued fraction: formula_36 formula_37 formula_38 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
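A short Python sketch of the Babylonian iteration from the rational approximations section, using exact fractions, reproduces the convergents listed above; the function name is illustrative.

```python
from fractions import Fraction

def babylonian_sqrt5(steps):
    """Iterate x_{n+1} = (x_n + 5/x_n) / 2 starting from x_0 = 2.

    The n-th iterate is the 2^n-th convergent of [2; 4, 4, 4, ...].
    """
    x = Fraction(2)
    iterates = [x]
    for _ in range(steps):
        x = (x + 5 / x) / 2
        iterates.append(x)
    return iterates

for x in babylonian_sqrt5(3):
    print(x, float(x))
# Prints 2, 9/4, 161/72 and 51841/23184, matching the convergents listed above.
```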
[ { "math_id": 0, "text": "\\sqrt{5}" }, { "math_id": 1, "text": "\\sqrt{5}. \\, " }, { "math_id": 2, "text": " [2; 4, 4, 4, 4, 4,\\ldots] = 2 + \\cfrac 1 {4 + \\cfrac 1 {4 + \\cfrac 1 {4 + \\cfrac 1 {4 + {{} \\atop \\displaystyle\\ddots}}}}}. " }, { "math_id": 3, "text": "\\frac{2}{1}, \\frac{9}{4}, \\frac{38}{17}, \\frac{161}{72}, \\frac{682}{305}, \\frac{2889}{1292}, \\frac{12238}{5473}, \\frac{51841}{23184}, \\dots" }, { "math_id": 4, "text": "x^2 - 5y^2 = -1\\quad \\text{and} \\quad x^2 - 5y^2 = 1" }, { "math_id": 5, "text": "x_0 = 2.0,\\quad x_1 = \\frac{9}{4} = 2.25,\\quad x_2 = \\frac{161}{72} = 2.23611\\dots,\\quad x_3 = \\frac{51841}{23184} = 2.2360679779 \\ldots,\\quad x_4 = \\frac{5374978561}{2403763488} = 2.23606797749979 \\ldots" }, { "math_id": 6, "text": "x^2-5" }, { "math_id": 7, "text": "x_{n+1} = x_n - f(x_n)/f'(x_n)" }, { "math_id": 8, "text": "(x_n + 5/x_n)/2" }, { "math_id": 9, "text": "f(x) = x^2 - 5" }, { "math_id": 10, "text": "\\sqrt{5}/2" }, { "math_id": 11, "text": "\n\\begin{align}\n\\sqrt{5} & = \\varphi - \\Phi = 2\\varphi - 1 = 1 - 2\\Phi \\\\[5pt]\n\\varphi & = \\frac{1 + \\sqrt{5}}{2} \\\\[5pt]\n\\Phi & = \\frac{1 - \\sqrt{5}}{2}.\n\\end{align}\n" }, { "math_id": 12, "text": "F(n) = \\frac{\\varphi^n-(1-\\varphi)^n}{\\sqrt 5}." }, { "math_id": 13, "text": "\n\\begin{align}\n\\frac{\\sqrt{5}}{\\varphi} = \\Phi \\cdot \\sqrt{5} = \\frac{5 - \\sqrt{5}}{2} & = 1.3819660112501051518\\dots \\\\\n& = [1; 2, 1, 1, 1, 1, 1, 1, 1, \\ldots] \\\\[5pt]\n\\frac{\\varphi}{\\sqrt{5}} = \\frac{1}{\\Phi \\cdot \\sqrt{5}} = \\frac{5 + \\sqrt{5}}{10} & = 0.72360679774997896964\\ldots \\\\\n& = [0; 1, 2, 1, 1, 1, 1, 1, 1, \\ldots].\n\\end{align}\n" }, { "math_id": 14, "text": "\n\\begin{align}\n& {1, \\frac{3}{2}, \\frac{4}{3}, \\frac{7}{5}, \\frac{11}{8}, \\frac{18}{13}, \\frac{29}{21}, \\frac{47}{34}, \\frac{76}{55}, \\frac{123}{89}}, \\ldots \\ldots [1; 2, 1, 1, 1, 1, 1, 1, 1, \\ldots] \\\\[8pt]\n& {1, \\frac{2}{3}, \\frac{3}{4}, \\frac{5}{7}, \\frac{8}{11}, \\frac{13}{18}, \\frac{21}{29}, \\frac{34}{47}, \\frac{55}{76}, \\frac{89}{123}}, \\dots \\dots [0; 1, 2, 1, 1, 1, 1, 1, 1,\\dots].\n\\end{align}\n" }, { "math_id": 15, "text": "n^{th}" }, { "math_id": 16, "text": "L_n" }, { "math_id": 17, "text": "F_n" }, { "math_id": 18, "text": "5" }, { "math_id": 19, "text": "\\lim_{n\\to\\infty} \\frac{L_n}{F_n}=\\sqrt{5}." }, { "math_id": 20, "text": "\\sqrt{1}" }, { "math_id": 21, "text": "\\sqrt{2}" }, { "math_id": 22, "text": "\\sqrt{3}" }, { "math_id": 23, "text": "\\sqrt{4}" }, { "math_id": 24, "text": "\\begin{align}\n\\sin\\frac{\\pi}{10} = \\sin 18^\\circ &= \\tfrac{1}{4}(\\sqrt5-1) = \\frac{1}{\\sqrt5+1}, \\\\[5pt]\n\\sin\\frac{\\pi}{5} = \\sin 36^\\circ &= \\tfrac{1}{4}\\sqrt{2(5-\\sqrt5)}, \\\\[5pt]\n\\sin\\frac{3\\pi}{10} = \\sin 54^\\circ &= \\tfrac{1}{4}(\\sqrt5+1) = \\frac{1}{\\sqrt5-1}, \\\\[5pt]\n\\sin\\frac{2\\pi}{5} = \\sin 72^\\circ &= \\tfrac{1}{4}\\sqrt{2(5+\\sqrt5)}\\, . \\end{align}" }, { "math_id": 25, "text": " \\left|x - \\frac{m}{n}\\right| < \\frac{1}{\\sqrt{5}\\,n^2} " }, { "math_id": 26, "text": "\\left|\\alpha - {p_i\\over q_i}\\right| < {1\\over \\sqrt5 q_i^2}, \\quad\n\\left|\\alpha - {p_{i+1}\\over q_{i+1}}\\right| < {1\\over \\sqrt5 q_{i+1}^2}, \\quad\n\\left|\\alpha - {p_{i+2}\\over q_{i+2}}\\right| < {1\\over \\sqrt5 q_{i+2}^2}." 
}, { "math_id": 27, "text": "\\mathbb{Z}[\\sqrt{-5}]" }, { "math_id": 28, "text": "a + b\\sqrt{-5}" }, { "math_id": 29, "text": "\\sqrt{-5}" }, { "math_id": 30, "text": "i\\sqrt{5}" }, { "math_id": 31, "text": "6 = 2 \\cdot 3 = (1 - \\sqrt{-5})(1 + \\sqrt{-5}). \\, " }, { "math_id": 32, "text": "\\Z[\\tfrac{\\sqrt{5}+1}2]" }, { "math_id": 33, "text": "\\phi = \\tfrac{\\sqrt{5}+1}2" }, { "math_id": 34, "text": "\\mathbb{Q}[\\sqrt{-5}]," }, { "math_id": 35, "text": "\\sqrt5 = e^{\\frac{2\\pi}{5}i} - e^{\\frac{4\\pi}{5}i} - e^{\\frac{6\\pi}{5}i} + e^{\\frac{8\\pi}{5}i}. \\, " }, { "math_id": 36, "text": "\\cfrac{1}{1 + \\cfrac{e^{-2\\pi}}{1 + \\cfrac{e^{-4\\pi}}{1 + \\cfrac{e^{-6\\pi}}{1 + { {} \\atop \\displaystyle \\ddots}}}}}\n= \\left( \\sqrt{\\frac{5 + \\sqrt{5}}{2}} - \\frac{\\sqrt{5} + 1}{2} \\right)e^{\\frac{2\\pi}{5}} = e^{\\frac{2\\pi}{5}}\\left( \\sqrt{\\varphi\\sqrt{5}} - \\varphi \\right)." }, { "math_id": 37, "text": "\\cfrac{1}{1 + \\cfrac{e^{-2\\pi\\sqrt{5}}}{1 + \\cfrac{e^{-4\\pi\\sqrt{5}}}{1 + \\cfrac{e^{-6\\pi\\sqrt{5}}}{1 + { {} \\atop \\displaystyle \\ddots}}}}}\n= \\left( {\\sqrt{5} \\over 1 + \\sqrt[5]{5^{\\frac34}(\\varphi - 1)^{\\frac52} - 1}} - \\varphi \\right)e^{\\frac{2\\pi}{\\sqrt{5}}}." }, { "math_id": 38, "text": "4\\int_0^\\infty\\frac{xe^{-x\\sqrt{5}}}{\\cosh x}\\,dx\n= \\cfrac{1}{1 + \\cfrac{1^2}{1 + \\cfrac{1^2}{1 + \\cfrac{2^2}{1 + \\cfrac{2^2}{1 + \\cfrac{3^2}{1 + \\cfrac{3^2}{1 + {{} \\atop \\displaystyle \\ddots} }}}}}}} \\, ." } ]
https://en.wikipedia.org/wiki?curid=12599267
12599909
Oloid
Three-dimensional curved geometric object An oloid is a three-dimensional curved geometric object that was discovered by Paul Schatz in 1929. It is the convex hull of a skeletal frame made by placing two linked congruent circles in perpendicular planes, so that the center of each circle lies on the edge of the other circle. The distance between the circle centers equals the radius of the circles. One third of each circle's perimeter lies inside the convex hull, so the same shape may be also formed as the convex hull of the two remaining circular arcs each spanning an angle of 4π/3. Surface area and volume. The surface area of an oloid is given by: formula_0 exactly the same as the surface area of a sphere with the same radius. In closed form, the enclosed volume is formula_1, where formula_2 and formula_3 denote the complete elliptic integrals of the first and second kind respectively. A numerical calculation gives formula_4. Kinetics. The surface of the oloid is a developable surface, meaning that patches of the surface can be flattened into a plane. While rolling, it develops its entire surface: every point of the surface of the oloid touches the plane on which it is rolling, at some point during the rolling movement. Unlike most axial symmetric objects (cylinder, sphere etc.), while rolling on a flat surface, its center of mass performs a meander motion rather than a linear one. In each rolling cycle, the distance between the oloid's center of mass and the rolling surface has two minima and two maxima. The difference between the maximum and the minimum height is given by formula_5, where formula_6 is the oloid's circular arcs radius. Since this difference is fairly small, the oloid's rolling motion is relatively smooth. At each point during this rolling motion, the oloid touches the plane in a line segment. The length of this segment stays unchanged throughout the motion, and is given by: formula_7. Related shapes. The sphericon is the convex hull of two semicircles on perpendicular planes, with centers at a single point. Its surface consists of the pieces of four cones. It resembles the oloid in shape and, like it, is a developable surface that can be developed by rolling. However, its equator is a square with four sharp corners, unlike the oloid which does not have sharp corners. Another object called the two circle roller is defined from two perpendicular circles for which the distance between their centers is √2 times their radius, farther apart than the oloid. It can either be formed (like the oloid) as the convex hull of the circles, or by using only the two disks bounded by the two circles. Unlike the oloid its center of gravity stays at a constant distance from the floor, so it rolls more smoothly than the oloid. In popular culture. In 1979, modern dancer Alan Boeding designed his "Circle Walker" sculpture from two crosswise semicircles, forming a skeletal version of the sphericon, a shape with a similar rolling motion to the oloid. He began dancing with a scaled-up version of the sculpture in 1980 as part of an MFA program in sculpture at Indiana University, and after he joined the MOMIX dance company in 1984 the piece became incorporated into the company's performances. The company's later piece "Dream Catcher" is based around another Boeding sculpture whose linked teardrop shapes incorporate the skeleton and rolling motion of the oloid. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Literature. Tobias Langscheid, Tilo Richter (Ed.): Oloid – Form of the Future. 
With contributions by Dirk Böttcher, Andreas Chiquet, Heinrich Frontzek a.o., niggli Verlag 2023, ISBN 978-3-7212-1025-5
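The closed-form volume above can be evaluated numerically with standard routines for the complete elliptic integrals. The following Python/SciPy sketch assumes that the K(3/4) and E(3/4) in the formula use the parameter convention m = 3/4 (the same convention as SciPy's ellipk and ellipe), which reproduces the numerical value quoted above.

```python
import math
from scipy.special import ellipk, ellipe   # complete elliptic integrals, parameter m

def oloid_surface_area(r):
    """Surface area of an oloid: 4*pi*r^2, identical to a sphere of the same radius."""
    return 4 * math.pi * r ** 2

def oloid_volume(r):
    """Volume of an oloid: (2/3) * (2*E(3/4) + K(3/4)) * r^3."""
    m = 0.75
    return (2.0 / 3.0) * (2 * ellipe(m) + ellipk(m)) * r ** 3

print(oloid_volume(1.0))        # approximately 3.0524184684
print(oloid_surface_area(1.0))
```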
[ { "math_id": 0, "text": "A = 4\\pi r^2" }, { "math_id": 1, "text": "V = \\frac{2}{3} \\left(2 E\\left(\\frac{3}{4}\\right) + K\\left(\\frac{3}{4}\\right)\\right)r^{3}" }, { "math_id": 2, "text": "K" }, { "math_id": 3, "text": "E" }, { "math_id": 4, "text": "V \\approx 3.0524184684r^{3}" }, { "math_id": 5, "text": "\\Delta h=r\\left(\\frac{\\sqrt{2}}{2}-{3}\\frac{\\sqrt{3}}{8}\\right)\\approx 0.0576r" }, { "math_id": 6, "text": "r" }, { "math_id": 7, "text": "l = \\sqrt{3} r" } ]
https://en.wikipedia.org/wiki?curid=12599909
1260
Advanced Encryption Standard
Standard for the encryption of electronic data The Advanced Encryption Standard (AES), also known by its original name Rijndael (), is a specification for the encryption of electronic data established by the U.S. National Institute of Standards and Technology (NIST) in 2001. AES is a variant of the Rijndael block cipher developed by two Belgian cryptographers, Joan Daemen and Vincent Rijmen, who submitted a proposal to NIST during the AES selection process. Rijndael is a family of ciphers with different key and block sizes. For AES, NIST selected three members of the Rijndael family, each with a block size of 128 bits, but three different key lengths: 128, 192 and 256 bits. AES has been adopted by the U.S. government. It supersedes the Data Encryption Standard (DES), which was published in 1977. The algorithm described by AES is a symmetric-key algorithm, meaning the same key is used for both encrypting and decrypting the data. In the United States, AES was announced by the NIST as U.S. FIPS PUB 197 (FIPS 197) on November 26, 2001. This announcement followed a five-year standardization process in which fifteen competing designs were presented and evaluated, before the Rijndael cipher was selected as the most suitable. AES is included in the ISO/IEC 18033-3 standard. AES became effective as a U.S. federal government standard on May 26, 2002, after approval by U.S. Secretary of Commerce Donald Evans. AES is available in many different encryption packages, and is the first (and only) publicly accessible cipher approved by the U.S. National Security Agency (NSA) for top secret information when used in an NSA approved cryptographic module. Definitive standards. The Advanced Encryption Standard (AES) is defined in each of: Description of the ciphers. AES is based on a design principle known as a substitution–permutation network, and is efficient in both software and hardware. Unlike its predecessor DES, AES does not use a Feistel network. AES is a variant of Rijndael, with a fixed block size of 128 bits, and a key size of 128, 192, or 256 bits. By contrast, Rijndael "per se" is specified with block and key sizes that may be any multiple of 32 bits, with a minimum of 128 and a maximum of 256 bits. Most AES calculations are done in a particular finite field. AES operates on a 4 × 4 column-major order array of 16 bytes termed the "state": formula_0 The key size used for an AES cipher specifies the number of transformation rounds that convert the input, called the plaintext, into the final output, called the ciphertext. The number of rounds are as follows: Each round consists of several processing steps, including one that depends on the encryption key itself. A set of reverse rounds are applied to transform ciphertext back into the original plaintext using the same encryption key. The SubBytes step. In the SubBytes step, each byte formula_1 in the "state" array is replaced with a SubByte formula_2 using an 8-bit substitution box. Before round 0, the "state" array is simply the plaintext/input. This operation provides the non-linearity in the cipher. The S-box used is derived from the multiplicative inverse over GF(28), known to have good non-linearity properties. To avoid attacks based on simple algebraic properties, the S-box is constructed by combining the inverse function with an invertible affine transformation. The S-box is also chosen to avoid any fixed points (and so is a derangement), i.e., formula_3, and also any opposite fixed points, i.e., formula_4. 
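The S-box construction described above (a multiplicative inverse in GF(2^8) followed by an affine transformation) can be written out in a few lines. The following Python sketch is purely illustrative and not an optimized or side-channel-hardened implementation; real AES code normally uses a precomputed 256-entry table.

```python
def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) modulo the AES polynomial x^8 + x^4 + x^3 + x + 1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B        # reduce by the AES polynomial (0x11B truncated to 8 bits)
        b >>= 1
    return p

def gf_inverse(a):
    """Multiplicative inverse in GF(2^8); zero is mapped to zero by convention."""
    if a == 0:
        return 0
    result = 1
    for _ in range(254):     # a^254 equals a^(-1), since the multiplicative group has order 255
        result = gf_mul(result, a)
    return result

def sbox(x):
    """One AES S-box entry: inverse in GF(2^8), then the invertible affine transformation."""
    b = gf_inverse(x)
    rotl = lambda v, n: ((v << n) | (v >> (8 - n))) & 0xFF
    return b ^ rotl(b, 1) ^ rotl(b, 2) ^ rotl(b, 3) ^ rotl(b, 4) ^ 0x63

print(hex(sbox(0x00)), hex(sbox(0x53)))   # 0x63 0xed
```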
While performing the decryption, the InvSubBytes step (the inverse of SubBytes) is used, which requires first taking the inverse of the affine transformation and then finding the multiplicative inverse. The ShiftRows step. The ShiftRows step operates on the rows of the state; it cyclically shifts the bytes in each row by a certain offset. For AES, the first row is left unchanged. Each byte of the second row is shifted one to the left. Similarly, the third and fourth rows are shifted by offsets of two and three respectively. In this way, each column of the output state of the ShiftRows step is composed of bytes from each column of the input state. The importance of this step is to avoid the columns being encrypted independently, in which case AES would degenerate into four independent block ciphers. The MixColumns step. In the MixColumns step, the four bytes of each column of the state are combined using an invertible linear transformation. The MixColumns function takes four bytes as input and outputs four bytes, where each input byte affects all four output bytes. Together with ShiftRows, MixColumns provides diffusion in the cipher. During this operation, each column is transformed using a fixed matrix (matrix left-multiplied by column gives new value of column in the state): formula_5 Matrix multiplication is composed of multiplication and addition of the entries. Entries are bytes treated as coefficients of a polynomial of order formula_6. Addition is simply XOR. Multiplication is done modulo the irreducible polynomial formula_7. If processed bit by bit, then, after shifting, a conditional XOR with 1B₁₆ should be performed if the shifted value is larger than FF₁₆ (overflow must be corrected by subtraction of the generating polynomial). These are special cases of the usual multiplication in formula_8. In a more general sense, each column is treated as a polynomial over formula_8 and is then multiplied modulo formula_9 with a fixed polynomial formula_10. The coefficients are displayed in their hexadecimal equivalent of the binary representation of bit polynomials from formula_11. The MixColumns step can also be viewed as a multiplication by the shown particular MDS matrix in the finite field formula_8. This process is described further in the article Rijndael MixColumns. The AddRoundKey step. In the AddRoundKey step, the subkey is combined with the state. For each round, a subkey is derived from the main key using Rijndael's key schedule; each subkey is the same size as the state. The subkey is added by combining each byte of the state with the corresponding byte of the subkey using bitwise XOR. Optimization of the cipher. On systems with 32-bit or larger words, it is possible to speed up execution of this cipher by combining the SubBytes and ShiftRows steps with the MixColumns step by transforming them into a sequence of table lookups. This requires four 256-entry 32-bit tables (together occupying 4096 bytes). A round can then be performed with 16 table lookup operations and 12 32-bit exclusive-or operations, followed by four 32-bit exclusive-or operations in the AddRoundKey step. Alternatively, the table lookup operation can be performed with a single 256-entry 32-bit table (occupying 1024 bytes) followed by circular rotation operations. Using a byte-oriented approach, it is possible to combine the SubBytes, ShiftRows, and MixColumns steps into a single round operation. Security.
The National Security Agency (NSA) reviewed all the AES finalists, including Rijndael, and stated that all of them were secure enough for U.S. Government non-classified data. In June 2003, the U.S. Government announced that AES could be used to protect classified information: The design and strength of all key lengths of the AES algorithm (i.e., 128, 192 and 256) are sufficient to protect classified information up to the SECRET level. TOP SECRET information will require use of either the 192 or 256 key lengths. The implementation of AES in products intended to protect national security systems and/or information must be reviewed and certified by NSA prior to their acquisition and use. AES has 10 rounds for 128-bit keys, 12 rounds for 192-bit keys, and 14 rounds for 256-bit keys. By 2006, the best known attacks were on 7 rounds for 128-bit keys, 8 rounds for 192-bit keys, and 9 rounds for 256-bit keys. Known attacks. For cryptographers, a cryptographic "break" is anything faster than a brute-force attack—i.e., performing one trial decryption for each possible key in sequence. A break can thus include results that are infeasible with current technology. Despite being impractical, theoretical breaks can sometimes provide insight into vulnerability patterns. The largest successful publicly known brute-force attack against a widely implemented block-cipher encryption algorithm was against a 64-bit RC5 key by distributed.net in 2006. The key space increases by a factor of 2 for each additional bit of key length, and if every possible value of the key is equiprobable, this translates into a doubling of the average brute-force key search time with every additional bit of key length. This implies that the effort of a brute-force search increases exponentially with key length. Key length in itself does not imply security against attacks, since there are ciphers with very long keys that have been found to be vulnerable. AES has a fairly simple algebraic framework. In 2002, a theoretical attack, named the "XSL attack", was announced by Nicolas Courtois and Josef Pieprzyk, purporting to show a weakness in the AES algorithm, partially due to the low complexity of its nonlinear components. Since then, other papers have shown that the attack, as originally presented, is unworkable; see XSL attack on block ciphers. During the AES selection process, developers of competing algorithms wrote of Rijndael's algorithm "we are concerned about [its] use ... in security-critical applications." In October 2000, however, at the end of the AES selection process, Bruce Schneier, a developer of the competing algorithm Twofish, wrote that while he thought successful academic attacks on Rijndael would be developed someday, he "did not believe that anyone will ever discover an attack that will allow someone to read Rijndael traffic." Until May 2009, the only successful published attacks against the full AES were side-channel attacks on some specific implementations. In 2009, a new related-key attack was discovered that exploits the simplicity of AES's key schedule and has a complexity of 2^119. In December 2009 it was improved to 2^99.5. This is a follow-up to an attack discovered earlier in 2009 by Alex Biryukov, Dmitry Khovratovich, and Ivica Nikolić, with a complexity of 2^96 for one out of every 2^35 keys.
However, related-key attacks are not of concern in any properly designed cryptographic protocol, as a properly designed protocol (i.e., implementational software) will take care not to allow related keys, essentially by constraining an attacker's means of selecting keys for relatedness. Another attack was blogged by Bruce Schneier on July 30, 2009, and released as a preprint on August 3, 2009. This new attack, by Alex Biryukov, Orr Dunkelman, Nathan Keller, Dmitry Khovratovich, and Adi Shamir, is against AES-256 that uses only two related keys and 2^39 time to recover the complete 256-bit key of a 9-round version, or 2^45 time for a 10-round version with a stronger type of related subkey attack, or 2^70 time for an 11-round version. 256-bit AES uses 14 rounds, so these attacks are not effective against full AES. The practicality of these attacks with stronger related keys has been criticized, for instance, by the paper on chosen-key-relations-in-the-middle attacks on AES-128 authored by Vincent Rijmen in 2010. In November 2009, the first known-key distinguishing attack against a reduced 8-round version of AES-128 was released as a preprint. This known-key distinguishing attack is an improvement of the rebound, or the start-from-the-middle attack, against AES-like permutations, which view two consecutive rounds of permutation as the application of a so-called Super-S-box. It works on the 8-round version of AES-128, with a time complexity of 2^48, and a memory complexity of 2^32. 128-bit AES uses 10 rounds, so this attack is not effective against full AES-128. The first key-recovery attacks on full AES were by Andrey Bogdanov, Dmitry Khovratovich, and Christian Rechberger, and were published in 2011. The attack is a biclique attack and is faster than brute force by a factor of about four. It requires 2^126.2 operations to recover an AES-128 key. For AES-192 and AES-256, 2^190.2 and 2^254.6 operations are needed, respectively. This result has been further improved to 2^126.0 for AES-128, 2^189.9 for AES-192 and 2^254.3 for AES-256, which are the current best results in key recovery attack against AES. This is a very small gain, as a 126-bit key (instead of 128 bits) would still take billions of years to brute force on current and foreseeable hardware. Also, the authors calculate the best attack using their technique on AES with a 128-bit key requires storing 2^88 bits of data. That works out to about 38 trillion terabytes of data, which was more than all the data stored on all the computers on the planet in 2016. A paper in 2015 later improved the space complexity to 2^56 bits, which is 9007 terabytes (while still keeping a time complexity of 2^126.2). According to the Snowden documents, the NSA is doing research on whether a cryptographic attack based on tau statistic may help to break AES. At present, there is no known practical attack that would allow someone without knowledge of the key to read data encrypted by AES when correctly implemented. Side-channel attacks. Side-channel attacks do not attack the cipher as a black box, and thus are not related to cipher security as defined in the classical context, but are important in practice. They attack implementations of the cipher on hardware or software systems that inadvertently leak data. There are several such known attacks on various implementations of AES. In April 2005, D. J. Bernstein announced a cache-timing attack that he used to break a custom server that used OpenSSL's AES encryption. The attack required over 200 million chosen plaintexts.
The custom server was designed to give out as much timing information as possible (the server reports back the number of machine cycles taken by the encryption operation). However, as Bernstein pointed out, "reducing the precision of the server's timestamps, or eliminating them from the server's responses, does not stop the attack: the client simply uses round-trip timings based on its local clock, and compensates for the increased noise by averaging over a larger number of samples." In October 2005, Dag Arne Osvik, Adi Shamir and Eran Tromer presented a paper demonstrating several cache-timing attacks against the implementations of AES found in OpenSSL and Linux's codice_0 partition encryption function. One attack was able to obtain an entire AES key after only 800 operations triggering encryptions, in a total of 65 milliseconds. This attack requires the attacker to be able to run programs on the same system or platform that is performing AES. In December 2009 an attack on some hardware implementations was published that used differential fault analysis and allows recovery of a key with a complexity of 2^32. In November 2010 Endre Bangerter, David Gullasch and Stephan Krenn published a paper which described a practical approach to a "near real time" recovery of secret keys from AES-128 without the need for either cipher text or plaintext. The approach also works on AES-128 implementations that use compression tables, such as OpenSSL. Like some earlier attacks, this one requires the ability to run unprivileged code on the system performing the AES encryption, which may be achieved by malware infection far more easily than commandeering the root account. In March 2016, Ashokkumar C., Ravi Prakash Giri and Bernard Menezes presented a side-channel attack on AES implementations that can recover the complete 128-bit AES key in just 6–7 blocks of plaintext/ciphertext, which is a substantial improvement over previous works that require between 100 and a million encryptions. The proposed attack requires standard user privilege and key-retrieval algorithms run under a minute. Many modern CPUs have built-in hardware instructions for AES, which protect against timing-related side-channel attacks. Quantum attacks. AES-256 is considered to be quantum resistant, as it has similar quantum resistance to AES-128's resistance against traditional, non-quantum, attacks at 128 bits of security. AES-192 and AES-128 are not considered quantum resistant due to their smaller key sizes. AES-192 has a strength of 96 bits against quantum attacks and AES-128 has 64 bits of strength against quantum attacks, making them both insecure. NIST/CSEC validation. The Cryptographic Module Validation Program (CMVP) is operated jointly by the United States Government's National Institute of Standards and Technology (NIST) Computer Security Division and the Communications Security Establishment (CSE) of the Government of Canada. The use of cryptographic modules validated to NIST FIPS 140-2 is required by the United States Government for encryption of all data that has a classification of Sensitive but Unclassified (SBU) or above. From NSTISSP #11, National Policy Governing the Acquisition of Information Assurance: "Encryption products for protecting classified information will be certified by NSA, and encryption products intended for protecting sensitive information will be certified in accordance with NIST FIPS 140-2."
The Government of Canada also recommends the use of FIPS 140 validated cryptographic modules in unclassified applications of its departments. Although NIST publication 197 ("FIPS 197") is the unique document that covers the AES algorithm, vendors typically approach the CMVP under FIPS 140 and ask to have several algorithms (such as Triple DES or SHA1) validated at the same time. Therefore, it is rare to find cryptographic modules that are uniquely FIPS 197 validated and NIST itself does not generally take the time to list FIPS 197 validated modules separately on its public web site. Instead, FIPS 197 validation is typically just listed as an "FIPS approved: AES" notation (with a specific FIPS 197 certificate number) in the current list of FIPS 140 validated cryptographic modules. The Cryptographic Algorithm Validation Program (CAVP) allows for independent validation of the correct implementation of the AES algorithm. Successful validation results in being listed on the NIST validations page. This testing is a pre-requisite for the FIPS 140-2 module validation. However, successful CAVP validation in no way implies that the cryptographic module implementing the algorithm is secure. A cryptographic module lacking FIPS 140-2 validation or specific approval by the NSA is not deemed secure by the US Government and cannot be used to protect government data. FIPS 140-2 validation is challenging to achieve both technically and fiscally. There is a standardized battery of tests as well as an element of source code review that must be passed over a period of a few weeks. The cost to perform these tests through an approved laboratory can be significant (e.g., well over $30,000 US) and does not include the time it takes to write, test, document and prepare a module for validation. After validation, modules must be re-submitted and re-evaluated if they are changed in any way. This can vary from simple paperwork updates if the security functionality did not change to a more substantial set of re-testing if the security functionality was impacted by the change. Test vectors. Test vectors are a set of known ciphers for a given input and key. NIST distributes the reference of AES test vectors as AES Known Answer Test (KAT) Vectors. Performance. High speed and low RAM requirements were some of the criteria of the AES selection process. As the chosen algorithm, AES performed well on a wide variety of hardware, from 8-bit smart cards to high-performance computers. On a Pentium Pro, AES encryption requires 18 clock cycles per byte (cpb), equivalent to a throughput of about 11 MiB/s for a 200 MHz processor. On Intel Core and AMD Ryzen CPUs supporting AES-NI instruction set extensions, throughput can be multiple GiB/s. On a Intel Westmere CPU, AES encryption using AES-NI takes about 1.3 cpb for AES-128 and 1.8 cpb for AES-256. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\n\\begin{bmatrix}\nb_0 & b_4 & b_8 & b_{12} \\\\\nb_1 & b_5 & b_9 & b_{13} \\\\\nb_2 & b_6 & b_{10} & b_{14} \\\\\nb_3 & b_7 & b_{11} & b_{15}\n\\end{bmatrix}\n" }, { "math_id": 1, "text": "a_{i,j}" }, { "math_id": 2, "text": "S(a_{i,j})" }, { "math_id": 3, "text": " S(a_{i,j}) \\neq a_{i,j} " }, { "math_id": 4, "text": " S(a_{i,j}) \\oplus a_{i,j} \\neq \\text{FF}_{16} " }, { "math_id": 5, "text": "\n\\begin{bmatrix}\nb_{0,j} \\\\ b_{1,j} \\\\ b_{2,j} \\\\ b_{3,j}\n\\end{bmatrix} = \\begin{bmatrix}\n2 & 3 & 1 & 1 \\\\\n1 & 2 & 3 & 1 \\\\\n1 & 1 & 2 & 3 \\\\\n3 & 1 & 1 & 2\n\\end{bmatrix} \\begin{bmatrix}\na_{0,j} \\\\ a_{1,j} \\\\ a_{2,j} \\\\ a_{3,j}\n\\end{bmatrix}\n\\qquad 0 \\le j \\le 3\n" }, { "math_id": 6, "text": "x^7" }, { "math_id": 7, "text": "x^8+x^4+x^3+x+1" }, { "math_id": 8, "text": "\\operatorname{GF}(2^8)" }, { "math_id": 9, "text": "{01}_{16} \\cdot z^4+{01}_{16}" }, { "math_id": 10, "text": "c(z) = {03}_{16} \\cdot z^3 + {01}_{16} \\cdot z^2 +{01}_{16} \\cdot z + {02}_{16}" }, { "math_id": 11, "text": "\\operatorname{GF}(2)[x]" } ]
https://en.wikipedia.org/wiki?curid=1260
12600251
Erdős–Anning theorem
On sets of points with integer distances The Erdős–Anning theorem states that, whenever an infinite number of points in the plane all have integer distances, the points lie on a straight line. The same result holds in higher dimensional Euclidean spaces. The theorem cannot be strengthened to give a finite bound on the number of points: there exist arbitrarily large finite sets of points that are not on a line and have integer distances. The theorem is named after Paul Erdős and Norman H. Anning, who published a proof of it in 1945. Erdős later supplied a simpler proof, which can also be used to check whether a point set forms an Erdős–Diophantine graph, an inextensible system of integer points with integer distances. The Erdős–Anning theorem inspired the Erdős–Ulam problem on the existence of dense point sets with rational distances. Rationality versus integrality. Although there can be no infinite non-collinear set of points with integer distances, there are infinite non-collinear sets of points whose distances are rational numbers. For instance, the subset of points on a unit circle obtained as the even multiples of one of the acute angles of an integer-sided right triangle (such as the triangle with side lengths 3, 4, and 5) has this property. This construction forms a dense set in the circle. The (still unsolved) Erdős–Ulam problem asks whether there can exist a set of points at rational distances from each other that forms a dense set for the whole Euclidean plane. According to Erdős, Stanisław Ulam was inspired to ask this question after hearing from Erdős about the Erdős–Anning theorem. For any finite set "S" of points at rational distances from each other, it is possible to find a similar set of points at integer distances from each other, by expanding "S" by a factor of the least common denominator of the distances in "S". By expanding in this way a finite subset of the unit circle construction, one can construct arbitrarily large finite sets of non-collinear points with integer distances from each other. However, including more points into "S" may cause the expansion factor to increase, so this construction does not allow infinite sets of points at rational distances to be transformed into infinite sets of points at integer distances. Proof. Shortly after the original publication of the Erdős–Anning theorem, Erdős provided the following simpler proof. The proof assumes a given set of points with integer distances, not all on a line. It then proves that this set must be finite, using a system of curves for which each point of the given set lies on a crossing of two of the curves. In more detail, it consists of the following steps: Choose a triangle formula_0 formed by three non-collinear points of the given set, and let formula_1 denote Euclidean distance. For any other point formula_2 of the set, the triangle inequality implies that the difference formula_3 is a non-negative integer that is at most formula_4. Therefore, formula_2 lies on one of the curves of the form formula_5, where either formula_6 or formula_7; each such curve is a hyperbola with foci formula_8 and formula_9, or a degenerate form of such a hyperbola (a line or a pair of rays). By the same argument, formula_2 also lies on one of the formula_10 curves of the form formula_11 with formula_12, again hyperbolas or degenerate hyperbolas, this time with foci formula_9 and "C". Because the three chosen points are not collinear, a curve from the first family and a curve from the second family are always distinct conics, and two distinct conics intersect in at most four points. Every point of the given set lies on one curve from each family, so the set can contain at most formula_13 points. The same proof shows that, when the diameter of a set of points with integer distances is formula_14, there are at most formula_15 points. The quadratic dependence of this bound on formula_14 can be improved, using a similar proof but with a more careful choice of the pairs of points used to define families of hyperbolas: every point set with integer distances and diameter formula_14 has size formula_16, where the formula_17 uses big O notation. However, it is not possible to replace formula_14 by the minimum distance between the points: there exist arbitrarily large non-collinear point sets with integer distances and with minimum distance two. Maximal point sets with integral distances.
An alternative way of stating the theorem is that a non-collinear set of points in the plane with integer distances can only be extended by adding finitely many additional points, before no more points can be added. A set of points with both integer coordinates and integer distances, to which no more can be added while preserving both properties, forms an Erdős–Diophantine graph. The proof of the Erdős–Anning theorem can be used in an algorithm to check whether a given set of integer points with integer distances forms an Erdős–Diophantine graph: merely find all of the crossing points of the hyperbolas used in the proof, and check whether any of the resulting points also have integer coordinates and integer distances from the given set. Higher dimensions. As Anning and Erdős wrote in their original paper on this theorem, "by a similar argument we can show that we cannot have infinitely many points in formula_18-dimensional space not all on a line, with all the distances being integral." References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "ABC" }, { "math_id": 1, "text": "d" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "|d(A,X)-d(B,X)|" }, { "math_id": 4, "text": "d(A,B)" }, { "math_id": 5, "text": "|d(A,X)-d(B,X)|=i" }, { "math_id": 6, "text": "0\\le i< d(A,B)" }, { "math_id": 7, "text": "i=d(A,B)" }, { "math_id": 8, "text": "A" }, { "math_id": 9, "text": "B" }, { "math_id": 10, "text": "d(B,C)+1" }, { "math_id": 11, "text": "|d(B,X)-d(C,X)|=j" }, { "math_id": 12, "text": "0\\le j\\le d(B,C)" }, { "math_id": 13, "text": "4\\bigl(d(A,B)+1\\bigr)\\bigl(d(B,C)+1\\bigr)" }, { "math_id": 14, "text": "\\delta" }, { "math_id": 15, "text": "4(\\delta+1)^2" }, { "math_id": 16, "text": "O(\\delta)" }, { "math_id": 17, "text": "O(\\,)" }, { "math_id": 18, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=12600251
12600886
Erdős–Diophantine graph
Complete graph on the integer plane which cannot be expanded An Erdős–Diophantine graph is an object in the mathematical subject of Diophantine equations consisting of a set of integer points at integer distances in the plane that cannot be extended by any additional points. Equivalently, in geometric graph theory, it can be described as a complete graph with vertices located on the integer square grid formula_0 such that all mutual distances between the vertices are integers, while all other grid points have a non-integer distance to at least one vertex. Erdős–Diophantine graphs are named after Paul Erdős and Diophantus of Alexandria. They form a subset of the set of Diophantine figures, which are defined as complete graphs in the Diophantine plane for which the length of all edges are integers (unit distance graphs). Thus, Erdős–Diophantine graphs are exactly the Diophantine figures that cannot be extended. The existence of Erdős–Diophantine graphs follows from the Erdős–Anning theorem, according to which infinite Diophantine figures must be collinear in the Diophantine plane. Hence, any process of extending a non-collinear Diophantine figure by adding vertices must eventually reach a figure that can no longer be extended. Examples. Any set of zero or one point can be trivially extended, and any Diophantine set of two points can be extended by more points on the same line. Therefore, all Diophantine sets with fewer than three nodes can be extended, so Erdős–Diophantine graphs on fewer than three nodes cannot exist. By numerical search, it has been shown that three-node Erdős–Diophantine graphs do exist. The smallest Erdős–Diophantine triangle is characterised by edge lengths 2066, 1803, and 505. The next larger Erdős–Diophantine triangle has edges 2549, 2307 and 1492. In both cases, the sum of the three edge-lengths is even. Brancheva has proven that this property holds for all Erdős–Diophantine triangles. More generally, the total length of any closed path in an Erdős–Diophantine graph is always even. An example of a 4-node Erdős–Diophantine graph is provided by the complete graph formed by the four nodes located on the vertices of a rectangle with sides 4 and 3.
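The integer-distance condition in this example is easy to check mechanically. The following Python fragment is only an illustrative sketch: it verifies that the four rectangle vertices are pairwise at integer distances, but it does not test the much stronger requirement that no further grid point can be added to the figure.

```python
from itertools import combinations
from math import isqrt

def all_integer_distances(points):
    # True if every pairwise distance in the point set is an integer,
    # i.e. every squared distance is a perfect square.
    for (x1, y1), (x2, y2) in combinations(points, 2):
        d2 = (x1 - x2) ** 2 + (y1 - y2) ** 2
        if isqrt(d2) ** 2 != d2:
            return False
    return True

# The 4-node example from the text: a rectangle with sides 4 and 3 on the integer grid.
rectangle = [(0, 0), (4, 0), (4, 3), (0, 3)]
print(all_integer_distances(rectangle))  # True: sides 4 and 3, diagonals 5
```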
[ { "math_id": 0, "text": "\\mathbb{Z}^2" } ]
https://en.wikipedia.org/wiki?curid=12600886
1260217
Metapopulation
Group of separated yet interacting ecological populations A metapopulation consists of a group of spatially separated populations of the same species which interact at some level. The term metapopulation was coined by Richard Levins in 1969 to describe a model of population dynamics of insect pests in agricultural fields, but the idea has been most broadly applied to species in naturally or artificially fragmented habitats. In Levins' own words, it consists of "a population of populations". A metapopulation is generally considered to consist of several distinct populations together with areas of suitable habitat which are currently unoccupied. In classical metapopulation theory, each population cycles in relative independence of the other populations and eventually goes extinct as a consequence of demographic stochasticity (fluctuations in population size due to random demographic events); the smaller the population, the more chances of inbreeding depression and prone to extinction. Although individual populations have finite life-spans, the metapopulation as a whole is often stable because immigrants from one population (which may, for example, be experiencing a population boom) are likely to re-colonize habitat which has been left open by the extinction of another population. They may also emigrate to a small population and rescue that population from extinction (called the "rescue effect"). Such a rescue effect may occur because declining populations leave niche opportunities open to the "rescuers". The development of metapopulation theory, in conjunction with the development of source–sink dynamics, emphasised the importance of connectivity between seemingly isolated populations. Although no single population may be able to guarantee the long-term survival of a given species, the combined effect of many populations may be able to do this. Metapopulation theory was first developed for terrestrial ecosystems, and subsequently applied to the marine realm. In fisheries science, the term "sub-population" is equivalent to the metapopulation science term "local population". Most marine examples are provided by relatively sedentary species occupying discrete patches of habitat, with both local recruitment and recruitment from other local populations in the larger metapopulation. Kritzer &amp; Sale have argued against strict application of the metapopulation definitional criteria that extinction risks to local populations must be non-negligible. Finnish biologist Ilkka Hanski of the University of Helsinki was an important contributor to metapopulation theory. Predation and oscillations. The first experiments with predation and spatial heterogeneity were conducted by G. F. Gause in the 1930s, based on the Lotka–Volterra equation, which was formulated in the mid-1920s, but no further application had been conducted. The Lotka-Volterra equation suggested that the relationship between predators and their prey would result in population oscillations over time based on the initial densities of predator and prey. Gause's early experiments to prove the predicted oscillations of this theory failed because the predator–prey interactions were not influenced by immigration. However, once immigration was introduced, the population cycles accurately depicted the oscillations predicted by the Lotka-Volterra equation, with the peaks in prey abundance shifted slightly to the left of the peaks of the predator densities. 
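The oscillations predicted by the Lotka–Volterra equations are easy to reproduce numerically. The sketch below uses simple forward-Euler integration with arbitrary illustrative parameter values (they are not Gause's or Huffaker's data); the sampled output shows prey and predator numbers cycling rather than settling, with predator peaks lagging the prey peaks.

```python
def lotka_volterra(prey0, pred0, a=1.0, b=0.1, c=1.5, d=0.075, dt=0.001, steps=30000):
    # Forward-Euler integration of the classical Lotka-Volterra system:
    #   dx/dt = a*x - b*x*y   (prey)
    #   dy/dt = -c*y + d*x*y  (predator)
    x, y = prey0, pred0
    samples = []
    for step in range(steps):
        dx = (a * x - b * x * y) * dt
        dy = (-c * y + d * x * y) * dt
        x, y = x + dx, y + dy
        if step % 3000 == 0:
            samples.append((round(x, 1), round(y, 1)))
    return samples

# Hypothetical starting densities of prey and predator.
print(lotka_volterra(prey0=10.0, pred0=5.0))
```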
Huffaker's experiments expanded on those of Gause by examining how both the factors of migration and spatial heterogeneity lead to predator–prey oscillations. Huffaker's experiments on predator–prey interactions (1958). In order to study predation and population oscillations, Huffaker used mite species, one being the predator and the other being the prey. He set up a controlled experiment using oranges, which the prey fed on, as the spatially structured habitat in which the predator and prey would interact. At first, Huffaker experienced difficulties similar to those of Gause in creating a stable predator–prey interaction. By using oranges only, the prey species quickly became extinct followed consequently with predator extinction. However, he discovered that by modifying the spatial structure of the habitat, he could manipulate the population dynamics and allow the overall survival rate for both species to increase. He did this by altering the distance between the prey and oranges (their food), establishing barriers to predator movement, and creating corridors for the prey to disperse. These changes resulted in increased habitat patches and in turn provided more areas for the prey to seek temporary protection. When the prey would become extinct locally at one habitat patch, they were able to reestablish by migrating to new patches before being attacked by predators. This habitat spatial structure of patches allowed for coexistence between the predator and prey species and promoted a stable population oscillation model. Although the term metapopulation had not yet been coined, the environmental factors of spatial heterogeneity and habitat patchiness would later describe the conditions of a metapopulation relating to how groups of spatially separated populations of species interact with one another. Huffaker's experiment is significant because it showed how metapopulations can directly affect the predator–prey interactions and in turn influence population dynamics. The Levins model. Levins' original model applied to a metapopulation distributed over many patches of suitable habitat with significantly less interaction between patches than within a patch. Population dynamics within a patch were simplified to the point where only presence and absence were considered. Each patch in his model is either populated or not. Let "N" be the fraction of patches occupied at a given time. During a time "dt", each occupied patch can become unoccupied with an extinction probability "edt". Additionally, 1 − "N" of the patches are unoccupied. Assuming a constant rate "c" of propagule generation from each of the "N" occupied patches, during a time "dt", each unoccupied patch can become occupied with a colonization probability "cNdt" . Accordingly, the time rate of change of occupied patches, "dN/dt", is formula_0 This equation is mathematically equivalent to the logistic model, with a carrying capacity "K" given by formula_1 and growth rate "r" formula_2 At equilibrium, therefore, some fraction of the species's habitat will always be unoccupied. Stochasticity and metapopulations. Huffaker's studies of spatial structure and species interactions are an example of early experimentation in metapopulation dynamics. Since the experiments of Huffaker and Levins, models have been created which integrate stochastic factors. These models have shown that the combination of environmental variability (stochasticity) and relatively small migration rates cause indefinite or unpredictable persistence. 
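Returning to the Levins model above, its behaviour can be reproduced with a few lines of code. The rates below are arbitrary illustrative values, not estimates from any particular system; the occupied fraction converges to the predicted equilibrium 1 − "e"/"c".

```python
def simulate_levins(c, e, N0, dt=0.01, steps=5000):
    # Forward-Euler integration of dN/dt = c*N*(1 - N) - e*N (the Levins model).
    N = N0
    for _ in range(steps):
        N += (c * N * (1 - N) - e * N) * dt
    return N

c, e = 0.5, 0.1                        # assumed colonization and extinction rates
print(simulate_levins(c, e, N0=0.05))  # approaches 1 - e/c = 0.8
```

In this deterministic setting the metapopulation persists whenever "c" exceeds "e"; it is the stochastic extensions discussed below that make persistence uncertain.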
However, Huffaker's experiment almost guaranteed infinite persistence because of the controlled immigration variable. Stochastic patch occupancy models (SPOMs). One major drawback of the Levins model is that it is deterministic, whereas the fundamental metapopulation processes are stochastic. Metapopulations are particularly useful when discussing species in disturbed habitats, and the viability of their populations, i.e., how likely they are to become extinct in a given time interval. The Levins model cannot address this issue. A simple way to extend the Levins' model to incorporate space and stochastic considerations is by using the contact process. Simple modifications to this model can also incorporate for patch dynamics. At a given percolation threshold, habitat fragmentation effects take place in these configurations predicting more drastic extinction thresholds. For conservation biology purposes, metapopulation models must include (a) the finite nature of metapopulations (how many patches are suitable for habitat), and (b) the probabilistic nature of extinction and colonisation. Also, in order to apply these models, the extinctions and colonisations of the patches must be asynchronous. Microhabitat patches (MHPs) and bacterial metapopulations. Combining nanotechnology with landscape ecology, synthetic habitat landscapes have been fabricated on a chip by building a collection of bacterial mini-habitats with nano-scale channels providing them with nutrients for habitat renewal, and connecting them by corridors in different topological arrangements, generating a spatial mosaic of patches of opportunity distributed in time. This can be used for landscape experiments by studying the bacteria metapopulations on the chip, for example their evolutionary ecology. Life history evolution. Metapopulation models have been used to explain life-history evolution, such as the ecological stability of amphibian metamorphosis in small vernal ponds. Alternative ecological strategies have evolved. For example, some salamanders forgo metamorphosis and sexually mature as aquatic neotenes. The seasonal duration of wetlands and the migratory range of the species determines which ponds are connected and if they form a metapopulation. The duration of the life history stages of amphibians relative to the duration of the vernal pool before it dries up regulates the ecological development of metapopulations connecting aquatic patches to terrestrial patches. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{dN}{dt} = cN(1-N) - eN.\\," }, { "math_id": 1, "text": " K = 1 - \\frac{e}{c}\\," }, { "math_id": 2, "text": " r = c - e.\\," } ]
https://en.wikipedia.org/wiki?curid=1260217
12604761
Lehmer sieve
Lehmer sieves are mechanical devices that implement sieves in number theory. Lehmer sieves are named for Derrick Norman Lehmer and his son Derrick Henry Lehmer. The father was a professor of mathematics at the University of California, Berkeley at the time, and his son followed in his footsteps as a number theorist and professor at Berkeley. A sieve in general is intended to find the numbers which are remainders when a set of numbers are divided by a second set. Generally, they are used in finding solutions of Diophantine equations or to factor numbers. A Lehmer sieve will signal that such solutions are found in a variety of ways depending on the particular construction. Construction. The first Lehmer sieve in 1926 was made using bicycle chains of varying length, with rods at appropriate points in the chains. As the chains turned, the rods would close electrical switches, and when all the switches were closed simultaneously, creating a complete electrical circuit, a solution had been found. Lehmer sieves were very fast, in one particular case factoring formula_0 in 3 seconds. Built in 1932, a device using gears was shown at the Century of Progress Exposition in Chicago. These had gears representing numbers, just as the chains had before, with holes. Holes left open were the remainders sought. When the holes lined up, a light at one end of the device shone on a photocell at the other, which could stop the machine allowing for the observation of a solution. This incarnation allowed checking of five thousand combinations a second. In 1936, a version was built using 16 mm film instead of chains, with holes in the film instead of rods. Brushes against the rollers would make electrical contact when the hole reached the top. Again, a full sequence of holes created a complete circuit, indicating a solution. Several Lehmer sieves are on display at the Computer History Museum. Since then, the same basic idea has been used to design sieves in integrated circuits or software.
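The sieve idea translates directly into software. The Python fragment below is a minimal illustrative sketch of the principle that the chains and gears implemented mechanically: for each modulus only certain remainders are allowed, and the machine reports the integers that satisfy every condition at once. The moduli and residue sets shown are arbitrary examples, not a historical problem.

```python
def congruence_sieve(conditions, limit):
    # Report every integer below `limit` whose remainder modulo each
    # modulus lies in that modulus's set of allowed residues.
    hits = []
    for n in range(limit):
        if all(n % modulus in allowed for modulus, allowed in conditions):
            hits.append(n)
    return hits

# Example: integers x with x ≡ 1 (mod 5), x ≡ 2 or 3 (mod 7), and x ≡ 4 (mod 9).
conditions = [(5, {1}), (7, {2, 3}), (9, {4})]
print(congruence_sieve(conditions, 400))  # [31, 121, 346]
```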
[ { "math_id": 0, "text": "2^{93} + 1 = 3 \\times 3 \\times 529510939 \\times 715827883 \\times 2903110321" } ]
https://en.wikipedia.org/wiki?curid=12604761
12606685
Oriented coloring
Special type of graph coloring In graph theory, oriented graph coloring is a special type of graph coloring. Namely, it is an assignment of colors to vertices of an oriented graph such that no two vertices joined by an arc receive the same color, and, for any two arcs formula_4 and formula_5, either formula_0 and formula_1 receive different colors or formula_2 and formula_3 receive different colors. Equivalently, an oriented graph coloring of a graph "G" is an oriented graph "H" (whose vertices represent colors and whose arcs represent valid orientations between colors) such that there exists a homomorphism from "G" to "H". An "oriented chromatic number" of a graph "G" is the fewest colors needed in an oriented coloring; it is usually denoted by formula_6. The same definition can be extended to undirected graphs, as well, by defining the oriented chromatic number of an undirected graph to be the largest oriented chromatic number of any of its orientations. Examples. The oriented chromatic number of a directed 5-cycle is five. If the cycle is colored by four or fewer colors, then either two adjacent vertices have the same color, or two vertices two steps apart have the same color. In the latter case, the edges connecting these two vertices to the vertex between them are inconsistently oriented: both have the same pair of colors but with opposite orientations. Thus, no coloring with four or fewer colors is possible. However, giving each vertex its own unique color leads to a valid oriented coloring. Properties. An oriented coloring can exist only for a directed graph with no loops or directed 2-cycles. For, a loop cannot have different colors at its endpoints, and a 2-cycle cannot have both of its edges consistently oriented between the same two colors. If these conditions are satisfied, then there always exists an oriented coloring, for instance the coloring that assigns a different color to each vertex. If an oriented coloring is complete, in the sense that no two colors can be merged to produce a coloring with fewer colors, then it corresponds uniquely to a graph homomorphism into a tournament. The tournament has one vertex for each color in the coloring. For each pair of colors, there is an edge in the colored graph with those two colors at its endpoints, which lends its orientation to the edge in the tournament between the vertices corresponding to the two colors. Incomplete colorings may also be represented by homomorphisms into tournaments but in this case the correspondence between colorings and homomorphisms is not one-to-one. Undirected graphs of bounded genus, bounded degree, or bounded acyclic chromatic number also have bounded oriented chromatic number. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "u" }, { "math_id": 1, "text": "u'" }, { "math_id": 2, "text": "v" }, { "math_id": 3, "text": "v'" }, { "math_id": 4, "text": "(u, v)" }, { "math_id": 5, "text": "(v', u')" }, { "math_id": 6, "text": "\\scriptstyle\\chi_o(G)" } ]
https://en.wikipedia.org/wiki?curid=12606685
12606884
Carrier lifetime
Semiconductor physics terminology A definition in semiconductor physics, carrier lifetime is defined as the average time it takes for a minority carrier to recombine. The process through which this is done is typically known as minority carrier recombination. The energy released due to recombination can be either thermal, thereby heating up the semiconductor ("thermal recombination" or non-radiative recombination, one of the sources of waste heat in semiconductors), or released as photons ("optical recombination", used in LEDs and semiconductor lasers). The carrier lifetime can vary significantly depending on the materials and construction of the semiconductor. Carrier lifetime plays an important role in bipolar transistors and solar cells. In indirect band gap semiconductors, the carrier lifetime strongly depends on the concentration of recombination centers. Gold atoms act as highly efficient recombination centers, silicon for some high switching speed diodes and transistors is therefore alloyed with a small amount of gold. Many other atoms, e.g. iron or nickel, have similar effect. Overview. In practical applications, the electronic band structure of a semiconductor is typically found in a non-equilibrium state. Therefore, processes that tend towards thermal equilibrium, namely mechanisms of carrier recombination, always play a role. Additionally, semiconductors used in devices are very rarely pure semiconductors. Oftentimes, a dopant is used, giving an excess of electrons (in so-called "n-type doping") or holes (in so-called "p-type doping") within the band structure. This introduces a majority carrier and a minority carrier. As a result of this, the carrier lifetime plays a vital role in many semiconductor devices that have dopants. Recombination mechanisms. There are several mechanisms by which minority carriers can recombine, each of which subtract from the carrier lifetime. The main mechanisms that play a role in modern devices are band-to-band recombination and stimulated emission, which are forms of radiative recombination, and Shockley-Read-Hall (SRH), Auger, Langevin, and surface recombination, which are forms of non-radiative recombination. Depending on the system, certain mechanisms may play a greater role than others. For example, surface recombination plays a significant role in solar cells, where much of the effort goes into passivating surfaces to minimize non-radiative recombination. As opposed to this, Langevin recombination plays a major role in organic solar cells, where the semiconductors are characterized by low mobility. In these systems, maximizing the carrier lifetime is synonymous to maximizing the efficiency of the device. Applications. Solar cells. A solar cell is an electrical device in which a semiconductor is exposed to light that is converted into electricity through the photovoltaic effect. Electrons are either excited through the absorption of light, or if the band-gap energy of the material can be bridged, electron-hole pairs are created. Simultaneously, a voltage potential is created. The charge carriers within the solar cell move through the semiconductor in order to cancel said potential, which is the drifting force that moves the electrons. Also, the electrons can be forced to move by diffusion from higher concentration to lower concentration of electrons. In order to maximize the efficiency of the solar cell, it is desirable to have as many charge carriers as possible collected at the electrodes of the solar cell. 
Thus, recombination of electrons (among other factors that influence efficiency) must be avoided. This corresponds to an increase in the carrier lifetime. Surface recombination occurs at the top of the solar cell, which makes it preferable to have layers of material that have great surface passivation properties so as not to become affected by exposure to light over longer periods of time. Additionally, the same method of layering different semiconductor materials is used to reduce the capture probability of the electrons, which results in a decrease in trap-assisted SRH recombination, and an increase in carrier lifetime. Radiative (band-to-band) recombination is negligible in solar cells that have semiconductor materials with indirect bandgap structure. Auger recombination occurs as a limiting factor for solar cells when the concentration of excess electrons grows large at low doping rates. Otherwise, the doping-dependent SRH recombination is one of the primary mechanisms that reduces the electrons’ carrier lifetime in solar cells. Bipolar junction transistors. A bipolar junction transistor is a type of transistor that is able to use electrons and electron holes as charge carriers. A BJT uses a single crystal of material in its circuit that is divided into two types of semiconductor, an n-type and p-type. These two types of doped semiconductors are spread over three different regions in respective order: the emitter region, the base region and the collector region. The emitter region and collector region are quantitively doped differently, but are of the same type of doping and share a base region, which is why the system is different from two diodes connected in series with each other. For a PNP-transistor, these regions are, respectively, p-type, n-type and p-type, and for a NPN-transistor, these regions are, respectively, n-type, p-type and n-type. For NPN-transistors in typical forward-active operation, given an injection of charge carriers through the first junction from the emitter into the base region, electrons are the charge carriers that are transported diffusively through the base region towards the collector region. These are the minority carriers of the base region. Analogously, for PNP-transistors, electronic holes are the minority carriers of the base region. The carrier lifetime of these minority carriers plays a crucial role in the charge flow of minority carriers in the base region, which is found between the two junctions. Depending on the BJT's mode of operation, recombination is either preferred, or to be avoided in the base region. In particular, for the aforementioned forward-active mode of operation, recombination is not preferable. Thus, in order to get as many minority carriers as possible from the base region into the collecting region before these recombine, the width of the base region must be small enough such that the minority carriers can diffuse in a smaller amount of time than the semiconductor's minority carrier lifetime. Equivalently, the width of the base region must be smaller than the diffusion length, which is the average length a charge carrier travels before recombining. Additionally, in order to prevent high rates of recombination, the base is only lightly doped with respect to the emitter and collector region. As a result of this, the charge carriers do not have a high probability of staying in the base region, which is their preferable region of occupation when recombining into a lower-energy state. 
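The base-width criterion just described can be illustrated with a small calculation. The values below are assumed, textbook-style numbers for a silicon NPN transistor rather than data for a specific device; with them, the diffusion length sqrt(D·τ) comes out far larger than the base width, so most minority carriers cross the base before they recombine.

```python
from math import sqrt

D_n = 36e-4    # electron diffusion coefficient in the base, m^2/s (about 36 cm^2/s in silicon)
tau_n = 1e-6   # assumed minority-carrier lifetime in the base, s
W_B = 0.5e-6   # assumed quasi-neutral base width, m

L_n = sqrt(D_n * tau_n)  # diffusion length: average distance travelled before recombination
print(f"diffusion length = {L_n * 1e6:.0f} um, base width = {W_B * 1e6:.1f} um")
print("base narrower than diffusion length:", W_B < L_n)
```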
For other modes of operation, like that of fast switching, a high recombination rate (and thus a short carrier lifetime) is desirable. The desired mode of operation, and the associated properties of the doped base region must be considered in order to facilitate the appropriate carrier lifetime. Presently, silicon and silicon carbide are the materials used in most BJTs. The recombination mechanisms that must be considered in the base region are surface recombination near the base-emitter junction, as well as SRH- and Auger recombination in the base region. Specifically, Auger recombination increases when the amount of injected charge carriers grows, hence decreasing the efficiency of the current gain with growing injection numbers. Semiconductor lasers. In semiconductor lasers, the carrier lifetime is the time it takes an electron before recombining via non-radiative processes in the laser cavity. In the frame of the rate equations model, carrier lifetime is used in the charge conservation equation as the time constant of the exponential decay of carriers. The dependence of carrier lifetime on the carrier density is expressed as: formula_0 where A, B and C are the non-radiative, radiative and Auger recombination coefficients and formula_1 is the carrier lifetime. Measurement. Because the efficiency of a semiconductor device generally depends on its carrier lifetime, it is important to be able to measure this quantity. The method by which this is done depends on the device, but is usually dependent on measuring the current and voltage. In solar cells, the carrier lifetime can be calculated by illuminating the surface of the cell, which induces carrier generation and increases the voltage until it reaches an equilibrium, and subsequently turning off the light source. This causes the voltage to decay at a consistent rate. The rate at which the voltage decays is determined by the amount of minority carriers that recombine per unit time, with a higher amount of recombining carriers resulting in a faster decay. Subsequently, a lower carrier lifetime will result in a faster decay of the voltage. This means that the carrier lifetime of a solar cell can be calculated by studying its voltage decay rate. This carrier lifetime is generally expressed as: formula_2 where formula_3 is the Boltzmann constant, q is the elementary charge, T is the temperature, and formula_4 is the time derivative of the open-circuit voltage. In bipolar junction transistors (BJTs), determining the carrier lifetime is rather more complicated. Namely, one must measure the output conductance and reverse transconductance, both of which are variables that depend on the voltage and flow of current through the BJT, and calculate the minority carrier transit time, which is determined by the width of the quasi-neutral base (QNB) of the BJT, and the diffusion coefficient; a constant that quantifies the atomic migration within the BJT. This carrier lifetime is expressed as: formula_5 where formula_6 and formula_7 are the output conductance, reverse transconductance, width of the QNB and diffusion coefficient, respectively. Current research. Because a longer carrier lifetime is often synonymous to a more efficient device, research tends to focus on minimizing processes that contribute to the recombination of minority carriers. In practice, this generally implies reducing structural defects within the semiconductors, or introducing novel methods that do not suffer from the same recombination mechanisms. 
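As a worked illustration of the open-circuit voltage decay relation given above for solar cells, the short sketch below evaluates the carrier lifetime for an assumed, purely illustrative decay rate.

```python
k_B = 1.380649e-23    # Boltzmann constant, J/K
q = 1.602176634e-19   # elementary charge, C
T = 300.0             # cell temperature, K

dVoc_dt = -500.0      # assumed open-circuit voltage decay rate, V/s
tau = -(k_B * T / q) / dVoc_dt
print(f"carrier lifetime ~ {tau * 1e6:.0f} microseconds")  # about 52 us
```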
In crystalline silicon solar cells, which are particularly common, an important limiting factor is the structural damage done to the cell when the transparent conducting film is applied. This is done with "reactive plasma deposition", a form of sputter deposition. In the process of applying this film, defects appear on the silicon layer, which degrades the carrier lifetime. Reducing the amount of damage done during this process is therefore important to increase the efficiency of the solar cell, and a focus of current research. In addition to research that seeks to optimize currently favoured technologies, there is a great deal of research surrounding other, less-utilized technologies, like the Perovskite solar cell (PSC). This solar cell is preferable due to its comparatively cheap and simple manufacturing process. Modern advancements suggest that there is still ample room to improve on the carrier lifetime of this solar cell, with most of the issues surrounding it being construction-related. In addition to solar cells, perovskites can be utilized to manufacture LEDs, lasers, and transistors. As a result of this, lead and halide perovskites are of particular interest in modern research. Current problems include the structural defects that appear when semiconductor devices are manufactured with the material, as the dislocation density associated with the crystals is a detriment to their carrier lifetime. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\frac{1}{\\tau_n(N)}= A + BN + CN^2" }, { "math_id": 1, "text": "\\tau_n(N)" }, { "math_id": 2, "text": "\\tau = -\\frac{k_B T}{q}\\left(\\frac{dV_{oc}}{dt}\\right)^{-1}" }, { "math_id": 3, "text": "k_B" }, { "math_id": 4, "text": "\\frac{dV_{oc}}{dt}" }, { "math_id": 5, "text": "\\tau_{BF} = -\\frac{W_B^2}{2D_n}\\cdot\\frac{G_o}{G_r}" }, { "math_id": 6, "text": "G_o, G_r, W_B" }, { "math_id": 7, "text": "D_n" } ]
https://en.wikipedia.org/wiki?curid=12606884
12608
Geodesy
Science of measuring the shape, orientation, and gravity of Earth Geodesy or geodetics is the science of measuring and representing the geometry, gravity, and spatial orientation of the Earth in temporally varying 3D. It is called planetary geodesy when studying other astronomical bodies, such as planets or circumplanetary systems. Geodesy is an earth science and many consider the study of Earth's shape and gravity to be central to that science. It is also a discipline of applied mathematics. Geodynamical phenomena, including crustal motion, tides, and polar motion, can be studied by designing global and national control networks, applying space geodesy and terrestrial geodetic techniques, and relying on datums and coordinate systems. Geodetic job titles include geodesist and geodetic surveyor. History. Geodesy began in pre-scientific antiquity, so the very word geodesy comes from the Ancient Greek word γεωδαισία or "geodaisia" (literally, "division of Earth"). Early ideas about the figure of the Earth held the Earth to be flat and the heavens a physical dome spanning over it. Two early arguments for a spherical Earth were that lunar eclipses appear to an observer as circular shadows and that Polaris appears lower and lower in the sky to a traveler headed south. Definition. In English, geodesy refers to the science of measuring and representing geospatial information, while geomatics encompasses practical applications of geodesy on local and regional scales, including surveying. In German, geodesy can refer to either "higher geodesy" ("Erdmessung" or "höhere Geodäsie", literally "geomensuration") — concerned with measuring Earth on the global scale, or "engineering geodesy" ("Ingenieurgeodäsie") that includes surveying — measuring parts or regions of Earth. For the longest time, geodesy was the science of measuring and understanding Earth's geometric shape, orientation in space, and gravitational field; however, geodetic science and operations are applied to other astronomical bodies in our Solar System also. To a large extent, Earth's shape is the result of rotation, which causes its equatorial bulge, and the competition of geological processes such as the collision of plates, as well as of volcanism, resisted by Earth's gravitational field. This applies to the solid surface, the liquid surface (dynamic sea surface topography), and Earth's atmosphere. For this reason, the study of Earth's gravitational field is called physical geodesy. Geoid and reference ellipsoid. The geoid essentially is the figure of Earth abstracted from its topographical features. It is an idealized equilibrium surface of seawater, the mean sea level surface in the absence of currents and air pressure variations, and continued under the continental masses. Unlike a reference ellipsoid, the geoid is irregular and too complicated to serve as the computational surface for solving geometrical problems like point positioning. The geometrical separation between the geoid and a reference ellipsoid is called "geoidal undulation", and it varies globally between ±110 m based on the GRS 80 ellipsoid. A reference ellipsoid, customarily chosen to be the same size (volume) as the geoid, is described by its semi-major axis (equatorial radius) "a" and flattening "f". The quantity "f" = ("a" − "b")/"a", where "b" is the semi-minor axis (polar radius), is purely geometrical. The mechanical ellipticity of Earth (dynamical flattening, symbol "J"2) can be determined to high precision by observation of satellite orbit perturbations.
Its relationship with geometrical flattening is indirect and depends on the internal density distribution or, in simplest terms, the degree of central concentration of mass. The 1980 Geodetic Reference System (GRS 80), adopted at the XVII General Assembly of the International Union of Geodesy and Geophysics (IUGG), posited a 6,378,137 m semi-major axis and a 1:298.257 flattening. GRS 80 essentially constitutes the basis for geodetic positioning by the Global Positioning System (GPS) and is thus also in widespread use outside the geodetic community. Numerous systems used for mapping and charting are becoming obsolete as countries increasingly move to global, geocentric reference systems utilizing the GRS 80 reference ellipsoid. The geoid is a "realizable" surface, meaning it can be consistently located on Earth by suitable simple measurements from physical objects like a tide gauge. The geoid can, therefore, be considered a physical ("real") surface. The reference ellipsoid, however, has many possible instantiations and is not readily realizable, so it is an abstract surface. The third primary surface of geodetic interest — the topographic surface of Earth — is also realizable. Coordinate systems in space. The locations of points in 3D space most conveniently are described by three cartesian or rectangular coordinates, "X", "Y", and "Z". Since the advent of satellite positioning, such coordinate systems are typically geocentric, with the Z-axis aligned to Earth's (conventional or instantaneous) rotation axis. Before the era of satellite geodesy, the coordinate systems associated with a geodetic datum attempted to be geocentric, but with the origin differing from the geocenter by hundreds of meters due to regional deviations in the direction of the plumbline (vertical). These regional geodetic datums, such as ED 50 (European Datum 1950) or NAD 27 (North American Datum 1927), have ellipsoids associated with them that are regional "best fits" to the geoids within their areas of validity, minimizing the deflections of the vertical over these areas. It is only because GPS satellites orbit about the geocenter that this point becomes naturally the origin of a coordinate system defined by satellite geodetic means, as the satellite positions in space themselves get computed within such a system. Geocentric coordinate systems used in geodesy can be divided naturally into two classes: The coordinate transformation between these two systems to good approximation is described by (apparent) sidereal time, which accounts for variations in Earth's axial rotation (length-of-day variations). A more accurate description also accounts for polar motion as a phenomenon closely monitored by geodesists. Coordinate systems in the plane. In geodetic applications like surveying and mapping, two general types of coordinate systems in the plane are in use: One can intuitively use rectangular coordinates in the plane for one's current location, in which case the "x"-axis will point to the local north. More formally, such coordinates can be obtained from 3D coordinates using the artifice of a map projection. It is impossible to map the curved surface of Earth onto a flat map surface without deformation. The compromise most often chosen — called a conformal projection — preserves angles and length ratios so that small circles get mapped as small circles and small squares as squares. An example of such a projection is UTM (Universal Transverse Mercator). Within the map plane, we have rectangular coordinates "x" and "y". 
In this case, the north direction used for reference is the "map" north, not the "local" north. The difference between the two is called meridian convergence. It is easy enough to "translate" between polar and rectangular coordinates in the plane: let, as above, direction and distance be "α" and "s" respectively, then we have formula_0 The reverse transformation is given by: formula_1 Heights. In geodesy, point or terrain "heights" are "above sea level" as an irregular, physically defined surface. Height systems in use are: Each system has its advantages and disadvantages. Both orthometric and normal heights are expressed in metres above sea level, whereas geopotential numbers are measures of potential energy (unit: m2 s−2) and not metric. The reference surface is the geoid, an equigeopotential surface approximating the mean sea level as described above. For normal heights, the reference surface is the so-called "quasi-geoid", which has a few-metre separation from the geoid due to the density assumption in its continuation under the continental masses. One can relate these heights through the geoid undulation concept to "ellipsoidal heights" (also known as "geodetic heights"), representing the height of a point above the reference ellipsoid. Satellite positioning receivers typically provide ellipsoidal heights unless fitted with special conversion software based on a model of the geoid. Geodetic datums. Because coordinates and heights of geodetic points always get obtained within a system that itself was constructed based on real-world observations, geodesists introduced the concept of a "geodetic datum" (plural "datums"): a physical (real-world) realization of a coordinate system used for describing point locations. This realization follows from "choosing" (therefore conventional) coordinate values for one or more datum points. In the case of height data, it suffices to choose "one" datum point — the reference benchmark, typically a tide gauge at the shore. Thus we have vertical datums, such as the NAVD 88 (North American Vertical Datum 1988), NAP (Normaal Amsterdams Peil), the Kronstadt datum, the Trieste datum, and numerous others. In both mathematics and geodesy, a coordinate system is a "coordinate system" per ISO terminology, whereas the International Earth Rotation and Reference Systems Service (IERS) uses the term "reference system" for the same. When coordinates are realized by choosing datum points and fixing a geodetic datum, ISO speaks of a "coordinate reference system", whereas IERS uses a "reference frame" for the same. The ISO term for a datum transformation again is a "coordinate transformation". Positioning. General geopositioning, or simply positioning, is the determination of the location of points on Earth, by myriad techniques. Geodetic positioning employs geodetic methods to determine a set of precise geodetic coordinates of a point on land, at sea, or in space. It may be done within a coordinate system (point positioning or absolute positioning) or relative to another point (relative positioning). One computes the position of a point in space from measurements linking terrestrial or extraterrestrial points of known location ("known points") with terrestrial ones of unknown location ("unknown points"). The computation may involve transformations between or among astronomical and terrestrial coordinate systems. Known points used in point positioning can be GNSS continuously operating reference stations or triangulation points of a higher-order network. 
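As a brief aside to the plane coordinate conversion described earlier in this section, the transformation and its inverse can be written in a few lines. The sketch below assumes the surveying convention in which "x" points north, "y" points east, and the azimuth is reckoned clockwise from north; it is an illustration only, not a reference implementation.

```python
from math import atan2, cos, degrees, hypot, radians, sin

def polar_to_rect(azimuth_deg, s):
    # x is the northing, y is the easting; azimuth is measured clockwise from north.
    a = radians(azimuth_deg)
    return s * cos(a), s * sin(a)

def rect_to_polar(x, y):
    s = hypot(x, y)
    azimuth_deg = degrees(atan2(y, x)) % 360.0
    return azimuth_deg, s

x, y = polar_to_rect(30.0, 100.0)
print(round(x, 2), round(y, 2))   # 86.6 north, 50.0 east
print(rect_to_polar(x, y))        # recovers roughly (30.0, 100.0)
```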
Traditionally, geodesists built a hierarchy of networks to allow point positioning within a country. The highest in this hierarchy were triangulation networks, densified into the networks of traverses (polygons) into which local mapping and surveying measurements, usually collected using a measuring tape, a corner prism, and the red-and-white poles, are tied. Commonly used nowadays is GPS, except for specialized measurements (e.g., in underground or high-precision engineering). The higher-order networks are measured with static GPS, using differential measurement to determine vectors between terrestrial points. These vectors then get adjusted in a traditional network fashion. A global polyhedron of permanently operating GPS stations under the auspices of the IERS is the basis for defining a single global, geocentric reference frame that serves as the "zero-order" (global) reference to which national measurements are attached. Real-time kinematic positioning (RTK GPS) is employed frequently in survey mapping. In that measurement technique, unknown points can get quickly tied into nearby terrestrial known points. One purpose of point positioning is the provision of known points for mapping measurements, also known as (horizontal and vertical) control. There can be thousands of those geodetically determined points in a country, usually documented by national mapping agencies. Surveyors involved in real estate and insurance will use these to tie their local measurements. Geodetic problems. In geometrical geodesy, there are two main problems: "Given the coordinates of a point and the directional (azimuth) and distance to a second point, determine the coordinates of that second point." "Given the coordinates of two points, determine the azimuth and length of the (straight, curved, or geodesic) line connecting those points." The solutions to both problems in plane geometry reduce to simple trigonometry and are valid for small areas on Earth's surface; on a sphere, solutions become significantly more complex as, for example, in the inverse problem, the azimuths differ going between the two end points along the arc of the connecting great circle. The general solution is called the geodesic for the surface considered, and the differential equations for the geodesic are solvable numerically. On the ellipsoid of revolution, geodesics are expressible in terms of elliptic integrals, which are usually evaluated in terms of a series expansion — see, for example, Vincenty's formulae. Observational concepts. As defined in geodesy (and also astronomy), some basic observational concepts like angles and coordinates include (most commonly from the viewpoint of a local observer): Measurements. The reference surface (level) used to determine height differences and height reference systems is known as mean sea level. The traditional spirit level directly produces such (for practical purposes most useful) heights above sea level; the more economical use of GPS instruments for height determination requires precise knowledge of the figure of the geoid, as GPS only gives heights above the GRS80 reference ellipsoid. As geoid determination improves, one may expect that the use of GPS in height determination shall increase, too. The theodolite is an instrument used to measure horizontal and vertical (relative to the local vertical) angles to target points. In addition, the tachymeter determines, electronically or electro-optically, the distance to a target and is highly automated or even robotic in operations. 
Widely used for the same purpose is the method of free station position. Commonly for local detail surveys, tachymeters are employed, although the old-fashioned rectangular technique using an angle prism and steel tape is still an inexpensive alternative. As mentioned, there are also quick and relatively accurate real-time kinematic (RTK) GPS techniques. Data collected are tagged and recorded digitally for entry into Geographic Information System (GIS) databases. Geodetic GNSS (most commonly GPS) receivers directly produce 3D coordinates in a geocentric coordinate frame. One such frame is WGS84, as well as frames by the International Earth Rotation and Reference Systems Service (IERS). GNSS receivers have almost completely replaced terrestrial instruments for large-scale base network surveys. To monitor the Earth's rotation irregularities and plate tectonic motions and for planet-wide geodetic surveys, methods of very-long-baseline interferometry (VLBI) measuring distances to quasars, lunar laser ranging (LLR) measuring distances to prisms on the Moon, and satellite laser ranging (SLR) measuring distances to prisms on artificial satellites, are employed. Gravity is measured using gravimeters, of which there are two kinds. First are "absolute gravimeter"s, based on measuring the acceleration of free fall (e.g., of a reflecting prism in a vacuum tube). They are used to establish vertical geospatial control or in the field. Second, "relative gravimeter"s are spring-based and more common. They are used in gravity surveys over large areas — to establish the figure of the geoid over these areas. The most accurate relative gravimeters are called "superconducting gravimeter"s, which are sensitive to one-thousandth of one-billionth of Earth-surface gravity. Twenty-some superconducting gravimeters are used worldwide in studying Earth's tides, rotation, interior, oceanic and atmospheric loading, as well as in verifying the Newtonian constant of gravitation. In the future, gravity and altitude might become measurable using the special-relativistic concept of time dilation as gauged by optical clocks. Units and measures on the ellipsoid. Geographical latitude and longitude are stated in the units degree, minute of arc, and second of arc. They are "angles", not metric measures, and describe the "direction" of the local normal to the reference ellipsoid of revolution. This direction is "approximately" the same as the direction of the plumbline, i.e., local gravity, which is also the normal to the geoid surface. For this reason, astronomical position determination – measuring the direction of the plumbline by astronomical means – works reasonably well when one also uses an ellipsoidal model of the figure of the Earth. One geographical mile, defined as one minute of arc on the equator, equals 1,855.32571922 m. One nautical mile is one minute of astronomical latitude. The radius of curvature of the ellipsoid varies with latitude, being the longest at the pole and the shortest at the equator, as is the nautical mile. A metre was originally defined as the 10-millionth part of the length from the equator to the North Pole along the meridian through Paris (the target was not quite reached in actual implementation, as it is off by 200 ppm in the current definitions). This situation means that one kilometre roughly equals (1/40,000) * 360 * 60 meridional minutes of arc, or 0.54 nautical miles.
(This is not exactly so as the two units had been defined on different bases, so the international nautical mile is 1,852 m exactly, which corresponds to the rounding of 1,000/0.54 m to four digits). Temporal changes. Various techniques are used in geodesy to study temporally changing surfaces, bodies of mass, physical fields, and dynamical systems. Points on Earth's surface change their location due to a variety of mechanisms: Geodynamics is the discipline that studies deformations and motions of Earth's crust and its solidity as a whole. Often the study of Earth's irregular rotation is included in the above definition. Geodynamical studies require terrestrial reference frames realized by the stations belonging to the Global Geodetic Observing System (GGOS). Techniques for studying geodynamic phenomena on global scales include: See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
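The unit relations quoted above can be checked with a few lines of arithmetic; the sketch uses the historical 40,000 km meridional circumference, not a modern measured value.

```python
# Historical definition: 10,000 km from equator to pole, i.e. a 40,000 km full meridian.
circumference_m = 40_000_000.0
minutes_of_arc = 360 * 60                      # 21,600 minutes in a full circle

metres_per_minute = circumference_m / minutes_of_arc
print(round(metres_per_minute, 2))             # ~1851.85 m, close to the 1852 m nautical mile

km_in_minutes = 1000.0 / metres_per_minute
print(round(km_in_minutes, 2))                 # 0.54: one kilometre is about 0.54 nautical miles

print(round(1000.0 / 0.54))                    # 1852, the rounding mentioned in the text
```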
[ { "math_id": 0, "text": "\\begin{align}\nx &= s \\cos \\alpha\\\\\ny &= s \\sin \\alpha\n\\end{align}" }, { "math_id": 1, "text": "\\begin{align}\ns &= \\sqrt{x^2 + y^2}\\\\\n\\alpha &= \\arctan\\frac{y}{x}.\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=12608
1260802
Cosmological decade
Division of the lifetime of the cosmos A cosmological decade (CÐ) is a division of the lifetime of the cosmos. The divisions are logarithmic in size, with base 10. Each successive cosmological decade represents a ten-fold increase in the total age of the universe. As expressed in log (seconds per Ðecade). When CÐ is measured in log( seconds/Ð ), "CÐ" 1 begins at 10 seconds and lasts 90 seconds (until 100 seconds after Time Zero). "CÐ" 100, the 100th cosmological decade, lasts from 10^100 to 10^101 seconds after Time Zero. CÐ formula_0 is Time Zero. The epoch "CÐ" −43.2683 was 10^(−43.2683) seconds, which represents the Planck time since the Big Bang (Time Zero). There were an infinite number of cosmological decades between the Big Bang and the Planck epoch (or any other point in time). The current epoch, "CÐ" 17.6389, is 10^(17.6389) seconds, or 13.799(21) billion years, since the Big Bang. There have been 60.9 cosmological decades between the Planck epoch, CÐ −43.2683, and the current epoch, CÐ 17.6389. As expressed in log (years per Ðecade). The cosmological decade can be expressed in log years per decade. In this definition, the 100th cosmological decade lasts from 10^100 years to 10^101 years after Time Zero. To convert to this format, simply divide by seconds per year; or in logarithmic terms, subtract 7.4991116 from the values listed above. Thus when CÐ is expressed in log( years/Ð ), the Planck time could also be expressed as 10^(−43.2683 − 7.4991116) years = 10^(−50.7674) years. In this definition, the current epoch is CÐ (17.6389 − 7.4991116), or CÐ 10.1398. As before, there have been 60.9 cosmological decades between the Planck epoch and the current epoch. In theories of physical cosmology, the history of the universe can be segmented into five eras: the Primordial Era, the Stelliferous Era, the Degenerate Era, the Black Hole Era, and the Dark Era.
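A small sketch of these conversions (the year length implied by the offset 7.4991116, roughly 3.16e7 s, is taken from the text; function names are illustrative):

```python
import math

SECONDS_PER_YEAR = 10 ** 7.4991116      # ~3.16e7 s, the log offset quoted in the text

def cd_from_seconds(t_seconds):
    """Cosmological decade of a time t after Time Zero, expressed in log(seconds)."""
    return math.log10(t_seconds)

def cd_seconds_to_years(cd_seconds):
    """Convert a CD expressed in log(seconds) to one expressed in log(years)."""
    return cd_seconds - 7.4991116

planck_time = 10 ** -43.2683             # seconds
print(round(cd_from_seconds(planck_time), 4))                    # -43.2683
print(round(cd_from_seconds(13.799e9 * SECONDS_PER_YEAR), 3))    # ~17.639, the current epoch
print(round(cd_seconds_to_years(17.6389), 4))                    # ~10.1398, current epoch in log(years)
```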
[ { "math_id": 0, "text": "-\\infty" } ]
https://en.wikipedia.org/wiki?curid=1260802
12610
Grand Unified Theory
Comprehensive physical model Grand Unified Theory (GUT) is any model in particle physics that merges the electromagnetic, weak, and strong forces (the three gauge interactions of the Standard Model) into a single force at high energies. Although this unified force has not been directly observed, many GUT models theorize its existence. If the unification of these three interactions is possible, it raises the possibility that there was a grand unification epoch in the very early universe in which these three fundamental interactions were not yet distinct. Experiments have confirmed that at high energy, the electromagnetic interaction and weak interaction unify into a single combined electroweak interaction. GUT models predict that at even higher energy, the strong and electroweak interactions will unify into one electronuclear interaction. This interaction is characterized by one larger gauge symmetry and thus several force carriers, but one unified coupling constant. Unifying gravity with the electronuclear interaction would provide a more comprehensive theory of everything (TOE) rather than a Grand Unified Theory. Thus, GUTs are often seen as an intermediate step towards a TOE. The novel particles predicted by GUT models are expected to have extremely high masses—around the GUT scale of formula_0 GeV (just three orders of magnitude below the Planck scale of formula_1 GeV)—and so are well beyond the reach of any foreseen particle hadron collider experiments. Therefore, the particles predicted by GUT models will be unable to be observed directly, and instead the effects of grand unification might be detected through indirect observations of the following: Some GUTs, such as the Pati–Salam model, predict the existence of magnetic monopoles. While GUTs might be expected to offer simplicity over the complications present in the Standard Model, realistic models remain complicated because they need to introduce additional fields and interactions, or even additional dimensions of space, in order to reproduce observed fermion masses and mixing angles. This difficulty, in turn, may be related to the existence of family symmetries beyond the conventional GUT models. Due to this and the lack of any observed effect of grand unification so far, there is no generally accepted GUT model. Models that do not unify the three interactions using one simple group as the gauge symmetry but do so using semisimple groups can exhibit similar properties and are sometimes referred to as Grand Unified Theories as well. &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in physics: Are the three forces of the Standard Model unified at high energies? By which symmetry is this unification governed? Can the Grand Unification Theory explain the number of fermion generations and their masses? History. Historically, the first true GUT, which was based on the simple Lie group SU(5), was proposed by Howard Georgi and Sheldon Glashow in 1974. The Georgi–Glashow model was preceded by the semisimple Lie algebra Pati–Salam model by Abdus Salam and Jogesh Pati also in 1974, who pioneered the idea to unify gauge interactions. The acronym GUT was first coined in 1978 by CERN researchers John Ellis, Andrzej Buras, Mary K. Gaillard, and Dimitri Nanopoulos, however in the final version of their paper they opted for the less anatomical GUM (Grand Unification Mass). Nanopoulos later that year was the first to use the acronym in a paper. Motivation. 
The fact that the electric charges of electrons and protons seem to cancel each other exactly to extreme precision is essential for the existence of the macroscopic world as we know it, but this important property of elementary particles is not explained in the Standard Model of particle physics. While the description of strong and weak interactions within the Standard Model is based on gauge symmetries governed by the simple symmetry groups SU(3) and SU(2) which allow only discrete charges, the remaining component, the weak hypercharge interaction is described by an abelian symmetry U(1) which in principle allows for arbitrary charge assignments. The observed charge quantization, namely the postulation that all known elementary particles carry electric charges which are exact multiples of one-third of the "elementary" charge, has led to the idea that hypercharge interactions and possibly the strong and weak interactions might be embedded in one Grand Unified interaction described by a single, larger simple symmetry group containing the Standard Model. This would automatically predict the quantized nature and values of all elementary particle charges. Since this also results in a prediction for the relative strengths of the fundamental interactions which we observe, in particular, the weak mixing angle, grand unification ideally reduces the number of independent input parameters but is also constrained by observations. Grand unification is reminiscent of the unification of electric and magnetic forces by Maxwell's field theory of electromagnetism in the 19th century, but its physical implications and mathematical structure are qualitatively different. Unification of matter particles. SU(5). SU(5) is the simplest GUT. The smallest simple Lie group which contains the standard model, and upon which the first Grand Unified Theory was based, is formula_2. Such group symmetries allow the reinterpretation of several known particles, including the photon, W and Z bosons, and gluon, as different states of a single particle field. However, it is not obvious that the simplest possible choices for the extended "Grand Unified" symmetry should yield the correct inventory of elementary particles. The fact that all currently known matter particles fit perfectly into three copies of the smallest group representations of SU(5) and immediately carry the correct observed charges, is one of the first and most important reasons why people believe that a Grand Unified Theory might actually be realized in nature. The two smallest irreducible representations of SU(5) are 5 (the defining representation) and 10. (These bold numbers indicate the dimension of the representation.) In the standard assignment, the 5 contains the charge conjugates of the right-handed down-type quark color triplet and a left-handed lepton isospin doublet, while the 10 contains the six up-type quark components, the left-handed down-type quark color triplet, and the right-handed electron. This scheme has to be replicated for each of the three known generations of matter. It is notable that the theory is anomaly free with this matter content. The hypothetical right-handed neutrinos are a singlet of SU(5), which means its mass is not forbidden by any symmetry; it doesn't need a spontaneous electroweak symmetry breaking which explains why its mass would be heavy (see seesaw mechanism). SO(10). The next simple Lie group which contains the standard model is formula_3. 
Here, the unification of matter is even more complete, since the irreducible spinor representation 16 contains both the 5 and 10 of SU(5) and a right-handed neutrino, and thus the complete particle content of one generation of the extended standard model with neutrino masses. This is already the largest simple group that achieves the unification of matter in a scheme involving only the already known matter particles (apart from the Higgs sector). Since different standard model fermions are grouped together in larger representations, GUTs specifically predict relations among the fermion masses, such as between the electron and the down quark, the muon and the strange quark, and the tau lepton and the bottom quark for SU(5) and SO(10). Some of these mass relations hold approximately, but most don't (see Georgi-Jarlskog mass relation). The boson matrix for SO(10) is found by taking the 15 × 15 matrix from the 10 + 5 representation of SU(5) and adding an extra row and column for the right-handed neutrino. The bosons are found by adding a partner to each of the 20 charged bosons (2 right-handed W bosons, 6 massive charged gluons and 12 X/Y type bosons) and adding an extra heavy neutral Z-boson to make 5 neutral bosons in total. The boson matrix will have a boson or its new partner in each row and column. These pairs combine to create the familiar 16D Dirac spinor matrices of SO(10). E6. In some forms of string theory, including E8 × E8 heterotic string theory, the resultant four-dimensional theory after spontaneous compactification on a six-dimensional Calabi–Yau manifold resembles a GUT based on the group E6. Notably E6 is the only exceptional simple Lie group to have any complex representations, a requirement for a theory to contain chiral fermions (namely all weakly-interacting fermions). Hence the other four (G2, F4, E7, and E8) can't be the gauge group of a GUT. Extended Grand Unified Theories. Non-chiral extensions of the Standard Model with vectorlike split-multiplet particle spectra which naturally appear in the higher SU(N) GUTs considerably modify the desert physics and lead to the realistic (string-scale) grand unification for conventional three quark-lepton families even without using supersymmetry (see below). On the other hand, due to a new missing VEV mechanism emerging in the supersymmetric SU(8) GUT the simultaneous solution to the gauge hierarchy (doublet-triplet splitting) problem and problem of unification of flavor can be argued. GUTs with four families / generations, SU(8): Assuming 4 generations of fermions instead of 3 makes a total of 64 types of particles. These can be put into 64 = 8 + 56 representations of SU(8). This can be divided into SU(5) × SU(3)F × U(1) which is the SU(5) theory together with some heavy bosons which act on the generation number. GUTs with four families / generations, O(16): Again assuming 4 generations of fermions, the 128 particles and anti-particles can be put into a single spinor representation of O(16). Symplectic groups and quaternion representations. Symplectic gauge groups could also be considered. For example, Sp(8) (which is called Sp(4) in the article symplectic group) has a representation in terms of 4 × 4 quaternion unitary matrices which has a 16 dimensional real representation and so might be considered as a candidate for a gauge group. Sp(8) has 32 charged bosons and 4 neutral bosons. Its subgroups include SU(4) so it can at least contain the gluons and photon of SU(3) × U(1).
Although it's probably not possible to have weak bosons acting on chiral fermions in this representation. A quaternion representation of the fermions might be: formula_4 A further complication with quaternion representations of fermions is that there are two types of multiplication: left multiplication and right multiplication which must be taken into account. It turns out that including left and right-handed 4 × 4 quaternion matrices is equivalent to including a single right-multiplication by a unit quaternion which adds an extra SU(2) and so has an extra neutral boson and two more charged bosons. Thus the group of left- and right-handed 4 × 4 quaternion matrices is Sp(8) × SU(2) which does include the standard model bosons: formula_5 If formula_6 is a quaternion valued spinor, formula_7 is quaternion hermitian 4 × 4 matrix coming from Sp(8) and formula_8 is a pure vector quaternion (both of which are 4-vector bosons) then the interaction term is: formula_9 Octonion representations. It can be noted that a generation of 16 fermions can be put into the form of an octonion with each element of the octonion being an 8-vector. If the 3 generations are then put in a 3x3 hermitian matrix with certain additions for the diagonal elements then these matrices form an exceptional (Grassmann) Jordan algebra, which has the symmetry group of one of the exceptional Lie groups (F4, E6, E7, or E8) depending on the details. formula_10 formula_11 Because they are fermions the anti-commutators of the Jordan algebra become commutators. It is known that E6 has subgroup O(10) and so is big enough to include the Standard Model. An E8 gauge group, for example, would have 8 neutral bosons, 120 charged bosons and 120 charged anti-bosons. To account for the 248 fermions in the lowest multiplet of E8, these would either have to include anti-particles (and so have baryogenesis), have new undiscovered particles, or have gravity-like (spin connection) bosons affecting elements of the particles spin direction. Each of these possesses theoretical problems. Beyond Lie groups. Other structures have been suggested including Lie 3-algebras and Lie superalgebras. Neither of these fit with Yang–Mills theory. In particular Lie superalgebras would introduce bosons with incorrect statistics. Supersymmetry, however, does fit with Yang–Mills. Unification of forces and the role of supersymmetry. The unification of forces is possible due to the energy scale dependence of force coupling parameters in quantum field theory called renormalization group "running", which allows parameters with vastly different values at usual energies to converge to a single value at a much higher energy scale. The renormalization group running of the three gauge couplings in the Standard Model has been found to nearly, but not quite, meet at the same point if the hypercharge is normalized so that it is consistent with SU(5) or SO(10) GUTs, which are precisely the GUT groups which lead to a simple fermion unification. This is a significant result, as other Lie groups lead to different normalizations. However, if the supersymmetric extension MSSM is used instead of the Standard Model, the match becomes much more accurate. In this case, the coupling constants of the strong and electroweak interactions meet at the grand unification energy, also known as the GUT scale: formula_12. 
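To make the running concrete, here is a hedged one-loop sketch (standard textbook beta-function coefficients and rough inputs at the Z mass; superpartners are assumed to enter right at M_Z, which is a simplification). It shows the three inverse couplings drawing together near 10^16 GeV, much more closely in the MSSM than in the Standard Model.

```python
import math

M_Z = 91.19  # GeV
# Inverse couplings at M_Z in SU(5) normalization (rough assumed inputs):
# alpha1^-1 = (3/5) cos^2(theta_W)/alpha_em, alpha2^-1 = sin^2(theta_W)/alpha_em, alpha3^-1 = 1/alpha_s
inv_alpha_mz = [59.0, 29.6, 8.45]

# One-loop coefficients b_i in d(alpha_i^-1)/d(ln mu) = -b_i / (2*pi)
b_sm   = [41.0 / 10.0, -19.0 / 6.0, -7.0]   # Standard Model
b_mssm = [33.0 / 5.0, 1.0, -3.0]            # MSSM, superpartners assumed at ~M_Z

def run(inv_alpha, b, mu):
    t = math.log(mu / M_Z)
    return [a - bi * t / (2.0 * math.pi) for a, bi in zip(inv_alpha, b)]

for mu in (1e13, 1e15, 2e16):
    sm = run(inv_alpha_mz, b_sm, mu)
    susy = run(inv_alpha_mz, b_mssm, mu)
    print(f"mu = {mu:.0e} GeV  SM: " + " ".join(f"{x:5.1f}" for x in sm)
          + "   MSSM: " + " ".join(f"{x:5.1f}" for x in susy))
# In the MSSM run the three inverse couplings nearly coincide around 2e16 GeV.
```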
It is commonly believed that this matching is unlikely to be a coincidence, and is often quoted as one of the main motivations to further investigate supersymmetric theories despite the fact that no supersymmetric partner particles have been experimentally observed. Also, most model builders simply assume supersymmetry because it solves the hierarchy problem—i.e., it stabilizes the electroweak Higgs mass against radiative corrections. Neutrino masses. Since Majorana masses of the right-handed neutrino are forbidden by SO(10) symmetry, SO(10) GUTs predict the Majorana masses of right-handed neutrinos to be close to the GUT scale where the symmetry is spontaneously broken in those models. In supersymmetric GUTs, this scale tends to be larger than would be desirable to obtain realistic masses of the light, mostly left-handed neutrinos (see neutrino oscillation) via the seesaw mechanism. These predictions are independent of the Georgi–Jarlskog mass relations, wherein some GUTs predict other fermion mass ratios. Proposed theories. Several theories have been proposed, but none is currently universally accepted. An even more ambitious theory that includes "all" fundamental forces, including gravitation, is termed a theory of everything. Some common mainstream GUT models are: Not quite GUTs: &lt;templatestyles src="Div col/styles.css"/&gt; "Note": These models refer to Lie algebras not to Lie groups. The Lie group could be formula_13 just to take a random example. The most promising candidate is SO(10). (Minimal) SO(10) does not contain any exotic fermions (i.e. additional fermions besides the Standard Model fermions and the right-handed neutrino), and it unifies each generation into a single irreducible representation. A number of other GUT models are based upon subgroups of SO(10). They are the minimal left-right model, SU(5), flipped SU(5) and the Pati–Salam model. The GUT group E6 contains SO(10), but models based upon it are significantly more complicated. The primary reason for studying E6 models comes from E8 × E8 heterotic string theory. GUT models generically predict the existence of topological defects such as monopoles, cosmic strings, domain walls, and others. But none have been observed. Their absence is known as the monopole problem in cosmology. Many GUT models also predict proton decay, although not the Pati–Salam model. As of now, proton decay has never been experimentally observed. The minimal experimental limit on the proton's lifetime pretty much rules out minimal SU(5) and heavily constrains the other models. The lack of detected supersymmetry to date also constrains many models. Some GUT theories like SU(5) and SO(10) suffer from what is called the doublet-triplet problem. These theories predict that for each electroweak Higgs doublet, there is a corresponding colored Higgs triplet field with a very small mass (many orders of magnitude smaller than the GUT scale here). In theory, unifying quarks with leptons, the Higgs doublet would also be unified with a Higgs triplet. Such triplets have not been observed. They would also cause extremely rapid proton decay (far below current experimental limits) and prevent the gauge coupling strengths from running together in the renormalization group. Most GUT models require a threefold replication of the matter fields. As such, they do not explain why there are three generations of fermions. Most GUT models also fail to explain the little hierarchy between the fermion masses for different generations. Ingredients. 
A GUT model consists of a gauge group which is a compact Lie group, a connection form for that Lie group, a Yang–Mills action for that connection given by an invariant symmetric bilinear form over its Lie algebra (which is specified by a coupling constant for each factor), a Higgs sector consisting of a number of scalar fields taking on values within real/complex representations of the Lie group and chiral Weyl fermions taking on values within a complex rep of the Lie group. The Lie group contains the Standard Model group and the Higgs fields acquire VEVs leading to a spontaneous symmetry breaking to the Standard Model. The Weyl fermions represent matter. Current evidence. The discovery of neutrino oscillations indicates that the Standard Model is incomplete, but there is currently no clear evidence that nature is described by any Grand Unified Theory. Neutrino oscillations have led to renewed interest in certain GUTs such as SO(10). Among the few possible experimental tests of certain GUTs are proton decay and fermion masses. There are a few more special tests for supersymmetric GUTs. However, minimum proton lifetimes from research (at or exceeding the 10^34~10^35 year range) have ruled out simpler GUTs and most non-SUSY models. The maximum upper limit on proton lifetime (if unstable) is calculated at 6×10^39 years for SUSY models and 1.4×10^36 years for minimal non-SUSY GUTs. The gauge coupling strengths of QCD, the weak interaction and hypercharge seem to meet at a common energy scale called the GUT scale, equal approximately to 10^16 GeV (slightly less than the Planck energy of 10^19 GeV), which is somewhat suggestive. This interesting numerical observation is called the "gauge coupling unification", and it works particularly well if one assumes the existence of superpartners of the Standard Model particles. Still, it is possible to achieve the same by postulating, for instance, that ordinary (non-supersymmetric) SO(10) models break with an intermediate gauge scale, such as that of the Pati–Salam group. Ultra unification. In 2020, physicist Juven Wang introduced a concept known as "ultra unification". It combines the Standard Model and grand unification, particularly for the models with 15 Weyl fermions per generation, without the necessity of right-handed sterile neutrinos, by adding new gapped topological phase sectors or new gapless interacting conformal sectors consistent with the nonperturbative global anomaly cancellation and cobordism constraints (especially from the mixed gauge-gravitational anomaly, such as a Z/16Z class anomaly, associated with the baryon minus lepton number B−L and the electroweak hypercharge Y). Gapped topological phase sectors are constructed via the symmetry extension (in contrast to the symmetry breaking in the Standard Model's Anderson-Higgs mechanism), whose low energy contains unitary Lorentz invariant topological quantum field theories (TQFTs), such as 4-dimensional noninvertible, 5-dimensional noninvertible, or 5-dimensional invertible entangled gapped phase TQFTs. Alternatively, Wang's theory suggests there could also be right-handed sterile neutrinos, gapless unparticle physics, or some combination of more general interacting conformal field theories (CFTs), to together cancel the mixed gauge-gravitational anomaly. This proposal can also be understood as coupling the Standard Model (as quantum field theory) to the Beyond the Standard Model sector (as TQFTs or CFTs being dark matter) via the discrete gauged B−L topological force.
In either TQFT or CFT scenarios, the implication is that a new high-energy physics frontier beyond the conventional 0-dimensional particle physics relies on new types of topological forces and matter. This includes gapped extended objects such as 1-dimensional line and 2-dimensional surface operators or conformal defects, whose open ends carry deconfined fractionalized particle or anyonic string excitations. Understanding and characterizing these gapped extended objects requires introducing mathematical concepts such as cohomology, cobordism, or category theory into particle physics. The topological phase sectors proposed by Wang signify a departure from the conventional particle physics paradigm, indicating a frontier in beyond-the-Standard-Model physics. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
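As a rough closing illustration of the proton-lifetime scales quoted earlier in the article, the classic dimensional estimate for decay mediated by a GUT-scale gauge boson is tau ~ hbar * M_X^4 / (alpha_GUT^2 * m_p^5); the inputs below are illustrative round numbers, not a prediction of any particular model.

```python
# Order-of-magnitude proton lifetime from X-boson exchange (dimensional estimate only).
HBAR_GEV_S = 6.582e-25        # hbar in GeV*s
SECONDS_PER_YEAR = 3.156e7

M_X = 1e16                    # GeV, assumed GUT-scale boson mass
alpha_gut = 1.0 / 40.0        # assumed unified coupling at the GUT scale
m_p = 0.938                   # GeV, proton mass

tau_seconds = HBAR_GEV_S * M_X**4 / (alpha_gut**2 * m_p**5)
print(f"{tau_seconds / SECONDS_PER_YEAR:.1e} years")   # ~5e35 years, order of magnitude only
```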
[ { "math_id": 0, "text": " 10^{16} " }, { "math_id": 1, "text": " 10^{19} " }, { "math_id": 2, "text": " \\rm SU(5) \\supset SU(3)\\times SU(2)\\times U(1)" }, { "math_id": 3, "text": "\\rm SO(10)\\supset SU(5)\\supset SU(3)\\times SU(2)\\times U(1)" }, { "math_id": 4, "text": "\n\\begin{bmatrix}\ne + i\\ \\overline{e} + j\\ v + k\\ \\overline{v} \\\\\nu_r + i\\ \\overline{u}_\\mathrm{\\overline r} + j\\ d_\\mathrm{r} + k\\ \\overline{d}_\\mathrm{\\overline r} \\\\\nu_g + i\\ \\overline{u}_\\mathrm{\\overline g} + j\\ d_\\mathrm{g} + k\\ \\overline{d}_\\mathrm{\\overline g} \\\\\nu_b + i\\ \\overline{u}_\\mathrm{\\overline b} + j\\ d_\\mathrm{b} + k\\ \\overline{d}_\\mathrm{\\overline b} \\\\\n\\end{bmatrix}_\\mathrm{L}\n" }, { "math_id": 5, "text": " \\mathrm{ SU(4,\\mathbb{H})_L\\times \\mathbb{H}_R = Sp(8)\\times SU(2) \\supset SU(4)\\times SU(2) \\supset SU(3)\\times SU(2)\\times U(1) }" }, { "math_id": 6, "text": "\\psi" }, { "math_id": 7, "text": "\\ A^{ab}_\\mu\\ " }, { "math_id": 8, "text": "\\ B_\\mu\\ " }, { "math_id": 9, "text": "\\ \\overline{\\psi^{a}} \\gamma_\\mu\\left( A^{ab}_\\mu\\psi^b + \\psi^a B_\\mu \\right)\\ " }, { "math_id": 10, "text": "\n\\psi=\n\\begin{bmatrix}\na & e & \\mu \\\\\n\\overline{e} & b & \\tau \\\\\n\\overline{\\mu} & \\overline{\\tau} & c\n\\end{bmatrix}\n" }, { "math_id": 11, "text": "\\ [\\psi_A,\\psi_B] \\subset \\mathrm{J}_3(\\mathbb{O})\\ " }, { "math_id": 12, "text": "\\Lambda_{\\text{GUT}} \\approx 10^{16}\\,\\text{GeV}" }, { "math_id": 13, "text": "[\\text{SU}(4) \\times \\text{SU}(2) \\times \\text{SU}(2)] / \\mathbb{Z}_2," } ]
https://en.wikipedia.org/wiki?curid=12610
1261170
State observer
System in control theory In control theory, a state observer or state estimator is a system that provides an estimate of the internal state of a given real system, from measurements of the input and output of the real system. It is typically computer-implemented, and provides the basis of many practical applications. Knowing the system state is necessary to solve many control theory problems; for example, stabilizing a system using state feedback. In most practical cases, the physical state of the system cannot be determined by direct observation. Instead, indirect effects of the internal state are observed by way of the system outputs. A simple example is that of vehicles in a tunnel: the rates and velocities at which vehicles enter and leave the tunnel can be observed directly, but the exact state inside the tunnel can only be estimated. If a system is observable, it is possible to fully reconstruct the system state from its output measurements using the state observer. Typical observer model. Linear, delayed, sliding mode, high gain, Tau, homogeneity-based, extended and cubic observers are among several observer structures used for state estimation of linear and nonlinear systems. A linear observer structure is described in the following sections. Discrete-time case. The state of a linear, time-invariant discrete-time system is assumed to satisfy formula_0 formula_1 where, at time formula_2, formula_3 is the plant's state; formula_4 is its inputs; and formula_5 is its outputs. These equations simply say that the plant's current outputs and its future state are both determined solely by its current states and the current inputs. (Although these equations are expressed in terms of discrete time steps, very similar equations hold for continuous systems). If this system is observable then the output of the plant, formula_5, can be used to steer the state of the state observer. The observer model of the physical system is then typically derived from the above equations. Additional terms may be included in order to ensure that, on receiving successive measured values of the plant's inputs and outputs, the model's state converges to that of the plant. In particular, the output of the observer may be subtracted from the output of the plant and then multiplied by a matrix formula_6; this is then added to the equations for the state of the observer to produce a so-called "Luenberger observer", defined by the equations below. Note that the variables of a state observer are commonly denoted by a "hat": formula_7 and formula_8 to distinguish them from the variables of the equations satisfied by the physical system. formula_9 formula_10 The observer is called asymptotically stable if the observer error formula_11 converges to zero when formula_12. For a Luenberger observer, the observer error satisfies formula_13. The Luenberger observer for this discrete-time system is therefore asymptotically stable when the matrix formula_14 has all the eigenvalues inside the unit circle. For control purposes the output of the observer system is fed back to the input of both the observer and the plant through the gains matrix formula_15. formula_16 The observer equations then become: formula_17 formula_18 or, more simply, formula_19 formula_20 Due to the separation principle we know that we can choose formula_15 and formula_6 independently without harm to the overall stability of the systems. 
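A minimal numerical sketch of the discrete-time Luenberger observer described above (the sampled double-integrator plant and the gain are illustrative choices, not from the source; for this particular A and C, the chosen L places both eigenvalues of A − LC at zero, giving a deadbeat observer whose error vanishes within two steps):

```python
import numpy as np

# Plant: sampled double integrator (position measured, velocity not), dt = 0.1 s
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])
C = np.array([[1.0, 0.0]])

# Observer gain chosen so that A - L C has both eigenvalues at 0 (illustrative, deadbeat)
L = np.array([[2.0],
              [10.0]])
print(np.linalg.eigvals(A - L @ C))   # ~[0, 0]

x = np.array([[1.0], [-0.5]])         # true plant state (unknown to the observer)
xhat = np.zeros((2, 1))               # observer starts from zero

for k in range(5):
    u = np.array([[0.1]])             # arbitrary known input
    y = C @ x                         # measured plant output
    # Luenberger update: model prediction corrected by the output error
    xhat = A @ xhat + B @ u + L @ (y - C @ xhat)
    x = A @ x + B @ u
    print(k, np.round((xhat - x).ravel(), 6))   # estimation error reaches ~0 within two steps
```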
As a rule of thumb, the poles of the observer formula_21 are usually chosen to converge 10 times faster than the poles of the system formula_22. Continuous-time case. The previous example was for an observer implemented in a discrete-time LTI system. However, the process is similar for the continuous-time case; the observer gains formula_6 are chosen to make the continuous-time error dynamics converge to zero asymptotically (i.e., when formula_21 is a Hurwitz matrix). For a continuous-time linear system formula_23 formula_24 where formula_25, the observer looks similar to the discrete-time case described above: formula_26. formula_27 The observer error formula_28 satisfies the equation formula_29. The eigenvalues of the matrix formula_21 can be chosen arbitrarily by appropriate choice of the observer gain formula_6 when the pair formula_30 is observable, i.e., the observability condition holds. In particular, it can be made Hurwitz, so the observer error formula_31 when formula_32. Peaking and other observer methods. When the observer gain formula_6 is high, the linear Luenberger observer converges to the system states very quickly. However, high observer gain leads to a peaking phenomenon in which the initial estimator error can be prohibitively large (i.e., impractical or unsafe to use). As a consequence, nonlinear high-gain observer methods are available that converge quickly without the peaking phenomenon. For example, sliding mode control can be used to design an observer that brings one estimated state's error to zero in finite time even in the presence of measurement error; the other states have error that behaves similarly to the error in a Luenberger observer after peaking has subsided. Sliding mode observers also have attractive noise resilience properties that are similar to a Kalman filter. Another approach is to apply a multi-observer, which significantly improves transients and reduces observer overshoot. The multi-observer can be adapted to every system where a high-gain observer is applicable. State observers for nonlinear systems. High gain, sliding mode and extended observers are the most common observers for nonlinear systems. To illustrate the application of sliding mode observers for nonlinear systems, first consider the no-input non-linear system: formula_33 where formula_34. Also assume that there is a measurable output formula_35 given by formula_36 There are several non-approximate approaches for designing an observer. The two observers given below also apply to the case when the system has an input. That is, formula_37 formula_36 Linearizable error dynamics. One suggestion by Krener and Isidori and Krener and Respondek can be applied in a situation when there exists a linearizing transformation (i.e., a diffeomorphism, like the one used in feedback linearization) formula_38 such that in new variables the system equations read formula_39 formula_40 The Luenberger observer is then designed as formula_41. The observer error for the transformed variable formula_42 satisfies the same equation as in the classical linear case. formula_29. As shown by Gauthier, Hammouri, and Othman and Hammouri and Kinnaert, if there exists a transformation formula_38 such that the system can be transformed into the form formula_43 formula_44 then the observer is designed as formula_45, where formula_46 is a time-varying observer gain.
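A hedged sketch of the continuous-time case: for an illustrative observable pair (A, C) and gain L (assumed values, not from the source), the eigenvalues of A − LC confirm that it is Hurwitz, and a crude Euler integration of the error dynamics de/dt = (A − LC)e shows the estimation error decaying to zero.

```python
import numpy as np

# Illustrative observable pair (A, C) and an observer gain L (assumed values)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])
L = np.array([[7.0],
              [10.0]])

F = A - L @ C
print(np.linalg.eigvals(F))       # all real parts negative, so F is Hurwitz and the error -> 0

# Euler integration of the error dynamics de/dt = (A - L C) e
e = np.array([[1.0], [-1.0]])     # initial estimation error
dt = 0.001
for step in range(5000):          # simulate 5 seconds
    e = e + dt * (F @ e)
print(np.round(e.ravel(), 6))     # essentially zero after 5 s
```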
Ciccarella, Dalla Mora, and Germani obtained more advanced and general results, removing the need for a nonlinear transform and proving global asymptotic convergence of the estimated state to the true state using only simple assumptions on regularity. Switched observers. As discussed for the linear case above, the peaking phenomenon present in Luenberger observers justifies the use of switched observers. A switched observer encompasses a relay or binary switch that acts upon detecting minute changes in the measured output. Some common types of switched observers include the sliding mode observer, nonlinear extended state observer, fixed time observer, switched high gain observer and uniting observer. The sliding mode observer uses non-linear high-gain feedback to drive estimated states to a hypersurface where there is no difference between the estimated output and the measured output. The non-linear gain used in the observer is typically implemented with a scaled switching function, like the signum (i.e., sgn) of the estimated – measured output error. Hence, due to this high-gain feedback, the vector field of the observer has a crease in it so that observer trajectories "slide along" a curve where the estimated output matches the measured output exactly. So, if the system is observable from its output, the observer states will all be driven to the actual system states. Additionally, by using the sign of the error to drive the sliding mode observer, the observer trajectories become insensitive to many forms of noise. Hence, some sliding mode observers have attractive properties similar to the Kalman filter but with simpler implementation. As suggested by Drakunov, a sliding mode observer can also be designed for a class of non-linear systems. Such an observer can be written in terms of the original variable estimate formula_47 and has the form formula_48 where: The idea can be briefly explained as follows. According to the theory of sliding modes, in order to describe the system behavior, once sliding mode starts, the function formula_70 should be replaced by equivalent values (see "equivalent control" in the theory of sliding modes). In practice, it switches (chatters) with high frequency, with the slow component being equal to the equivalent value. Applying an appropriate lowpass filter to get rid of the high-frequency component, one can obtain the value of the equivalent control, which contains more information about the state of the estimated system. The observer described above uses this method several times to obtain the state of the nonlinear system ideally in finite time. The modified observation error can be written in the transformed states formula_71. In particular, formula_72 and so formula_73 So: So, for sufficiently large formula_90 gains, all observer estimated states reach the actual states in finite time. In fact, increasing formula_90 allows for convergence in any desired finite time so long as each formula_91 function can be bounded with certainty. Hence, the requirement that the map formula_92 is a diffeomorphism (i.e., that its Jacobian linearization is invertible) asserts that convergence of the estimated output implies convergence of the estimated state. That is, the requirement is an observability condition. In the case of the sliding mode observer for the system with the input, additional conditions are needed for the observation error to be independent of the input. For example, that formula_93 does not depend on time. The observer is then formula_94 Multi-observer.
Multi-observer extends the high-gain observer structure from a single observer to multiple observers, with many models working simultaneously. This has two layers: the first consists of multiple high-gain observers with different estimation states, and the second determines the importance weights of the first-layer observers. The algorithm is simple to implement and does not contain any risky operations like differentiation. The idea of multiple models was previously applied to obtain information in adaptive control. Assuming that the number of high-gain observers equals formula_95, formula_96 formula_97 where formula_98 is the observer index. The first-layer observers share the same gain formula_99 but they differ in the initial state formula_100. In the second layer all formula_101 from formula_102 observers are combined into one to obtain a single state-vector estimate formula_103 where formula_104 are weight factors. These factors are changed to provide the estimation in the second layer and to improve the observation process. Assume that formula_105 and formula_106 where formula_107 is some vector that depends on the formula_108 observer error formula_109. Some transformation yields the linear regression problem formula_110 This formula makes it possible to estimate formula_111. To construct the manifold, we need a mapping formula_112 between formula_113 and a guarantee that formula_114 is calculable from measurable signals. The first step is to eliminate the peaking phenomenon for formula_115 from the observer error formula_116. Calculating the formula_117-fold derivative of formula_118 to find the mapping leads to formula_119, defined as formula_120 where formula_121 is some time constant. Note that formula_122 relies on both formula_123 and its integrals; hence it is easily available in the control system. Further, formula_115 is specified by the estimation law, and thus the manifold is proven measurable. In the second layer, formula_124 for formula_125 are introduced as estimates of the coefficients formula_126. The mapping error is specified as formula_127 where formula_128. If the coefficients formula_129 are equal to formula_126, then the mapping error formula_130 Now it is possible to calculate formula_131 from the above equation, and hence the peaking phenomenon is reduced thanks to the properties of the manifold. The created mapping gives a lot of flexibility in the estimation process. It is even possible to estimate the value of formula_132 in the second layer and to calculate the state formula_133. Bounding observers. Bounding or interval observers constitute a class of observers that provide two estimations of the state simultaneously: one of the estimations provides an upper bound on the real value of the state, whereas the second one provides a lower bound. The real value of the state is then known to be always within these two estimations. These bounds are very important in practical applications, as they make it possible to know at each time the precision of the estimation. Mathematically, two Luenberger observers can be used, if formula_99 is properly selected, using, for example, positive systems properties: one for the upper bound formula_134 (that ensures that formula_135 converges to zero from above when formula_12, in the absence of noise and uncertainty), and a lower bound formula_136 (that ensures that formula_137 converges to zero from below). That is, always formula_138 References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
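A minimal sketch of the bounding idea for a scalar discrete-time system (all values assumed; this is a measurement-free predictor variant, whereas the interval observers described above additionally inject the output error through a suitably chosen formula_99): because the state coefficient is nonnegative, letting one copy absorb the worst-case disturbance upward and the other downward keeps the true state enclosed.

```python
import random

# Scalar plant x(k+1) = a*x(k) + w(k) with 0 < a < 1 and |w(k)| <= w_max (w unknown)
a, w_max = 0.9, 0.1
x = 0.5
x_up, x_lo = 2.0, -2.0        # initial bounds chosen so that x_lo <= x <= x_up

for k in range(30):
    w = random.uniform(-w_max, w_max)
    x = a * x + w
    # a >= 0 preserves the ordering, and each bound absorbs the worst-case disturbance
    x_up = a * x_up + w_max
    x_lo = a * x_lo - w_max
    assert x_lo <= x <= x_up

print(round(x_lo, 3), round(x, 3), round(x_up, 3))   # bounds settle toward +/- w_max/(1 - a)
```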
[ { "math_id": 0, "text": "x(k+1) = A x(k) + B u(k)" }, { "math_id": 1, "text": "y(k) = C x(k) + D u(k)" }, { "math_id": 2, "text": "k" }, { "math_id": 3, "text": "x(k)" }, { "math_id": 4, "text": "u(k)" }, { "math_id": 5, "text": "y(k)" }, { "math_id": 6, "text": "L" }, { "math_id": 7, "text": "\\hat{x}(k)" }, { "math_id": 8, "text": "\\hat{y}(k)" }, { "math_id": 9, "text": "\\hat{x}(k+1) = A \\hat{x}(k) + L \\left[y(k) - \\hat{y}(k)\\right] + B u(k)" }, { "math_id": 10, "text": "\\hat{y}(k) = C \\hat{x}(k) + D u(k)" }, { "math_id": 11, "text": "e(k) = \\hat{x}(k) - x(k)" }, { "math_id": 12, "text": " k \\to \\infty " }, { "math_id": 13, "text": " e(k+1) = (A - LC) e(k)" }, { "math_id": 14, "text": " A - LC " }, { "math_id": 15, "text": "K" }, { "math_id": 16, "text": "u(k)= -K \\hat{x}(k)" }, { "math_id": 17, "text": "\\hat{x}(k+1) = A \\hat{x}(k) + L \\left(y(k) - \\hat{y}(k)\\right) - B K \\hat{x}(k)" }, { "math_id": 18, "text": "\\hat{y}(k) = C \\hat{x}(k) - D K \\hat{x}(k)" }, { "math_id": 19, "text": "\\hat{x}(k+1) = \\left(A - B K \\right) \\hat{x}(k) + L \\left(y(k) - \\hat{y}(k)\\right)" }, { "math_id": 20, "text": "\\hat{y}(k) = \\left(C - D K\\right) \\hat{x}(k)" }, { "math_id": 21, "text": "A-LC" }, { "math_id": 22, "text": "A-BK" }, { "math_id": 23, "text": "\\dot{x} = A x + B u, " }, { "math_id": 24, "text": "y = C x + D u, " }, { "math_id": 25, "text": "x \\in \\mathbb{R}^n, u \\in \\mathbb{R}^m ,y \\in \\mathbb{R}^r" }, { "math_id": 26, "text": "\\dot{\\hat{x}} = A \\hat{x}+ B u + L \\left(y - \\hat{y}\\right) " }, { "math_id": 27, "text": "\\hat{y} = C \\hat{x} + D u, " }, { "math_id": 28, "text": "e=x-\\hat{x}" }, { "math_id": 29, "text": " \\dot{e} = (A - LC) e" }, { "math_id": 30, "text": "[A,C]" }, { "math_id": 31, "text": "e(t) \\to 0" }, { "math_id": 32, "text": "t \\to \\infty" }, { "math_id": 33, "text": "\\dot{x} = f(x)" }, { "math_id": 34, "text": "x \\in \\mathbb{R}^n" }, { "math_id": 35, "text": "y \\in \\mathbb{R}" }, { "math_id": 36, "text": "y = h(x)." }, { "math_id": 37, "text": "\\dot{x} = f(x) + B(x) u " }, { "math_id": 38, "text": "z=\\Phi(x)" }, { "math_id": 39, "text": "\\dot{z} = A z+ \\phi(y), " }, { "math_id": 40, "text": "y = Cz. 
" }, { "math_id": 41, "text": "\\dot{\\hat{z}} = A \\hat{z}+ \\phi(y) - L \\left(C \\hat{z}-y \\right) " }, { "math_id": 42, "text": "e=\\hat{z}-z" }, { "math_id": 43, "text": "\\dot{z} = A(u(t)) z+ \\phi(y,u(t) ), " }, { "math_id": 44, "text": "y = Cz, " }, { "math_id": 45, "text": "\\dot{\\hat{z}} = A(u(t)) \\hat{z}+ \\phi(y,u(t) ) - L(t) \\left(C \\hat{z}-y \\right) " }, { "math_id": 46, "text": "L(t)" }, { "math_id": 47, "text": "\\hat{x}" }, { "math_id": 48, "text": " \\dot{\\hat{x}} =\n\\left [ \\frac{\\partial H(\\hat{x})}{\\partial x}\\right]^{-1} M(\\hat{x})\n\\sgn( V(t) - H(\\hat{x}) )" }, { "math_id": 49, "text": "\\sgn(\\mathord{\\cdot})" }, { "math_id": 50, "text": "n" }, { "math_id": 51, "text": "\\sgn(z) = \\begin{bmatrix}\n\\sgn(z_1)\\\\\n\\sgn(z_2)\\\\\n\\vdots\\\\\n\\sgn(z_i)\\\\\n\\vdots\\\\\n\\sgn(z_n)\n\\end{bmatrix}" }, { "math_id": 52, "text": "z \\in \\mathbb{R}^n" }, { "math_id": 53, "text": "H(x)" }, { "math_id": 54, "text": "h(x)" }, { "math_id": 55, "text": "H(x) \\triangleq\n\\begin{bmatrix}\nh_1(x)\\\\\nh_2(x)\\\\\nh_3(x)\\\\\n\\vdots\\\\\nh_n(x)\n\\end{bmatrix}\n\\triangleq\n\\begin{bmatrix}\nh(x)\\\\\nL_{f}h(x)\\\\\nL_{f}^2 h(x)\\\\\n\\vdots\\\\\nL_{f}^{n-1}h(x)\n\\end{bmatrix}" }, { "math_id": 56, "text": "L^i_f h" }, { "math_id": 57, "text": "h" }, { "math_id": 58, "text": "f" }, { "math_id": 59, "text": "x" }, { "math_id": 60, "text": "H(x(t))" }, { "math_id": 61, "text": "y(t)=h(x(t))" }, { "math_id": 62, "text": "n-1" }, { "math_id": 63, "text": "M(\\hat{x})" }, { "math_id": 64, "text": "M(\\hat{x}) \\triangleq\n\\operatorname{diag}( m_1(\\hat{x}), m_2(\\hat{x}), \\ldots, m_n(\\hat{x}) )\n=\n\\begin{bmatrix}\nm_1(\\hat{x}) & & & & & \\\\\n& m_2(\\hat{x}) & & & & \\\\\n& & \\ddots & & & \\\\\n& & & m_i(\\hat{x}) & &\\\\\n& & & & \\ddots &\\\\\n& & & & & m_n(\\hat{x})\n\\end{bmatrix}" }, { "math_id": 65, "text": "i \\in \\{1,2,\\dots,n\\}" }, { "math_id": 66, "text": "m_i(\\hat{x}) > 0" }, { "math_id": 67, "text": "V(t)" }, { "math_id": 68, "text": "V(t)\n\\triangleq\n\\begin{bmatrix}v_{1}(t)\\\\\nv_2(t)\\\\\nv_3(t)\\\\\n\\vdots\\\\\nv_i(t)\\\\\n\\vdots\\\\\nv_{n}(t)\n\\end{bmatrix}\n\\triangleq\n\\begin{bmatrix}\ny(t)\\\\\n\\{ m_1(\\hat{x}) \\sgn( v_1(t) - h_1(\\hat{x}(t)) ) \\}_{\\text{eq}}\\\\\n\\{ m_2(\\hat{x}) \\sgn( v_2(t) - h_2(\\hat{x}(t)) ) \\}_{\\text{eq}}\\\\\n\\vdots\\\\\n\\{ m_{i-1}(\\hat{x}) \\sgn( v_{i-1}(t) - h_{i-1}(\\hat{x}(t)) ) \\}_{\\text{eq}}\\\\\n\\vdots\\\\\n\\{ m_{n-1}(\\hat{x}) \\sgn( v_{n-1}(t) - h_{n-1}(\\hat{x}(t)) ) \\}_{\\text{eq}}\n\\end{bmatrix}\n" }, { "math_id": 69, "text": "\\{ \\ldots \\}_{\\text{eq}}" }, { "math_id": 70, "text": "\\sgn( v_{i}(t)\\!-\\! 
h_{i}(\\hat{x}(t)) )" }, { "math_id": 71, "text": "e=H(x)-H(\\hat{x})" }, { "math_id": 72, "text": "\\begin{align}\n\\dot{e}\n&=\n\\frac{\\mathrm{d}}{\\mathrm{d}t} H(x)\n-\n\\frac{\\mathrm{d}}{\\mathrm{d}t} H(\\hat{x})\\\\\n&=\n\\frac{\\mathrm{d}}{\\mathrm{d}t} H(x)\n-\nM(\\hat{x}) \\, \\sgn( V(t) - H(\\hat{x}(t)) ),\n\\end{align}" }, { "math_id": 73, "text": "\n\\begin{align}\n\\begin{bmatrix}\n\\dot{e}_1\\\\\n\\dot{e}_2\\\\\n\\vdots\\\\\n\\dot{e}_i\\\\\n\\vdots\\\\\n\\dot{e}_{n-1}\\\\\n\\dot{e}_n\n\\end{bmatrix}\n&=\n\\mathord{\\overbrace{\n\\begin{bmatrix}\n\\dot{h}_1(x)\\\\\n\\dot{h}_2(x)\\\\\n\\vdots\\\\\n\\dot{h}_i(x)\\\\\n\\vdots\\\\\n\\dot{h}_{n-1}(x)\\\\\n\\dot{h}_n(x)\n\\end{bmatrix}\n}^{\\tfrac{\\mathrm{d}}{\\mathrm{d}t} H(x)}}\n-\n\\mathord{\\overbrace{\nM(\\hat{x}) \\, \\sgn( V(t) - H(\\hat{x}(t)) )\n}^{\\tfrac{\\mathrm{d}}{\\mathrm{d}t} H(\\hat{x})}}\n=\n\\begin{bmatrix}\nh_2(x)\\\\\nh_3(x)\\\\\n\\vdots\\\\\nh_{i+1}(x)\\\\\n\\vdots\\\\\nh_n(x)\\\\\nL_f^n h(x)\n\\end{bmatrix}\n-\n\\begin{bmatrix}\nm_1 \\sgn( v_1(t) - h_1(\\hat{x}(t)) )\\\\\nm_2 \\sgn( v_2(t) - h_2(\\hat{x}(t)) )\\\\\n\\vdots\\\\\nm_i \\sgn( v_i(t) - h_i(\\hat{x}(t)) )\\\\\n\\vdots\\\\\nm_{n-1} \\sgn( v_{n-1}(t) - h_{n-1}(\\hat{x}(t)) )\\\\\nm_n \\sgn( v_n(t) - h_n(\\hat{x}(t)) )\n\\end{bmatrix}\\\\\n&=\n\\begin{bmatrix}\nh_2(x) - m_1(\\hat{x}) \\sgn( \\mathord{\\overbrace{ \\mathord{\\overbrace{v_1(t)}^{v_1(t) = y(t) = h_1(x)}} - h_1(\\hat{x}(t)) }^{e_1}} )\\\\\nh_3(x) - m_2(\\hat{x}) \\sgn( v_2(t) - h_2(\\hat{x}(t)) )\\\\\n\\vdots\\\\\nh_{i+1}(x) - m_i(\\hat{x}) \\sgn( v_i(t) - h_i(\\hat{x}(t)) )\\\\\n\\vdots\\\\\nh_n(x) - m_{n-1}(\\hat{x}) \\sgn( v_{n-1}(t) - h_{n-1}(\\hat{x}(t)) )\\\\\nL_f^n h(x) - m_n(\\hat{x}) \\sgn( v_n(t) - h_n(\\hat{x}(t)) )\n\\end{bmatrix}.\n\\end{align}\n" }, { "math_id": 74, "text": "m_1(\\hat{x}) \\geq |h_2(x(t))|" }, { "math_id": 75, "text": "\\dot{e}_1 = h_2(\\hat{x}) - m_1(\\hat{x}) \\sgn( e_1 )" }, { "math_id": 76, "text": "e_1 = 0" }, { "math_id": 77, "text": "v_2(t) = \\{m_1(\\hat{x}) \\sgn( e_1 )\\}_{\\text{eq}}" }, { "math_id": 78, "text": "h_2(x)" }, { "math_id": 79, "text": "v_2(t) - h_2(\\hat{x}) = h_2(x) - h_2(\\hat{x}) = e_2" }, { "math_id": 80, "text": "m_2(\\hat{x}) \\geq |h_3(x(t))|" }, { "math_id": 81, "text": "\\dot{e}_2 = h_3(\\hat{x}) - m_2(\\hat{x}) \\sgn( e_2 )" }, { "math_id": 82, "text": "e_2 = 0" }, { "math_id": 83, "text": "e_i = 0" }, { "math_id": 84, "text": "v_{i+1}(t) = \\{\\ldots\\}_{\\text{eq}}" }, { "math_id": 85, "text": "h_{i+1}(x)" }, { "math_id": 86, "text": "m_{i+1}(\\hat{x}) \\geq |h_{i+2}(x(t))|" }, { "math_id": 87, "text": "(i+1)" }, { "math_id": 88, "text": "\\dot{e}_{i+1} = h_{i+2}(\\hat{x}) - m_{i+1}(\\hat{x}) \\sgn( e_{i+1} )" }, { "math_id": 89, "text": "e_{i+1} = 0" }, { "math_id": 90, "text": "m_i" }, { "math_id": 91, "text": "|h_i(x(0))|" }, { "math_id": 92, "text": "H:\\mathbb{R}^n \\to \\mathbb{R}^n " }, { "math_id": 93, "text": " \\frac{\\partial H(x)}{\\partial x} B(x)" }, { "math_id": 94, "text": "\n\\dot{\\hat{x}} = \\left[ \\frac{\\partial H(\\hat{x})}{\\partial x}\n\\right]^{-1} M(\\hat{x}) \\sgn(V(t) - H(\\hat{x}))+B(\\hat{x})u.\n" }, { "math_id": 95, "text": "n+1" }, { "math_id": 96, "text": "\\dot{\\hat{x}}_k(t) = A \\hat{x_k}(t)+ B \\phi_0(\\hat{x}(t), u(t)) - L (\\hat{y_k}(t)-y(t)) " }, { "math_id": 97, "text": " \\hat{y_k}(t) = C \\hat{x_k}(t) " }, { "math_id": 98, "text": " k = 1, \\dots, n + 1 " }, { "math_id": 99, "text": " L " }, { "math_id": 100, "text": " x_k(0) " }, { "math_id": 101, "text": " x_k(t) " }, { 
"math_id": 102, "text": " k = 1...n + 1 " }, { "math_id": 103, "text": " \\hat{y_k}(t) = \\sum\\limits_{k=1}^{n+1} \\alpha_k(t) \\hat{x_k}(t) " }, { "math_id": 104, "text": " \\alpha_k \\in \\mathbb{R} " }, { "math_id": 105, "text": " \\sum\\limits_{k=1}^{n+1} \\alpha_k(t) \\xi_k(t) = 0 " }, { "math_id": 106, "text": " \\sum\\limits_{k=1}^{n+1} \\alpha_k(t) = 1 " }, { "math_id": 107, "text": " \\xi_k \\in \\mathbb{R}^{n \\times 1} " }, { "math_id": 108, "text": " kth " }, { "math_id": 109, "text": " e_k(t) " }, { "math_id": 110, "text": " [- \\xi_{n + 1} (t)] = [\\xi_{1}(t) - \\xi_{n + 1}(t)\\dots \\xi_{k}(t) - \\xi_{n + 1}(t)\\dots \\xi_{n}(t) - \\xi_{n + 1}(t)]^T \\begin{bmatrix} \\alpha_1(t)\\\\ \\vdots \\\\ \\alpha_k(t)\\\\ \\vdots\\\\ \\alpha_n(t) \\end{bmatrix}" }, { "math_id": 111, "text": " \\alpha_k (t) " }, { "math_id": 112, "text": " m: \\mathbb{R}^{n} \\to \\mathbb{R}^{n} " }, { "math_id": 113, "text": " \\xi_k (t) = m(e_k(t))" }, { "math_id": 114, "text": " \\xi_k (t) " }, { "math_id": 115, "text": " \\alpha_k(t) " }, { "math_id": 116, "text": " e_{\\sigma}(t) = \\sum\\limits_{k=1}^{n+1} \\alpha_k(t) e_k(t) " }, { "math_id": 117, "text": " n " }, { "math_id": 118, "text": "\\eta_k(t)=\\hat y_k (t) - y(t)" }, { "math_id": 119, "text": " \\xi_k(t) " }, { "math_id": 120, "text": " \\xi_k (t) = \\begin{bmatrix} \n1 & 0 & 0 & \\cdots & 0 \\\\ \nCL & 1 & 0 & \\cdots & 0 \\\\\nCAL & CL & 1 & \\cdots & 0 \\\\\nCA^{2}L & CAL & CL & \\cdots & 0 \\\\\n\\vdots & \\vdots & \\vdots & \\ddots \\\\ \nCA^{n-2}L & CA^{n-3}L & CA^{n-4}L & \\cdots & 1\n\\end{bmatrix} \n\\begin{bmatrix} \n\\int\\limits^t_{t-t_d} {{n-1} \\atop \\cdots} \\int\\limits^t_{t-t_d} \\eta_k(\\tau) d\\tau\\\\ \n\\vdots \\\\\n\\eta(t) - \\eta(t-(n-1)t_d)\n\\end{bmatrix}\n" }, { "math_id": 121, "text": "t_d > 0" }, { "math_id": 122, "text": "\\xi_k(t)" }, { "math_id": 123, "text": "\\eta_k(t)" }, { "math_id": 124, "text": "\\hat\\alpha_k(t)" }, { "math_id": 125, "text": "k = 1 \\dots n + 1" }, { "math_id": 126, "text": "\\alpha_k(t)" }, { "math_id": 127, "text": "e_\\xi(t) = \\sum\\limits_{k=1}^{n+1} \\hat\\alpha_k(t) \\xi_k(t) " }, { "math_id": 128, "text": "e_\\xi(t) \\in \\mathbb{R}^{n \\times 1}, \\hat\\alpha_k(t) \\in \\mathbb{R} " }, { "math_id": 129, "text": "\\hat\\alpha(t) " }, { "math_id": 130, "text": " e_\\xi(t) = 0" }, { "math_id": 131, "text": " \\hat x" }, { "math_id": 132, "text": "x(t)" }, { "math_id": 133, "text": " x" }, { "math_id": 134, "text": " \\hat{x}_U(k) " }, { "math_id": 135, "text": " e(k) = \\hat{x}_U(k) - x(k) " }, { "math_id": 136, "text": " \\hat{x}_L(k) " }, { "math_id": 137, "text": " e(k) = \\hat{x}_L(k) - x(k) " }, { "math_id": 138, "text": " \\hat{x}_U(k) \\ge x(k) \\ge \\hat{x}_L(k) " } ]
https://en.wikipedia.org/wiki?curid=1261170
1261615
Rectangular function
Function whose graph is 0, then 1, then 0 again, in an almost-everywhere continuous way The rectangular function (also known as the rectangle function, rect function, Pi function, Heaviside Pi function, gate function, unit pulse, or the normalized boxcar function) is defined as formula_0 Alternative definitions of the function define formula_1 to be 0, 1, or undefined. Its periodic version is called a "rectangular wave". History. The "rect" function was introduced by Woodward as an ideal cutout operator, together with the "sinc" function as an ideal interpolation operator, and their counter operations, which are sampling ("comb" operator) and replicating ("rep" operator), respectively. Relation to the boxcar function. The rectangular function is a special case of the more general boxcar function: formula_2 where formula_3 is the Heaviside step function; the function is centered at formula_4 and has duration formula_5, from formula_6 to formula_7 Fourier transform of the rectangular function. The unitary Fourier transforms of the rectangular function are formula_8 using ordinary frequency f, where formula_9 is the normalized form of the sinc function and formula_10 using angular frequency formula_11, where formula_12 is the unnormalized form of the sinc function. For formula_13, its Fourier transform is formula_14 Note that as long as the definition of the pulse function is only motivated by its behavior in the time-domain experience, there is no reason to believe that the oscillatory interpretation (i.e. the Fourier transform function) should be intuitive, or directly understood by humans. However, some aspects of the theoretical result may be understood intuitively, as finiteness in time domain corresponds to an infinite frequency response. (Vice versa, a finite Fourier transform will correspond to infinite time domain response.) Relation to the triangular function. We can define the triangular function as the convolution of two rectangular functions: formula_15 Use in probability. Viewing the rectangular function as a probability density function, it is a special case of the continuous uniform distribution with formula_16 The characteristic function is formula_17 and its moment-generating function is formula_18 where formula_19 is the hyperbolic sine function. Rational approximation. The pulse function may also be expressed as a limit of a rational function: formula_20 Demonstration of validity. First, we consider the case where formula_21 Notice that the term formula_22 is always positive for integer formula_23 However, formula_24 and hence formula_22 approaches zero for large formula_23 It follows that: formula_25 Second, we consider the case where formula_26 Notice that the term formula_27 is always positive for integer formula_23 However, formula_28 and hence formula_27 grows very large for large formula_23 It follows that: formula_29 Third, we consider the case where formula_30 We may simply substitute in our equation: formula_31 We see that it satisfies the definition of the pulse function. Therefore, formula_32 Dirac delta function. The rectangle function can be used to represent the Dirac delta function formula_33.
Specifically, formula_34 For a function formula_35, its average over the width "formula_36" around 0 in the function domain is calculated as formula_37 To obtain formula_38, the following limit is applied: formula_39 and this can be written in terms of the Dirac delta function as formula_40 The Fourier transform of the Dirac delta function formula_41 is formula_42 where the sinc function here is the normalized sinc function. Because the first zero of the sinc function is at formula_43 and formula_36 goes to zero in the limit, that first zero moves out to infinity, and the Fourier transform of formula_41 is formula_44 which means that the frequency spectrum of the Dirac delta function is infinitely broad. As a pulse is shortened in time, its spectrum becomes broader. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
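The correspondence between the rectangular pulse and the sinc function stated above can be checked numerically. The following is a minimal illustrative sketch using NumPy; the grid resolution and the test frequencies are arbitrary choices.

```python
import numpy as np

def rect(t, a=1.0):
    """Rectangular function of width a, taking the value 1/2 exactly at the edges."""
    inside = np.where(np.abs(t) < a / 2, 1.0, 0.0)
    return np.where(np.isclose(np.abs(t), a / 2), 0.5, inside)

# Numerically approximate the Fourier integral of rect(t) at a few ordinary frequencies f
t = np.linspace(-0.5, 0.5, 20001)            # support of rect for a = 1
for f in (0.25, 0.5, 1.5, 2.0):
    numeric = np.trapz(rect(t) * np.exp(-2j * np.pi * f * t), t).real
    analytic = np.sinc(f)                    # np.sinc(x) = sin(pi x)/(pi x), the normalized sinc
    print(f"f = {f}: integral = {numeric:.6f}, sinc = {analytic:.6f}")
```

Both columns agree to within the quadrature error, matching the transform pair given above.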
[ { "math_id": 0, "text": "\\operatorname{rect}\\left(\\frac{t}{a}\\right) = \\Pi\\left(\\frac{t}{a}\\right) =\n\\left\\{\\begin{array}{rl}\n 0, & \\text{if } |t| > \\frac{a}{2} \\\\\n \\frac{1}{2}, & \\text{if } |t| = \\frac{a}{2} \\\\\n 1, & \\text{if } |t| < \\frac{a}{2}.\n\\end{array}\\right." }, { "math_id": 1, "text": "\\operatorname{rect}\\left(\\pm\\frac{1}{2}\\right)" }, { "math_id": 2, "text": "\\operatorname{rect}\\left(\\frac{t-X}{Y} \\right) = H(t - (X - Y/2)) - H(t - (X + Y/2)) = H(t - X + Y/2) - H(t - X - Y/2)" }, { "math_id": 3, "text": "H(x)" }, { "math_id": 4, "text": "X" }, { "math_id": 5, "text": "Y" }, { "math_id": 6, "text": "X-Y/2" }, { "math_id": 7, "text": "X+Y/2." }, { "math_id": 8, "text": "\\int_{-\\infty}^\\infty \\operatorname{rect}(t)\\cdot e^{-i 2\\pi f t} \\, dt\n=\\frac{\\sin(\\pi f)}{\\pi f} = \\operatorname{sinc}_\\pi(f)," }, { "math_id": 9, "text": "\\operatorname{sinc}_\\pi" }, { "math_id": 10, "text": "\\frac{1}{\\sqrt{2\\pi}}\\int_{-\\infty}^\\infty \\operatorname{rect}(t)\\cdot e^{-i \\omega t} \\, dt\n=\\frac{1}{\\sqrt{2\\pi}}\\cdot \\frac{\\sin\\left(\\omega/2 \\right)}{\\omega/2}\n=\\frac{1}{\\sqrt{2\\pi}} \\operatorname{sinc}\\left(\\omega/2 \\right),\n" }, { "math_id": 11, "text": "\\omega" }, { "math_id": 12, "text": "\\operatorname{sinc}" }, { "math_id": 13, "text": "\\operatorname{rect} (x/a)" }, { "math_id": 14, "text": "\\int_{-\\infty}^\\infty \\operatorname{rect}\\left(\\frac{t}{a}\\right)\\cdot e^{-i 2\\pi f t} \\, dt\n=a \\frac{\\sin(\\pi af)}{\\pi af} = a\\ \\operatorname{sinc}_\\pi{(a f)}." }, { "math_id": 15, "text": "\\operatorname{tri} = \\operatorname{rect} * \\operatorname{rect}.\\," }, { "math_id": 16, "text": "a = -1/2, b = 1/2." }, { "math_id": 17, "text": "\\varphi(k) = \\frac{\\sin(k/2)}{k/2}," }, { "math_id": 18, "text": "M(k) = \\frac{\\sinh(k/2)}{k/2}," }, { "math_id": 19, "text": "\\sinh(t)" }, { "math_id": 20, "text": "\\Pi(t) = \\lim_{n\\rightarrow \\infty, n\\in \\mathbb(Z)} \\frac{1}{(2t)^{2n}+1}." }, { "math_id": 21, "text": "|t|<\\frac{1}{2}." }, { "math_id": 22, "text": "(2t)^{2n}" }, { "math_id": 23, "text": "n." }, { "math_id": 24, "text": "2t<1" }, { "math_id": 25, "text": "\\lim_{n\\rightarrow \\infty, n\\in \\mathbb(Z)} \\frac{1}{(2t)^{2n}+1} = \\frac{1}{0+1} = 1, |t|<\\tfrac{1}{2}." }, { "math_id": 26, "text": "|t|>\\frac{1}{2}." }, { "math_id": 27, "text": "(2t)^{2n}" }, { "math_id": 28, "text": "2t>1" }, { "math_id": 29, "text": "\\lim_{n\\rightarrow \\infty, n\\in \\mathbb(Z)} \\frac{1}{(2t)^{2n}+1} = \\frac{1}{+\\infty+1} = 0, |t|>\\tfrac{1}{2}." }, { "math_id": 30, "text": "|t| = \\frac{1}{2}." }, { "math_id": 31, "text": "\\lim_{n\\rightarrow \\infty, n\\in \\mathbb(Z)} \\frac{1}{(2t)^{2n}+1} = \\lim_{n\\rightarrow \\infty, n\\in \\mathbb(Z)} \\frac{1}{1^{2n}+1} = \\frac{1}{1+1} = \\tfrac{1}{2}." }, { "math_id": 32, "text": "\\operatorname{rect}(t) = \\Pi(t) = \\lim_{n\\rightarrow \\infty, n\\in \\mathbb(Z)} \\frac{1}{(2t)^{2n}+1} = \\begin{cases}\n0 & \\mbox{if } |t| > \\frac{1}{2} \\\\\n\\frac{1}{2} & \\mbox{if } |t| = \\frac{1}{2} \\\\\n1 & \\mbox{if } |t| < \\frac{1}{2}. \\\\\n\\end{cases}" }, { "math_id": 33, "text": "\\delta (x)" }, { "math_id": 34, "text": "\\delta (x) = \\lim_{a \\to 0} \\frac{1}{a}\\operatorname{rect}\\left(\\frac{x}{a}\\right)." }, { "math_id": 35, "text": "g(x)" }, { "math_id": 36, "text": "a" }, { "math_id": 37, "text": "g_{avg}(0) = \\frac{1}{a} \\int\\limits_{- \\infty}^{\\infty} dx\\ g(x) \\operatorname{rect}\\left(\\frac{x}{a}\\right)." 
}, { "math_id": 38, "text": "g(0)" }, { "math_id": 39, "text": "g(0) = \\lim_{a \\to 0} \\frac{1}{a} \\int\\limits_{- \\infty}^{\\infty} dx\\ g(x) \\operatorname{rect}\\left(\\frac{x}{a}\\right)" }, { "math_id": 40, "text": "g(0) = \\int\\limits_{- \\infty}^{\\infty} dx\\ g(x) \\delta (x)." }, { "math_id": 41, "text": "\\delta (t)" }, { "math_id": 42, "text": "\\delta (f)\n= \\int_{-\\infty}^\\infty \\delta (t) \\cdot e^{-i 2\\pi f t} \\, dt\n= \\lim_{a \\to 0} \\frac{1}{a} \\int_{-\\infty}^\\infty \\operatorname{rect}\\left(\\frac{t}{a}\\right)\\cdot e^{-i 2\\pi f t} \\, dt\n= \\lim_{a \\to 0} \\operatorname{sinc}{(a f)}." }, { "math_id": 43, "text": "f = 1 / a" }, { "math_id": 44, "text": "\\delta (f) = 1," } ]
https://en.wikipedia.org/wiki?curid=1261615
12617694
Value function
The value function of an optimization problem gives the value attained by the objective function at a solution, while only depending on the parameters of the problem. In a controlled dynamical system, the value function represents the optimal payoff of the system over the interval [t, t1] when started at the time-t state variable x(t)=x. If the objective function represents some cost that is to be minimized, the value function can be interpreted as the cost to finish the optimal program, and is thus referred to as "cost-to-go function." In an economic context, where the objective function usually represents utility, the value function is conceptually equivalent to the indirect utility function. In a problem of optimal control, the value function is defined as the supremum of the objective function taken over the set of admissible controls. Given formula_0, a typical optimal control problem is to formula_1 subject to formula_2 with initial state variable formula_3. The objective function formula_4 is to be maximized over all admissible controls formula_5, where formula_6 is a Lebesgue measurable function from formula_7 to some prescribed arbitrary set in formula_8. The value function is then defined as formula_9 with formula_10, where formula_11 is the "scrap value". If the optimal pair of control and state trajectories is formula_12, then formula_13. The function formula_14 that gives the optimal control formula_15 based on the current state formula_16 is called a feedback control policy, or simply a policy function. Bellman's principle of optimality roughly states that any optimal policy at time formula_17, formula_18 taking the current state formula_19 as "new" initial condition must be optimal for the remaining problem. If the value function happens to be continuously differentiable, this gives rise to an important partial differential equation known as Hamilton–Jacobi–Bellman equation, formula_20 where the maximand on the right-hand side can also be re-written as the Hamiltonian, formula_21, as formula_22 with formula_23 playing the role of the costate variables. Given this definition, we further have formula_24, and after differentiating both sides of the HJB equation with respect to formula_16, formula_25 which after replacing the appropriate terms recovers the costate equation formula_26 where formula_27 is Newton notation for the derivative with respect to time. The value function is the unique viscosity solution to the Hamilton–Jacobi–Bellman equation. In an online closed-loop approximate optimal control, the value function is also a Lyapunov function that establishes global asymptotic stability of the closed-loop system. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
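As a concrete illustration, the sketch below computes a value function for a small discrete-time, finite-horizon control problem by backward induction, the discrete counterpart of solving the Hamilton–Jacobi–Bellman equation. The dynamics, payoff, grids and horizon are invented toy choices, not taken from the text above.

```python
import numpy as np

# Toy problem: maximize -(x_k^2 + u_k^2) summed over T steps, with dynamics x_{k+1} = x_k + u_k.
T = 20
xs = np.linspace(-2.0, 2.0, 81)     # state grid
us = np.linspace(-1.0, 1.0, 41)     # control grid

V = np.zeros_like(xs)               # terminal value (zero scrap value)
policies = []
for k in range(T):
    Q = np.empty((xs.size, us.size))
    for i, x in enumerate(xs):
        x_next = x + us                              # next state for every admissible control
        V_next = np.interp(x_next, xs, V)            # continuation value (clamped at grid edges)
        Q[i] = -(x**2 + us**2) + V_next              # instantaneous payoff plus value-to-go
    policies.append(us[np.argmax(Q, axis=1)])        # feedback (policy) function at this stage
    V = Q.max(axis=1)                                # Bellman update of the value function

print("optimal payoff starting from x = 0:", np.interp(0.0, xs, V))
```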
[ { "math_id": 0, "text": "(t_{0}, x_{0}) \\in [0, t_{1}] \\times \\mathbb{R}^{d}" }, { "math_id": 1, "text": " \\text{maximize} \\quad J(t_{0}, x_{0}; u) = \\int_{t_{0}}^{t_{1}} I(t,x(t), u(t)) \\, \\mathrm{d}t + \\phi(x(t_{1}))" }, { "math_id": 2, "text": "\\frac{\\mathrm{d}x(t)}{\\mathrm{d}t} = f(t, x(t), u(t))" }, { "math_id": 3, "text": "x(t_{0})=x_{0}" }, { "math_id": 4, "text": "J(t_{0}, x_{0}; u)" }, { "math_id": 5, "text": "u \\in U[t_{0},t_{1}]" }, { "math_id": 6, "text": "u" }, { "math_id": 7, "text": "[t_{0}, t_{1}]" }, { "math_id": 8, "text": "\\mathbb{R}^{m}" }, { "math_id": 9, "text": "V(t, x(t)) = \\max_{u \\in U} \\int_{t}^{t_{1}} I(\\tau,x(\\tau), u(\\tau)) \\, \\mathrm{d}\\tau + \\phi(x(t_{1}))" }, { "math_id": 10, "text": "V(t_{1}, x(t_{1})) = \\phi(x(t_{1}))" }, { "math_id": 11, "text": "\\phi(x(t_{1}))" }, { "math_id": 12, "text": "(x^\\ast, u^\\ast)" }, { "math_id": 13, "text": "V(t_{0}, x_{0}) = J(t_{0}, x_{0}; u^\\ast)" }, { "math_id": 14, "text": "h" }, { "math_id": 15, "text": "u^\\ast" }, { "math_id": 16, "text": "x" }, { "math_id": 17, "text": "t" }, { "math_id": 18, "text": "t_{0} \\leq t \\leq t_{1}" }, { "math_id": 19, "text": "x(t)" }, { "math_id": 20, "text": "-\\frac{\\partial V(t,x)}{\\partial t} = \\max_u \\left\\{ I(t,x,u) + \\frac{\\partial V(t,x)}{\\partial x} f(t, x, u) \\right\\}" }, { "math_id": 21, "text": "H \\left(t, x, u, \\lambda \\right) = I(t,x,u) + \\lambda(t) f(t, x, u)" }, { "math_id": 22, "text": "-\\frac{\\partial V(t,x)}{\\partial t} = \\max_u H(t,x,u,\\lambda)" }, { "math_id": 23, "text": "\\partial V(t,x)/\\partial x = \\lambda(t)" }, { "math_id": 24, "text": "\\mathrm{d} \\lambda(t) / \\mathrm{d}t = \\partial^{2} V(t,x) / \\partial x \\partial t + \\partial^{2} V(t,x) / \\partial x^{2} \\cdot f(x)" }, { "math_id": 25, "text": "- \\frac{\\partial^{2} V(t,x)}{\\partial t \\partial x} = \\frac{\\partial I}{\\partial x} + \\frac{\\partial^{2} V(t,x)}{\\partial x^{2}} f(x) + \\frac{\\partial V(t,x)}{\\partial x} \\frac{\\partial f(x)}{\\partial x}" }, { "math_id": 26, "text": "- \\dot{\\lambda}(t) = \\underbrace{\\frac{\\partial I}{\\partial x} + \\lambda(t) \\frac{\\partial f(x)}{\\partial x} }_{= \\frac{\\partial H}{\\partial x}}" }, { "math_id": 27, "text": "\\dot{\\lambda}(t)" } ]
https://en.wikipedia.org/wiki?curid=12617694
12618582
Burr distribution
In probability theory, statistics and econometrics, the Burr Type XII distribution or simply the Burr distribution is a continuous probability distribution for a non-negative random variable. It is also known as the Singh–Maddala distribution and is one of a number of different distributions sometimes called the "generalized log-logistic distribution". Definitions. Probability density function. The Burr (Type XII) distribution has probability density function: formula_0 The formula_1 parameter scales the underlying variate and is a positive real number. Cumulative distribution function. The cumulative distribution function is: formula_2 formula_3 Applications. It is most commonly used to model household income; see, for example, household income in the U.S. Random variate generation. Given a random variable formula_4 drawn from the uniform distribution in the interval formula_5, the random variable formula_6 has a Burr Type XII distribution with parameters formula_7, formula_8 and formula_1. This follows by inverting the cumulative distribution function given above. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
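The inverse-CDF construction above translates directly into code. A minimal sketch using NumPy; the parameter values c = 2, k = 3, λ = 1 are arbitrary illustrations.

```python
import numpy as np

def burr12_sample(c, k, lam, size, rng=None):
    """Draw Burr Type XII variates via the inverse CDF: X = lam * ((1 - U)**(-1/k) - 1)**(1/c)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(0.0, 1.0, size)
    return lam * ((1.0 - u) ** (-1.0 / k) - 1.0) ** (1.0 / c)

def burr12_cdf(x, c, k, lam):
    """F(x) = 1 - [1 + (x/lam)**c]**(-k)."""
    return 1.0 - (1.0 + (x / lam) ** c) ** (-k)

# Quick check: about half of the samples should fall below the theoretical median.
c, k, lam = 2.0, 3.0, 1.0
x = burr12_sample(c, k, lam, size=100_000, rng=np.random.default_rng(0))
median = lam * (2.0 ** (1.0 / k) - 1.0) ** (1.0 / c)   # solves F(median) = 1/2
print(np.mean(x <= median), burr12_cdf(median, c, k, lam))   # both close to 0.5
```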
[ { "math_id": 0, "text": "\n\\begin{align}\nf(x;c,k) & = ck\\frac{x^{c-1}}{(1+x^c)^{k+1}} \\\\[6pt]\nf(x;c,k,\\lambda) & = \\frac{ck}{\\lambda} \\left( \\frac{x}{\\lambda} \\right)^{c-1} \\left[1 + \\left(\\frac{x}{\\lambda}\\right)^c\\right]^{-k-1}\n\\end{align}\n" }, { "math_id": 1, "text": "\\lambda" }, { "math_id": 2, "text": "F(x;c,k) = 1-\\left(1+x^c\\right)^{-k}" }, { "math_id": 3, "text": "F(x;c,k,\\lambda) = 1 - \\left[1 + \\left(\\frac{x}{\\lambda}\\right)^c \\right]^{-k}" }, { "math_id": 4, "text": "U" }, { "math_id": 5, "text": "\\left(0, 1\\right)" }, { "math_id": 6, "text": "X=\\lambda \\left (\\frac{1}{\\sqrt[k]{1-U}}-1 \\right )^{1/c}" }, { "math_id": 7, "text": "c" }, { "math_id": 8, "text": "k" } ]
https://en.wikipedia.org/wiki?curid=12618582
1262296
Trouton's rule
Linear relation between entropy of vaporization and boiling point In thermodynamics, Trouton's rule states that the (molar) entropy of vaporization is almost the same value, about 85–88 J/(K·mol), for various kinds of liquids at their boiling points. The entropy of vaporization is defined as the ratio between the enthalpy of vaporization and the boiling temperature. It is named after Frederick Thomas Trouton. It is expressed as a function of the gas constant R: formula_0 A similar way of stating this (Trouton's ratio) is that the latent heat is connected to the boiling point roughly as formula_1 Trouton's rule can be explained by applying Boltzmann's definition of entropy to the relative change in free volume (that is, space available for movement) between the liquid and vapour phases. It is valid for many liquids; for instance, the entropy of vaporization of toluene is 87.30 J/(K·mol), that of benzene is 89.45 J/(K·mol), and that of chloroform is 87.92 J/(K·mol). Because of its convenience, the rule is used to estimate the enthalpy of vaporization of liquids whose boiling points are known. The rule, however, has some exceptions. For example, the entropies of vaporization of water, ethanol, formic acid and hydrogen fluoride are far from the predicted values. The entropy of vaporization of at its boiling point has the extraordinarily high value of 136.9 J/(K·mol). Liquids to which Trouton's rule cannot be applied typically have special interactions between their molecules, such as hydrogen bonding. The entropies of vaporization of water and ethanol show positive deviations from the rule; this is because the hydrogen bonding in the liquid phase lessens the entropy of that phase. In contrast, the entropy of vaporization of formic acid shows a negative deviation. This fact indicates the existence of an orderly structure in the gas phase; it is known that formic acid forms a dimer structure even in the gas phase. A negative deviation can also occur as a result of a small gas-phase entropy owing to a low population of excited rotational states in the gas phase, particularly in small molecules such as methane: a small moment of inertia I gives rise to a large rotational constant B, with correspondingly widely separated rotational energy levels and, according to the Maxwell–Boltzmann distribution, a small population of excited rotational states, and hence a low rotational entropy. The validity of Trouton's rule can be increased by considering formula_2 Here, if "T" = 400 K, the right-hand side of the equation equals 10.5"R", and we recover the original formulation of Trouton's rule. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
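Because the rule fixes the entropy of vaporization at roughly 10.5R, it gives a one-line estimate of the enthalpy of vaporization from the boiling point alone. A minimal sketch, using benzene's boiling point of about 353.2 K as an illustrative input:

```python
R = 8.314  # gas constant, J/(K·mol)

def trouton_enthalpy_of_vaporization(boiling_point_kelvin):
    """Estimate the molar enthalpy of vaporization via Trouton's rule: dH_vap ~ 10.5 * R * T_b."""
    return 10.5 * R * boiling_point_kelvin

est = trouton_enthalpy_of_vaporization(353.2)          # benzene boils near 353.2 K
print(f"estimated dH_vap ~ {est / 1000:.1f} kJ/mol")   # ~30.8 kJ/mol, close to the measured value of about 31 kJ/mol
```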
[ { "math_id": 0, "text": "\\Delta \\bar S_\\text{vap} \\approx 10.5 R." }, { "math_id": 1, "text": "\\frac{L_\\text{vap}}{T_\\text{boiling}} \\approx 85{-}88\\ \\frac{\\text{J}}{\\text{K} \\cdot \\text{mol}}." }, { "math_id": 2, "text": "\\Delta \\bar S_\\text{vap} \\approx 4.5R + R \\ln T." } ]
https://en.wikipedia.org/wiki?curid=1262296
12625857
Cooperative diversity
Cooperative diversity is a cooperative multiple antenna technique for improving or maximising total network channel capacities for any given set of bandwidths, which exploits user diversity by decoding the combined signal of the relayed signal and the direct signal in wireless multihop networks. A conventional single-hop system uses direct transmission, where a receiver decodes the information based only on the direct signal while regarding the relayed signal as interference, whereas cooperative diversity treats the relayed signal as a useful contribution. That is, cooperative diversity decodes the information from the combination of two signals. Hence, it can be seen that cooperative diversity is an antenna diversity that uses distributed antennas belonging to each node in a wireless network. Note that user cooperation is another definition of cooperative diversity. "User cooperation" adds the notion that each user relays the other user's signal, while cooperative diversity can also be achieved by multi-hop relay networking systems. The cooperative diversity technique is a kind of multi-user MIMO technique. Relaying Strategies. The simplest cooperative relaying network consists of three nodes, namely the source, the destination, and a third node, denoted the relay, supporting the direct communication between source and destination. If the direct transmission of a message from source to destination is not (fully) successful, the overheard information from the source is forwarded by the relay to reach the destination via a different path. Since the two transmissions take different paths and take place one after another, this example implements the concepts of space diversity and time diversity. The relaying strategies can be further distinguished into the amplify-and-forward, decode-and-forward, and compress-and-forward strategies. Relay Transmission Topology. Serial relay transmission is used for long distance communication and range-extension in shadowy regions. It provides power gain. In this topology signals propagate from one relay to another relay, and the channels of neighboring hops are orthogonal to avoid any interference. Parallel relay transmission may be used where serial relay transmission suffers from multi-path fading. For outdoor and non-line-of-sight propagation, the signal wavelength may be large and installation of multiple antennas may not be possible. To increase the robustness against multi-path fading, parallel relay transmission can be used. In this topology, signals propagate through multiple relay paths in the same hop and the destination combines the signals received with the help of various combining schemes. It provides power gain and diversity gain simultaneously. System model. We consider a wireless relay system that consists of source, relay and destination nodes. It is assumed that the channel is in a half-duplex, orthogonal and amplify-and-forward relaying mode. Unlike the conventional direct transmission system, we exploit time-division relaying, in which the system delivers information in two temporal phases. In the first phase, the source node broadcasts information formula_0 toward both the destination and the relay nodes.
The received signals at the destination and relay nodes are written, respectively, as: formula_1 formula_2 where formula_3 is the channel from the source to the destination node, formula_4 is the channel from the source to the relay node, formula_5 is the noise signal added at the relay node and formula_6 is the noise signal added at the destination node. In the second phase, the relay retransmits its received signal to the destination node (except in the direct transmission mode). Signal Decoding. We introduce four schemes to decode the signal at the destination node: the direct scheme, the non-cooperative scheme, the cooperative scheme and the adaptive scheme. Except for the direct scheme, the destination node uses the relayed signal in all of them. Direct Scheme. In the direct scheme, the destination decodes the data using the signal received from the source node in the first phase; the second-phase transmission is omitted, so the relay node is not involved in transmission. The signal received from the source node and used for decoding is written as: formula_7 While the advantage of the direct scheme is its simplicity in terms of decoding processing, the received signal power can be severely low if the distance between the source node and the destination node is large. Thus, in the following we consider the non-cooperative scheme, which exploits signal relaying to improve the signal quality. Non-cooperative Scheme. In the non-cooperative scheme, the destination decodes the data using the signal received from the relay in the second phase, which results in a signal power gain. The signal received from the relay node, which retransmits the signal received from the source node, is written as: formula_8 where formula_9 is the channel from the relay to the destination node; the relayed noise formula_5 is amplified by formula_9, and a further noise term is added at the destination node. The reliability of decoding can be low since the degree of freedom is not increased by signal relaying. There is no increase in the diversity order since this scheme exploits only the relayed signal, and the direct signal from the source node is either not available or is not accounted for. When we can take advantage of such a signal, an increase in diversity order results. Thus, in the following we consider the cooperative scheme, which decodes the combined signal of both the direct and relayed signals. Cooperative Scheme. For cooperative decoding, the destination node combines the two signals received from the source and relay nodes, which results in a diversity advantage. The whole received signal vector at the destination node can be modeled as: formula_10 where formula_11 and formula_12 are the signals received at the destination node from the source and relay nodes, respectively. As a linear decoding technique, the destination combines elements of the received signal vector as follows: formula_13 where formula_14 is the linear combining weight, which can be chosen to maximize the signal-to-noise ratio (SNR) of the combined signal subject to the allowed complexity of the weight calculation. Adaptive Scheme. The adaptive scheme selects one of the three modes described above (the direct, non-cooperative, and cooperative schemes) relying on the network channel state information and other network parameters. Trade-off. It is noteworthy that cooperative diversity can increase the diversity gain at the cost of spending wireless resources such as frequency, time and power on the relaying phase.
Wireless resources are wasted since the relay node uses wireless resources to relay the signal from the source to the destination node. Hence, it is important to remark that there is a trade-off between the diversity gain and the waste of spectrum resources in cooperative diversity. Channel Capacity of Cooperative Diversity. In June 2005, A. Høst-Madsen published a paper analyzing in depth the channel capacity of the cooperative relay network. We assume that the channels from the source node to the relay node, from the source node to the destination node, and from the relay node to the destination node are formula_15 where the source node, the relay node, and the destination node are denoted node 1, node 2, and node 3, respectively. The capacity of cooperative relay channels. Using the max-flow min-cut theorem yields the upper bound of full duplex relaying formula_16 where formula_17 and formula_18 are the transmitted signals at the source node and the relay node, respectively, and formula_19 and formula_20 are the received signals at the relay node and the destination node, respectively. Note that the max-flow min-cut theorem states that the maximum amount of flow is equal to the capacity of a minimum cut, i.e., it is dictated by the bottleneck. The capacity of the broadcast channel from formula_17 to formula_19 and formula_20 with given formula_18 is formula_21 while the capacity of the multiple access channel from formula_17 and formula_18 to formula_20 is formula_22 where formula_23 is the amount of correlation between formula_17 and formula_18. Note that formula_18 copies some part of formula_17 for cooperative relaying capability. Using cooperative relaying capability at the relay node improves the performance of reception at the destination node. Thus, the upper bound is rewritten as formula_24 Achievable rate of a decode-and-forward relay. Using a relay which decodes and forwards its captured signal yields the achievable rate as follows: formula_25 where the broadcast channel is reduced to the point-to-point channel because of decoding at the relay node, i.e., formula_26 is reduced to formula_27. The capacity of the reduced broadcast channel is formula_28 Thus, the achievable rate is rewritten as formula_29 Time-Division Relaying. The capacity of the TD relay channel is upper-bounded by formula_30 with formula_31 formula_32 Applications. In a cognitive radio system, unlicensed secondary users can use the resources that are licensed to primary users. When primary users want to use their licensed resources, secondary users have to vacate these resources. Hence secondary users have to constantly sense the channel to detect the presence of primary users. It is very challenging to sense the activity of spatially distributed primary users in a wireless channel. Spatially distributed nodes can improve the channel sensing reliability by sharing their information, reducing the probability of false alarms. A wireless ad hoc network is an autonomous and self-organizing network without any centralized controller or pre-established infrastructure. In this network, randomly distributed nodes form a temporary functional network and support the seamless leaving or joining of nodes. Such networks have been successfully deployed for military communication and have a lot of potential for civilian applications, including commercial and educational use, disaster management, road vehicle networks, etc.
A wireless sensor network can use cooperative relaying to reduce the energy consumption in sensor nodes, hence the lifetime of the sensor network increases. Due to the nature of the wireless medium, communication through weaker channels requires far more energy than communication through relatively stronger channels. Careful incorporation of relay cooperation into the routing process can select better communication links and save precious battery power. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
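To make the combining step of the system model concrete, the sketch below runs a small Monte Carlo comparison of the direct, non-cooperative and cooperative schemes for an amplify-and-forward relay. The Rayleigh-fading channel model, unit noise power, transmit SNR and maximal-ratio combining weights are illustrative assumptions rather than details given in the text above.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, snr_tx = 100_000, 10.0   # number of fading realizations, transmit SNR (linear scale)

# Rayleigh-fading gains for the source->destination, source->relay and relay->destination links
h_ds, h_rs, h_dr = (rng.normal(size=(3, n_trials)) + 1j * rng.normal(size=(3, n_trials))) / np.sqrt(2)

snr_direct = snr_tx * np.abs(h_ds) ** 2                    # direct scheme
g1, g2 = snr_tx * np.abs(h_rs) ** 2, snr_tx * np.abs(h_dr) ** 2
snr_relayed = g1 * g2 / (g1 + g2 + 1.0)                    # amplify-and-forward two-hop SNR
snr_combined = snr_direct + snr_relayed                    # maximal-ratio combining adds branch SNRs

for name, s in (("direct", snr_direct), ("non-cooperative", snr_relayed), ("cooperative", snr_combined)):
    print(f"{name:15s}  mean SNR = {s.mean():5.2f}   outage P(SNR < 1) = {np.mean(s < 1.0):.3f}")
```

The cooperative scheme shows both a higher mean SNR and a much lower outage probability, which is the diversity advantage described above.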
[ { "math_id": 0, "text": "x_{s}" }, { "math_id": 1, "text": " \nr_{d,s} = h_{d,s} x_{s} + n_{d,s} \\quad\n" }, { "math_id": 2, "text": " \nr_{r,s} = h_{r,s} x_{s} + n_{r,s} \\quad\n" }, { "math_id": 3, "text": "h_{d,s}" }, { "math_id": 4, "text": "h_{r,s}" }, { "math_id": 5, "text": "n_{r,s}" }, { "math_id": 6, "text": "n_{d,s}" }, { "math_id": 7, "text": "\nr_{d,s} = h_{d,s} x_{s} + n_{d,s} \\quad\n" }, { "math_id": 8, "text": "\nr_{d,r} = h_{d,r} r_{r,s} + n_{d,r}\n= h_{d,r} h_{r,s} x_{s} + h_{d,r} n_{r,s} + n_{d,r} \\quad\n" }, { "math_id": 9, "text": "h_{d,r}" }, { "math_id": 10, "text": "\n\\mathbf{r} = [r_{d,s} \\quad r_{d,r}]^T \n = [h_{d,s} \\quad h_{d,r} h_{r,s}]^T x_{s} + \\left[1 \\quad \\sqrt{|h_{d,r}|^2+1} \\right]^T n_{d}\n = \\mathbf{h} x_{s} + \\mathbf{q} n_{d}\n" }, { "math_id": 11, "text": "r_{d,s}" }, { "math_id": 12, "text": "r_{d,r}" }, { "math_id": 13, "text": "\ny = \\mathbf{w}^H \\mathbf{r}\n" }, { "math_id": 14, "text": "\\mathbf{w}" }, { "math_id": 15, "text": "c_{21} e^{j\\varphi_{21}},c_{31} e^{j\\varphi_{31}},c_{32} e^{j\\varphi_{32}}" }, { "math_id": 16, "text": "\nC^+ = \\max_{f(X_1,X_2)} \\min \\{ I(X_1;Y_2,Y_3|X_2), I(X_1,X_2;Y_3)\\} \n" }, { "math_id": 17, "text": "X_1" }, { "math_id": 18, "text": "X_2" }, { "math_id": 19, "text": "Y_2" }, { "math_id": 20, "text": "Y_3" }, { "math_id": 21, "text": "\n\\max_{f(X_1,X_2)} I(X_1;Y_2,Y_3|X_2) = \\frac{1}{2} \\log(1 + (1 - \\beta) (c^2_{21} + c^2_{31})P_1 )\n" }, { "math_id": 22, "text": "\n\\max_{f(X_1,X_2)} I(X_2,X_2;Y_3) = \\frac{1}{2} \\log(1 + c^2_{31} P_1 + c^2_{32} P_2 + 2 \\sqrt{ \\beta c^2_{31} c^2_{32} P_1 P_2})\n" }, { "math_id": 23, "text": "\\beta" }, { "math_id": 24, "text": "\nC^+ = \\max_{0 \\leq \\beta \\leq 1} \\min \\left\\{ \\frac{1}{2} \\log(1 + (1 - \\beta) (c^2_{21} + c^2_{31}) P_1), \\frac{1}{2} \\log(1 + c^2_{31} P_1 + c^2_{32} P_2 + 2 \\sqrt{ \\beta c^2_{31} c^2_{32} P_1 P_2}) \\right\\}\n" }, { "math_id": 25, "text": "\nR_1 = \\max_{f(X_1,X_2)} \\min \\{ I(X_1;Y_2|X_2), I(X_1,X_2;Y_3)\\} \n" }, { "math_id": 26, "text": "I(X_1;Y_2,Y_3|X_2)" }, { "math_id": 27, "text": "I(X_1;Y_2|X_2)" }, { "math_id": 28, "text": "\n\\max_{f(X_1,X_2)} I(X_1;Y_2|X_2) = \\frac{1}{2} \\log(1 + (1 - \\beta) c^2_{21} P_1 ).\n" }, { "math_id": 29, "text": "\nR_1 = \\max_{0 \\leq \\beta \\leq 1} \\min \\left\\{ \\frac{1}{2} \\log(1 + (1 - \\beta) c^2_{21} P_1), \\frac{1}{2} \\log(1 + c^2_{31} P_1 + c^2_{32} P_2 + 2 \\sqrt{ \\beta c^2_{31} c^2_{32} P_1 P_2}) \\right\\}\n" }, { "math_id": 30, "text": "\nC^+ = \\max_{0 \\leq \\beta \\leq 1} \\min \\{ C_1^+(\\beta), C_2^+(\\beta) \\}\n" }, { "math_id": 31, "text": "\nC_1^+(\\beta) = \\frac{\\alpha}{2} \\log \\left( 1 + (c_{31}^2 + c_{21}^2) P_1^{(1)} \\right)\n + \\frac{1-\\alpha}{2} \\log \\left( 1 + (1-\\beta) c_{31}^2 P_1^{(2)} \\right)\n" }, { "math_id": 32, "text": "\nC_2^+(\\beta) = \\frac{\\alpha}{2} \\log \\left( 1 + c_{31}^2 P_1^{(1)} \\right)\n + \\frac{1-\\alpha}{2} \\log \\left( 1 + c_{31}^2 P_1^{(2)} + c_{32}^2 P_2 + 2 \\sqrt{ \\beta C_{31}^2 P_1^{(2)} C_{32}^2 P_2} \\right)\n" } ]
https://en.wikipedia.org/wiki?curid=12625857
12626504
Real net output ratio
The Real Net Output Ratio (or Vertical Range of Manufacture) describes, within a value chain, the fraction of a company's total production value that it produces internally (company-specific production). The total production value of a company consists of internal production plus the sum of externally produced goods and services. formula_0 A Real Net Output Ratio of 0% corresponds to a company that has no production of its own and therefore only trades.
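A minimal worked example (the figures are invented for illustration; all quantities are in the same currency units):

```python
def real_net_output_ratio(internal_production, external_goods, external_services):
    """Internal production divided by the total production value."""
    return internal_production / (internal_production + external_goods + external_services)

# 40 produced internally, 50 purchased as goods, 10 purchased as services
print(f"{real_net_output_ratio(40, 50, 10):.0%}")   # 40%
```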
[ { "math_id": 0, "text": "\\textstyle Real\\,Net\\,Output\\,Ratio = \\frac{internal\\,production}{total\\,production\\,value} = \\frac{internal\\,production}{internal\\,production\\,+\\,externally\\,produced\\,goods\\,+\\,externally\\,produced\\,services}" } ]
https://en.wikipedia.org/wiki?curid=12626504
1262820
Foreign exchange option
Derivative financial instrument In finance, a foreign exchange option (commonly shortened to just FX option or currency option) is a derivative financial instrument that gives the right but not the obligation to exchange money denominated in one currency into another currency at a pre-agreed exchange rate on a specified date. See Foreign exchange derivative. The foreign exchange options market is the deepest, largest and most liquid market for options of any kind. Most trading is over the counter (OTC) and is lightly regulated, but a fraction is traded on exchanges like the International Securities Exchange, Philadelphia Stock Exchange, or the Chicago Mercantile Exchange for options on futures contracts. The global market for exchange-traded currency options was notionally valued by the Bank for International Settlements at $158.3 trillion in 2005. Example. For example, a GBPUSD contract could give the owner the right to sell £1,000,000 and buy $2,000,000 on December 31. In this case the pre-agreed exchange rate, or strike price, is 2.0000 USD per GBP (or GBP/USD 2.00 as it is typically quoted) and the notional amounts (notionals) are £1,000,000 and $2,000,000. This type of contract is both a call on dollars and a put on sterling, and is typically called a "GBPUSD put", as it is a put on the "exchange rate", although it could equally be called a "USDGBP call". If the rate is lower than 2.0000 on December 31 (say 1.9000), meaning that the dollar is stronger and the pound is weaker, then the option is exercised, allowing the owner to sell GBP at 2.0000 and immediately buy it back in the spot market at 1.9000, making a profit of (2.0000 GBPUSD − 1.9000 GBPUSD) × 1,000,000 GBP = 100,000 USD in the process. If instead they take the profit in GBP (by selling the USD on the spot market) this amounts to 100,000 / 1.9000 = 52,632 GBP. Trading. The difference between FX options and traditional options is that in the latter case the trade is to give an amount of money and receive the right to buy or sell a commodity, stock or other non-money asset. In FX options, the asset in question is also money, denominated in another currency. For example, a call option on oil allows the investor to buy oil at a given price and date. The investor on the other side of the trade is in effect selling a put option on the currency. To eliminate residual risk, traders match the "foreign" currency notionals, not the local currency notionals, else the foreign currencies received and delivered do not offset. In the case of an FX option on a "rate", as in the above example, an option on GBPUSD gives a USD value that is linear in GBPUSD using USD as the numéraire (a move from 2.0000 to 1.9000 yields a 0.10 × £1,000,000 = $100,000 profit), but has a non-linear GBP value. Conversely, the GBP value is linear in the USDGBP rate, while the USD value is non-linear. This is because inverting a rate has the effect of formula_0, which is non-linear. Hedging. Corporations primarily use FX options to hedge "uncertain" future cash flows in a foreign currency. The general rule is to hedge "certain" foreign currency cash flows with "forwards", and "uncertain" foreign cash flows with "options". Suppose a United Kingdom manufacturing firm expects to be paid in US dollars for a piece of engineering equipment to be delivered in 90 days. If the GBP strengthens against the US$ over the next 90 days the UK firm loses money, as it will receive less GBP after converting the US$ into GBP. However, if the GBP weakens against the US$, then the UK firm receives more GBP.
This uncertainty exposes the firm to FX risk. Assuming that the cash flow is certain, the firm can enter into a forward contract to deliver the US$ in 90 days time, in exchange for GBP at the current forward exchange rate. This forward contract is free, and, presuming the expected cash arrives, exactly matches the firm's exposure, perfectly hedging their FX risk. If the cash flow is uncertain, a forward FX contract exposes the firm to FX risk in the "opposite" direction, in the case that the expected USD cash is "not" received, typically making an option a better choice. Using options, the UK firm can purchase a GBP call/USD put option (the right to sell part or all of their expected income for pounds sterling at a predetermined rate), which limits the downside FX risk while preserving the benefit if the exchange rate moves in the firm's favour. Valuation: the Garman–Kohlhagen model. As in the Black–Scholes model for stock options and the Black model for certain interest rate options, the value of a European option on an FX rate is typically calculated by assuming that the rate follows a log-normal process. The earliest currency options pricing model was published by Biger and Hull (Financial Management, spring 1983) and preceded the Garman–Kohlhagen model. In 1983 Garman and Kohlhagen extended the Black–Scholes model to cope with the presence of two interest rates (one for each currency). Suppose that formula_1 is the risk-free interest rate to expiry of the domestic currency and formula_2 is the foreign currency risk-free interest rate (where domestic currency is the currency in which we obtain the value of the option; the formula also requires that FX rates, both strike and current spot, be quoted in terms of "units of domestic currency per unit of foreign currency"). The results are also in the same units and to be meaningful need to be converted into one of the currencies. Then the domestic currency value of a call option into the foreign currency is formula_3 and the value of a put option is formula_4 where: formula_5 formula_6 formula_7 is the current spot rate, formula_8 is the strike price, formula_9 is the cumulative normal distribution function, formula_1 is the domestic risk-free simple interest rate, formula_2 is the foreign risk-free simple interest rate, formula_10 is the time to maturity (calculated according to the appropriate day count convention) and formula_11 is the volatility of the FX rate. Risk management. A wide range of techniques are in use for calculating the options risk exposure, or Greeks (as for example the Vanna-Volga method). Although the option prices produced by every model agree (with Garman–Kohlhagen), risk numbers can vary significantly depending on the assumptions used for the properties of spot price movements, volatility surface and interest rate curves. After Garman–Kohlhagen, the most common models are SABR and local volatility, although when agreeing risk numbers with a counterparty (e.g. for exchanging delta, or calculating the strike on a 25 delta option) Garman–Kohlhagen is always used.
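The Garman–Kohlhagen formulas above translate directly into code. A minimal sketch; the standard normal CDF comes from Python's statistics module, and the spot, strike, rates, volatility and maturity are arbitrary illustrative inputs.

```python
from math import exp, log, sqrt
from statistics import NormalDist

def garman_kohlhagen(S0, K, T, rd, rf, sigma, call=True):
    """Value of a European FX option in domestic-currency units.
    S0 and K are quoted as units of domestic currency per unit of foreign currency."""
    N = NormalDist().cdf
    d1 = (log(S0 / K) + (rd - rf + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    if call:
        return S0 * exp(-rf * T) * N(d1) - K * exp(-rd * T) * N(d2)
    return K * exp(-rd * T) * N(-d2) - S0 * exp(-rf * T) * N(-d1)

# Example: GBPUSD put, spot 1.90, strike 2.00, 90 days, 5% USD rate, 4% GBP rate, 12% volatility
print(garman_kohlhagen(S0=1.90, K=2.00, T=90 / 365, rd=0.05, rf=0.04, sigma=0.12, call=False))
```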
[ { "math_id": 0, "text": "x \\mapsto 1/x" }, { "math_id": 1, "text": "r_d" }, { "math_id": 2, "text": "r_f" }, { "math_id": 3, "text": "c = S_0e^{-r_f T}\\mathcal{N}(d_1) - Ke^{-r_d T}\\mathcal{N}(d_2)" }, { "math_id": 4, "text": "p = Ke^{-r_d T}\\mathcal{N}(-d_2) - S_0e^{-r_f T}\\mathcal{N}(-d_1)" }, { "math_id": 5, "text": "d_1 = \\frac{\\ln(S_0/K) + (r_d - r_f + \\sigma^2/2)T}{\\sigma\\sqrt{T}}" }, { "math_id": 6, "text": "d_2 = d_1 - \\sigma\\sqrt{T}" }, { "math_id": 7, "text": "S_0" }, { "math_id": 8, "text": "K" }, { "math_id": 9, "text": "\\mathcal{N}(x)" }, { "math_id": 10, "text": "T" }, { "math_id": 11, "text": "\\sigma" } ]
https://en.wikipedia.org/wiki?curid=1262820
12629
Global illumination
Group of rendering algorithms used in 3D computer graphics Global illumination (GI), or indirect illumination, is a group of algorithms used in 3D computer graphics that are meant to add more realistic lighting to 3D scenes. Such algorithms take into account not only the light that comes directly from a light source ("direct illumination"), but also subsequent cases in which light rays from the same source are reflected by other surfaces in the scene, whether reflective or not ("indirect illumination"). Theoretically, reflections, refractions, and shadows are all examples of global illumination, because when simulating them, one object affects the rendering of another (as opposed to an object being affected only by a direct source of light). In practice, however, only the simulation of diffuse inter-reflection or caustics is called global illumination. Algorithms. Images rendered using global illumination algorithms often appear more photorealistic than those using only direct illumination algorithms. However, such images are computationally more expensive and consequently much slower to generate. One common approach is to compute the global illumination of a scene and store that information with the geometry (e.g., radiosity). The stored data can then be used to generate images from different viewpoints for generating walkthroughs of a scene without having to go through expensive lighting calculations repeatedly. Radiosity, ray tracing, beam tracing, cone tracing, path tracing, volumetric path tracing, Metropolis light transport, ambient occlusion, photon mapping, signed distance field and image-based lighting are all examples of algorithms used in global illumination, some of which may be used together to yield results that are not fast, but accurate. These algorithms model diffuse inter-reflection, which is a very important part of global illumination; however most of these (excluding radiosity) also model specular reflection, which makes them more accurate algorithms to solve the lighting equation and provide a more realistically illuminated scene. The algorithms used to calculate the distribution of light energy between surfaces of a scene are closely related to heat transfer simulations performed using finite-element methods in engineering design. Photorealism. Achieving accurate computation of global illumination in real-time remains difficult. In real-time 3D graphics, the diffuse inter-reflection component of global illumination is sometimes approximated by an "ambient" term in the lighting equation, which is also called "ambient lighting" or "ambient color" in 3D software packages. Though this method of approximation (also known as a "cheat" because it's not really a global illumination method) is easy to perform computationally, when used alone it does not provide an adequately realistic effect. Ambient lighting is known to "flatten" shadows in 3D scenes, making the overall visual effect more bland. However, used properly, ambient lighting can be an efficient way to make up for a lack of processing power. Procedure. More and more specialized algorithms are used in 3D programs that can effectively simulate global illumination. These algorithms are numerical approximations of the rendering equation. Well known algorithms for computing global illumination include path tracing, photon mapping and radiosity. The following approaches can be distinguished here: inversion (formula_0), expansion (formula_1), and iteration (formula_2). In light-path notation, global lighting corresponds to paths of the type L (D | S) * E.
Another way to simulate real global illumination is the use of high-dynamic-range images (HDRIs), also known as environment maps, which encircle and illuminate the scene. This process is known as image-based lighting; a full treatment can be found in Image-based lighting. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
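The operator equations listed in the procedure section (inversion, expansion and iteration) can be illustrated on a tiny discretized scene in which the light-transport operator becomes a matrix. The sketch below uses an invented 3×3 transport matrix and emission vector purely as toy values:

```python
import numpy as np

# Toy transport operator T (reflectance times form factors) and emitted light L_e for three patches
T = np.array([[0.0, 0.3, 0.2],
              [0.3, 0.0, 0.4],
              [0.2, 0.4, 0.0]])
L_e = np.array([1.0, 0.0, 0.0])        # only patch 0 emits light

# Inversion: L = (I - T)^-1 L_e
L_inversion = np.linalg.solve(np.eye(3) - T, L_e)

# Expansion: L = sum_i T^i L_e (truncated Neumann series)
L_expansion = sum(np.linalg.matrix_power(T, i) @ L_e for i in range(50))

# Iteration: L^(n) = L_e + T L^(n-1)
L_iteration = L_e.copy()
for _ in range(50):
    L_iteration = L_e + T @ L_iteration

print(L_inversion, L_expansion, L_iteration)   # the three approaches agree to numerical precision
```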
[ { "math_id": 0, "text": "L = (1-T)^{-1} L^e\\," }, { "math_id": 1, "text": "L = \\sum_{i=0}^\\infty T^iL^e" }, { "math_id": 2, "text": "L_n tl_ e + = L ^{(n-1)}" } ]
https://en.wikipedia.org/wiki?curid=12629
12630
Geometric series
Sum of an (infinite) geometric progression In mathematics, a geometric series is a series in which the ratio of successive adjacent terms is constant. In other words, the sum of consecutive terms of a geometric sequence forms a geometric series. The name indicates that each term is the geometric mean of its two neighbouring terms, similar to how the name arithmetic series indicates each term is the arithmetic mean of its two neighbouring terms. In general, a geometric series is written as formula_0, where formula_1 is the initial term and a coefficient of each later term and formula_2 is the common ratio between adjacent terms. For example, the series formula_3 is geometric because each successive term can be obtained by multiplying the previous term by formula_4. Truncated geometric series formula_5 are called "finite geometric series" in certain branches of mathematics, especially in 19th century calculus and in probability and statistics and their applications. The standard generator form expression for the infinite geometric series is formula_6 and the generator form expression for the finite geometric series is formula_7 Any finite geometric series has the sum formula_8, and when formula_9 the infinite series converges to the value formula_10. Geometric series have been studied in mathematics from at least the time of Euclid in his work, "Elements", which explored geometric proportions. Archimedes further advanced the study through his work on infinite sums, particularly in calculating areas and volumes of geometric shapes (for instance calculating the area inside a parabola) and the early development of calculus. They serve as prototypes for frequently used mathematical tools such as Taylor series, Fourier series, and matrix exponentials. Geometric series have been applied to model a wide variety of natural phenomena, such as the expansion of the universe where the common ratio "r" is defined by Hubble's constant and the decay of radioactive carbon-14 atoms where the common ratio "r" is defined by the half-life of carbon-14. Parameters. The geometric series formula_11 is an infinite series derived from a special type of sequence called a geometric progression, which is defined by just two parameters: the initial term formula_1 and the common ratio formula_2. Finite geometric series formula_5 have a third parameter, the final term's power formula_12 In applications with units of measurement, the initial term formula_1 provides the units of the series and the common ratio formula_2 is a dimensionless quantity. The following table shows several geometric series with various initial terms and common ratios. Initial term "a". The geometric series formula_11 has the same coefficient formula_1 in every term. The first term of a geometric series is equal to this coefficient and is the parameter formula_1 of that geometric series, giving it its common interpretation: the "initial term." This initial term defines the units of measurement of the series as a whole, if it has any. In generator form, formula_14 this term is technically written formula_15 instead of the bare formula_1. This is equivalent because formula_16 for any number formula_17 In contrast, general power series formula_18 have coefficients formula_19 that can vary from term to term. In other words, the geometric series is a special case of the power series. Connections between power series and geometric series are discussed below in the section . Common ratio "r". 
The parameter formula_2 is called the common ratio because it is the ratio of any term with the previous term in the series. formula_20 where formula_21 represents the formula_22-th-power term of the geometric series. The common ratio formula_2 can be thought of as a multiplier used to calculate each next term in the series from the previous term. It must be a dimensionless quantity. When formula_23 it is often called a growth rate or rate of expansion and when formula_24 it is often called a decay rate or shrink rate, where the idea that it is a "rate" comes from interpreting formula_25 as a sort of discrete time variable. When an application area has specialized vocabulary for specific types of growth, expansion, shrinkage, and decay, that vocabulary will also often be used to name formula_2 parameters of geometric series. In economics, for instance, rates of increase and decrease of price levels are called inflation rates and deflation rates, while rates of increase in values of investments include rates of return and interest rates. The interpretation of formula_25 as a time variable is often exactly correct in applications, such as the examples of amortized analysis of algorithmic complexity and calculating the present value of an annuity in below. In such applications it is also common to report a "growth rate" formula_2 in terms of another expression such as formula_26, which is a percentage growth rate, or formula_27, which is a doubling time, the opposite of a half-life. Complex common ratio. The common ratio formula_2 can also be a complex number given by formula_28, where formula_29 is the magnitude of the number as a vector in the complex plane, formula_30 is the angle or orientation of that vector, formula_31 is Euler's number, and formula_32. In this case, the expanded form of the geometric series is formula_33 An example of how this behaves for formula_30 values that increase linearly over time with a constant angular frequency formula_34, such that formula_35 is shown in the adjacent video. For formula_35 the geometric series becomes formula_36 where the first term is a vector of length formula_1 that does not change orientation and all the following terms are vectors of proportional lengths rotating in the complex plane at integer multiples of the fundamental angular frequency formula_34, also known as harmonics of formula_34. As the video shows, these sums trace a circle. The period of rotation around the circle is formula_37. Convergence. The convergence of the sequence of partial sums of the geometric series depends on the value of the common ratio formula_2 alone: * If formula_38, the terms of the series approach zero (becoming smaller and smaller in magnitude) and the ratios of the successive terms of the series stay constant and less than one in magnitude, and thus the series converges absolutely by the ratio test for series convergence. The limit value is formula_39 * If formula_40, the terms of the series become larger and larger in magnitude and the sum of the terms also gets larger and larger in magnitude, so the series diverges. * If formula_41, the sequence of partial sums of the series does not converge. When formula_42, all the terms of the series are the same and the series grows to infinity. When formula_43, the terms take two values alternately and therefore, the sequence of partial sums of the terms oscillates between two values. Consider, for example, Grandi's series: formula_44. Partial sums of the terms oscillate between 1 and 0. 
Thus, the sequence of partial sums does not converge to any finite value. When formula_45 and formula_46, the partial sums circulate periodically among the values formula_47, never converging to a limit, and generally when formula_48 for any integer formula_49 and with any formula_50, the partial sums of the series will circulate indefinitely with a period of formula_49, never converging to a limit. When the series converges, the rate of convergence and the pattern of convergence depend on the value of the common ratio formula_2. The rate of convergence gets slower as formula_29 approaches formula_13. If formula_51 and formula_9, adjacent terms in the geometric series alternate between positive and negative and the partial sums of the terms oscillate above and below their eventual limit, whereas if formula_52 and formula_9 then terms all share the same sign and the partial sums of the terms approach their eventual limit monotonically. More on this is described below in the section . Sum. For convenience, the sum of the geometric series is often denoted by formula_53 and its partial sums (the sums of the series going up to only the "n"th power term) are often denoted formula_54 The partial sum of the first formula_55 terms of a geometric series, up to and including the formula_56 term, formula_57 is given by the closed form formula_58 where "r" is the common ratio. The case formula_59 is just simple addition, a case of an arithmetic series. The formula for the partial sums formula_60 with formula_61 can be found as follows: formula_62 As formula_2 approaches 1, polynomial division or L'Hospital's rule recovers the case formula_63. As formula_25 approaches infinity, the absolute value of "r" must be less than one for the series to converge, and when it does, the series converges absolutely. The sum of the infinite series then becomes formula_64 The formula holds for real formula_65 and for complex formula_2. The question of whether an infinite series converges is fundamentally a question about the distance between successive values: given enough terms, do the values of the partial sums get arbitrarily close to a finite limit value that they approach? In the above derivation of the closed form of the geometric series, the interpretation of the distance between two values was the distance between their locations on the number line or on the complex plane. That is the most common interpretation of the distance between two numbers. However, the p-adic metric, which has become a critical notion in modern number theory, offers a definition of distance such that the geometric series 1 + 2 + 4 + 8 + ... with formula_66 and formula_67 actually does converge to formula_68 even though formula_2 is outside the typical convergence range formula_9. Rate of convergence. For any sequence formula_69, its rate of convergence to a limit value formula_70 is determined by the parameters formula_71 and formula_72 such that formula_73 formula_71 is called the order of convergence, while formula_72 is called the rate of convergence. In the case of the sequence of partial sums of the geometric series, the relevant sequence is formula_60 and its limit is formula_53. Therefore, the rate and order are found via formula_74 Using formula_75 and setting formula_76 gives formula_77 so the order of convergence of the geometric series is 1 and its rate of convergence is formula_29. Convergence of order one is also called linear convergence or exponential convergence, depending on context. Geometric proofs of convergence. 
Alternatively, a geometric interpretation of the convergence for formula_24 is shown in the adjacent diagram. The area of the white triangle is the series remainder formula_78 Each additional term in the partial series reduces the area of that white triangle remainder by the area of the trapezoid representing the added term. The trapezoid areas (i.e., the values of the terms) get progressively thinner and shorter and closer to the origin. As the number of trapezoids approaches infinity, the white triangle remainder will vanish and therefore formula_60 will converge to formula_53. In contrast, with formula_23, shown in the next adjacent figure, the trapezoid areas representing the terms of the series would instead get progressively wider and taller and farther from the origin, not converging to the origin as terms and also not converging in sum as a series. The next adjacent diagram provides a geometric interpretation of a converging alternating geometric series with formula_79 where the areas corresponding to the negative terms are shown below the x-axis. When each positive area is paired with its adjacent smaller negative area, the result is a series of non-overlapping trapezoids, separated by gaps. To eliminate these gaps, broaden each trapezoid so that it spans the rightmost formula_80 of the original triangle area instead of just the rightmost formula_81. At the same time, to ensure the areas of the trapezoids remain consistent during this transformation, a rescaling is necessary. The required scaling factor formula_82 can be derived from the equation:formula_83Simplifying this gives:formula_84 where formula_85 Because formula_51, this scaling factor decreases the heights of the trapezoids to fill the gaps. After the gaps are removed, pairs of terms in the converging alternating geometric series form a new converging geometric series with a common ratio formula_86, reflecting the pairing of terms. The rescaled coefficient formula_87 compensates for the gap-filling. History. Zeno of Elea (c.495 – c.430 BC). 2,500 years ago, Greek mathematicians believed that an infinitely long list of positive numbers must sum to infinity. Therefore, Zeno of Elea created a paradox when he demonstrated that in order to walk from one place to another, one must first walk half the distance there, and then half of the remaining distance, and half of that remaining distance, and so on, covering infinitely many intervals before arriving. In doing so, he partitioned a fixed distance into an infinitely long list of halved remaining distances, each of which has length greater than zero. Zeno's paradox revealed to the Greeks that their assumption about an infinitely long list of positive numbers needing to add up to infinity was incorrect. Euclid of Alexandria (c.300 BC). "Euclid's Elements of Geometry" Book IX, Proposition 35, proof (of the proposition in adjacent diagram's caption): &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Let AA', BC, DD', EF be any multitude whatsoever of continuously proportional numbers, beginning from the least AA'. And let BG and FH, each equal to AA', have been subtracted from BC and EF. I say that as GC is to AA', so EH is to AA', BC, DD'. For let FK be made equal to BC, and FL to DD'. And since FK is equal to BC, of which FH is equal to BG, the remainder HK is thus equal to the remainder GC. And since as EF is to DD', so DD' to BC, and BC to AA' [Prop. 7.13], and DD' equal to FL, and BC to FK, and AA' to FH, thus as EF is to FL, so LF to FK, and FK to FH. 
By separation, as EL to LF, so LK to FK, and KH to FH [Props. 7.11, 7.13]. And thus as one of the leading is to one of the following, so (the sum of) all of the leading to (the sum of) all of the following [Prop. 7.12]. Thus, as KH is to FH, so EL, LK, KH to LF, FK, HF. And KH equal to CG, and FH to AA', and LF, FK, HF to DD', BC, AA'. Thus, as CG is to AA', so EH to DD', BC, AA'. Thus, as the excess of the second is to the first, so is the excess of the last is to all those before it. The very thing it was required to show. The terseness of Euclid's propositions and proofs may have been a necessity. As is, the "Elements of Geometry" is over 500 pages of propositions and proofs. Making copies of this popular textbook was labor intensive given that the printing press was not invented until 1440. And the book's popularity lasted a long time: as stated in the cited introduction to an English translation, "Elements of Geometry" "has the distinction of being the world's oldest continuously used mathematical textbook." So being very terse was being very practical. The proof of Proposition 35 in Book IX could have been even more compact if Euclid could have somehow avoided explicitly equating lengths of specific line segments from different terms in the series. For example, the contemporary notation for geometric series (i.e., "a" + "ar" + "ar"2 + "ar"3 + ... + "ar"n) does not label specific portions of terms that are equal to each other. Also in the cited introduction the editor comments, Most of the theorems appearing in the Elements were not discovered by Euclid himself, but were the work of earlier Greek mathematicians such as Pythagoras (and his school), Hippocrates of Chios, Theaetetus of Athens, and Eudoxus of Cnidos. However, Euclid is generally credited with arranging these theorems in a logical manner, so as to demonstrate (admittedly, not always with the rigour demanded by modern mathematics) that they necessarily follow from five simple axioms. Euclid is also credited with devising a number of particularly ingenious proofs of previously discovered theorems (e.g., Theorem 48 in Book 1). To help translate the proposition and proof into a form that uses current notation, a couple modifications are in the diagram. First, the four horizontal line lengths representing the values of the first four terms of a geometric series are now labeled a, ar, ar2, ar3 in the diagram's left margin. Second, new labels A' and D' are now on the first and third lines so that all the diagram's line segment names consistently specify the segment's starting point and ending point. Here is a phrase by phrase interpretation of the proposition: Similarly, here is a sentence by sentence interpretation of the proof: Archimedes of Syracuse (c.287 – c.212 BC). Archimedes used the sum of a geometric series to compute the area enclosed by a parabola and a straight line. Archimedes' theorem states that the total area under the parabola is 4/3 of the area of the blue triangle. His method was to dissect the area into an infinite number of triangles as shown in the adjacent figure. Archimedes determined that each green triangle has 1/8 the area of the blue triangle, each yellow triangle has 1/8 the area of a green triangle, and so forth. Assuming that the blue triangle has area 1, the total area is an infinite series formula_88 The first term represents the area of the blue triangle, the second term the areas of the two green triangles, the third term the areas of the four yellow triangles, and so on. 
Simplifying the fractions gives formula_89 This is a geometric series with common ratio formula_90 and its sum is formula_91 This computation is an example of the method of exhaustion, an early version of integration. Using calculus, the same area could be found by a definite integral. Nicole Oresme (c.1323 – 1382). In addition to his elegantly simple proof of the divergence of the harmonic series, Nicole Oresme proved that the series formula_92 His diagram for his geometric proof, similar to the adjacent diagram, shows a two dimensional geometric series. The first dimension is horizontal, in the bottom row, representing the geometric series with initial value formula_93 and common ratio formula_94 formula_95 The second dimension is vertical, where the bottom row is a new initial term formula_96 and each subsequent row above it shrinks according to the same common ratio formula_94, making another geometric series with sum formula_97, formula_98 Although difficult to visualize beyond three dimensions, Oresme's insight generalizes to any dimension formula_99. Denoting the sum of the formula_99-dimensional series formula_100, then using the limit of the formula_101-dimensional geometric series, formula_102 as the initial term of a geometric series with the same common ratio in the next dimension, results in a recursive formula for formula_100 with the base case formula_103 given by the usual sum formula with an initial term formula_1, so that: formula_104 within the range formula_105, with formula_106 and formula_107 in Oresme's particular example. Pascal's triangle exhibits the coefficients of these multi-dimensional geometric series, formula_108 where, as usual, the series converge to these closed forms only when formula_105. Examples. Repeating decimals and binaries. Decimal numbers that have repeated patterns that continue forever, for instance formula_109 formula_110 or formula_111 can be interpreted as geometric series and thereby converted to expressions of the ratio of two integers. For example, the repeated decimal fraction formula_112 can be written as the geometric series formula_113 where the initial term is formula_114 and the common ratio is formula_115. The geometric series formula provides the integer ratio that corresponds to the repeating decimal: formula_116 An example that has four digits is the repeating decimal pattern, formula_117 This can be written as the geometric series formula_118 with initial term formula_119 and common ratio formula_120 The geometric series formula provides an integer ratio that corresponds to the repeating decimal: formula_121 This approach extends beyond repeating decimals, that is, base ten, to repeating patterns in other bases such as binary, that is, base two. For example, the binary representation of the number formula_122 is formula_123 where the binary pattern 110001 repeats indefinitely. That binary representation can be written as a geometric series of binary terms, formula_124 where the initial term is formula_125 expressed in base two formula_126 in base ten and the common ratio is formula_127 in base two formula_128 in base ten. Using the geometric series formula as before, formula_129 Connections to power series. 
Like the geometric series, a power series formula_130 has one parameter for a common variable raised to successive powers, denoted formula_131 here, corresponding to the geometric series's "r", but it has additional parameters formula_132 one for each term in the series, for the distinct coefficients of each formula_133, rather than just a single additional parameter formula_1 for all terms, the common coefficient of formula_134 in each term of a geometric series. The geometric series can therefore be considered a class of power series in which the sequence of coefficients satisfies formula_135 for all formula_25 and formula_136. This special class of power series plays an important role in mathematics, for instance for the study of ordinary generating functions in combinatorics and the summation of divergent series in analysis. Many other power series can be written as transformations and combinations of geometric series, making the geometric series formula a convenient tool for calculating formulas for those power series as well. As a power series, the geometric series has a radius of convergence of 1. This is a consequence of the Cauchy–Hadamard theorem and the fact that formula_137 for any formula_1. Derivations of other power series formulas. Infinite series formulas. One can use simple variable substitutions to calculate some useful closed form infinite series formulas. For an infinite series containing only even powers of formula_2, for instance, formula_138 and for odd powers only, formula_139 In cases where the sum does not start at "k" = 0, one can use a shift of the index of summation together with a variable substitution, formula_140 The formulas given above are strictly valid only for |"r"| < 1. The latter formula is valid in every Banach algebra, as long as the norm of "r" is less than one, and also in the field of "p"-adic numbers if |"r"|"p" < 1. One can also differentiate to calculate formulas for related sums. For example, formula_141 This formula is only strictly valid for |"r"| < 1 as well. From similar derivations, it follows that, for |"r"| < 1, formula_142 It is also possible to use complex geometric series to calculate the sums of some trigonometric series using complex exponentials and Euler's formula. For example, consider the proposition formula_143 This can be proven via the fact that formula_144 Substituting this into the original series gives formula_145 This is the difference of two geometric series with common ratios formula_146 and formula_147, and so the proof of the original proposition follows via two straightforward applications of the formula for infinite geometric series and then rearrangement of the result using formula_148 and formula_149 to complete the proof. As with the infinite series, one can use variable substitutions and changes of the index of summation to derive other finite power series formulas from the finite geometric series formulas. If one were to begin the sum not from "k" = 1 or 0 but from a different value, say "m", then formula_150 For a geometric series containing only even powers of formula_2, either multiply the finite sum by formula_151 to derive a closed form for the partial sums when formula_61: formula_152 or, equivalently, take formula_86 as the common ratio formula_2 and use the standard formula.
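The closed forms quoted in this section are easy to check numerically. The following short Python sketch (an illustration added here, not part of the article or its sources; the helper names are arbitrary) compares the shifted-start sum, the even-power partial sum, and the differentiated series against direct summation for sample values of "a" and "r":

# Numerical spot-checks of the closed forms quoted above (illustrative only).
# Assumptions: plain Python floats, |r| < 1 where an infinite sum is truncated.

def finite_geometric(a, r, m, n):
    """Closed form for a*r^m + ... + a*r^n (valid for r != 1)."""
    return a * (r**m - r**(n + 1)) / (1 - r)

def even_power_partial(a, r, n):
    """Closed form for a + a*r^2 + ... + a*r^(2n) (valid for r^2 != 1)."""
    return a * (1 - r**(2 * n + 2)) / (1 - r**2)

a, r = 2.0, 0.6

# Shifted-start finite sum: compare closed form with direct summation.
direct = sum(a * r**k for k in range(3, 11))
assert abs(direct - finite_geometric(a, r, 3, 10)) < 1e-12

# Even-power partial sum.
direct_even = sum(a * r**(2 * k) for k in range(0, 8))
assert abs(direct_even - even_power_partial(a, r, 7)) < 1e-12

# Differentiated series: sum over k >= 0 of k * r^k should approach r / (1 - r)^2.
approx = sum(k * r**k for k in range(2000))
assert abs(approx - r / (1 - r)**2) < 1e-9

print("all closed forms agree with direct summation")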
For a series with only odd powers of "r", the same derivation via multiplying the sum by formula_151 applies when formula_61: formula_153 or, equivalently, take formula_154 for formula_1 and formula_86 for formula_2 in the standard form. Differentiating such formulas with respect to "r" can give the formulas formula_155 For example: formula_156 An exact formula for any of the generalized sums formula_157 when formula_158 is formula_159 where formula_160 denotes a Stirling number of the second kind. Applications. Economics. In economics, geometric series are used to represent the present value of an annuity (a sum of money to be paid in regular intervals). For example, suppose that a payment of $100 will be made to the owner of the annuity once per year (at the end of the year) in perpetuity. Receiving $100 a year from now is worth less than an immediate $100, because one cannot invest the money until one receives it. In particular, the present value of $100 one year in the future is $100 / (1 + formula_161), where formula_161 is the yearly interest rate. Similarly, a payment of $100 two years in the future has a present value of $100 / (1 + formula_161)² (squared because two years' worth of interest is lost by not receiving the money right now). Therefore, the present value of receiving $100 per year in perpetuity is formula_162 which is the infinite series: formula_163 This is a geometric series with common ratio 1 / (1 + formula_161). The sum is the first term divided by (one minus the common ratio): formula_164 For example, if the yearly interest rate is 10% (formula_161 = 0.10), then the entire annuity has a present value of $100 / 0.10 = $1000. This sort of calculation is used to compute the APR of a loan (such as a mortgage loan). It can also be used to estimate the present value of expected stock dividends, or the terminal value of a financial asset assuming a stable growth rate. Fractal geometry. The area inside the Koch snowflake can be described as the union of infinitely many equilateral triangles (see figure). Each side of the green triangle is exactly 1/3 the size of a side of the large blue triangle, and therefore has exactly 1/9 the area. Similarly, each yellow triangle has 1/9 the area of a green triangle, and so forth. Taking the blue triangle as a unit of area, the total area of the snowflake is formula_165 The first term of this series represents the area of the blue triangle, the second term the total area of the three green triangles, the third term the total area of the twelve yellow triangles, and so forth. Excluding the initial 1, this series is geometric with constant ratio "r" = 4/9. The first term of the geometric series is "a" = 3(1/9) = 1/3, so the sum is formula_166 Thus the Koch snowflake has 8/5 of the area of the base triangle. Trigonometric power series. We can find the power series expansion of the arctangent function using some knowledge of differentiation, integration and the sum of a geometric series. The derivative of formula_167 is known to be formula_168. This is a standard result, derived as follows. Let formula_169 and formula_170 represent formula_171 and formula_172, formula_173 Therefore, letting "u"("x") = "x", the arctan function is the integral formula_174 which is called Gregory's series and is commonly attributed to Madhava of Sangamagrama (c. 1340 – c. 1425). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
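To make the repeating-decimal, annuity, and Koch-snowflake computations worked above easy to reproduce, here is a small Python sketch (added for illustration; it is not drawn from the article or its references, and the use of exact fractions is simply a convenient choice):

# Applying the geometric-series sum a / (1 - r) to three worked examples above.
from fractions import Fraction

# Repeating decimal 0.777... : a = 7/10, r = 1/10.
a, r = Fraction(7, 10), Fraction(1, 10)
print(a / (1 - r))                      # 7/9

# Repeating block 0.12341234... : a = 1234/10000, r = 1/10000.
a, r = Fraction(1234, 10000), Fraction(1, 10000)
print(a / (1 - r))                      # 1234/9999

# Present value of a $100-per-year perpetuity at 10% yearly interest:
# a = 100/(1+I), r = 1/(1+I), so the sum works out to 100/I.
I = Fraction(1, 10)
a = 100 / (1 + I)
r = 1 / (1 + I)
print(a / (1 - r))                      # 1000

# Area of the Koch snowflake relative to its base triangle: 1 + a/(1-r)
# with a = 1/3 and r = 4/9.
a, r = Fraction(1, 3), Fraction(4, 9)
print(1 + a / (1 - r))                  # 8/5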
[ { "math_id": 0, "text": "a + ar + ar^2 + ar^3 + ..." }, { "math_id": 1, "text": "a" }, { "math_id": 2, "text": "r" }, { "math_id": 3, "text": "\\frac{1}{2} \\,+\\, \\frac{1}{4} \\,+\\, \\frac{1}{8} \\,+\\, \\frac{1}{16} \\,+\\, \\cdots" }, { "math_id": 4, "text": "1/2" }, { "math_id": 5, "text": "a + ar + ar^2 + ar^3 + \\dots + ar^n" }, { "math_id": 6, "text": " a + ar + ar^2 + ar^3 + \\dots = \\sum^{\\infty}_{k=0} a r^k" }, { "math_id": 7, "text": "a + ar + ar^2 + ar^3 + \\dots + ar^n = \\sum^{n}_{k=0} a r^k." }, { "math_id": 8, "text": "a (1 - r^{n+1}) / (1 - r)" }, { "math_id": 9, "text": "|r| < 1" }, { "math_id": 10, "text": "a / (1 - r)" }, { "math_id": 11, "text": "a + ar + ar^2 + ar^3 + \\dots" }, { "math_id": 12, "text": "n." }, { "math_id": 13, "text": "1" }, { "math_id": 14, "text": "\\sum^{\\infty}_{k=0} a r^k," }, { "math_id": 15, "text": "ar^0" }, { "math_id": 16, "text": "r^0 = 1" }, { "math_id": 17, "text": "r." }, { "math_id": 18, "text": "a_0 + a_1 r + a_2 r^2 + a_3 r^3 + \\dots" }, { "math_id": 19, "text": "a_k" }, { "math_id": 20, "text": "r = \\frac{t_{k+1}}{t_k} = \\frac{ar^{k+1}}{ar^{k}}" }, { "math_id": 21, "text": "t_k" }, { "math_id": 22, "text": "k" }, { "math_id": 23, "text": "r > 1" }, { "math_id": 24, "text": "0 < r < 1" }, { "math_id": 25, "text": "n" }, { "math_id": 26, "text": "(r - 1) / 100" }, { "math_id": 27, "text": "1 / \\log_2 r" }, { "math_id": 28, "text": "\\vert r \\vert e^{i \\theta}" }, { "math_id": 29, "text": "|r|" }, { "math_id": 30, "text": "\\theta" }, { "math_id": 31, "text": "e" }, { "math_id": 32, "text": "i^2= -1" }, { "math_id": 33, "text": "a + a \\vert r \\vert e^{i \\theta} + a\\vert r \\vert ^2 e^{2i\\theta} + a \\vert r \\vert^3 e^{3i\\theta} + \\dots" }, { "math_id": 34, "text": "\\omega_0" }, { "math_id": 35, "text": "\\theta = \\omega_0 t," }, { "math_id": 36, "text": "a + a \\vert r \\vert e^{i \\omega_0 t} + a\\vert r \\vert ^2 e^{2i\\omega_0 t} + a \\vert r \\vert^3 e^{3i\\omega_0t} + \\dots" }, { "math_id": 37, "text": "2 \\pi / \\omega_0" }, { "math_id": 38, "text": "\\vert r \\vert < 1" }, { "math_id": 39, "text": "\\frac{a}{1-r}." }, { "math_id": 40, "text": "\\vert r \\vert > 1" }, { "math_id": 41, "text": "\\vert r \\vert = 1" }, { "math_id": 42, "text": "r=1" }, { "math_id": 43, "text": "r = -1" }, { "math_id": 44, "text": "1 - 1 + 1 - 1 + 1 + ... " }, { "math_id": 45, "text": "r=i" }, { "math_id": 46, "text": "a = 1" }, { "math_id": 47, "text": "1, 1 + i, i, 0, 1, 1+ i,i,0, \\ldots" }, { "math_id": 48, "text": "r= e^{2\\pi i / \\tau}" }, { "math_id": 49, "text": "\\tau" }, { "math_id": 50, "text": "a \\neq 0" }, { "math_id": 51, "text": "r < 0" }, { "math_id": 52, "text": "r > 0" }, { "math_id": 53, "text": "S" }, { "math_id": 54, "text": "S_n." 
}, { "math_id": 55, "text": "n + 1" }, { "math_id": 56, "text": "r^{n}" }, { "math_id": 57, "text": "\n\\begin{align}\nS_n &= ar^0 + ar^1 + \\cdots + ar^{n}\\\\\n&= \\sum_{k=0}^{n} ar^k\n\\end{align}\n" }, { "math_id": 58, "text": "\n\\begin{align}\nS_n = \n\\begin{cases}\na(n + 1) & r = 1\\\\\na\\left(\\frac{1-r^{n+1}}{1-r}\\right) & \\text{otherwise}\n\\end{cases}\n\\end{align}\n" }, { "math_id": 59, "text": "r = 1" }, { "math_id": 60, "text": "S_n" }, { "math_id": 61, "text": "r \\neq 1" }, { "math_id": 62, "text": "\\begin{align} \n S_n &= ar^0 + ar^1 + \\cdots + ar^{n},\\\\\n rS_n &= ar^1 + ar^2 + \\cdots + ar^{n+1},\\\\\n S_n - rS_n &= ar^0 - ar^{n+1},\\\\\n S_n\\left(1-r\\right) &= a\\left(1-r^{n+1}\\right),\\\\\n S_n &= a\\left(\\frac{1-r^{n+1}}{1-r}\\right), \\text{ for } r \\neq 1.\n\\end{align}" }, { "math_id": 63, "text": "S_n = a(n + 1)" }, { "math_id": 64, "text": "\n\\begin{align}\nS &= a+ar+ar^2+ar^3+ar^4+\\cdots\\\\\n &= \\lim_{n \\rightarrow \\infty} S_n\\\\\n &= \\lim_{n \\rightarrow \\infty} \\frac{a(1-r^{n+1})}{1-r} \\\\\n &= \\frac{a}{1-r} - \\lim_{n \\rightarrow \\infty} \\frac{a r^{n+1}}{1-r} \\\\\n &= \\frac{a}{1-r} \\text{ for } |r| < 1.\n\\end{align}\n" }, { "math_id": 65, "text": "\nr\n" }, { "math_id": 66, "text": "a=1" }, { "math_id": 67, "text": "r = 2" }, { "math_id": 68, "text": "\\frac{a}{1-r}=\\frac{1}{1-2}=-1" }, { "math_id": 69, "text": "x_n" }, { "math_id": 70, "text": "x_L" }, { "math_id": 71, "text": " q" }, { "math_id": 72, "text": " \\mu" }, { "math_id": 73, "text": " \\lim _{n \\rightarrow \\infty} \\frac{\\left|x_{n+1}- x_L\\right|}{\\left|x_{n}-x_L\\right|^{q}}=\\mu." }, { "math_id": 74, "text": " \\lim _{n \\rightarrow \\infty} \\frac{\\left|S_{n+1} - S\\right|}{\\left|S_{n}-S\\right|^{q}}=\\mu." }, { "math_id": 75, "text": "|S_n - S| = \\left| \\frac{ar^{n+1}}{1-r} \\right|" }, { "math_id": 76, "text": " q = 1" }, { "math_id": 77, "text": "\\lim _{n \\rightarrow \\infty} \\frac{\\left| \\frac{ar^{n+2}}{1-r} \\right|}{\\left| \\frac{ar^{n+1}}{1-r} \\right|^{1}} = |r| =\\mu," }, { "math_id": 78, "text": "S - S_n = \\frac{ar^{n+1}}{1-r}" }, { "math_id": 79, "text": "-1 < r \\leq 0," }, { "math_id": 80, "text": "1 - r^2" }, { "math_id": 81, "text": "1 - |r| = 1 + r" }, { "math_id": 82, "text": "\\lambda" }, { "math_id": 83, "text": "\\lambda (1 - r^2) = 1 + r" }, { "math_id": 84, "text": "\\lambda = \\frac{1 + r}{1 - r^2} = \\frac{1}{1 - r}" }, { "math_id": 85, "text": "-1 < r \\leq 0." }, { "math_id": 86, "text": "r^2" }, { "math_id": 87, "text": "a = 1 / (1-r)" }, { "math_id": 88, "text": "1 + 2\\left(\\frac{1}{8}\\right) + 4\\left(\\frac{1}{8}\\right)^2 + 8\\left(\\frac{1}{8}\\right)^3 + \\cdots." }, { "math_id": 89, "text": "1 + \\frac{1}{4} + \\frac{1}{16} + \\frac{1}{64} + \\cdots." }, { "math_id": 90, "text": "r = 1/4" }, { "math_id": 91, "text": "\\frac{1}{1 -r}\\ = \\frac{1}{1 -\\frac{1}{4}} = \\frac{4}{3}." }, { "math_id": 92, "text": "\\frac{1}{2}+\\frac{2}{4}+\\frac{3}{8}+\\frac{4}{16}+\\frac{5}{32}+\\frac{6}{64}+\\frac{7}{128}+\\dots = 2." 
}, { "math_id": 93, "text": "a=\\frac{1}{2}" }, { "math_id": 94, "text": "r=\\frac{1}{2}" }, { "math_id": 95, "text": "\n\\begin{align}\nS &=\\frac{1}{2}+\\frac{1}{4}+\\frac{1}{8}+\\frac{1}{16}+\\frac{1}{32}+\\dots = \\frac{\\frac{1}{2}}{1 - \\frac{1}{2}} = 1\n\n\\end{align}\n" }, { "math_id": 96, "text": "a = S" }, { "math_id": 97, "text": "\nT\n" }, { "math_id": 98, "text": "\n\\begin{align}\nT &= S \\left(1 + \\frac{1}{2} + \\frac{1}{4} + \\frac{1}{8} + \\dots\\right)\\\\\n&= \\frac{S}{1-r} = \\frac{1}{1-\\frac{1}{2}} = 2.\n\\end{align}\n" }, { "math_id": 99, "text": "d" }, { "math_id": 100, "text": "S(d)" }, { "math_id": 101, "text": "(d-1)" }, { "math_id": 102, "text": "S(d-1)," }, { "math_id": 103, "text": "S(1)" }, { "math_id": 104, "text": "S(d) = \\frac{S(d-1)}{(1-r)} = \\frac{a}{(1-r)^d}" }, { "math_id": 105, "text": "\\vert r \\vert <1" }, { "math_id": 106, "text": "a = 1/2" }, { "math_id": 107, "text": "r = 1/2" }, { "math_id": 108, "text": "\\begin{matrix}\n 1 \\\\\n 1 \\quad 1 \\\\\n 1 \\quad 2 \\quad 1 \\\\\n 1 \\quad 3 \\quad 3 \\quad 1 \\\\\n 1 \\quad 4 \\quad 6 \\quad 4 \\quad 1\n\\end{matrix}" }, { "math_id": 109, "text": "0.7777\\ldots," }, { "math_id": 110, "text": "0.9999\\ldots," }, { "math_id": 111, "text": "0.123412341234\\ldots," }, { "math_id": 112, "text": "0.7777\\ldots" }, { "math_id": 113, "text": "0.7777\\ldots = \\frac{7}{10} + \\frac{7}{10}\\frac{1}{10} + \\frac{7}{10}\\frac{1}{10^2} + \\frac{7}{10}\\frac{1}{10^3} + \\cdots" }, { "math_id": 114, "text": "a = 7 / 10" }, { "math_id": 115, "text": "r = 1 / 10" }, { "math_id": 116, "text": "0.7777\\ldots = \\frac{a}{1-r} = \\frac{7/10}{1-1/10} = \\frac{7/10}{9/10} = \\frac{7}{9}." }, { "math_id": 117, "text": "0.123412341234...." }, { "math_id": 118, "text": "0.123412341234\\ldots = \\frac{1234}{10000} + \\frac{1234}{10000}\\frac{1}{10000} + \\frac{1234}{10000}\\frac{1}{10000^2} \\,+\\, \\frac{1234}{10000}\\frac{1}{10000^3} + \\cdots" }, { "math_id": 119, "text": "a = 1234/10000" }, { "math_id": 120, "text": "r = 1/10000." }, { "math_id": 121, "text": "0.123412341234\\ldots = \\frac{a}{1-r} = \\frac{1234/10000}{1-1/10000} \\;=\\; \\frac{1234/10000}{9999/10000} \\;=\\; \\frac{1234}{9999}." }, { "math_id": 122, "text": "0.7777\\ldots_{10}" }, { "math_id": 123, "text": "0.110001110001110001\\ldots_{2}" }, { "math_id": 124, "text": "0.110001110001110001\\ldots_{2} \\;=\\; \\frac{110001_{2}}{1000000_{2}} \\,+\\, \\frac{110001_{2}}{1000000_{2}}\\frac{1}{1000000_{2}} \\,+\\, \\frac{110001_{2}}{1000000_{2}}\\frac{1}{1000000_{2}^2} \\,+\\, \\frac{110001_{2}}{1000000_{2}}\\frac{1}{1000000_{2}^3} \\,+\\, \\cdots," }, { "math_id": 125, "text": "a = 110001_2 / 1000000_2" }, { "math_id": 126, "text": "= 49_{10} / 64_{10}" }, { "math_id": 127, "text": "r = 1 / 1000000_2" }, { "math_id": 128, "text": "= 1 / 64_{10}" }, { "math_id": 129, "text": "0.110001110001110001\\ldots_{2} \\;=\\; \\frac{a}{1-r} \\;=\\; \\frac{49_{10}/64_{10}}{1-1/64_{10}} \\;=\\; \\frac{49_{10}/64_{10}}{63_{10}/64_{10}} \\;=\\; \\frac{49_{10}}{63_{10}} \\;=\\; \\frac{7_{10}}{9_{10}}." 
}, { "math_id": 130, "text": "a_0 + a_1 x + a_2 x^2 + \\ldots" }, { "math_id": 131, "text": "x" }, { "math_id": 132, "text": "a_0, a_1, a_2, \\ldots," }, { "math_id": 133, "text": "x^0, x^1, x^2, \\ldots" }, { "math_id": 134, "text": "r^n" }, { "math_id": 135, "text": "a_n = a" }, { "math_id": 136, "text": "x = r" }, { "math_id": 137, "text": "\\lim_{n \\rightarrow \\infty}\\sqrt[n]{a} = 1" }, { "math_id": 138, "text": "\\sum_{k=0}^\\infty ar^{2k} = \\sum_{k=0}^\\infty a(r^{2})^k = \\frac{a}{1-r^2}" }, { "math_id": 139, "text": "\\sum_{k=0}^\\infty ar^{2k+1} = \\sum_{k=0}^\\infty (ar) (r^2)^{k} = \\frac{ar}{1-r^2}" }, { "math_id": 140, "text": "\\sum_{k=m}^\\infty ar^k = \\sum_{k=0}^\\infty (ar^m) r^k = \\frac{ar^m}{1-r}" }, { "math_id": 141, "text": "\\sum_{k=1}^\\infty kr^{k-1} = \\frac{d}{dr} \\sum_{k=0}^\\infty r^k = \\frac{1}{(1-r)^2}" }, { "math_id": 142, "text": "\\sum_{k=0}^{\\infty} k r^k = \\frac{r}{\\left(1-r\\right)^2} \\,;\\, \\sum_{k=0}^{\\infty} k^2 r^k = \\frac{r \\left( 1+r \\right)}{\\left(1-r\\right)^3} \\, ; \\, \\sum_{k=0}^{\\infty} k^3 r^k = \\frac{r \\left( 1+4 r + r^2\\right)}{\\left( 1-r\\right)^4}" }, { "math_id": 143, "text": "\\sum_{k=0}^{\\infty} \\frac{\\sin(kx)}{r^k} = \\frac{r \\sin(x)}{1 + r^2 - 2 r \\cos(x)}" }, { "math_id": 144, "text": "\\sin(kx) = \\frac{e^{ikx} - e^{-ikx}}{2i}. " }, { "math_id": 145, "text": " \\sum_{k=0}^{\\infty} \\frac{\\sin(kx)}{r^k} = \\frac{1}{2 i} \\left[ \\sum_{k=0}^{\\infty} \\left( \\frac{e^{ix}}{r} \\right)^k - \\sum_{k=0}^{\\infty} \\left(\\frac{e^{-ix}}{r}\\right)^k\\right]." }, { "math_id": 146, "text": "e^{ix}/r" }, { "math_id": 147, "text": "e^{-ix}/r" }, { "math_id": 148, "text": "\\frac{e^{ix} - e^{-ix}}{2i} = \\sin(x)" }, { "math_id": 149, "text": "\\frac{e^{ix} + e^{-ix}}{2} = \\cos(x)" }, { "math_id": 150, "text": "\\begin{align}\n\\sum_{k=m}^n ar^k &= \n\\begin{cases}\n\\frac{a(r^m-r^{n+1})}{1-r} & \\text{if }r \\neq 1 \\\\\na(n-m+1) & \\text{if }r = 1\n\\end{cases}\\end{align}" }, { "math_id": 151, "text": "1-r^2" }, { "math_id": 152, "text": "\\begin{align}\n(1-r^2) \\sum_{k=0}^{n} ar^{2k} &= a-ar^{2n+2}\\\\\n\\sum_{k=0}^{n} ar^{2k} &= \\frac{a(1-r^{2n+2})}{1-r^2}\n\\end{align}" }, { "math_id": 153, "text": "\\begin{align}\n(1-r^2) \\sum_{k=0}^{n} ar^{2k+1} &= ar-ar^{2n+3}\\\\\n\\sum_{k=0}^{n} ar^{2k+1} &= \\frac{ar(1-r^{2n+2})}{1-r^2} &= \\frac{a(r-r^{2n+3})}{1-r^2}\n\\end{align}" }, { "math_id": 154, "text": "ar" }, { "math_id": 155, "text": "G_s(n, r) := \\sum_{k=0}^n k^s r^k." }, { "math_id": 156, "text": "\\frac{d}{dr} \\sum_{k=0}^nr^k = \\sum_{k=1}^n kr^{k-1}=\n\\frac{1-r^{n+1}}{(1-r)^2}-\\frac{(n+1)r^n}{1-r}." }, { "math_id": 157, "text": "G_s(n, r)" }, { "math_id": 158, "text": "s \\in \\mathbb{N}" }, { "math_id": 159, "text": "G_s(n, r) = \\sum_{j=0}^s \\left\\lbrace{s \\atop j}\\right\\rbrace x^j \\frac{d^j}{dx^j}\\left[\\frac{1-x^{n+1}}{1-x}\\right]" }, { "math_id": 160, "text": "\\left\\lbrace{s \\atop j}\\right\\rbrace" }, { "math_id": 161, "text": "I" }, { "math_id": 162, "text": "\\sum_{n=1}^\\infty \\frac{\\$100}{(1+I)^n}," }, { "math_id": 163, "text": "\\frac{\\$ 100}{(1+I)} \\,+\\, \\frac{\\$ 100}{(1+I)^2} \\,+\\, \\frac{\\$ 100}{(1+I)^3} \\,+\\, \\frac{\\$ 100}{(1+I)^4} \\,+\\, \\cdots." }, { "math_id": 164, "text": "\\frac{\\$ 100/(1+I)}{1 - 1/(1+I)} \\;=\\; \\frac{\\$ 100}{I}." }, { "math_id": 165, "text": "1 \\,+\\, 3\\left(\\frac{1}{9}\\right) \\,+\\, 12\\left(\\frac{1}{9}\\right)^2 \\,+\\, 48\\left(\\frac{1}{9}\\right)^3 \\,+\\, \\cdots." 
}, { "math_id": 166, "text": "1\\,+\\,\\frac{a}{1-r}\\;=\\;1\\,+\\,\\frac{\\frac{1}{3}}{1-\\frac{4}{9}}\\;=\\;\\frac{8}{5}." }, { "math_id": 167, "text": "f(x) = \\arctan(u(x))" }, { "math_id": 168, "text": "f'(x) = \\frac{u'(x)}{(1+[u(x)]^2)}" }, { "math_id": 169, "text": "y" }, { "math_id": 170, "text": "u" }, { "math_id": 171, "text": "f(x)" }, { "math_id": 172, "text": "u(x)" }, { "math_id": 173, "text": "\n\\begin{align}\ny &= \\arctan(u)\\\\\n\\implies u &= \\tan(y) &&\\quad \\text{ in the range } -\\pi/2 < y < \\pi/2 \\text{ and } \\\\\n\\implies u' &= \\sec^2y \\cdot y' &&\\quad \\text{ by applying the quotient rule to } \\tan(y) = \\sin(y)/\\cos(y), \\\\\n\\implies y' &= \\frac{u'}{\\sec^2y} &&\\quad \\text{ by dividing both sides by } \\sec^2y, \\\\\n &= \\frac{u'}{(1+\\tan^2y)} &&\\quad \\text{ by using the trigonometric identity derived by dividing } \\sin^2y + \\cos^2y = 1 \\text{ by } \\cos^2y, \\\\\n &= \\frac{u'}{(1+u^2)} \n\\end{align}\n" }, { "math_id": 174, "text": "\n\\begin{align}\n\\arctan(x)&=\\int\\frac{dx}{1+x^2} \\quad &&\\text{in the range } -\\pi/2 < \\arctan(x) < \\pi/2,\\\\\n&=\\int\\frac{dx}{1-(-x^2)} \\quad &&\\text{by writing integrand as closed form of geometric series with }r = -x^2,\\\\\n&=\\int\\left(1 + \\left(-x^2\\right) + \\left(-x^2\\right)^2 + \\left(-x^2\\right)^3+\\cdots\\right)dx \\quad &&\\text{by writing geometric series in expanded form},\\\\\n&=\\int\\left(1-x^2+x^4-x^6+\\cdots\\right)dx \\quad &&\\text{by calculating the sign and power of each term in integrand},\\\\\n&=x-\\frac{x^3}{3}+\\frac{x^5}{5}-\\frac{x^7}{7}+\\cdots \\quad &&\\text{by integrating each term},\\\\\n&=\\sum^{\\infty}_{n=0} \\frac{(-1)^n}{2n+1} x^{2n+1} \\quad &&\\text{by writing series in generator form},\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=12630
12633368
Ideal (set theory)
Non-empty family of sets that is closed under finite unions and subsets In the mathematical field of set theory, an ideal is a partially ordered collection of sets that are considered to be "small" or "negligible". Every subset of an element of the ideal must also be in the ideal (this codifies the idea that an ideal is a notion of smallness), and the union of any two elements of the ideal must also be in the ideal. More formally, given a set formula_0 an ideal formula_1 on formula_2 is a nonempty subset of the powerset of formula_0 such that: formula_3 if formula_4 and formula_5 then formula_6 and if formula_7 then formula_8 Some authors add a fourth condition that formula_2 itself is not in formula_1; ideals with this extra property are called proper ideals. Ideals in the set-theoretic sense are exactly ideals in the order-theoretic sense, where the relevant order is set inclusion. Also, they are exactly ideals in the ring-theoretic sense on the Boolean ring formed by the powerset of the underlying set. The dual notion of an ideal is a filter. Terminology. An element of an ideal formula_1 is said to be formula_1-null or formula_1-negligible, or simply null or negligible if the ideal formula_1 is understood from context. If formula_1 is an ideal on formula_0 then a subset of formula_2 is said to be formula_1-positive (or just positive) if it is not an element of formula_9 The collection of all formula_1-positive subsets of formula_2 is denoted formula_10 If formula_1 is a proper ideal on formula_2 and for every formula_11 either formula_4 or formula_12 then formula_1 is a prime ideal. Operations on ideals. Given ideals I and J on underlying sets X and Y respectively, one forms the product formula_29 on the Cartesian product formula_30 as follows: For any subset formula_31 formula_32 That is, a set is negligible in the product ideal if only a negligible collection of x-coordinates corresponds to a non-negligible slice of A in the y-direction. (Perhaps clearer: A set is positive in the product ideal if positively many x-coordinates correspond to positive slices.) An ideal I on a set X induces an equivalence relation on formula_33 the powerset of X, considering A and B to be equivalent (for formula_34 subsets of X) if and only if the symmetric difference of A and B is an element of I. The quotient of formula_18 by this equivalence relation is a Boolean algebra, denoted formula_35 (read "P of X mod I"). To every ideal there is a corresponding filter, called its dual filter. If I is an ideal on X, then the dual filter of I is the collection of all sets formula_36 where A is an element of I. (Here formula_37 denotes the relative complement of A in X; that is, the collection of all elements of X that are not in A). Relationships among ideals. If formula_1 and formula_38 are ideals on formula_2 and formula_39 respectively, formula_1 and formula_38 are Rudin–Keisler isomorphic if they are the same ideal except for renaming of the elements of their underlying sets (ignoring negligible sets). More formally, the requirement is that there be sets formula_23 and formula_40 elements of formula_1 and formula_38 respectively, and a bijection formula_41 such that for any subset formula_42 formula_43 if and only if the image of formula_44 under formula_45 If formula_1 and formula_38 are Rudin–Keisler isomorphic, then formula_35 and formula_46 are isomorphic as Boolean algebras. Isomorphisms of quotient Boolean algebras induced by Rudin–Keisler isomorphisms of ideals are called trivial isomorphisms. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
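As a concrete, finite illustration of the definition above, the following short Python sketch (an addition for illustration, not taken from this article) checks the defining conditions for the standard example of a principal ideal, the collection of all subsets of a fixed set B, and lists the corresponding positive sets and dual filter:

# Finite sanity check of the ideal axioms on X = {0, 1, 2, 3} (illustrative).
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

X = frozenset(range(4))
B = frozenset({0, 1})
I = {A for A in powerset(X) if A <= B}   # principal ideal: all subsets of B

# The defining conditions: contains the empty set, downward closed, union closed.
assert frozenset() in I
assert all(S in I for A in I for S in powerset(A))
assert all(A | C in I for A in I for C in I)

# I-positive sets are the subsets of X that are not in I; the dual filter
# consists of the complements X \ A for A in I.
positive = [A for A in powerset(X) if A not in I]
dual_filter = {X - A for A in I}
print(len(I), len(positive), len(dual_filter))   # 4 12 4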
[ { "math_id": 0, "text": "X," }, { "math_id": 1, "text": "I" }, { "math_id": 2, "text": "X" }, { "math_id": 3, "text": "\\varnothing \\in I," }, { "math_id": 4, "text": "A \\in I" }, { "math_id": 5, "text": "B \\subseteq A," }, { "math_id": 6, "text": "B \\in I," }, { "math_id": 7, "text": "A, B \\in I" }, { "math_id": 8, "text": "A \\cup B \\in I." }, { "math_id": 9, "text": "I." }, { "math_id": 10, "text": "I^+." }, { "math_id": 11, "text": "A \\subseteq X" }, { "math_id": 12, "text": "X \\setminus A \\in I," }, { "math_id": 13, "text": "B \\subseteq X," }, { "math_id": 14, "text": "B" }, { "math_id": 15, "text": "X." }, { "math_id": 16, "text": "\\mathcal{B}" }, { "math_id": 17, "text": "X \\setminus \\mathcal{B} := \\{X \\setminus B : B \\in \\mathcal{B}\\}," }, { "math_id": 18, "text": "\\wp(X)" }, { "math_id": 19, "text": "X \\setminus \\wp(X) = \\wp(X)." }, { "math_id": 20, "text": "\\mathcal{B} \\subseteq \\wp(X)" }, { "math_id": 21, "text": "X \\setminus \\mathcal{B}" }, { "math_id": 22, "text": "\\mathcal{I}_{1/n}," }, { "math_id": 23, "text": "A" }, { "math_id": 24, "text": "\\sum_{n\\in A}\\frac{1}{n+1}" }, { "math_id": 25, "text": "\\mathcal{Z}_0," }, { "math_id": 26, "text": "n" }, { "math_id": 27, "text": "A," }, { "math_id": 28, "text": "\\lambda" }, { "math_id": 29, "text": "I \\times J" }, { "math_id": 30, "text": "X \\times Y," }, { "math_id": 31, "text": "A \\subseteq X \\times Y," }, { "math_id": 32, "text": "A \\in I \\times J \\quad \\text{ if and only if } \\quad \\{ x \\in X \\; : \\; \\{y : \\langle x, y \\rangle \\in A\\} \\not\\in J \\} \\in I" }, { "math_id": 33, "text": "\\wp(X)," }, { "math_id": 34, "text": "A, B" }, { "math_id": 35, "text": "\\wp(X) / I" }, { "math_id": 36, "text": "X \\setminus A," }, { "math_id": 37, "text": "X \\setminus A" }, { "math_id": 38, "text": "J" }, { "math_id": 39, "text": "Y" }, { "math_id": 40, "text": "B," }, { "math_id": 41, "text": "\\varphi : X \\setminus A \\to Y \\setminus B," }, { "math_id": 42, "text": "C \\subseteq X," }, { "math_id": 43, "text": "C \\in I" }, { "math_id": 44, "text": "C" }, { "math_id": 45, "text": "\\varphi \\in J." }, { "math_id": 46, "text": "\\wp(Y) / J" } ]
https://en.wikipedia.org/wiki?curid=12633368
12637839
Chuu-Lian Terng
Taiwanese-American mathematician Chuu-Lian Terng () is a Taiwanese-American mathematician. Her research areas are differential geometry and integrable systems, with particular interests in completely integrable Hamiltonian partial differential equations and their relations to differential geometry, the geometry and topology of submanifolds in symmetric spaces, and the geometry of isometric actions. Education and career. She received her B.S. from National Taiwan University in 1971 and her Ph.D. from Brandeis University in 1976 under the supervision of Richard Palais, whom she later married. She is currently a professor emerita at the University of California at Irvine. She was a professor at Northeastern University for many years. Before joining Northeastern, she spent two years at the University of California, Berkeley and four years at Princeton University. She also spent two years at the Institute for Advanced Study (IAS) in Princeton and two years at the Max-Planck Institute in Bonn, Germany. Terng has been an active member of the Association for Women in Mathematics (AWM). She served as AWM President from 1995 to 1997, chaired the Julia Robinson Celebration of Women in Math Conference, which was held July 1–3, 1996, and chaired the Michler Prize and Travel/Mentoring Grant Committees. Terng has served on the editorial boards of the Transactions of the AMS, the Taiwanese Journal of Mathematics, Communications of Analysis and Geometry, the Proceedings of the AMS, and the Journal of Fixed Point Theory and its Applications. In 1999, she was selected as the AWM/MAA Falconer Lecturer. Her citation reads: Her early research concerned the classification of natural vector bundles and natural differential operators between them. She then became interested in submanifold geometry. Her main contributions are developing a structure theory for isoparametric submanifolds in formula_0 and constructing soliton equations from special submanifolds. Recently, Terng and Karen Uhlenbeck (University of Texas at Austin) have developed a general approach to integrable PDEs that explains their hidden symmetries in terms of loop group actions. She is co-author of the book Submanifold Geometry and Critical Point Theory and an editor of the Journal of Differential Geometry survey volume 4 on "Integrable systems". Professor Terng served as president of the Association for Women in Mathematics (AWM) from 1995 to 1997 and as Member-at-Large of the Council of the American Mathematical Society (AMS) from 1989 to 1992. She is currently on the Advisory Board of the National Center for Theoretical Sciences in Taiwan, the Steering Committee of the Institute for Advanced Study Park City Summer Institute, and the Editorial Board of the Transactions of the AMS. Recognition. With Sun-Yung Alice Chang, Fan Chung, Winnie Li, Jang-Mei Wu, and Mei-Chi Shaw, Terng is one of a group of six women mathematicians from National Taiwan University called by Shiing-Shen Chern "a miracle in Chinese history; the glory of the Chinese people". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{R}^n" } ]
https://en.wikipedia.org/wiki?curid=12637839
1264
Anisotropy
In geometry, property of being directionally dependent Anisotropy () is the structural property of non-uniformity in different directions, as opposed to isotropy. An anisotropic object or pattern has properties that differ according to direction of measurement. For example, many materials exhibit very different physical or mechanical properties when measured along different axes, e.g. absorbance, refractive index, conductivity, and tensile strength. An example of anisotropy is light coming through a polarizer. Another is wood, which is easier to split along its grain than across it because of the directional non-uniformity of the grain (the grain is the same in one direction, not all directions). Fields of interest. Computer graphics. In the field of computer graphics, an anisotropic surface changes in appearance as it rotates about its geometric normal, as is the case with velvet. Anisotropic filtering (AF) is a method of enhancing the image quality of textures on surfaces that are far away and steeply angled with respect to the point of view. Older techniques, such as bilinear and trilinear filtering, do not take into account the angle a surface is viewed from, which can result in aliasing or blurring of textures. By reducing detail in one direction more than another, these effects can be reduced easily. Chemistry. A chemical anisotropic filter, as used to filter particles, is a filter with increasingly smaller interstitial spaces in the direction of filtration so that the proximal regions filter out larger particles and distal regions increasingly remove smaller particles, resulting in greater flow-through and more efficient filtration. In fluorescence spectroscopy, the fluorescence anisotropy, calculated from the polarization properties of fluorescence from samples excited with plane-polarized light, is used, e.g., to determine the shape of a macromolecule. Anisotropy measurements reveal the average angular displacement of the fluorophore that occurs between absorption and subsequent emission of a photon. In NMR spectroscopy, the orientation of nuclei with respect to the applied magnetic field determines their chemical shift. In this context, anisotropic systems refer to the electron distribution of molecules with abnormally high electron density, like the pi system of benzene. This abnormal electron density affects the applied magnetic field and causes the observed chemical shift to change. Real-world imagery. Images of a gravity-bound or man-made environment are particularly anisotropic in the orientation domain, with more image structure located at orientations parallel with or orthogonal to the direction of gravity (vertical and horizontal). Physics. Physicists from University of California, Berkeley reported about their detection of the cosmic anisotropy in cosmic microwave background radiation in 1977. Their experiment demonstrated the Doppler shift caused by the movement of the earth with respect to the early Universe matter, the source of the radiation. Cosmic anisotropy has also been seen in the alignment of galaxies' rotation axes and polarization angles of quasars. Physicists use the term anisotropy to describe direction-dependent properties of materials. Magnetic anisotropy, for example, may occur in a plasma, so that its magnetic field is oriented in a preferred direction. Plasmas may also show "filamentation" (such as that seen in lightning or a plasma globe) that is directional. 
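The dipole anisotropy mentioned above can be put into rough numbers with a one-line estimate (an added illustration; the mean CMB temperature and the solar system's speed relative to the CMB are rounded literature values, not figures from this article):

# Order-of-magnitude sketch of the CMB dipole caused by our motion (illustrative).
T_cmb = 2.725            # mean CMB temperature, kelvin (rounded literature value)
v = 370e3                # approximate speed relative to the CMB rest frame, m/s
c = 299_792_458.0        # speed of light, m/s

# To first order in v/c the observed temperature varies as T * (1 + (v/c) * cos(theta)),
# so the dipole amplitude is about T * v / c.
dipole = T_cmb * v / c
print(f"{dipole * 1e3:.2f} mK")   # roughly 3.4 mK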
An "anisotropic liquid" has the fluidity of a normal liquid, but has an average structural order relative to each other along the molecular axis, unlike water or chloroform, which contain no structural ordering of the molecules. Liquid crystals are examples of anisotropic liquids. Some materials conduct heat in a way that is isotropic, that is independent of spatial orientation around the heat source. Heat conduction is more commonly anisotropic, which implies that detailed geometric modeling of typically diverse materials being thermally managed is required. The materials used to transfer and reject heat from the heat source in electronics are often anisotropic. Many crystals are anisotropic to light ("optical anisotropy"), and exhibit properties such as birefringence. Crystal optics describes light propagation in these media. An "axis of anisotropy" is defined as the axis along which isotropy is broken (or an axis of symmetry, such as normal to crystalline layers). Some materials can have multiple such optical axes. Geophysics and geology. Seismic anisotropy is the variation of seismic wavespeed with direction. Seismic anisotropy is an indicator of long range order in a material, where features smaller than the seismic wavelength (e.g., crystals, cracks, pores, layers, or inclusions) have a dominant alignment. This alignment leads to a directional variation of elasticity wavespeed. Measuring the effects of anisotropy in seismic data can provide important information about processes and mineralogy in the Earth; significant seismic anisotropy has been detected in the Earth's crust, mantle, and inner core. Geological formations with distinct layers of sedimentary material can exhibit electrical anisotropy; electrical conductivity in one direction (e.g. parallel to a layer), is different from that in another (e.g. perpendicular to a layer). This property is used in the gas and oil exploration industry to identify hydrocarbon-bearing sands in sequences of sand and shale. Sand-bearing hydrocarbon assets have high resistivity (low conductivity), whereas shales have lower resistivity. Formation evaluation instruments measure this conductivity or resistivity, and the results are used to help find oil and gas in wells. The mechanical anisotropy measured for some of the sedimentary rocks like coal and shale can change with corresponding changes in their surface properties like sorption when gases are produced from the coal and shale reservoirs. The hydraulic conductivity of aquifers is often anisotropic for the same reason. When calculating groundwater flow to drains or to wells, the difference between horizontal and vertical permeability must be taken into account; otherwise the results may be subject to error. Most common rock-forming minerals are anisotropic, including quartz and feldspar. Anisotropy in minerals is most reliably seen in their optical properties. An example of an isotropic mineral is garnet. Igneous rock like granite also shows the anisotropy due to the orientation of the minerals during the solidification process. Medical acoustics. Anisotropy is also a well-known property in medical ultrasound imaging describing a different resulting echogenicity of soft tissues, such as tendons, when the angle of the transducer is changed. Tendon fibers appear hyperechoic (bright) when the transducer is perpendicular to the tendon, but can appear hypoechoic (darker) when the transducer is angled obliquely. This can be a source of interpretation error for inexperienced practitioners. 
Materials science and engineering. Anisotropy, in materials science, is a material's directional dependence of a physical property. This is a critical consideration for materials selection in engineering applications. A material with physical properties that are symmetric about an axis that is normal to a plane of isotropy is called a transversely isotropic material. Tensor descriptions of material properties can be used to determine the directional dependence of that property. For a monocrystalline material, anisotropy is associated with the crystal symmetry in the sense that more symmetric crystal types have fewer independent coefficients in the tensor description of a given property. When a material is polycrystalline, the directional dependence on properties is often related to the processing techniques it has undergone. A material with randomly oriented grains will be isotropic, whereas materials with texture will be often be anisotropic. Textured materials are often the result of processing techniques like cold rolling, wire drawing, and heat treatment. Mechanical properties of materials such as Young's modulus, ductility, yield strength, and high-temperature creep rate, are often dependent on the direction of measurement. Fourth-rank tensor properties, like the elastic constants, are anisotropic, even for materials with cubic symmetry. The Young's modulus relates stress and strain when an isotropic material is elastically deformed; to describe elasticity in an anisotropic material, stiffness (or compliance) tensors are used instead. In metals, anisotropic elasticity behavior is present in all single crystals with three independent coefficients for cubic crystals, for example. For face-centered cubic materials such as nickel and copper, the stiffness is highest along the &lt;111&gt; direction, normal to the close-packed planes, and smallest parallel to &lt;100&gt;. Tungsten is so nearly isotropic at room temperature that it can be considered to have only two stiffness coefficients; aluminium is another metal that is nearly isotropic. For an isotropic material, formula_0 where formula_1 is the shear modulus, formula_2 is the Young's modulus, and formula_3 is the material's Poisson's ratio. Therefore, for cubic materials, we can think of anisotropy, formula_4, as the ratio between the empirically determined shear modulus for the cubic material and its (isotropic) equivalent: formula_5 The latter expression is known as the Zener ratio, formula_4, where formula_6 refers to elastic constants in Voigt (vector-matrix) notation. For an isotropic material, the ratio is one. Limitation of the Zener ratio to cubic materials is waived in the Tensorial anisotropy index AT that takes into consideration all the 27 components of the fully anisotropic stiffness tensor. It is composed of two major parts formula_7and formula_8, the former referring to components existing in cubic tensor and the latter in anisotropic tensor so that formula_9 This first component includes the modified Zener ratio and additionally accounts for directional differences in the material, which exist in orthotropic material, for instance. The second component of this index formula_8 covers the influence of stiffness coefficients that are nonzero only for non-cubic materials and remains zero otherwise. Fiber-reinforced or layered composite materials exhibit anisotropic mechanical properties, due to orientation of the reinforcement material. 
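As a numerical illustration of the Zener ratio defined above (an added sketch; the elastic constants are rounded room-temperature values commonly quoted for copper, not data from this article):

# Zener anisotropy ratio a_r = 2*C44 / (C11 - C12) for a cubic crystal (illustrative).
C11, C12, C44 = 168.4, 121.4, 75.4   # approximate elastic constants of copper, GPa

zener_ratio = 2.0 * C44 / (C11 - C12)
print(round(zener_ratio, 2))   # about 3.2; an elastically isotropic material gives 1.0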
In many fiber-reinforced composites like carbon fiber or glass fiber based composites, the weave of the material (e.g. unidirectional or plain weave) can determine the extent of the anisotropy of the bulk material. The tunability of orientation of the fibers allows for application-based designs of composite materials, depending on the direction of stresses applied onto the material. Amorphous materials such as glass and polymers are typically isotropic. Due to the highly randomized orientation of macromolecules in polymeric materials, polymers are in general described as isotropic. However, mechanically gradient polymers can be engineered to have directionally dependent properties through processing techniques or introduction of anisotropy-inducing elements. Researchers have built composite materials with aligned fibers and voids to generate anisotropic hydrogels, in order to mimic hierarchically ordered biological soft matter. 3D printing, especially Fused Deposition Modeling, can introduce anisotropy into printed parts. This is due to the fact that FDM is designed to extrude and print layers of thermoplastic materials. This creates materials that are strong when tensile stress is applied in parallel to the layers and weak when the material is perpendicular to the layers. Microfabrication. Anisotropic etching techniques (such as deep reactive-ion etching) are used in microfabrication processes to create well defined microscopic features with a high aspect ratio. These features are commonly used in MEMS (microelectromechanical systems) and microfluidic devices, where the anisotropy of the features is needed to impart desired optical, electrical, or physical properties to the device. Anisotropic etching can also refer to certain chemical etchants used to etch a certain material preferentially over certain crystallographic planes (e.g., KOH etching of silicon [100] produces pyramid-like structures) Neuroscience. Diffusion tensor imaging is an MRI technique that involves measuring the fractional anisotropy of the random motion (Brownian motion) of water molecules in the brain. Water molecules located in fiber tracts are more likely to move anisotropically, since they are restricted in their movement (they move more in the dimension parallel to the fiber tract rather than in the two dimensions orthogonal to it), whereas water molecules dispersed in the rest of the brain have less restricted movement and therefore display more isotropy. This difference in fractional anisotropy is exploited to create a map of the fiber tracts in the brains of the individual. Remote sensing and radiative transfer modeling. Radiance fields (see Bidirectional reflectance distribution function (BRDF)) from a reflective surface are often not isotropic in nature. This makes calculations of the total energy being reflected from any scene a difficult quantity to calculate. In remote sensing applications, anisotropy functions can be derived for specific scenes, immensely simplifying the calculation of the net reflectance or (thereby) the net irradiance of a scene. For example, let the BRDF be formula_10 where 'i' denotes incident direction and 'v' denotes viewing direction (as if from a satellite or other instrument). And let P be the Planar Albedo, which represents the total reflectance from the scene. 
formula_11 formula_12 It is of interest because, with knowledge of the anisotropy function as defined, a measurement of the BRDF from a single viewing direction (say, formula_13) yields a measure of the total scene reflectance (planar albedo) for that specific incident geometry (say, formula_14). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
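The planar albedo and anisotropy function defined above can be illustrated numerically (an added sketch; the toy BRDF, the fixed incident geometry, and the grid resolution are arbitrary choices, not models from this article):

# Planar albedo P and anisotropy function A for one incident geometry (illustrative).
import math

def brdf(theta_v, phi_v):
    """Toy BRDF for a fixed incident direction: Lambertian base plus a weak lobe."""
    return 0.2 / math.pi + 0.05 * math.cos(theta_v) * (1.0 + math.cos(phi_v)) / math.pi

def hemispheric_integral(f, n=200):
    """Midpoint-rule integral of f(theta, phi) * cos(theta) over the viewing hemisphere."""
    total = 0.0
    d_theta = (math.pi / 2.0) / n
    d_phi = (2.0 * math.pi) / n
    for i in range(n):
        theta = (i + 0.5) * d_theta
        for j in range(n):
            phi = (j + 0.5) * d_phi
            total += f(theta, phi) * math.cos(theta) * math.sin(theta) * d_theta * d_phi
    return total

P = hemispheric_integral(brdf)        # planar albedo for this incident geometry

def anisotropy_function(theta, phi):
    return brdf(theta, phi) / P

# By construction the cosine-weighted hemispheric integral of A is 1, which is why a
# single BRDF measurement plus knowledge of A recovers the scene albedo P.
print(round(P, 4), round(hemispheric_integral(anisotropy_function), 4))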
[ { "math_id": 0, "text": "G = E/[2(1 + \\nu)], " }, { "math_id": 1, "text": " G " }, { "math_id": 2, "text": " E " }, { "math_id": 3, "text": " \\nu " }, { "math_id": 4, "text": " a_r " }, { "math_id": 5, "text": "a_r = \\frac{G}{E/[2(1 + \\nu)]} = \\frac{2(1+\\nu)G}{E} \\equiv \\frac{2 C_{44}}{C_{11} - C_{12}}." }, { "math_id": 6, "text": "C_{ij}" }, { "math_id": 7, "text": "A^I" }, { "math_id": 8, "text": "A^A " }, { "math_id": 9, "text": "A^T = A^I+A^A ." }, { "math_id": 10, "text": "\\gamma(\\Omega_i, \\Omega_v)" }, { "math_id": 11, "text": "P(\\Omega_i) = \\int_{\\Omega_v} \\gamma(\\Omega_i, \\Omega_v)\\hat{n} \\cdot d\\hat\\Omega_v" }, { "math_id": 12, "text": "A(\\Omega_i, \\Omega_v) = \\frac{\\gamma(\\Omega_i, \\Omega_v)}{P(\\Omega_i)}" }, { "math_id": 13, "text": "\\Omega_v" }, { "math_id": 14, "text": "\\Omega_i" } ]
https://en.wikipedia.org/wiki?curid=1264
1264088
Hematopoietic stem cell
Stem cells that give rise to other blood cells Hematopoietic stem cells (HSCs) are the stem cells that give rise to other blood cells. This process is called haematopoiesis.&lt;ref name="Birbrair n/a–n/a"&gt;&lt;/ref&gt; In vertebrates, the first definitive HSCs arise from the ventral endothelial wall of the embryonic aorta within the (midgestational) aorta-gonad-mesonephros region, through a process known as endothelial-to-hematopoietic transition. In adults, haematopoiesis occurs in the red bone marrow, in the core of most bones. The red bone marrow is derived from the layer of the embryo called the mesoderm. Haematopoiesis is the process by which all mature blood cells are produced. It must balance enormous production needs (the average person produces more than 500 billion blood cells every day) with the need to regulate the number of each blood cell type in the circulation. In vertebrates, the vast majority of hematopoiesis occurs in the bone marrow and is derived from a limited number of hematopoietic stem cells that are multipotent and capable of extensive self-renewal. Hematopoietic stem cells give rise to different types of blood cells, in lines called myeloid and lymphoid. Myeloid and lymphoid lineages both are involved in dendritic cell formation. Myeloid cells include monocytes, macrophages, neutrophils, basophils, eosinophils, erythrocytes, and megakaryocytes to platelets. Lymphoid cells include T cells, B cells, natural killer cells, and innate lymphoid cells. The definition of hematopoietic stem cell has developed since they were first discovered in 1961. The hematopoietic tissue contains cells with long-term and short-term regeneration capacities and committed multipotent, oligopotent, and unipotent progenitors. Hematopoietic stem cells constitute 1:10,000 of cells in myeloid tissue. HSC transplants are used in the treatment of cancers and other immune system disorders due to their regenerative properties. Structure. They are round, non-adherent, with a rounded nucleus and low cytoplasm-to-nucleus ratio. In shape, hematopoietic stem cells resemble lymphocytes. Location. The very first hematopoietic stem cells during (mouse and human) embryonic development are found in aorta-gonad-mesonephros region and the vitelline and umbilical arteries. Slightly later, HSCs are also found in the placenta, yolk sac, embryonic head, and fetal liver. Stem and progenitor cells can be taken from the pelvis, at the iliac crest, using a needle and syringe. The cells can be removed as liquid (to perform a smear to look at the cell morphology) or they can be removed via a core biopsy (to maintain the architecture or relationship of the cells to each other and to the bone). Subtypes. A colony-forming unit is a subtype of HSC. (This sense of the term is different from colony-forming units of microbes, which is a cell counting unit.) There are various kinds of HSC colony-forming units: The above CFUs are based on the lineage. Another CFU, the colony-forming unit–spleen (CFU-S), was the basis of an "in vivo" clonal colony formation, which depends on the ability of infused bone marrow cells to give rise to clones of maturing hematopoietic cells in the spleens of irradiated mice after 8 to 12 days. It was used extensively in early studies, but is now considered to measure more mature progenitor or transit-amplifying cells rather than stem cells. Isolating stem cells. Since hematopoietic stem cells cannot be isolated as a pure population, it is not possible to identify them in a microscope. 
Hematopoietic stem cells can be identified or isolated by the use of flow cytometry where the combination of several different cell surface markers (particularly CD34) are used to separate the rare hematopoietic stem cells from the surrounding blood cells. Hematopoietic stem cells lack expression of mature blood cell markers and are thus called Lin-. Lack of expression of lineage markers is used in combination with detection of several positive cell-surface markers to isolate hematopoietic stem cells. In addition, hematopoietic stem cells are characterised by their small size and low staining with vital dyes such as rhodamine 123 (rhodamine lo) or Hoechst 33342 (side population). Function. Haematopoiesis. Hematopoietic stem cells are essential to haematopoiesis, the formation of the cells within blood. Hematopoietic stem cells can replenish all blood cell types (i.e., are multipotent) and self-renew. A small number of hematopoietic stem cells can expand to generate a very large number of daughter hematopoietic stem cells. This phenomenon is used in bone marrow transplantation, when a small number of hematopoietic stem cells reconstitute the hematopoietic system. This process indicates that, subsequent to bone marrow transplantation, symmetrical cell divisions into two daughter hematopoietic stem cells must occur. Stem cell self-renewal is thought to occur in the stem cell niche in the bone marrow, and it is reasonable to assume that key signals present in this niche will be important in self-renewal. There is much interest in the environmental and molecular requirements for HSC self-renewal, as understanding the ability of HSC to replenish themselves will eventually allow the generation of expanded populations of HSC "in vitro" that can be used therapeutically. Quiescence. Hematopoietic stem cells, like all adult stem cells, mostly exist in a state of quiescence, or reversible growth arrest. The altered metabolism of quiescent HSCs helps the cells survive for extended periods of time in the hypoxic bone marrow environment. When provoked by cell death or damage, Hematopoietic stem cells exit quiescence and begin actively dividing again. The transition from dormancy to propagation and back is regulated by the MEK/ERK pathway and PI3K/AKT/mTOR pathway. Dysregulation of these transitions can lead to stem cell exhaustion, or the gradual loss of active Hematopoietic stem cells in the blood system. Mobility. Hematopoietic stem cells have a higher potential than other immature blood cells to pass the bone marrow barrier, and, thus, may travel in the blood from the bone marrow in one bone to another bone. If they settle in the thymus, they may develop into T cells. In the case of fetuses and other extramedullary hematopoiesis. Hematopoietic stem cells may also settle in the liver or spleen and develop. This enables Hematopoietic stem cells to be harvested directly from the blood. Clinical significance. Transplant. Hematopoietic stem cell transplantation (HSCT) is the transplantation of multipotent hematopoietic stem cells, usually derived from bone marrow, peripheral blood, or umbilical cord blood. It may be autologous (the patient's own stem cells are used), allogeneic (the stem cells come from a donor) or syngeneic (from an identical twin). It is most often performed for patients with certain cancers of the blood or bone marrow, such as multiple myeloma or leukemia. In these cases, the recipient's immune system is usually destroyed with radiation or chemotherapy before the transplantation. 
Infection and graft-versus-host disease are major complications of allogeneic HSCT. In order to harvest stem cells from the circulating peripheral blood, blood donors are injected with a cytokine, such as granulocyte-colony stimulating factor (G-CSF), that induces cells to leave the bone marrow and circulate in the blood vessels. In mammalian embryology, the first definitive Hematopoietic stem cells are detected in the AGM (aorta-gonad-mesonephros), and then massively expanded in the fetal liver prior to colonising the bone marrow before birth. Hematopoietic stem cell transplantation remains a dangerous procedure with many possible complications; it is reserved for patients with life-threatening diseases. As survival following the procedure has increased, its use has expanded beyond cancer to autoimmune diseases and hereditary skeletal dysplasias; notably malignant infantile osteopetrosis and mucopolysaccharidosis. Stem cells can be used to regenerate different types of tissues. HCT is an established as therapy for chronic myeloid leukemia, acute lymphatic leukemia, aplastic anemia, and hemoglobinopathies, in addition to acute myeloid leukemia and primary immune deficiencies. Hematopoietic system regeneration is typically achieved within 2–4 weeks post-chemo- or irradiation therapy and HCT. HSCs are being clinically tested for their use in non-hematopoietic tissue regeneration. Aging of hematopoietic stem cells. DNA damage. DNA strand breaks accumulate in long term hematopoietic stem cells during aging. This accumulation is associated with a broad attenuation of DNA repair and response pathways that depends on HSC quiescence. Non-homologous end joining (NHEJ) is a pathway that repairs double-strand breaks in DNA. NHEJ is referred to as "non-homologous" because the break ends are directly ligated without the need for a homologous template. The NHEJ pathway depends on several proteins including ligase 4, DNA polymerase mu and NHEJ factor 1 (NHEJ1, also known as Cernunnos or XLF). DNA ligase 4 (Lig4) has a highly specific role in the repair of double-strand breaks by NHEJ. Lig4 deficiency in the mouse causes a progressive loss of hematopoietic stem cells during aging. Deficiency of lig4 in pluripotent stem cells results in accumulation of DNA double-strand breaks and enhanced apoptosis. In polymerase mu mutant mice, hematopoietic cell development is defective in several peripheral and bone marrow cell populations with about a 40% decrease in bone marrow cell number that includes several hematopoietic lineages. Expansion potential of hematopoietic progenitor cells is also reduced. These characteristics correlate with reduced ability to repair double-strand breaks in hematopoietic tissue. Deficiency of NHEJ factor 1 in mice leads to premature aging of hematopoietic stem cells as indicated by several lines of evidence including evidence that long-term repopulation is defective and worsens over time. Using a human induced pluripotent stem cell model of NHEJ1 deficiency, it was shown that NHEJ1 has an important role in promoting survival of the primitive hematopoietic progenitors. These NHEJ1 deficient cells possess a weak NHEJ1-mediated repair capacity that is apparently incapable of coping with DNA damages induced by physiological stress, normal metabolism, and ionizing radiation. 
The sensitivity of hematopoietic stem cells to Lig4, DNA polymerase mu and NHEJ1 deficiency suggests that NHEJ is a key determinant of the ability of stem cells to maintain themselves against physiological stress over time. Rossi et al. found that endogenous DNA damage accumulates with age even in wild type hematopoietic stem cells, and suggested that DNA damage accrual may be an important physiological mechanism of stem cell aging. Loss of clonal diversity. A study shows the clonal diversity of hematopoietic stem cells gets drastically reduced around age 70 to a faster-growing few, substantiating a novel theory of ageing which could enable healthy aging. Of note, the shift in clonal diversity during aging was previously reported in 2008 for the murine system by the Christa Muller-Sieburg laboratory in San Diego, California. Research. Behavior in culture. A "cobblestone area-forming cell (CAFC)" assay is a cell culture-based empirical assay. When plated onto a confluent culture of stromal feeder layer, a fraction of hematopoietic stem cells creep between the gaps (even though the stromal cells are touching each other) and eventually settle between the stromal cells and the substratum (here the dish surface) or trapped in the cellular processes between the stromal cells. Emperipolesis is the "in vivo" phenomenon in which one cell is completely engulfed into another (e.g. thymocytes into thymic nurse cells); on the other hand, when "in vitro", lymphoid lineage cells creep beneath nurse-like cells, the process is called pseudoemperipolesis. This similar phenomenon is more commonly known in the HSC field by the cell culture terminology "cobblestone area-forming cells (CAFC)", which means areas or clusters of cells look dull cobblestone-like under phase contrast microscopy, compared to the other hematopoietic stem cells, which are refractile. This happens because the cells that are floating loosely on top of the stromal cells are spherical and thus refractile. However, the cells that creep beneath the stromal cells are flattened and, thus, not refractile. The mechanism of pseudoemperipolesis is only recently coming to light. It may be mediated by interaction through CXCR4 (CD184) the receptor for CXC Chemokines (e.g., SDF1) and α4β1 integrins. Repopulation kinetics. Hematopoietic stem cells (HSC) cannot be easily observed directly, and, therefore, their behaviors need to be inferred indirectly. Clonal studies are likely the closest technique for single cell in vivo studies of HSC. Here, sophisticated experimental and statistical methods are used to ascertain that, with a high probability, a single HSC is contained in a transplant administered to a lethally irradiated host. The clonal expansion of this stem cell can then be observed over time by monitoring the percent donor-type cells in blood as the host is reconstituted. The resulting time series is defined as the repopulation kinetic of the HSC. The reconstitution kinetics are very heterogeneous. However, using symbolic dynamics, one can show that they fall into a limited number of classes. To prove this, several hundred experimental repopulation kinetics from clonal Thy-1lo SCA-1+ lin−(B220, CD4, CD8, Gr-1, Mac-1 and Ter-119) c-kit+ HSC were translated into symbolic sequences by assigning the symbols "+", "-", "~" whenever two successive measurements of the percent donor-type cells have a positive, negative, or unchanged slope, respectively. 
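The symbolization step just described is easy to reproduce computationally. The sketch below (Python) is only an illustration: the example kinetics, the tolerance used to call a slope "unchanged", and the function names are assumptions made here, not values or code from the cited studies. The pairwise Hamming distances it prints are the quantities used for the clustering described next.

```python
from itertools import combinations

def symbolize(kinetic, tol=0.5):
    """Translate a repopulation kinetic (percent donor-type cells over time) into a
    symbolic sequence: '+' for a rising, '-' for a falling and '~' for an
    (approximately) unchanged slope between successive measurements.
    The tolerance `tol` (in percentage points) is an illustrative assumption."""
    symbols = []
    for earlier, later in zip(kinetic, kinetic[1:]):
        diff = later - earlier
        if abs(diff) <= tol:
            symbols.append('~')
        elif diff > 0:
            symbols.append('+')
        else:
            symbols.append('-')
    return ''.join(symbols)

def hamming(a, b):
    """Number of positions at which two equal-length symbol strings differ."""
    if len(a) != len(b):
        raise ValueError("sequences must be sampled at the same time points")
    return sum(x != y for x, y in zip(a, b))

# Hypothetical percent-donor measurements for three transplanted hosts.
kinetics = {
    "clone A": [5, 12, 25, 40, 52, 60],
    "clone B": [5, 15, 30, 28, 27, 26],
    "clone C": [5, 6, 6, 10, 22, 45],
}

sequences = {name: symbolize(k) for name, k in kinetics.items()}
for name, seq in sequences.items():
    print(name, seq)

for (n1, s1), (n2, s2) in combinations(sequences.items(), 2):
    print(f"Hamming({n1}, {n2}) = {hamming(s1, s2)}")
```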
By using the Hamming distance, the repopulation patterns were subjected to cluster analysis yielding 16 distinct groups of kinetics. To finish the empirical proof, the Laplace add-one approach was used to determine that the probability of finding kinetics not contained in these 16 groups is very small. As a corollary, this result shows that the hematopoietic stem cell compartment is also heterogeneous by dynamical criteria. It was originally believed that all hematopoietic stem cells were alike in their self-renewal and differentiation abilities. This view was first challenged by the 2002 discovery by the Muller-Sieburg group in San Diego, who illustrated that different stem cells can show distinct repopulation patterns that are epigenetically predetermined intrinsic properties of clonal Thy-1lo Sca-1+ lin− c-kit+ HSC. The results of these clonal studies led to the notion of lineage bias. Using the ratio formula_0 of lymphoid (L) to myeloid (M) cells in blood as a quantitative marker, the stem cell compartment can be split into three categories of HSC. Balanced (Bala) hematopoietic stem cells repopulate peripheral white blood cells in the same ratio of myeloid to lymphoid cells as seen in unmanipulated mice (on average about 15% myeloid and 85% lymphoid cells, or 3 ≤ ρ ≤ 10). Myeloid-biased (My-bi) hematopoietic stem cells give rise to very few lymphocytes resulting in ratios 0 < ρ < 3, while lymphoid-biased (Ly-bi) hematopoietic stem cells generate very few myeloid cells, which results in lymphoid-to-myeloid ratios of ρ > 10. All three types are normal types of HSC, and they do not represent stages of differentiation. Rather, these are three classes of HSC, each with an epigenetically fixed differentiation program. These studies also showed that lineage bias is not stochastically regulated or dependent on differences in environmental influence. My-bi HSC self-renew longer than balanced or Ly-bi HSC. The myeloid bias results from reduced responsiveness to the lymphopoietin interleukin 7 (IL-7). Subsequently, other groups confirmed and highlighted the original findings. For example, the Eaves group confirmed in 2007 that repopulation kinetics, long-term self-renewal capacity, and My-bi and Ly-bi are stably inherited intrinsic HSC properties. In 2010, the Goodell group provided additional insights about the molecular basis of lineage bias in side population (SP) SCA-1+ lin− c-kit+ HSC. As previously shown for IL-7 signaling, it was found that a member of the transforming growth factor family (TGF-beta) induces and inhibits the proliferation of My-bi and Ly-bi HSC, respectively. Etymology. From Greek "haimato-", combining form of "haima" 'blood', and from the Latinized form of Greek "poietikos" 'capable of making, creative, productive', from "poiein" 'to make, create'.
[ { "math_id": 0, "text": "\\rho = L/M" } ]
https://en.wikipedia.org/wiki?curid=1264088
12641022
Second-harmonic imaging microscopy
Second-harmonic imaging microscopy (SHIM) is based on a nonlinear optical effect known as second-harmonic generation (SHG). SHIM has been established as a viable microscope imaging contrast mechanism for visualization of cell and tissue structure and function. A second-harmonic microscope obtains contrast from variations in a specimen's ability to generate second-harmonic light from the incident light while a conventional optical microscope obtains its contrast by detecting variations in optical density, path length, or refractive index of the specimen. SHG requires intense laser light passing through a material with a noncentrosymmetric molecular structure, either inherent or induced externally, for example by an electric field. Second-harmonic light emerging from an SHG material is exactly half the wavelength (frequency doubled) of the light entering the material. While two-photon-excited fluorescence (TPEF) is also a two-photon process, TPEF loses some energy during the relaxation of the excited state, while SHG is energy conserving. Typically, an inorganic crystal such as lithium niobate (LiNbO3), potassium titanyl phosphate (KTP = KTiOPO4), or lithium triborate (LBO = LiB3O5) is used to produce SHG light. Though SHG requires a material to have specific molecular orientation in order for the incident light to be frequency doubled, some biological materials can be highly polarizable, and assemble into fairly ordered, large noncentrosymmetric structures. While some biological materials such as collagen, microtubules, and muscle myosin can produce SHG signals, even water can become ordered and produce second-harmonic signal under certain conditions, which allows SH microscopy to image surface potentials without any labeling molecules. The SHG pattern is mainly determined by the phase matching condition. A common setup for an SHG imaging system will have a laser scanning microscope with a titanium sapphire mode-locked laser as the excitation source. The SHG signal is propagated in the forward direction. However, some experiments have shown that objects on the order of about a tenth of the wavelength of the SHG-produced signal will produce nearly equal forward and backward signals. Advantages. SHIM offers several advantages for live cell and tissue imaging. Unlike other techniques such as fluorescence microscopy, SHG does not involve the excitation of molecules; therefore, the molecules should not suffer the effects of phototoxicity or photobleaching. Also, since many biological structures produce strong SHG signals, the labeling of molecules with exogenous probes, which can alter the way a biological system functions, is not required. By using near infrared wavelengths for the incident light, SHIM has the ability to construct three-dimensional images of specimens by imaging deeper into thick tissues. Difference and complementarity with two-photon fluorescence (2PEF). Two-photon fluorescence (2PEF) is a very different process from SHG: it involves excitation of electrons to higher energy levels, and subsequent de-excitation by photon emission (unlike SHG, although it is also a 2-photon process). Thus, 2PEF is a non-coherent process, spatially (emitted isotropically) and temporally (broad, sample-dependent spectrum). It is also not specific to certain structures, unlike SHG. It can therefore be coupled to SHG in multiphoton imaging to reveal some molecules that do produce autofluorescence, like elastin in tissues (while SHG reveals collagen or myosin for instance). History. 
Before SHG was used for imaging, the first demonstration of SHG was performed in 1961 by P. A. Franken, G. Weinreich, C. W. Peters, and A. E. Hill at the University of Michigan, Ann Arbor using a quartz sample. In 1968, SHG from interfaces was discovered by Bloembergen and has since been used as a tool for characterizing surfaces and probing interface dynamics. In 1971, Fine and Hansen reported the first observation of SHG from biological tissue samples. In 1974, Hellwarth and Christensen first reported the integration of SHG and microscopy by imaging SHG signals from polycrystalline ZnSe. In 1977, Colin Sheppard imaged various SHG crystals with a scanning optical microscope. The first biological imaging experiments were done by Freund and Deutsch in 1986 to study the orientation of collagen fibers in rat tail tendon. In 1993, Lewis examined the second-harmonic response of styryl dyes in electric fields. He also showed work on imaging live cells. In 2006, Goro Mizutani's group developed a non-scanning SHG microscope that significantly shortens the time required for observation of large samples, even though the two-photon wide-field microscope had been published in 1996 and could have been used to detect SHG. The non-scanning SHG microscope was used for observation of plant starch, megamolecules, spider silk and so on. In 2010 SHG was extended to whole-animal in vivo imaging. In 2019, SHG applications widened when it was applied to selectively imaging agrochemicals directly on leaf surfaces to provide a way to evaluate the effectiveness of pesticides. Quantitative measurements. Orientational anisotropy. SHG polarization anisotropy can be used to determine the orientation and degree of organization of proteins in tissues since SHG signals have well-defined polarizations. This is done by using the anisotropy equation formula_0 and acquiring the intensities of the polarizations in the parallel and perpendicular directions. A high formula_1 value indicates an anisotropic orientation whereas a low formula_1 value indicates an isotropic structure. In work done by Campagnola and Loew, it was found that collagen fibers formed well-aligned structures with an formula_2 value. Forward over backward SHG. SHG being a coherent process (spatially and temporally), it keeps information on the direction of the excitation and is not emitted isotropically. It is mainly emitted in the forward direction (same as the excitation), but can also be emitted in the backward direction depending on the phase-matching condition. Indeed, the coherence length beyond which the conversion of the signal decreases is formula_3 with formula_4 for forward, but formula_5 for backward, such that formula_6 ≫ formula_7. Therefore, thicker structures will appear preferentially in forward, and thinner ones in backward: since the SHG conversion depends to a first approximation on the square of the number of nonlinear converters, the signal will be higher if emitted by thick structures, so the signal in the forward direction will be higher than in the backward direction. However, the tissue can scatter the generated light, and a part of the forward SHG can be retro-reflected in the backward direction. Then, the forward-over-backward ratio F/B can be calculated, and is a metric of the global size and arrangement of the SHG converters (usually collagen fibrils). It can also be shown that the higher the out-of-plane angle of the scatterer, the higher its F/B ratio. Polarization-resolved SHG. 
The advantages of polarimetry were coupled to SHG in 2002 by Stoller et al. Polarimetry can measure the orientation and order at the molecular level, and coupled to SHG it can do so with specificity to certain structures like collagen: polarization-resolved SHG microscopy (p-SHG) is thus an expansion of SHG microscopy. p-SHG defines another anisotropy parameter, as: formula_8 which is, like "r", a measure of the principal orientation and disorder of the structure being imaged. Since it is often performed in long cylindrical filaments (like collagen), this anisotropy is often equal to formula_9, where formula_10 is the nonlinear susceptibility tensor and X the direction of the filament (or main direction of the structure), Y orthogonal to X and Z the propagation of the excitation light. The orientation "ϕ" of the filaments in the plane XY of the image can also be extracted from p-SHG by FFT analysis, and put in a map. Fibrosis quantization. Collagen (a particular case, but widely studied in SHG microscopy) can exist in various forms: 28 different types, of which 5 are fibrillar. One of the challenges is to determine and quantify the amount of fibrillar collagen in a tissue, to be able to see its evolution and relationship with other non-collagenous materials. To that end, a SHG microscopy image has to be corrected to remove the small amount of residual fluorescence or noise that exists at the SHG wavelength. After that, a mask can be applied to quantify the collagen inside the image. Among other quantization techniques, it is probably the one with the highest specificity, reproducibility and applicability, despite being quite complex. Others. It has also been used to prove that backpropagating action potentials invade dendritic spines without voltage attenuation, establishing a sound basis for future work on long-term potentiation. Its use here was that it provided a way to accurately measure the voltage in the tiny dendritic spines with an accuracy unattainable with standard two-photon microscopy. Meanwhile, SHG can efficiently convert near-infrared light to visible light to enable imaging-guided photodynamic therapy, overcoming the penetration depth limitations. Materials that can be imaged. SHG microscopy and its expansions can be used to study various tissues: some example images are reported in the figure below: collagen inside the extracellular matrix remains the main application. It can be found in tendon, skin, bone, cornea, aorta, fascia, cartilage, meniscus, intervertebral disks... Myosin can also be imaged in skeletal muscle or cardiac muscle. Coupling with THG microscopy. Third-harmonic generation (THG) microscopy can be complementary to SHG microscopy, as it is sensitive to the transverse interfaces, and to the 3rd-order nonlinear susceptibility formula_11 Applications. Cancer progression, tumor characterization. The mammographic density is correlated with the collagen density; thus SHG can be used for identifying breast cancer. SHG is usually coupled to other nonlinear techniques such as Coherent anti-Stokes Raman Scattering or Two-photon excitation microscopy, as part of a routine called multiphoton microscopy (or tomography) that provides a non-invasive and rapid in vivo histology of biopsies that may be cancerous. Breast cancer. The comparison of forward and backward SHG images gives insight about the microstructure of collagen, itself related to the grade and stage of a tumor, and its progression in breast. 
Comparison of SHG and 2PEF can also show the change of collagen orientation in tumors. Even though SHG microscopy has contributed a lot to breast cancer research, it is not yet established as a reliable technique in hospitals, or for diagnosis of this pathology in general. Ovarian cancer. Healthy ovaries present in SHG a uniform epithelial layer and well-organized collagen in their stroma, whereas abnormal ones show an epithelium with large cells and a changed collagen structure. The r ratio is also used to show that the alignment of fibrils is slightly higher for cancerous than for normal tissues. Skin cancer. SHG, again combined with 2PEF, is used to calculate the ratio: formula_12 where shg (resp. tpef) is the number of thresholded pixels in the SHG (resp. 2PEF) image, a high MFSI meaning a pure SHG image (with no fluorescence). The highest MFSI is found in cancerous tissues, which provides a contrast mode to differentiate from normal tissues. SHG was also combined with third-harmonic generation (THG) to show that backward THG is higher in tumors. Pancreatic cancer. Changes in collagen ultrastructure in pancreatic cancer can be investigated by multiphoton fluorescence and polarization-resolved SHIM. Other cancers. SHG microscopy was reported for the study of lung, colonic, esophageal stroma and cervical cancers. Pathologies detection. Alterations in the organization or polarity of the collagen fibrils can be signs of pathology. In particular, the anisotropic alignment of collagen fibers allowed the discrimination of healthy dermis from pathological scars in skin. Also, pathologies in cartilage such as osteoarthritis can be probed by polarization-resolved SHG microscopy. SHIM was later extended to fibro-cartilage (meniscus). Tissue engineering. The ability of SHG to image specific molecules can reveal the structure of a certain tissue one material at a time, and at various scales (from macro to micro) using microscopy. For instance, the collagen (type I) is specifically imaged from the extracellular matrix (ECM) of cells, or when it serves as a scaffold or connective material in tissues. SHG also reveals fibroin in silk, myosin in muscles and biosynthesized cellulose. All of this imaging capability can be used to design artificial tissues, by targeting specific points of the tissue: SHG can indeed quantitatively measure some orientations, and material quantity and arrangement. Also, SHG coupled to other multiphoton techniques can serve to monitor the development of engineered tissues, provided that the sample is relatively thin. Finally, they can be used for quality control of the fabricated tissues. Structure of the eye. The cornea, at the surface of the eye, is considered to be made of a plywood-like structure of collagen, due to the self-organization properties of sufficiently dense collagen. Yet, the collagenous orientation in lamellae is still under debate in this tissue. Keratoconus corneas can also be imaged by SHG to reveal morphological alterations of the collagen. Third-harmonic generation (THG) microscopy is moreover used to image the cornea, which is complementary to the SHG signal, as THG and SHG maxima in this tissue are often at different places.
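The quantitative measures introduced above — the anisotropy r, the p-SHG ratio ρ, and the MFSI — are all simple functions of measured intensities and can be evaluated pixel-wise. The following sketch (Python with NumPy) is only an illustration: the synthetic intensity arrays, the threshold, and the function names are placeholders chosen here, not experimental values or code from the cited works.

```python
import numpy as np

def anisotropy(i_par, i_perp):
    """SHG polarization anisotropy r = (I_par - I_perp) / (I_par + 2*I_perp)."""
    return (i_par - i_perp) / (i_par + 2.0 * i_perp)

def pshg_ratio(i_par, i_perp):
    """p-SHG anisotropy rho = sqrt(I_par / I_perp)."""
    return np.sqrt(i_par / i_perp)

def mfsi(shg_pixels, tpef_pixels):
    """MFSI = (shg - tpef) / (shg + tpef), from the numbers of thresholded pixels."""
    return (shg_pixels - tpef_pixels) / (shg_pixels + tpef_pixels)

# Synthetic parallel/perpendicular intensity images (placeholders, not data).
rng = np.random.default_rng(0)
i_par = rng.uniform(50, 200, size=(64, 64))
i_perp = rng.uniform(10, 120, size=(64, 64))

r_map = anisotropy(i_par, i_perp)
rho_map = pshg_ratio(i_par, i_perp)
print("mean r =", r_map.mean(), " mean rho =", rho_map.mean())

# MFSI from thresholded SHG and 2PEF images (the threshold is an assumption).
shg_img = rng.uniform(0, 1, size=(64, 64))
tpef_img = rng.uniform(0, 1, size=(64, 64))
threshold = 0.5
print("MFSI =", mfsi((shg_img > threshold).sum(), (tpef_img > threshold).sum()))
```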
[ { "math_id": 0, "text": "\\frac{I_{par}-I_{perp}}{I_{par}+2I_{perp}}=r" }, { "math_id": 1, "text": "r" }, { "math_id": 2, "text": "r=0.7" }, { "math_id": 3, "text": "l_c = 2/\\Delta k " }, { "math_id": 4, "text": "\\Delta k \\propto 1/(n_{2\\omega}-n_{\\omega}) " }, { "math_id": 5, "text": "\\Delta k_{bwd} \\propto 1/(n_{2\\omega}+n_{\\omega}) " }, { "math_id": 6, "text": "l_c" }, { "math_id": 7, "text": "l_{c,bwd}" }, { "math_id": 8, "text": "\\rho = \\sqrt{\\frac{I_{par}}{I_{perp}}} " }, { "math_id": 9, "text": "\\rho = \\frac{\\chi^{(2)}_{XXX}}{\\chi^{(2)}_{XYY}} " }, { "math_id": 10, "text": "\\chi^{(2)}" }, { "math_id": 11, "text": "\\chi^{(3)}" }, { "math_id": 12, "text": " MFSI=(\\text{shg}-\\text{tpef})/(\\text{shg}+\\text{tpef}) " } ]
https://en.wikipedia.org/wiki?curid=12641022
12642008
Panomorph
Wide angle lens with better optical parameters A panomorph lens is a particular type of wide-angle lens specifically designed to improve optical performance in predefined zones of interest, or across the whole image, compared to traditional fisheye lenses. Some examples of improved optical parameters include the number of pixels, the MTF or the relative illumination. The term "panomorph" derives from the Greek words "pan" meaning all, "horama" meaning view, and "morph" meaning form. History. The origin of panomorph technology dates back to 1999 and a French company named ImmerVision, now headquartered in Montreal, Canada. Since the first panomorph lenses were used in video surveillance applications in the early 2000s, they have been an improvement on existing wide-angle lenses in a broad range of applications. Technology. Traditional wide-angle lenses have significant barrel distortion resulting from the compromise of imaging a wide field of view onto a finite, flat image plane; non-uniform image quality due to the off-axis optical aberrations increasing with the field angle; and significant relative illumination falloff due to the cosine fourth illumination law. To improve the optical performance of the resulting images in predefined zones of interest or in the whole image, panomorph lenses can use one or many strategies at the optical design stage, including: The zones of interest or whole image improvements resulting from using any of these design strategies in a given panomorph lens enable improved optical performance compared to other traditional wide-angle lenses. Imaging software. No matter the strategies used to improve the performance in zones of interest, each panomorph lens is designed with specific parameters such as the object-to-image mapping function. The precise specification of these design parameters for each panomorph lens is encoded in its unique RPL (Registered Panomorph Lens) code to allow de-warping algorithms to process the image and properly display the final image. The display is optimized to preserve the advantage of the improved performance in the zone of interest created by the panomorph lens, as opposed to algorithms for fisheye lenses which employ a linear mapping function to de-warp the image without any consideration of their departure from a perfect linear mapping (formula_0 distortion). Applications. By providing wide-angle images with zones of interest, panomorph lenses are often designed with specific applications in mind. Panomorph lenses have already been used in various industries, including:
[ { "math_id": 0, "text": "f\\theta" } ]
https://en.wikipedia.org/wiki?curid=12642008
1264398
Hyperbolic coordinates
Geometric mean and hyperbolic angle as coordinates in quadrant I In mathematics, hyperbolic coordinates are a method of locating points in quadrant I of the Cartesian plane formula_0. Hyperbolic coordinates take values in the hyperbolic plane defined as: formula_1. These coordinates in "HP" are useful for studying logarithmic comparisons of direct proportion in "Q" and measuring deviations from direct proportion. For formula_2 in formula_3 take formula_4 and formula_5. The parameter "u" is the hyperbolic angle to ("x, y") and "v" is the geometric mean of "x" and "y". The inverse mapping is formula_6. The function formula_7 is a continuous mapping, but not an analytic function. Alternative quadrant metric. Since "HP" carries the metric space structure of the Poincaré half-plane model of hyperbolic geometry, the bijective correspondence formula_8 brings this structure to "Q". It can be grasped using the notion of hyperbolic motions. Since geodesics in "HP" are semicircles with centers on the boundary, the geodesics in "Q" are obtained from the correspondence and turn out to be rays from the origin or petal-shaped curves leaving and re-entering the origin. And the hyperbolic motion of "HP" given by a left-right shift corresponds to a squeeze mapping applied to "Q". Since hyperbolas in "Q" correspond to lines parallel to the boundary of "HP", they are horocycles in the metric geometry of "Q". If one only considers the Euclidean topology of the plane and the topology inherited by "Q", then the lines bounding "Q" seem close to "Q". Insight from the metric space "HP" shows that the open set "Q" has only the origin as boundary when viewed through the correspondence. Indeed, consider rays from the origin in "Q", and their images, vertical rays from the boundary "R" of "HP". Any point in "HP" is an infinite distance from the point "p" at the foot of the perpendicular to "R", but a sequence of points on this perpendicular may tend in the direction of "p". The corresponding sequence in "Q" tends along a ray toward the origin. The old Euclidean boundary of "Q" is no longer relevant. Applications in physical science. Fundamental physical variables are sometimes related by equations of the form "k" = "x y". For instance, "V" = "I R" (Ohm's law), "P" = "V I" (electrical power), "P V" = "k T" (ideal gas law), and "f" λ = "v" (relation of wavelength, frequency, and velocity in the wave medium). When the "k" is constant, the other variables lie on a hyperbola, which is a horocycle in the appropriate "Q" quadrant. For example, in thermodynamics the isothermal process explicitly follows the hyperbolic path and work can be interpreted as a hyperbolic angle change. Similarly, a given mass "M" of gas with changing volume will have variable density δ = "M / V", and the ideal gas law may be written "P = k T" δ so that an isobaric process traces a hyperbola in the quadrant of absolute temperature and gas density. For hyperbolic coordinates in the theory of relativity see the History section. Economic applications. There are many natural applications of hyperbolic coordinates in economics: History. The geometric mean is an ancient concept, but hyperbolic angle was developed in this configuration by Gregoire de Saint-Vincent. He was attempting to perform quadrature with respect to the rectangular hyperbola "y" = 1/"x". That challenge was a standing open problem since Archimedes performed the quadrature of the parabola. The curve passes through (1,1) where it is opposite the origin in a unit square. 
The other points on the curve can be viewed as rectangles having the same area as this square. Such a rectangle may be obtained by applying a squeeze mapping to the square. Another way to view these mappings is via hyperbolic sectors. Starting from (1,1) the hyperbolic sector of unit area ends at (e, 1/e), where e is 2.71828…, according to the development of Leonhard Euler in "Introduction to the Analysis of the Infinite" (1748). Taking (e, 1/e) as the vertex of a rectangle of unit area, and applying again the squeeze that made it from the unit square, yields formula_16 Generally n such squeezes yield formula_17 A. A. de Sarasa noted a similar observation of G. de Saint Vincent, that as the abscissas increased in a geometric series, the sum of the areas against the hyperbola increased in an arithmetic series, and this property corresponded to the logarithm already in use to reduce multiplications to additions. Euler's work made the natural logarithm a standard mathematical tool, and elevated mathematics to the realm of transcendental functions. The hyperbolic coordinates are formed on the original picture of G. de Saint-Vincent, which provided the quadrature of the hyperbola, and transcended the limits of algebraic functions. In 1875 Johann von Thünen published a theory of natural wages which used the geometric mean of a subsistence wage and the market value of the labor using the employer's capital. In special relativity the focus is on the 3-dimensional hypersurface in the future of spacetime where various velocities arrive after a given proper time. Scott Walter explains that in November 1907 Hermann Minkowski alluded to a well-known three-dimensional hyperbolic geometry while speaking to the Göttingen Mathematical Society, but not to a four-dimensional one. In tribute to Wolfgang Rindler, the author of a standard introductory university-level textbook on relativity, hyperbolic coordinates of spacetime are called Rindler coordinates.
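As a concrete check of the coordinate change defined above, the following sketch (Python; written for this article as an illustration, not taken from any cited source) converts quadrant-I points to hyperbolic coordinates, inverts the map, and verifies that the point (e^n, e^−n) produced by n unit-area squeezes of (1,1) has hyperbolic angle u = n and geometric mean v = 1.

```python
import math

def to_hyperbolic(x, y):
    """Map a point (x, y) with x > 0, y > 0 to hyperbolic coordinates (u, v):
    u = ln(sqrt(x/y)) is the hyperbolic angle, v = sqrt(x*y) the geometric mean."""
    if x <= 0 or y <= 0:
        raise ValueError("point must lie in the open first quadrant")
    return math.log(math.sqrt(x / y)), math.sqrt(x * y)

def from_hyperbolic(u, v):
    """Inverse mapping: x = v*e^u, y = v*e^(-u)."""
    return v * math.exp(u), v * math.exp(-u)

# Round trip on an arbitrary quadrant-I point.
x, y = 8.0, 0.5
u, v = to_hyperbolic(x, y)
x2, y2 = from_hyperbolic(u, v)
assert math.isclose(x2, x) and math.isclose(y2, y)
print(f"(x, y) = ({x}, {y})  ->  (u, v) = ({u:.4f}, {v:.4f})")

# n squeezes of the unit square's far corner (1, 1) give (e^n, e^-n),
# a point of hyperbolic angle n on the unit hyperbola x*y = 1.
for n in range(4):
    u, v = to_hyperbolic(math.exp(n), math.exp(-n))
    print(f"n = {n}: u = {u:.6f}, v = {v:.6f}")
```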
[ { "math_id": 0, "text": "\\{(x, y) \\ :\\ x > 0,\\ y > 0\\ \\} = Q" }, { "math_id": 1, "text": "HP = \\{(u, v) : u \\in \\mathbb{R}, v > 0 \\}" }, { "math_id": 2, "text": "(x,y)" }, { "math_id": 3, "text": "Q" }, { "math_id": 4, "text": "u = \\ln \\sqrt{\\frac{x}{y}} " }, { "math_id": 5, "text": "v = \\sqrt{xy}" }, { "math_id": 6, "text": "x = v e^u ,\\quad y = v e^{-u}" }, { "math_id": 7, "text": "Q \\rarr HP" }, { "math_id": 8, "text": "Q \\leftrightarrow HP" }, { "math_id": 9, "text": "x = 1" }, { "math_id": 10, "text": "y" }, { "math_id": 11, "text": "0 < y < 1" }, { "math_id": 12, "text": "u > 0" }, { "math_id": 13, "text": "0 < z < y." }, { "math_id": 14, "text": "\\Delta u = \\ln \\sqrt{\\frac{y}{z}}. " }, { "math_id": 15, "text": "\\Delta u" }, { "math_id": 16, "text": "(e^2, \\ e^{-2})." }, { "math_id": 17, "text": "(e^n, \\ e^{-n})." } ]
https://en.wikipedia.org/wiki?curid=1264398
12645144
Strategic complements
Game theory concept In economics and game theory, the decisions of two or more players are called strategic complements if they mutually reinforce one another, and they are called strategic substitutes if they mutually offset one another. These terms were originally coined by Bulow, Geanakoplos, and Klemperer (1985). To see what is meant by 'reinforce' or 'offset', consider a situation in which the players all have similar choices to make, as in the paper of Bulow et al., where the players are all imperfectly competitive firms that must each decide how much to produce. Then the production decisions are strategic complements if an increase in the production of one firm increases the marginal revenues of the others, because that gives the others an incentive to produce more too. This tends to be the case if there are sufficiently strong aggregate increasing returns to scale and/or the demand curves for the firms' products have a sufficiently low own-price elasticity. On the other hand, the production decisions are strategic substitutes if an increase in one firm's output decreases the marginal revenues of the others, giving them an incentive to produce less. According to Russell Cooper and Andrew John, strategic complementarity is the basic property underlying examples of multiple equilibria in coordination games. Calculus formulation. Mathematically, consider a symmetric game with two players that each have payoff function formula_0, where formula_1 represents the player's own decision, and formula_2 represents the decision of the other player. Assume formula_3 is increasing and concave in the player's own strategy formula_1. Under these assumptions, the two decisions are strategic complements if an increase in each player's own decision formula_1 raises the marginal payoff formula_4 of the other player. In other words, the decisions are strategic complements if the second derivative formula_5 is positive for formula_6. Equivalently, this means that the function formula_3 is supermodular. On the other hand, the decisions are strategic substitutes if formula_5 is negative, that is, if formula_3 is submodular. Example. In their original paper, Bulow et al. use a simple model of competition between two firms to illustrate their ideas. The payoff for firm x with production rates formula_7 is given by formula_8 while the payoff for firm y with production rate formula_9 in market 2 is given by formula_10 At any interior equilibrium, formula_11, we must have formula_12 Using vector calculus, geometric algebra, or differential geometry, Bulow et al. showed that the sensitivity of the Cournot equilibrium to changes in formula_13 can be calculated in terms of second partial derivatives of the payoff functions: formula_14 When formula_15, formula_16 Thus, as the price is increased in market 1, firm x sells more in market 1 and less in market 2, while firm y sells more in market 2. If the Cournot equilibrium of this model is calculated explicitly, we find formula_17 Supermodular games. A game with strategic complements is also called a supermodular game. This was first formalized by Topkis, and studied by Vives. There are efficient algorithms for finding pure-strategy Nash equilibria in such games.
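The comparative-statics computation in the example above can be checked numerically. The sketch below (Python with NumPy) is written only as an illustration of that calculation: it builds the matrix of second partial derivatives given in the article, solves for the sensitivities d(x1*, x2*, y2*)/dp1, and evaluates the closed-form interior equilibrium against the first-order conditions at an arbitrarily chosen price p1 = 1/2 inside the stated interval.

```python
import numpy as np

# Matrix of second partial derivatives of the payoffs at an interior equilibrium,
# as given in the article, and the right-hand side of cross-partials with p1.
M = np.array([[-1.0, -1.0,  0.0],
              [-1.0, -3.0, -1.0],
              [ 0.0, -1.0, -3.0]])
rhs = np.array([-1.0, 0.0, 0.0])

sensitivity = np.linalg.solve(M, rhs)
print("d(x1*, x2*, y2*)/dp1 =", sensitivity)   # expected: (1/5) * [8, -3, 1] = [1.6, -0.6, 0.2]

# Closed-form interior equilibrium for a price p1 in [1/4, 2/3].
def equilibrium(p1):
    return (8 * p1 - 2) / 5, (2 - 3 * p1) / 5, (p1 + 1) / 5

# First-order conditions of the two payoff functions.
def focs(p1, x1, x2, y2):
    return (p1 - (x1 + x2),                # dU_x/dx1
            1 - 2 * x2 - y2 - (x1 + x2),   # dU_x/dx2
            1 - x2 - 3 * y2)               # dU_y/dy2

p1 = 0.5                                   # arbitrary price inside the interval
x1, x2, y2 = equilibrium(p1)
print("equilibrium at p1 = 0.5:", (x1, x2, y2))
print("first-order conditions (should all be ~0):", focs(p1, x1, x2, y2))
```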
[ { "math_id": 0, "text": "\\,\\Pi(x_i, x_j)" }, { "math_id": 1, "text": "\\,x_i" }, { "math_id": 2, "text": "\\,x_j" }, { "math_id": 3, "text": "\\,\\Pi" }, { "math_id": 4, "text": "\\frac{\\partial \\Pi _j}{\\partial x_j}" }, { "math_id": 5, "text": "\\frac{\\partial ^2\\Pi _j}{\\partial x_j \\partial x_i}" }, { "math_id": 6, "text": "i \\neq j" }, { "math_id": 7, "text": " (x_1,x_2)" }, { "math_id": 8, "text": " U_x(x_1, x_2; y_2) = p_1 x_1 + (1 - x_2 - y_2 ) x_2 - (x_1 + x_2)^2/2 - F " }, { "math_id": 9, "text": "y_2" }, { "math_id": 10, "text": " U_y(y_2;x_1,x_2) = (1 - x_2 - y_2 ) y_2 - y_2^2/2 - F " }, { "math_id": 11, "text": " (x_1^*, x_2^*, y_2^*) " }, { "math_id": 12, "text": " \\dfrac{\\partial U_x}{\\partial x_1} = 0, \\dfrac{\\partial U_x}{\\partial x_2} = 0, \\dfrac{\\partial U_y}{\\partial y_2} = 0. " }, { "math_id": 13, "text": "p_1" }, { "math_id": 14, "text": " \\begin{bmatrix} \\dfrac{d x_1^*}{d p_1} \\\\[2.2ex] \\dfrac{d x_2^*}{d p_1} \\\\[2.2ex] \\dfrac{d y_2^*}{d p_1} \\end{bmatrix}\n=\n\\begin{bmatrix}\n\\dfrac{\\partial^2 U_x}{\\partial x_1 \\partial x_1 }\n&\n\\dfrac{\\partial^2 U_x}{\\partial x_1 \\partial x_2 }\n&\n\\dfrac{\\partial^2 U_x}{\\partial x_1 \\partial y_2 }\n\\\\[2.2ex]\n\\dfrac{\\partial^2 U_x}{\\partial x_1 \\partial x_2 }\n&\n\\dfrac{\\partial^2 U_x}{\\partial x_2 \\partial x_2 }\n&\n\\dfrac{\\partial^2 U_x}{\\partial y_2 \\partial x_2 }\n\\\\[2.2ex]\n\\dfrac{\\partial^2 U_y}{\\partial x_1 \\partial y_2 }\n&\n\\dfrac{\\partial^2 U_y}{\\partial x_2 \\partial y_2 }\n&\n\\dfrac{\\partial^2 U_y}{\\partial y_2 \\partial y_2 }\n\\end{bmatrix}^{-1}\n\\begin{bmatrix}\n-\\dfrac{\\partial^2 U_x}{\\partial p_1 \\partial x_1 }\n\\\\[2.2ex]\n-\\dfrac{\\partial^2 U_x}{\\partial p_1 \\partial x_2 }\n\\\\[2.2ex]\n-\\dfrac{\\partial^2 U_y}{\\partial p_1 \\partial y_2 }\n\\end{bmatrix}\n" }, { "math_id": 15, "text": " 1/4 \\leq p_1 \\leq 2/3 " }, { "math_id": 16, "text": " \\begin{bmatrix} \\dfrac{d x_1^*}{d p_1} \\\\[2.2ex] \\dfrac{d x_2^*}{d p_1} \\\\[2.2ex] \\dfrac{d y_2^*}{d p_1} \\end{bmatrix}\n= \\begin{bmatrix} -1 & -1 & 0 \\\\ -1 & -3 & -1 \\\\ 0 & -1 & -3 \\end{bmatrix}^{-1}\n\\begin{bmatrix} -1 \\\\ 0 \\\\ 0 \\end{bmatrix}\n= \\frac{1}{5}\n\\begin{bmatrix} 8 \\\\ -3 \\\\ 1 \\end{bmatrix}\n" }, { "math_id": 17, "text": " x_1^* = \\max \\left\\{ 0, \\frac{8 p_1 - 2 }{5} \\right\\}, x_2^* = \\max \\left\\{ 0, \\frac{2 - 3 p_1}{5} \\right\\}, y_2^* = \\frac{p_1+ 1 }{5}. " } ]
https://en.wikipedia.org/wiki?curid=12645144
12646549
Filter (set theory)
Family of sets representing "large" sets In mathematics, a filter on a set formula_0 is a family formula_1 of subsets such that: (1) the set formula_0 itself belongs to formula_1; (2) the empty set does not belong to formula_1; (3) formula_1 is closed under finite intersections, meaning that the intersection of any two members of formula_1 is again a member of formula_1; and (4) formula_1 is upward closed in formula_0, meaning that every subset of formula_0 that contains a member of formula_1 also belongs to formula_1. A filter on a set may be thought of as representing a "collection of large subsets", one intuitive example being the neighborhood filter. Filters appear in order theory, model theory, and set theory, but can also be found in topology, from which they originate. The dual notion of a filter is an ideal. Filters were introduced by Henri Cartan in 1937 and as described in the article dedicated to filters in topology, they were subsequently used by Nicolas Bourbaki in their book "Topologie Générale" as an alternative to the related notion of a net developed in 1922 by E. H. Moore and Herman L. Smith. Order filters are generalizations of filters from sets to arbitrary partially ordered sets. Specifically, a filter on a set is just a proper order filter in the special case where the partially ordered set consists of the power set ordered by set inclusion. Preliminaries, notation, and basic notions. In this article, upper case Roman letters like formula_9 and formula_0 denote sets (but not families unless indicated otherwise) and formula_10 will denote the power set of formula_11 A subset of a power set is called a family of sets (or simply, a family) where it is over formula_0 if it is a subset of formula_12 Families of sets will be denoted by upper case calligraphy letters such as formula_13 Whenever these assumptions are needed, then it should be assumed that formula_0 is non–empty and that formula_14 etc. are families of sets over formula_11 The terms "prefilter" and "filter base" are synonyms and will be used interchangeably. Warning about competing definitions and notation There are unfortunately several terms in the theory of filters that are defined differently by different authors. These include some of the most important terms such as "filter". While different definitions of the same term usually have significant overlap, due to the very technical nature of filters (and point–set topology), these differences in definitions nevertheless often have important consequences. When reading mathematical literature, it is recommended that readers check how the terminology related to filters is defined by the author. For this reason, this article will clearly state all definitions as they are used. Unfortunately, not all notation related to filters is well established and some notation varies greatly across the literature (for example, the notation for the set of all prefilters on a set) so in such cases this article uses whatever notation is most self describing or easily remembered. The theory of filters and prefilters is well developed and has a plethora of definitions and notations, many of which are now unceremoniously listed to prevent this article from becoming prolix and to allow for the easy look up of notation and definitions. Their important properties are described later. 
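Before the detailed definitions below, it may help to see the lead's defining conditions checked mechanically on a small finite example. The sketch below (Python) is only an illustration written for this purpose: representing families as sets of frozensets and the example families themselves are choices made here, not notation from this article.

```python
from itertools import combinations

def powerset(X):
    """All subsets of the finite set X, as frozensets."""
    X = list(X)
    for r in range(len(X) + 1):
        for combo in combinations(X, r):
            yield frozenset(combo)

def is_filter(family, X):
    """Check the lead's conditions for `family` (a set of frozensets) on the finite set X:
    contains X, is proper, is closed under finite intersections, and is upward closed."""
    X = frozenset(X)
    family = {frozenset(A) for A in family}
    if X not in family or frozenset() in family:
        return False
    for A, B in combinations(family, 2):          # closed under finite intersections
        if A & B not in family:
            return False
    for A in family:                              # upward closed in X
        for S in powerset(X):
            if A <= S and S not in family:
                return False
    return True

X = {1, 2, 3}
principal_at_1 = {S for S in powerset(X) if 1 in S}   # all subsets containing 1
print(is_filter(principal_at_1, X))                   # True
print(is_filter({frozenset(X)}, X))                   # True: the trivial filter {X}
print(is_filter({frozenset({1}), frozenset({2}), frozenset(X)}, X))   # False: {1} ∩ {2} = ∅ is missing
```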
Sets operations The upward closure or isotonization in formula_0 of a family of sets formula_15 is formula_16 and similarly the downward closure of formula_1 is formula_17 For any two families formula_20 declare that formula_21 if and only if for every formula_22 there exists some formula_23 in which case it is said that formula_24 is coarser than formula_25 and that formula_25 is finer than (or subordinate to) formula_26 The notation formula_27 may also be used in place of formula_28 Two families formula_29 mesh, written formula_30 if formula_31 Throughout, formula_32 is a map and formula_9 is a set. Nets and their tails A directed set is a set formula_33 together with a preorder, which will be denoted by formula_34 (unless explicitly indicated otherwise), that makes formula_35 into an (upward) directed set; this means that for all formula_36 there exists some formula_37 such that formula_38 For any indices formula_39 the notation formula_40 is defined to mean formula_41 while formula_42 is defined to mean that formula_41 holds but it is not true that formula_43 (if formula_34 is antisymmetric then this is equivalent to formula_44). A net in formula_0 is a map from a non–empty directed set into formula_11 The notation formula_45 will be used to denote a net with domain formula_46 Warning about using strict comparison If formula_45 is a net and formula_47 then it is possible for the set formula_50 which is called the tail of formula_48 after formula_51, to be empty (for example, this happens if formula_51 is an upper bound of the directed set formula_33). In this case, the family formula_52 would contain the empty set, which would prevent it from being a prefilter (defined later). This is the (important) reason for defining formula_49 as formula_53 rather than formula_52 or even formula_54 and it is for this reason that in general, when dealing with the prefilter of tails of a net, the strict inequality formula_55 may not be used interchangeably with the inequality formula_56 Filters and prefilters. The following is a list of properties that a family formula_1 of sets may possess and they form the defining properties of filters, prefilters, and filter subbases. Whenever it is necessary, it should be assumed that formula_57 The family of sets formula_1 is: Many of the properties of formula_1 defined above and below, such as "proper" and "directed downward," do not depend on formula_60 so mentioning the set formula_0 is optional when using such terms. Definitions involving being "upward closed in formula_60" such as that of "filter on formula_60" do depend on formula_0 so the set formula_0 should be mentioned if it is not clear from context. formula_61 A family formula_1 is/is a(n): There are no prefilters on formula_65 (nor are there any nets valued in formula_66), which is why this article, like most authors, will automatically assume without comment that formula_67 whenever this assumption is needed. Basic examples. Named examples Other examples Ultrafilters. There are many other characterizations of "ultrafilter" and "ultra prefilter," which are listed in the article on ultrafilters. Important properties of ultrafilters are also described in that article. formula_79 A non–empty family formula_15 of sets is/is an: Any non–degenerate family that has a singleton set as an element is ultra, in which case it will then be an ultra prefilter if and only if it also has the finite intersection property. 
The trivial filter formula_81 is ultra if and only if formula_0 is a singleton set. The ultrafilter lemma The following important theorem is due to Alfred Tarski (1930). The ultrafilter lemma/principle/theorem (Tarski) — Every filter on a set formula_0 is a subset of some ultrafilter on formula_11 A consequence of the ultrafilter lemma is that every filter is equal to the intersection of all ultrafilters containing it. Assuming the axioms of Zermelo–Fraenkel (ZF), the ultrafilter lemma follows from the axiom of choice (in particular from Zorn's lemma) but is strictly weaker than it. The ultrafilter lemma implies the axiom of choice for finite sets. If only dealing with Hausdorff spaces, then most basic results (as encountered in introductory courses) in topology (such as Tychonoff's theorem for compact Hausdorff spaces and the Alexander subbase theorem) and in functional analysis (such as the Hahn–Banach theorem) can be proven using only the ultrafilter lemma; the full strength of the axiom of choice might not be needed. Kernels. The kernel is useful in classifying properties of prefilters and other families of sets. The kernel of a family of sets formula_1 is the intersection of all sets that are elements of formula_82 formula_83 If formula_15 then for any point formula_84 Properties of kernels If formula_15 then formula_85 and this set is also equal to the kernel of the π–system that is generated by formula_59 In particular, if formula_1 is a filter subbase then the kernels of all of the following sets are equal: (1) formula_64 (2) the π–system generated by formula_64 and (3) the filter generated by formula_59 If formula_32 is a map then formula_86 and formula_87 If formula_88 then formula_89 while if formula_1 and formula_24 are equivalent then formula_90 Equivalent families have equal kernels. Two principal families are equivalent if and only if their kernels are equal; that is, if formula_1 and formula_24 are principal then they are equivalent if and only if formula_90 Classifying families by their kernels. A family formula_1 of sets is: If formula_1 is a principal filter on formula_0 then formula_93 and formula_94 where formula_95 is also the smallest prefilter that generates formula_59 Family of examples: For any non–empty formula_96 the family formula_97 is free but it is a filter subbase if and only if no finite union of the form formula_98 covers formula_99 in which case the filter that it generates will also be free. In particular, formula_100 is a filter subbase if formula_58 is countable (for example, formula_101 the primes), a meager set in formula_99 a set of finite measure, or a bounded subset of formula_76 If formula_58 is a singleton set then formula_100 is a subbase for the Fréchet filter on formula_76 For every filter formula_73 there exists a unique pair of dual ideals formula_102 such that formula_103 is free, formula_104 is principal, and formula_105 and formula_106 do not mesh (that is, formula_107). The dual ideal formula_103 is called the free part of formula_25 while formula_104 is called the principal part where at least one of these dual ideals is a filter. If formula_25 is principal then formula_108 otherwise, formula_109 and formula_110 is a free (non–degenerate) filter. Finite prefilters and finite sets If a filter subbase formula_1 is finite then it is fixed (that is, not free); this is because formula_18 is a finite intersection and the filter subbase formula_1 has the finite intersection property. 
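The kernel-based classification above is easy to compute when the members of a family are finite sets. A minimal sketch (Python; the example families and names are invented here purely for illustration) computes kernels and reports whether each family is fixed or free; in each case the family is fixed exactly when the printed kernel is non-empty.

```python
from functools import reduce

def kernel(family):
    """Kernel of a non-empty family of sets: the intersection of all of its members."""
    family = [frozenset(A) for A in family]
    return reduce(frozenset.intersection, family)

def classify(family):
    """Report the kernel and whether the family is fixed (non-empty kernel) or free."""
    ker = kernel(family)
    return f"kernel = {set(ker) if ker else set()}, " + ("fixed" if ker else "free")

# Invented examples over the ground set {1, ..., 5}.
fam_a = [{2, 3, 4, 5}, {3, 4, 5}, {4, 5}, {5}]   # kernel {5}: fixed
fam_b = [{2, 3, 4}, {3, 4}, {1, 3}]              # kernel {3}: fixed
fam_c = [{1, 2}, {2, 3}, {1, 3}]                 # kernel empty: free

for name, fam in [("fam_a", fam_a), ("fam_b", fam_b), ("fam_c", fam_c)]:
    print(name, "->", classify(fam))
```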
A finite prefilter is necessarily principal, although it does not have to be closed under finite intersections. If formula_0 is finite then all of the conclusions above hold for any formula_57 In particular, on a finite set formula_60 there are no free filter subbases (and so no free prefilters), all prefilters are principal, and all filters on formula_0 are principal filters generated by their (non–empty) kernels. The trivial filter formula_70 is always a finite filter on formula_0 and if formula_0 is infinite then it is the only finite filter because a non–trivial finite filter on a set formula_0 is possible if and only if formula_0 is finite. However, on any infinite set there are non–trivial filter subbases and prefilters that are finite (although they cannot be filters). If formula_0 is a singleton set then the trivial filter formula_70 is the only proper subset of formula_10 and moreover, this set formula_70 is a principal ultra prefilter and any superset formula_111 (where formula_112) with the finite intersection property will also be a principal ultra prefilter (even if formula_113 is infinite). Characterizing fixed ultra prefilters. If a family of sets formula_1 is fixed (that is, formula_91) then formula_1 is ultra if and only if some element of formula_1 is a singleton set, in which case formula_1 will necessarily be a prefilter. Every principal prefilter is fixed, so a principal prefilter formula_1 is ultra if and only if formula_114 is a singleton set. Every filter on formula_0 that is principal at a single point is an ultrafilter, and if in addition formula_0 is finite, then there are no ultrafilters on formula_0 other than these. The next theorem shows that every ultrafilter falls into one of two categories: either it is free or else it is a principal filter generated by a single point. Proposition — If formula_25 is an ultrafilter on formula_0 then the following are equivalent: Finer/coarser, subordination, and meshing. The preorder formula_34 that is defined below is of fundamental importance for the use of prefilters (and filters) in topology. For instance, this preorder is used to define the prefilter equivalent of "subsequence", where "formula_115" can be interpreted as "formula_25 is a subsequence of formula_24" (so "subordinate to" is the prefilter equivalent of "subsequence of"). It is also used to define prefilter convergence in a topological space. The definition of "formula_1 meshes with formula_116" which is closely related to the preorder formula_117 is used in topology to define cluster points. Two families of sets formula_29 mesh and are compatible, indicated by writing formula_30 if formula_31 If formula_29 do not mesh then they are dissociated. 
If formula_118 then formula_119 are said to mesh if formula_120 mesh, or equivalently, if the trace of formula_121 which is the family formula_122 does not contain the empty set, where the trace is also called the restriction of formula_123 Declare that formula_124 stated as formula_24 is coarser than formula_25 and formula_25 is finer than (or subordinate to) formula_116 if any of the following equivalent conditions hold: and if in addition formula_25 is upward closed, which means that formula_125 then this list can be extended to include: If an upward closed family formula_25 is finer than formula_24 (that is, formula_21) but formula_127 then formula_25 is said to be strictly finer than formula_24 and formula_24 is strictly coarser than formula_63 Two families are comparable if one of these sets is finer than the other. "Example": If formula_128 is a subsequence of formula_129 then formula_130 is subordinate to formula_131 in symbols: formula_132 and also formula_133 Stated in plain English, the prefilter of tails of a subsequence is always subordinate to that of the original sequence. To see this, let formula_134 be arbitrary (or equivalently, let formula_135 be arbitrary) and it remains to show that this set contains some formula_136 For the set formula_137 to contain formula_138 it is sufficient to have formula_139 Since formula_140 are strictly increasing integers, there exists formula_141 such that formula_142 and so formula_143 holds, as desired. Consequently, formula_144 The left hand side will be a strict/proper subset of the right hand side if (for instance) every point of formula_48 is unique (that is, when formula_145 is injective) and formula_146 is the even-indexed subsequence formula_147 because under these conditions, every tail formula_148 (for every formula_141) of the subsequence will belong to the right hand side filter but not to the left hand side filter. For another example, if formula_1 is any family then formula_149 always holds and furthermore, formula_150 Assume that formula_126 are families of sets that satisfy formula_151 Then formula_152 and formula_153 and also formula_154 If in addition to formula_155 is a filter subbase and formula_156 then formula_24 is a filter subbase and also formula_126 mesh. More generally, if both formula_157 and if the intersection of any two elements of formula_25 is non–empty, then formula_29 mesh. Every filter subbase is coarser than both the π–system that it generates and the filter that it generates. If formula_126 are families such that formula_158 the family formula_24 is ultra, and formula_159 then formula_25 is necessarily ultra. It follows that any family that is equivalent to an ultra family will necessarily be ultra. In particular, if formula_24 is a prefilter then either both formula_24 and the filter formula_160 it generates are ultra or neither one is ultra. If a filter subbase is ultra then it is necessarily a prefilter, in which case the filter that it generates will also be ultra. A filter subbase formula_1 that is not a prefilter cannot be ultra; but it is nevertheless still possible for the prefilter and filter generated by formula_1 to be ultra. If formula_118 is upward closed in formula_0 then formula_161 Relational properties of subordination The relation formula_34 is reflexive and transitive, which makes it into a preorder on formula_162 The relation formula_163 is antisymmetric but if formula_0 has more than one point then it is not symmetric. 
Symmetry: For any formula_164 So the set formula_0 has more than one point if and only if the relation formula_163 is not symmetric. Antisymmetry: If formula_165 but while the converse does not hold in general, it does hold if formula_24 is upward closed (such as if formula_24 is a filter). Two filters are equivalent if and only if they are equal, which makes the restriction of formula_34 to formula_72 antisymmetric. But in general, formula_34 is not antisymmetric on formula_75 nor on formula_166; that is, formula_167 does not necessarily imply formula_168; not even if both formula_169 are prefilters. For instance, if formula_1 is a prefilter but not a filter then formula_170 Equivalent families of sets. The preorder formula_34 induces its canonical equivalence relation on formula_171 where for all formula_172 formula_1 is equivalent to formula_24 if any of the following equivalent conditions hold: Two upward closed (in formula_0) subsets of formula_10 are equivalent if and only if they are equal. If formula_15 then necessarily formula_173 and formula_1 is equivalent to formula_174 Every equivalence class other than formula_175 contains a unique representative (that is, element of the equivalence class) that is upward closed in formula_11 Properties preserved between equivalent families Let formula_176 be arbitrary and let formula_25 be any family of sets. If formula_29 are equivalent (which implies that formula_177) then for each of the statements/properties listed below, either it is true of both formula_29 or else it is false of both formula_29: Missing from the above list is the word "filter" because this property is not preserved by equivalence. However, if formula_29 are filters on formula_60 then they are equivalent if and only if they are equal; this characterization does not extend to prefilters. Equivalence of prefilters and filter subbases If formula_1 is a prefilter on formula_0 then the following families are always equivalent to each other: and moreover, these three families all generate the same filter on formula_0 (that is, the upward closures in formula_0 of these families are equal). In particular, every prefilter is equivalent to the filter that it generates. By transitivity, two prefilters are equivalent if and only if they generate the same filter. Every prefilter is equivalent to exactly one filter on formula_60 which is the filter that it generates (that is, the prefilter's upward closure). Said differently, every equivalence class of prefilters contains exactly one representative that is a filter. In this way, filters can be considered as just being distinguished elements of these equivalence classes of prefilters. A filter subbase that is not also a prefilter cannot be equivalent to the prefilter (or filter) that it generates. In contrast, every prefilter is equivalent to the filter that it generates. This is why prefilters can, by and large, be used interchangeably with the filters that they generate while filter subbases cannot. Every filter is both a π–system and a ring of sets. Examples of determining equivalence/non–equivalence Examples: Let formula_78 and let formula_178 be the set formula_179 of integers (or the set formula_180). Define the sets formula_181 All three sets are filter subbases but none are filters on formula_0 and only formula_1 is prefilter (in fact, formula_1 is even free and closed under finite intersections). The set formula_182 is fixed while formula_183 is free (unless formula_184). 
They satisfy formula_185 but no two of these families are equivalent; moreover, no two of the filters generated by these three filter subbases are equivalent/equal. This conclusion can be reached by showing that the π–systems that they generate are not equivalent. Unlike with formula_186 every set in the π–system generated by formula_182 contains formula_179 as a subset, which is what prevents their generated π–systems (and hence their generated filters) from being equivalent. If formula_178 was instead formula_187 then all three families would be free and although the sets formula_188 would remain not equivalent to each other, their generated π–systems would be equivalent and consequently, they would generate the same filter on formula_0; however, this common filter would still be strictly coarser than the filter generated by formula_59 Set theoretic properties and constructions. Trace and meshing. If formula_1 is a prefilter (resp. filter) on formula_189 then the trace of formula_121 which is the family formula_190 is a prefilter (resp. a filter) if and only if formula_119 mesh (that is, formula_191), in which case the trace of formula_19 is said to be induced by formula_9. If formula_1 is ultra and if formula_119 mesh then the trace formula_192 is ultra. If formula_1 is an ultrafilter on formula_0 then the trace of formula_19 is a filter on formula_9 if and only if formula_193 For example, suppose that formula_1 is a filter on formula_189 is such that formula_194 Then formula_119 mesh and formula_195 generates a filter on formula_0 that is strictly finer than formula_59 When prefilters mesh Given non–empty families formula_196 the family formula_197 satisfies formula_198 and formula_199 If formula_200 is proper (resp. a prefilter, a filter subbase) then this is also true of both formula_74 In order to make any meaningful deductions about formula_200 from formula_201 needs to be proper (that is, formula_202 which is the motivation for the definition of "mesh". In this case, formula_200 is a prefilter (resp. filter subbase) if and only if this is true of both formula_74 Said differently, if formula_29 are prefilters then they mesh if and only if formula_200 is a prefilter. Generalizing gives a well known characterization of "mesh" entirely in terms of subordination (that is, formula_34): Two prefilters (resp. filter subbases) formula_29 mesh if and only if there exists a prefilter (resp. filter subbase) formula_25 such that formula_21 and formula_203 If the least upper bound of two filters formula_29 exists in formula_72 then this least upper bound is equal to formula_204 Images and preimages under functions. Throughout, formula_205 will be maps between non–empty sets. Images of prefilters Let formula_206 Many of the properties that formula_1 may have are preserved under images of maps; notable exceptions include being upward closed, being closed under finite intersections, and being a filter, which are not necessarily preserved. 
Explicitly, if one of the following properties is true of formula_207 then it will necessarily also be true of formula_208 (although possibly not on the codomain formula_209 unless formula_210 is surjective): Moreover, if formula_211 is a prefilter then so are both formula_212 The image under a map formula_213 of an ultra set formula_15 is again ultra and if formula_1 is an ultra prefilter then so is formula_214 If formula_1 is a filter then formula_215 is a filter on the range formula_216 but it is a filter on the codomain formula_209 if and only if formula_210 is surjective. Otherwise it is just a prefilter on formula_209 and its upward closure must be taken in formula_209 to obtain a filter. The upward closure of formula_217 is formula_218 where if formula_1 is upward closed in formula_113 (that is, a filter) then this simplifies to: formula_219 If formula_220 then taking formula_210 to be the inclusion map formula_221 shows that any prefilter (resp. ultra prefilter, filter subbase) on formula_0 is also a prefilter (resp. ultra prefilter, filter subbase) on formula_222 Preimages of prefilters Let formula_206 Under the assumption that formula_213 is surjective: formula_223 is a prefilter (resp. filter subbase, π–system, closed under finite unions, proper) if and only if this is true of formula_59 However, if formula_1 is an ultrafilter on formula_113 then even if formula_32 is surjective (which would make formula_223 a prefilter), it is nevertheless still possible for the prefilter formula_223 to be neither ultra nor a filter on formula_0 (see this footnote for an example). If formula_213 is not surjective then denote the trace of formula_224 by formula_225 where in this particular case the trace satisfies: formula_226 and consequently also: formula_227 This last equality and the fact that the trace formula_228 is a family of sets over formula_229 means that to draw conclusions about formula_230 the trace formula_228 can be used in place of formula_1 and the surjection formula_231 can be used in place of formula_232 For example: formula_223 is a prefilter (resp. filter subbase, π–system, proper) if and only if this is true of formula_233 In this way, the case where formula_32 is not (necessarily) surjective can be reduced to the case of a surjective function (which is a case that was described at the start of this subsection). Even if formula_1 is an ultrafilter on formula_234 if formula_32 is not surjective then it is nevertheless possible that formula_235 which would make formula_223 degenerate as well. The next characterization shows that degeneracy is the only obstacle. If formula_1 is a prefilter then the following are equivalent: and moreover, if formula_223 is a prefilter then so is formula_236 If formula_237 and if formula_238 denotes the inclusion map then the trace of formula_19 is equal to formula_239 This observation allows the results in this subsection to be applied to investigating the trace on a set. Bijections, injections, and surjections All properties involving filters are preserved under bijections. This means that if formula_240 is a bijection, then formula_1 is a prefilter (resp. ultra, ultra prefilter, filter on formula_60 ultrafilter on formula_60 filter subbase, π–system, ideal on formula_60 etc.) if and only if the same is true of formula_241 A map formula_242 is injective if and only if for all prefilters formula_243 is equivalent to formula_244 The image of an ultra family of sets under an injection is again ultra. 
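To see how images and preimages behave on a concrete finite example, the following Python sketch (not part of the original exposition; the particular map, ground set, and helper names are illustrative assumptions) represents a family of sets as a collection of frozensets, computes its image and preimage under a map, and checks the prefilter property (properness together with downward directedness).

```python
from itertools import product

def is_prefilter(family):
    """Proper and downward directed: non-empty, the empty set is excluded,
    and any two members contain some member of the family in their intersection."""
    if not family or frozenset() in family:
        return False
    return all(
        any(c <= a & b for c in family)
        for a, b in product(family, repeat=2)
    )

def image(f, family):
    """The image of the family: { f[B] : B in the family }."""
    return {frozenset(f[x] for x in B) for B in family}

def preimage(f, family, domain):
    """The preimage of the family: { f^-1[B] : B in the family }."""
    return {frozenset(x for x in domain if f[x] in B) for B in family}

# An illustrative map f : X -> Y encoded as a dict (assumed sample data).
X = {1, 2, 3, 4}
f = {1: "a", 2: "a", 3: "b", 4: "b"}

# A prefilter on X: a nested chain of tails.
B = {frozenset({1, 2, 3, 4}), frozenset({2, 3, 4}), frozenset({3, 4})}

print(is_prefilter(B))                              # True
print(is_prefilter(image(f, B)))                    # True: images preserve prefilters
print(is_prefilter(preimage(f, image(f, B), X)))    # True
```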
The map formula_213 is a surjection if and only if whenever formula_1 is a prefilter on formula_113 then the same is true of formula_245 (this result does not require the ultrafilter lemma). Subordination is preserved by images and preimages. The relation formula_34 is preserved under both images and preimages of families of sets. This means that for any families formula_20 formula_246 Moreover, the following relations always hold for any family of sets formula_24: formula_247 where equality will hold if formula_32 is surjective. Furthermore, formula_248 If formula_249 then formula_250 and formula_251 where equality will hold if formula_210 is injective. Products of prefilters. Suppose formula_252 is a family of one or more non–empty sets, whose product will be denoted by formula_253 and for every index formula_254 let formula_255 denote the canonical projection. Let formula_256 be non−empty families, also indexed by formula_257 such that formula_258 for each formula_259 The product of the families formula_260 is defined identically to how the basic open subsets of the product topology are defined (had all of these formula_261 been topologies). That is, both the notations formula_262 denote the family of all cylinder subsets formula_263 such that formula_264 for all but finitely many formula_47 and where formula_265 for any one of these finitely many exceptions (that is, for any formula_51 such that formula_266 necessarily formula_265). When every formula_261 is a filter subbase then the family formula_267 is a filter subbase for the filter on formula_268 generated by formula_269 If formula_270 is a filter subbase then the filter on formula_268 that it generates is called the filter generated by formula_260. If every formula_261 is a prefilter on formula_271 then formula_270 will be a prefilter on formula_268 and moreover, this prefilter is equal to the coarsest prefilter formula_272 such that formula_273 for every formula_259 However, formula_270 may fail to be a filter on formula_268 even if every formula_261 is a filter on formula_274 Set subtraction and some examples. Set subtracting away a subset of the kernel If formula_1 is a prefilter on formula_275 then formula_276 is a prefilter, where this latter set is a filter if and only if formula_1 is a filter and formula_277 In particular, if formula_1 is a neighborhood basis at a point formula_69 in a topological space formula_0 having at least 2 points, then formula_278 is a prefilter on formula_11 This construction is used to define formula_279 in terms of prefilter convergence. Using duality between ideals and dual ideals There is a dual relation formula_280 or formula_281 which is defined to mean that every formula_5 is contained in some formula_282 Explicitly, this means that for every formula_5, there is some formula_22 such that formula_283 This relation is dual to formula_34 in the sense that formula_280 if and only if formula_284 The relation formula_280 is closely related to the downward closure of a family in a manner similar to how formula_34 is related to the upward closure of a family. For an example that uses this duality, suppose formula_213 is a map and formula_285 Define formula_286 which contains the empty set if and only if formula_287 does. It is possible for formula_287 to be an ultrafilter and for formula_288 to be empty or not closed under finite intersections (see the footnote for an example). Although formula_288 does not preserve properties of filters very well, if formula_287 is downward closed (resp. 
closed under finite unions, an ideal) then this will also be true for formula_289 Using the duality between ideals and dual ideals allows for a construction of the following filter. Suppose formula_1 is a filter on formula_113 and let formula_290 be its dual in formula_222 If formula_291 then formula_288's dual formula_292 will be a filter. Other examples Example: The set formula_1 of all dense open subsets of a topological space is a proper π–system and a prefilter. If the space is a Baire space, then the set of all countable intersections of dense open subsets is a π–system and a prefilter that is finer than formula_59 Example: The family formula_293 of all dense open sets of formula_78 having finite Lebesgue measure is a proper π–system and a free prefilter. The prefilter formula_293 is properly contained in, and not equivalent to, the prefilter consisting of all dense open subsets of formula_76 Since formula_0 is a Baire space, every countable intersection of sets in formula_293 is dense in formula_0 (and also comeagre and non–meager) so the set of all countable intersections of elements of formula_293 is a prefilter and π–system; it is also finer than, and not equivalent to, formula_294 Filters and nets. This section will describe the relationships between prefilters and nets in great detail because of how important these details are in applying filters to topology − particularly in switching from utilizing nets to utilizing filters and vice versa − and because it makes it easier to understand later why subnets (with their most commonly used definitions) are not generally equivalent to "sub–prefilters". Nets to prefilters. A net formula_295 is canonically associated with its prefilter of tails formula_296 If formula_213 is a map and formula_48 is a net in formula_0 then formula_297 Prefilters to nets. A pointed set is a pair formula_298 consisting of a non–empty set formula_9 and an element formula_299 For any family formula_64 let formula_300 Define a canonical preorder formula_34 on pointed sets by declaring formula_301 If formula_302 even if formula_303 so this preorder is not antisymmetric and given any family of sets formula_64 formula_304 is partially ordered if and only if formula_62 consists entirely of singleton sets. If formula_305 is a maximal element of formula_306; moreover, all maximal elements are of this form. If formula_307 is a greatest element if and only if formula_308 in which case formula_309 is the set of all greatest elements. However, a greatest element formula_310 is a maximal element if and only if formula_311 so there is at most one element that is both maximal and greatest. There is a canonical map formula_312 defined by formula_313 If formula_314 then the tail of the assignment formula_315 starting at formula_316 is formula_317 Although formula_304 is not, in general, a partially ordered set, it is a directed set if (and only if) formula_1 is a prefilter. 
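For a finite prefilter, the directedness just asserted can be checked by brute force. The sketch below (illustrative sample data only, not part of the original text) enumerates formula_306 for a small prefilter and verifies that any two pointed sets admit a common upper bound under the canonical preorder.

```python
from itertools import product

# An illustrative prefilter B on X = {1, 2, 3}: a nested chain of non-empty sets.
B = [frozenset({1, 2, 3}), frozenset({2, 3}), frozenset({3})]

# PointedSets(B) = { (S, s) : S in B and s in S }.
pointed = [(S, s) for S in B for s in S]

# Canonical preorder: (R, r) <= (S, s) if and only if R is a superset of S
# (the chosen points play no role in the comparison).
def leq(p, q):
    return p[0] >= q[0]

# Directedness: every pair of pointed sets has an upper bound in PointedSets(B).
directed = all(
    any(leq(p, u) and leq(q, u) for u in pointed)
    for p, q in product(pointed, repeat=2)
)
print(directed)  # True, precisely because B is a prefilter
```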
So the most immediate choice for the definition of "the net in formula_0 induced by a prefilter formula_1" is the assignment formula_318 from formula_306 into formula_11 If formula_1 is a prefilter on formula_0 then the net associated with formula_1 is the map formula_319 that is, formula_320 If formula_1 is a prefilter on formula_321 is a net in formula_0 and the prefilter associated with formula_322 is formula_1; that is: formula_323 This would not necessarily be true had formula_322 been defined on a proper subset of formula_324 For example, suppose formula_0 has at least two distinct elements, formula_325 is the indiscrete filter, and formula_92 is arbitrary. Had formula_322 instead been defined on the singleton set formula_326 where the restriction of formula_322 to formula_327 will temporarily be denoted by formula_328 then the prefilter of tails associated with formula_329 would be the principal prefilter formula_330 rather than the original filter formula_68; this means that the equality formula_331 is false, so unlike formula_332 the prefilter formula_1 cannot be recovered from formula_333 Worse still, while formula_1 is the unique minimal filter on formula_60 the prefilter formula_334 instead generates a maximal filter (that is, an ultrafilter) on formula_11 However, if formula_45 is a net in formula_0 then it is not in general true that formula_335 is equal to formula_48 because, for example, the domain of formula_48 may be of a completely different cardinality than that of formula_335 (since unlike the domain of formula_336 the domain of an arbitrary net in formula_0 could have any cardinality). Ultranets and ultra prefilters A net formula_337 is called an ultranet or universal net in formula_0 if for every subset formula_338 is eventually in formula_9 or it is eventually in formula_339; this happens if and only if formula_49 is an ultra prefilter. A prefilter formula_80 is an ultra prefilter if and only if formula_322 is an ultranet in formula_11 Partially ordered net. The domain of the canonical net formula_322 is in general not partially ordered. However, in 1955 Bruns and Schmidt discovered a construction that allows for the canonical net to have a domain that is both partially ordered and directed; this was independently rediscovered by Albert Wilansky in 1970. It begins with the construction of a strict partial order (meaning a transitive and irreflexive relation) formula_55 on a subset of formula_340 that is similar to the lexicographical order on formula_341 of the strict partial orders formula_342 For any formula_343 in formula_344 declare that formula_42 if and only if formula_345 or equivalently, if and only if formula_346 The non−strict partial order associated with formula_347 denoted by formula_117 is defined by declaring that formula_348 Unwinding these definitions gives the following characterization: formula_41 if and only if formula_349 and also formula_350 which shows that formula_34 is just the lexicographical order on formula_340 induced by formula_351 where formula_0 is partially ordered by equality formula_352 Both formula_353 are serial and neither possesses a greatest element or a maximal element; this remains true if they are each restricted to the subset of formula_340 defined by formula_354 where it will henceforth be assumed that they are. 
Denote the assignment formula_355 from this subset by: formula_356 If formula_357 then just as with formula_322 before, the tail of the assignment formula_358 starting at formula_316 is equal to formula_359 If formula_1 is a prefilter on formula_0 then formula_358 is a net in formula_0 whose domain formula_360 is a partially ordered set and moreover, formula_361 Because the tails of formula_362 are identical (since both are equal to the prefilter formula_1), there is typically nothing lost by assuming that the domain of the net associated with a prefilter is both directed and partially ordered. If the set formula_180 is replaced with the positive rational numbers then the strict partial order formula_363 will also be a dense order. Subordinate filters and subnets. The notion of "formula_1 is subordinate to formula_24" (written formula_364) is for filters and prefilters what "formula_365 is a subsequence of formula_366" is for sequences. For example, if formula_367 denotes the set of tails of formula_48 and if formula_368 denotes the set of tails of the subsequence formula_369 (where formula_370) then formula_371 (that is, formula_372) is true but formula_373 is in general false. Non–equivalence of subnets and subordinate filters. A subset formula_374 of a preordered space formula_35 is frequent or cofinal in formula_33 if for every formula_47 there exists some formula_375 If formula_374 contains a tail of formula_33 then formula_77 is said to be eventual or eventually in formula_33; explicitly, this means that there exists some formula_376 (that is, formula_377). An eventual set is necessarily not empty. A subset is eventual if and only if its complement is not frequent (which is termed infrequent). A map formula_378 between two preordered sets is order preserving if whenever formula_379 Subnets in the sense of Willard and subnets in the sense of Kelley are the most commonly used definitions of "subnet." The first definition of a subnet was introduced by John L. Kelley in 1955. Stephen Willard introduced his own variant of Kelley's definition of subnet in 1970. AA–subnets were introduced independently by Smiley (1957), Aarnes and Andenaes (1972), and Murdeshwar (1983); AA–subnets were studied in great detail by Aarnes and Andenaes but they are not often used. Let formula_380 be nets. Then Kelley did not require the map formula_381 to be order preserving while the definition of an AA–subnet does away entirely with any map between the two nets' domains and instead focuses entirely on formula_0 − the nets' common codomain. Every Willard–subnet is a Kelley–subnet and both are AA–subnets. In particular, if formula_382 is a Willard–subnet or a Kelley–subnet of formula_45 then formula_383 AA–subnets have a defining characterization that immediately shows that they are fully interchangeable with sub(ordinate)filters. Explicitly, what is meant is that the following statement is true for AA–subnets: If formula_71 are prefilters then formula_384 is an AA–subnet of formula_385 If "AA–subnet" is replaced by "Willard–subnet" or "Kelley–subnet" then the above statement becomes false. In particular, the problem is that the following statement is in general false: False statement: If formula_71 are prefilters such that formula_386 is a Kelley–subnet of formula_385 Since every Willard–subnet is a Kelley–subnet, this statement remains false if the word "Kelley–subnet" is replaced with "Willard–subnet". 
If "subnet" is defined to mean Willard–subnet or Kelley–subnet then nets and filters are not completely interchangeable because there exists a filter–sub(ordinate)filter relationships that cannot be expressed in terms of a net–subnet relationship between the two induced nets. In particular, the problem is that Kelley–subnets and Willard–subnets are not fully interchangeable with subordinate filters. If the notion of "subnet" is not used or if "subnet" is defined to mean AA–subnet, then this ceases to be a problem and so it becomes correct to say that nets and filters are interchangeable. Despite the fact that AA–subnets do not have the problem that Willard and Kelley subnets have, they are not widely used or known about. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Proofs &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "\\mathcal{B}" }, { "math_id": 2, "text": "X \\in \\mathcal{B}" }, { "math_id": 3, "text": " \\emptyset \\notin \\mathcal{B}" }, { "math_id": 4, "text": "A\\in \\mathcal{B}" }, { "math_id": 5, "text": "B \\in \\mathcal{B}" }, { "math_id": 6, "text": "A\\cap B\\in \\mathcal{B}" }, { "math_id": 7, "text": "A\\subset B\\subset X" }, { "math_id": 8, "text": "B\\in \\mathcal{B}" }, { "math_id": 9, "text": "S" }, { "math_id": 10, "text": "\\wp(X)" }, { "math_id": 11, "text": "X." }, { "math_id": 12, "text": "\\wp(X)." }, { "math_id": 13, "text": "\\mathcal{B}, \\mathcal{C}, \\text{ and } \\mathcal{F}." }, { "math_id": 14, "text": "\\mathcal{B}, \\mathcal{F}," }, { "math_id": 15, "text": "\\mathcal{B} \\subseteq \\wp(X)" }, { "math_id": 16, "text": "\\mathcal{B}^{\\uparrow X} := \\{S \\subseteq X ~:~ B \\subseteq S \\text{ for some } B \\in \\mathcal{B} \\,\\} = \\bigcup_{B \\in \\mathcal{B}} \\{S ~:~ B \\subseteq S \\subseteq X\\}" }, { "math_id": 17, "text": "\\mathcal{B}^{\\downarrow} := \\{S \\subseteq B ~:~ B \\in \\mathcal{B} \\,\\} = \\bigcup_{B \\in \\mathcal{B}} \\wp(B)." }, { "math_id": 18, "text": "\\ker \\mathcal{B} = \\bigcap_{B \\in \\mathcal{B}} B" }, { "math_id": 19, "text": "\\mathcal{B} \\text{ on } S" }, { "math_id": 20, "text": "\\mathcal{C} \\text{ and } \\mathcal{F}," }, { "math_id": 21, "text": "\\mathcal{C} \\leq \\mathcal{F}" }, { "math_id": 22, "text": "C \\in \\mathcal{C}" }, { "math_id": 23, "text": "F \\in \\mathcal{F} \\text{ such that } F \\subseteq C," }, { "math_id": 24, "text": "\\mathcal{C}" }, { "math_id": 25, "text": "\\mathcal{F}" }, { "math_id": 26, "text": "\\mathcal{C}." }, { "math_id": 27, "text": "\\mathcal{F} \\vdash \\mathcal{C} \\text{ or } \\mathcal{F} \\geq \\mathcal{C}" }, { "math_id": 28, "text": "\\mathcal{C} \\leq \\mathcal{F}." }, { "math_id": 29, "text": "\\mathcal{B} \\text{ and } \\mathcal{C}" }, { "math_id": 30, "text": "\\mathcal{B} \\# \\mathcal{C}," }, { "math_id": 31, "text": "B \\cap C \\neq \\varnothing \\text{ for all } B \\in \\mathcal{B} \\text{ and } C \\in \\mathcal{C}." }, { "math_id": 32, "text": "f" }, { "math_id": 33, "text": "I" }, { "math_id": 34, "text": "\\,\\leq\\," }, { "math_id": 35, "text": "(I, \\leq)" }, { "math_id": 36, "text": "i, j \\in I," }, { "math_id": 37, "text": "k \\in I" }, { "math_id": 38, "text": "i \\leq k \\text{ and } j \\leq k." }, { "math_id": 39, "text": "i \\text{ and } j," }, { "math_id": 40, "text": "j \\geq i" }, { "math_id": 41, "text": "i \\leq j" }, { "math_id": 42, "text": "i < j" }, { "math_id": 43, "text": "j \\leq i" }, { "math_id": 44, "text": "i \\leq j \\text{ and } i \\neq j" }, { "math_id": 45, "text": "x_{\\bull} = \\left(x_i\\right)_{i \\in I}" }, { "math_id": 46, "text": "I." }, { "math_id": 47, "text": "i \\in I" }, { "math_id": 48, "text": "x_{\\bull}" }, { "math_id": 49, "text": "\\operatorname{Tails}\\left(x_{\\bull}\\right)" }, { "math_id": 50, "text": "x_{> i} = \\left\\{x_j ~:~ j > i \\text{ and } j \\in I \\right\\}," }, { "math_id": 51, "text": "i" }, { "math_id": 52, "text": "\\left\\{x_{> i} ~:~ i \\in I \\right\\}" }, { "math_id": 53, "text": "\\left\\{x_{\\geq i} ~:~ i \\in I \\right\\}" }, { "math_id": 54, "text": "\\left\\{x_{> i} ~:~ i \\in I \\right\\}\\cup \\left\\{x_{\\geq i} ~:~ i \\in I \\right\\}" }, { "math_id": 55, "text": "\\,<\\," }, { "math_id": 56, "text": "\\,\\leq." }, { "math_id": 57, "text": "\\mathcal{B} \\subseteq \\wp(X)." 
}, { "math_id": 58, "text": "C" }, { "math_id": 59, "text": "\\mathcal{B}." }, { "math_id": 60, "text": "X," }, { "math_id": 61, "text": "\\textrm{Filters}(X)\n\\quad=\\quad \\textrm{DualIdeals}(X) \\,\\setminus\\, \\{ \\wp(X) \\}\n\\quad\\subseteq\\quad \\textrm{Prefilters}(X)\n\\quad\\subseteq\\quad \\textrm{FilterSubbases}(X)." }, { "math_id": 62, "text": "\\mathcal{B} \\neq \\varnothing" }, { "math_id": 63, "text": "\\mathcal{F}." }, { "math_id": 64, "text": "\\mathcal{B}," }, { "math_id": 65, "text": "X = \\varnothing" }, { "math_id": 66, "text": "\\varnothing" }, { "math_id": 67, "text": "X \\neq \\varnothing" }, { "math_id": 68, "text": "\\mathcal{B} = \\{X\\}" }, { "math_id": 69, "text": "x" }, { "math_id": 70, "text": "\\{X\\}" }, { "math_id": 71, "text": "\\mathcal{B} \\text{ and } \\mathcal{F}" }, { "math_id": 72, "text": "\\operatorname{Filters}(X)" }, { "math_id": 73, "text": "\\mathcal{F} \\text{ on } X" }, { "math_id": 74, "text": "\\mathcal{B} \\text{ and } \\mathcal{C}." }, { "math_id": 75, "text": "\\operatorname{Prefilters}(X)" }, { "math_id": 76, "text": "\\R." }, { "math_id": 77, "text": "R" }, { "math_id": 78, "text": "X = \\R" }, { "math_id": 79, "text": "\\begin{alignat}{8}\n\\textrm{Ultrafilters}(X)\\;\n&=\\; \\textrm{Filters}(X) \\,\\cap\\, \\textrm{UltraPrefilters}(X)\\\\\n&\\subseteq\\; \\textrm{UltraPrefilters}(X) = \\textrm{UltraFilterSubbases}(X)\\\\\n&\\subseteq\\; \\textrm{Prefilters}(X) \\\\\n\\end{alignat}" }, { "math_id": 80, "text": "\\mathcal{B} \\text{ on } X" }, { "math_id": 81, "text": "\\{X\\} \\text{ on } X" }, { "math_id": 82, "text": "\\mathcal{B}:" }, { "math_id": 83, "text": "\\ker \\mathcal{B} = \\bigcap_{B \\in \\mathcal{B}} B" }, { "math_id": 84, "text": "x, x \\not\\in \\ker \\mathcal{B} \\text{ if and only if } X \\setminus \\{x\\} \\in \\mathcal{B}^{\\uparrow X}." }, { "math_id": 85, "text": "\\ker \\left(\\mathcal{B}^{\\uparrow X}\\right) = \\ker \\mathcal{B}" }, { "math_id": 86, "text": "f(\\ker \\mathcal{B}) \\subseteq \\ker f(\\mathcal{B})" }, { "math_id": 87, "text": "f^{-1}(\\ker \\mathcal{B}) = \\ker f^{-1}(\\mathcal{B})." }, { "math_id": 88, "text": "\\mathcal{B} \\leq \\mathcal{C}" }, { "math_id": 89, "text": "\\ker \\mathcal{C} \\subseteq \\ker \\mathcal{B}" }, { "math_id": 90, "text": "\\ker \\mathcal{B} = \\ker \\mathcal{C}." 
}, { "math_id": 91, "text": "\\ker \\mathcal{B} \\neq \\varnothing" }, { "math_id": 92, "text": "x \\in X" }, { "math_id": 93, "text": "\\varnothing \\neq \\ker \\mathcal{B} \\in \\mathcal{B}" }, { "math_id": 94, "text": "\\mathcal{B} = \\{\\ker \\mathcal{B}\\}^{\\uparrow X} = \\{S \\cup \\ker \\mathcal{B} : S \\subseteq X \\setminus \\ker \\mathcal{B}\\} = \\wp(X \\setminus \\ker \\mathcal{B}) \\,(\\cup)\\, \\{\\ker \\mathcal{B}\\}" }, { "math_id": 95, "text": "\\{\\ker \\mathcal{B}\\}" }, { "math_id": 96, "text": "C \\subseteq \\R," }, { "math_id": 97, "text": "\\mathcal{B}_C = \\{\\R \\setminus (r + C) ~:~ r \\in \\R\\}" }, { "math_id": 98, "text": "\\left(r_1 + C\\right) \\cup \\cdots \\cup \\left(r_n + C\\right)" }, { "math_id": 99, "text": "\\R," }, { "math_id": 100, "text": "\\mathcal{B}_C" }, { "math_id": 101, "text": "C = \\Q, \\Z," }, { "math_id": 102, "text": "\\mathcal{F}^* \\text{ and } \\mathcal{F}^{\\bull} \\text{ on } X" }, { "math_id": 103, "text": "\\mathcal{F}^*" }, { "math_id": 104, "text": "\\mathcal{F}^{\\bull}" }, { "math_id": 105, "text": "\\mathcal{F}^* \\wedge \\mathcal{F}^{\\bull} = \\mathcal{F}," }, { "math_id": 106, "text": "\\mathcal{F}^* \\text{ and } \\mathcal{F}^{\\bull}" }, { "math_id": 107, "text": "\\mathcal{F}^* \\vee \\mathcal{F}^{\\bull} = \\wp(X)" }, { "math_id": 108, "text": "\\mathcal{F}^{\\bull} := \\mathcal{F} \\text{ and } \\mathcal{F}^* := \\wp(X);" }, { "math_id": 109, "text": "\\mathcal{F}^{\\bull} := \\{\\ker \\mathcal{F}\\}^{\\uparrow X}" }, { "math_id": 110, "text": "\\mathcal{F}^* := \\mathcal{F} \\vee \\{X \\setminus \\left(\\ker \\mathcal{F}\\right)\\}^{\\uparrow X}" }, { "math_id": 111, "text": "\\mathcal{F} \\supseteq \\mathcal{B}" }, { "math_id": 112, "text": "\\mathcal{F} \\subseteq \\wp(Y) \\text{ and } X \\subseteq Y" }, { "math_id": 113, "text": "Y" }, { "math_id": 114, "text": "\\ker \\mathcal{B}" }, { "math_id": 115, "text": "\\mathcal{F} \\geq \\mathcal{C}" }, { "math_id": 116, "text": "\\mathcal{C}," }, { "math_id": 117, "text": "\\,\\leq," }, { "math_id": 118, "text": "S \\subseteq X \\text{ and } \\mathcal{B} \\subseteq \\wp(X)" }, { "math_id": 119, "text": "\\mathcal{B} \\text{ and } S" }, { "math_id": 120, "text": "\\mathcal{B} \\text{ and } \\{S\\}" }, { "math_id": 121, "text": "\\mathcal{B} \\text{ on } S," }, { "math_id": 122, "text": "\\mathcal{B}\\big\\vert_S = \\{B \\cap S ~:~ B \\in \\mathcal{B}\\}," }, { "math_id": 123, "text": "\\mathcal{B} \\text{ to } S." }, { "math_id": 124, "text": "\\mathcal{C} \\leq \\mathcal{F}, \\mathcal{F} \\geq \\mathcal{C}, \\text{ and } \\mathcal{F} \\vdash \\mathcal{C}," }, { "math_id": 125, "text": "\\mathcal{F} = \\mathcal{F}^{\\uparrow X}," }, { "math_id": 126, "text": "\\mathcal{C} \\text{ and } \\mathcal{F}" }, { "math_id": 127, "text": "\\mathcal{C} \\neq \\mathcal{F}" }, { "math_id": 128, "text": "x_{i_{\\bull}} = \\left(x_{i_n}\\right)_{n=1}^\\infty" }, { "math_id": 129, "text": "x_{\\bull} = \\left(x_i\\right)_{i=1}^\\infty" }, { "math_id": 130, "text": "\\operatorname{Tails}\\left(x_{i_{\\bull}}\\right)" }, { "math_id": 131, "text": "\\operatorname{Tails}\\left(x_{\\bull}\\right);" }, { "math_id": 132, "text": "\\operatorname{Tails}\\left(x_{i_{\\bull}}\\right) \\vdash \\operatorname{Tails}\\left(x_{\\bull}\\right)" }, { "math_id": 133, "text": "\\operatorname{Tails}\\left(x_{\\bull}\\right) \\leq \\operatorname{Tails}\\left(x_{i_{\\bull}}\\right)." 
}, { "math_id": 134, "text": "C := x_{\\geq i} \\in \\operatorname{Tails}\\left(x_{\\bull}\\right)" }, { "math_id": 135, "text": "i \\in \\N" }, { "math_id": 136, "text": "F := x_{i_{\\geq n}} \\in \\operatorname{Tails}\\left(x_{i_{\\bull}}\\right)." }, { "math_id": 137, "text": "x_{\\geq i} = \\left\\{x_i, x_{i+1}, \\ldots\\right\\}" }, { "math_id": 138, "text": "x_{i_{\\geq n}} = \\left\\{x_{i_n}, x_{i_{n+1}}, \\ldots\\right\\}," }, { "math_id": 139, "text": "i \\leq i_n." }, { "math_id": 140, "text": "i_1 < i_2 < \\cdots" }, { "math_id": 141, "text": "n \\in \\N" }, { "math_id": 142, "text": "i_n \\geq i," }, { "math_id": 143, "text": "x_{\\geq i} \\supseteq x_{i_{\\geq n}}" }, { "math_id": 144, "text": "\\operatorname{TailsFilter}\\left(x_{\\bull}\\right) \\subseteq \\operatorname{TailsFilter}\\left(x_{i_{\\bull}}\\right)." }, { "math_id": 145, "text": "x_{\\bull} : \\N \\to X" }, { "math_id": 146, "text": "x_{i_{\\bull}}" }, { "math_id": 147, "text": "\\left(x_2, x_4, x_6, \\ldots\\right)" }, { "math_id": 148, "text": "x_{i_{\\geq n}} = \\left\\{x_{2n}, x_{2n + 2}, x_{2n + 4}, \\ldots\\right\\}" }, { "math_id": 149, "text": "\\varnothing \\leq \\mathcal{B} \\leq \\mathcal{B} \\leq \\{\\varnothing\\}" }, { "math_id": 150, "text": "\\{\\varnothing\\} \\leq \\mathcal{B} \\text{ if and only if } \\varnothing \\in \\mathcal{B}." }, { "math_id": 151, "text": "\\mathcal{B} \\leq \\mathcal{F} \\text{ and } \\mathcal{C} \\leq \\mathcal{F}." }, { "math_id": 152, "text": "\\ker \\mathcal{F} \\subseteq \\ker \\mathcal{C}," }, { "math_id": 153, "text": "\\mathcal{C} \\neq \\varnothing \\text{ implies } \\mathcal{F} \\neq \\varnothing," }, { "math_id": 154, "text": "\\varnothing \\in \\mathcal{C} \\text{ implies } \\varnothing \\in \\mathcal{F}." }, { "math_id": 155, "text": "\\mathcal{C} \\leq \\mathcal{F}, \\mathcal{F}" }, { "math_id": 156, "text": "\\mathcal{C} \\neq \\varnothing," }, { "math_id": 157, "text": "\\varnothing \\neq \\mathcal{B} \\leq \\mathcal{F} \\text{ and } \\varnothing \\neq \\mathcal{C} \\leq \\mathcal{F}" }, { "math_id": 158, "text": "\\mathcal{C} \\leq \\mathcal{F}," }, { "math_id": 159, "text": "\\varnothing \\not\\in \\mathcal{F}," }, { "math_id": 160, "text": "\\mathcal{C}^{\\uparrow X}" }, { "math_id": 161, "text": "S \\not\\in \\mathcal{B} \\text{ if and only if } (X \\setminus S) \\# \\mathcal{B}." }, { "math_id": 162, "text": "\\wp(\\wp(X))." }, { "math_id": 163, "text": "\\,\\leq\\, \\text{ on } \\operatorname{Filters}(X)" }, { "math_id": 164, "text": "\\mathcal{B} \\subseteq \\wp(X), \\mathcal{B} \\leq \\{X\\} \\text{ if and only if } \\{X\\} = \\mathcal{B}." }, { "math_id": 165, "text": "\\mathcal{B} \\subseteq \\mathcal{C} \\text{ then } \\mathcal{B} \\leq \\mathcal{C}" }, { "math_id": 166, "text": "\\wp(\\wp(X))" }, { "math_id": 167, "text": "\\mathcal{C} \\leq \\mathcal{B} \\text{ and } \\mathcal{B} \\leq \\mathcal{C}" }, { "math_id": 168, "text": "\\mathcal{B} = \\mathcal{C}" }, { "math_id": 169, "text": "\\mathcal{C} \\text{ and } \\mathcal{B}" }, { "math_id": 170, "text": "\\mathcal{B} \\leq \\mathcal{B}^{\\uparrow X} \\text{ and } \\mathcal{B}^{\\uparrow X} \\leq \\mathcal{B} \\text{ but } \\mathcal{B} \\neq \\mathcal{B}^{\\uparrow X}." }, { "math_id": 171, "text": "\\wp(\\wp(X))," }, { "math_id": 172, "text": "\\mathcal{B}, \\mathcal{C} \\in \\wp(\\wp(X))," }, { "math_id": 173, "text": "\\varnothing \\leq \\mathcal{B} \\leq \\wp(X)" }, { "math_id": 174, "text": "\\mathcal{B}^{\\uparrow X}." 
}, { "math_id": 175, "text": "\\{\\varnothing\\}" }, { "math_id": 176, "text": "\\mathcal{B}, \\mathcal{C} \\in \\wp(\\wp(X))" }, { "math_id": 177, "text": "\\ker \\mathcal{B} = \\ker \\mathcal{C}" }, { "math_id": 178, "text": "E" }, { "math_id": 179, "text": "\\Z" }, { "math_id": 180, "text": "\\N" }, { "math_id": 181, "text": "\\mathcal{B} = \\{[e, \\infty) ~:~ e \\in E\\} \\qquad \\text{ and } \\qquad \\mathcal{C}_{\\operatorname{open}} = \\{(-\\infty, e) \\cup (1 + e, \\infty) ~:~ e \\in E\\} \\qquad \\text{ and } \\qquad \\mathcal{C}_{\\operatorname{closed}} = \\{(-\\infty, e] \\cup [1 + e, \\infty) ~:~ e \\in E\\}." }, { "math_id": 182, "text": "\\mathcal{C}_{\\operatorname{closed}}" }, { "math_id": 183, "text": "\\mathcal{C}_{\\operatorname{open}}" }, { "math_id": 184, "text": "E = \\N" }, { "math_id": 185, "text": "\\mathcal{C}_{\\operatorname{closed}} \\leq \\mathcal{C}_{\\operatorname{open}} \\leq \\mathcal{B}," }, { "math_id": 186, "text": "\\mathcal{C}_{\\operatorname{open}}," }, { "math_id": 187, "text": "\\Q \\text{ or } \\R" }, { "math_id": 188, "text": "\\mathcal{C}_{\\operatorname{closed}} \\text{ and } \\mathcal{C}_{\\operatorname{open}}" }, { "math_id": 189, "text": "X \\text{ and } S \\subseteq X" }, { "math_id": 190, "text": "\\mathcal{B}\\big\\vert_S := \\mathcal{B} (\\cap) \\{S\\}," }, { "math_id": 191, "text": "\\varnothing \\not\\in \\mathcal{B} (\\cap) \\{S\\}" }, { "math_id": 192, "text": "\\mathcal{B}\\big\\vert_S" }, { "math_id": 193, "text": "S \\in \\mathcal{B}." }, { "math_id": 194, "text": "S \\neq X \\text{ and } X \\setminus S \\not\\in \\mathcal{B}." }, { "math_id": 195, "text": "\\mathcal{B} \\cup \\{S\\}" }, { "math_id": 196, "text": "\\mathcal{B} \\text{ and } \\mathcal{C}," }, { "math_id": 197, "text": "\\mathcal{B} (\\cap) \\mathcal{C} := \\{B \\cap C ~:~ B \\in \\mathcal{B} \\text{ and } C \\in \\mathcal{C}\\}" }, { "math_id": 198, "text": "\\mathcal{C} \\leq \\mathcal{B} (\\cap) \\mathcal{C}" }, { "math_id": 199, "text": "\\mathcal{B} \\leq \\mathcal{B} (\\cap) \\mathcal{C}." }, { "math_id": 200, "text": "\\mathcal{B} (\\cap) \\mathcal{C}" }, { "math_id": 201, "text": "\\mathcal{B} \\text{ and } \\mathcal{C}, \\mathcal{B} (\\cap) \\mathcal{C}" }, { "math_id": 202, "text": "\\varnothing \\not\\in \\mathcal{B} (\\cap) \\mathcal{C}," }, { "math_id": 203, "text": "\\mathcal{B} \\leq \\mathcal{F}." }, { "math_id": 204, "text": "\\mathcal{B} (\\cap) \\mathcal{C}." }, { "math_id": 205, "text": "f : X \\to Y \\text{ and } g : Y \\to Z" }, { "math_id": 206, "text": "\\mathcal{B} \\subseteq \\wp(Y)." }, { "math_id": 207, "text": "\\mathcal{B} \\text{ on } Y," }, { "math_id": 208, "text": "g(\\mathcal{B}) \\text{ on } g(Y)" }, { "math_id": 209, "text": "Z" }, { "math_id": 210, "text": "g" }, { "math_id": 211, "text": "\\mathcal{B} \\subseteq \\wp(Y)" }, { "math_id": 212, "text": "g(\\mathcal{B}) \\text{ and } g^{-1}(g(\\mathcal{B}))." }, { "math_id": 213, "text": "f : X \\to Y" }, { "math_id": 214, "text": "f(\\mathcal{B})." }, { "math_id": 215, "text": "g(\\mathcal{B})" }, { "math_id": 216, "text": "g(Y)," }, { "math_id": 217, "text": "g(\\mathcal{B}) \\text{ in } Z" }, { "math_id": 218, "text": "g(\\mathcal{B})^{\\uparrow Z} = \\left\\{S \\subseteq Z ~:~ B \\subseteq g^{-1}(S) \\text{ for some } B \\in \\mathcal{B} \\right\\}" }, { "math_id": 219, "text": "g(\\mathcal{B})^{\\uparrow Z} = \\left\\{S \\subseteq Z ~:~ g^{-1}(S) \\in \\mathcal{B} \\right\\}." 
}, { "math_id": 220, "text": "X \\subseteq Y" }, { "math_id": 221, "text": "X \\to Y" }, { "math_id": 222, "text": "Y." }, { "math_id": 223, "text": "f^{-1}(\\mathcal{B})" }, { "math_id": 224, "text": "\\mathcal{B} \\text{ on } f(X)" }, { "math_id": 225, "text": "\\mathcal{B}\\big\\vert_{f(X)}," }, { "math_id": 226, "text": "\\mathcal{B}\\big\\vert_{f(X)} = f\\left(f^{-1}(\\mathcal{B})\\right)" }, { "math_id": 227, "text": "f^{-1}(\\mathcal{B}) = f^{-1}\\left(\\mathcal{B}\\big\\vert_{f(X)}\\right)." }, { "math_id": 228, "text": "\\mathcal{B}\\big\\vert_{f(X)}" }, { "math_id": 229, "text": "f(X)" }, { "math_id": 230, "text": "f^{-1}(\\mathcal{B})," }, { "math_id": 231, "text": "f : X \\to f(X)" }, { "math_id": 232, "text": "f : X \\to Y." }, { "math_id": 233, "text": "\\mathcal{B}\\big\\vert_{f(X)}." }, { "math_id": 234, "text": "Y," }, { "math_id": 235, "text": "\\varnothing \\in \\mathcal{B}\\big\\vert_{f(X)}," }, { "math_id": 236, "text": "f\\left(f^{-1}(\\mathcal{B})\\right)." }, { "math_id": 237, "text": "S \\subseteq Y" }, { "math_id": 238, "text": "\\operatorname{In} : S \\to Y" }, { "math_id": 239, "text": "\\operatorname{In}^{-1}(\\mathcal{B})." }, { "math_id": 240, "text": "\\mathcal{B} \\subseteq \\wp(Y) \\text{ and } g : Y \\to Z" }, { "math_id": 241, "text": "g(\\mathcal{B}) \\text{ on } Z." }, { "math_id": 242, "text": "g : Y \\to Z" }, { "math_id": 243, "text": "\\mathcal{B} \\text{ on } Y, \\mathcal{B}" }, { "math_id": 244, "text": "g^{-1}(g(\\mathcal{B}))." }, { "math_id": 245, "text": "f^{-1}(\\mathcal{B}) \\text{ on } X" }, { "math_id": 246, "text": "\\mathcal{C} \\leq \\mathcal{F} \\quad \\text{ implies } \\quad g(\\mathcal{C}) \\leq g(\\mathcal{F}) \\quad \\text{ and } \\quad f^{-1}(\\mathcal{C}) \\leq f^{-1}(\\mathcal{F})." }, { "math_id": 247, "text": "\\mathcal{C} \\leq f\\left(f^{-1}(\\mathcal{C})\\right)" }, { "math_id": 248, "text": "f^{-1}(\\mathcal{C}) = f^{-1}\\left(f\\left(f^{-1}(\\mathcal{C})\\right)\\right) \\quad \\text{ and } \\quad g(\\mathcal{C}) = g\\left(g^{-1}(g(\\mathcal{C}))\\right)." }, { "math_id": 249, "text": "\\mathcal{B} \\subseteq \\wp(X) \\text{ and } \\mathcal{C} \\subseteq \\wp(Y)" }, { "math_id": 250, "text": "f(\\mathcal{B}) \\leq \\mathcal{C} \\quad \\text{ if and only if } \\quad \\mathcal{B} \\leq f^{-1}(\\mathcal{C})" }, { "math_id": 251, "text": "g^{-1}(g(\\mathcal{C})) \\leq \\mathcal{C}" }, { "math_id": 252, "text": "X_{\\bull} = \\left(X_i\\right)_{i \\in I}" }, { "math_id": 253, "text": "\\prod X_{\\bull} := \\prod_{i \\in I} X_i," }, { "math_id": 254, "text": "i \\in I," }, { "math_id": 255, "text": "\\Pr{}_{X_i} : \\prod X_{\\bull} \\to X_i" }, { "math_id": 256, "text": "\\mathcal{B}_{\\bull} := \\left(\\mathcal{B}_i\\right)_{i \\in I}" }, { "math_id": 257, "text": "I," }, { "math_id": 258, "text": "\\mathcal{B}_i \\subseteq \\wp\\left(X_i\\right)" }, { "math_id": 259, "text": "i \\in I." }, { "math_id": 260, "text": "\\mathcal{B}_{\\bull}" }, { "math_id": 261, "text": "\\mathcal{B}_i" }, { "math_id": 262, "text": "\\prod_{} \\mathcal{B}_{\\bull} = \\prod_{i \\in I} \\mathcal{B}_i" }, { "math_id": 263, "text": "\\prod_{i \\in I} S_i \\subseteq \\prod_{} X_{\\bull}" }, { "math_id": 264, "text": "S_i = X_i" }, { "math_id": 265, "text": "S_i \\in \\mathcal{B}_i" }, { "math_id": 266, "text": "S_i \\neq X_i," }, { "math_id": 267, "text": "\\bigcup_{i \\in I} \\Pr{}_{X_i}^{-1} \\left(\\mathcal{B}_i\\right)" }, { "math_id": 268, "text": "\\prod X_{\\bull}" }, { "math_id": 269, "text": "\\mathcal{B}_{\\bull}." 
}, { "math_id": 270, "text": "\\prod \\mathcal{B}_{\\bull}" }, { "math_id": 271, "text": "X_i" }, { "math_id": 272, "text": "\\mathcal{F} \\text{ on } \\prod X_{\\bull}" }, { "math_id": 273, "text": "\\Pr{}_{X_i} (\\mathcal{F}) = \\mathcal{B}_i" }, { "math_id": 274, "text": "X_i." }, { "math_id": 275, "text": "X, S \\subseteq \\ker \\mathcal{B}, \\text{ and } S \\not\\in \\mathcal{B}" }, { "math_id": 276, "text": "\\{B \\setminus S ~:~ B \\in \\mathcal{B}\\}" }, { "math_id": 277, "text": "S = \\varnothing." }, { "math_id": 278, "text": "\\{B \\setminus \\{x\\} ~:~ B \\in \\mathcal{B}\\}" }, { "math_id": 279, "text": "\\lim_{\\stackrel{x \\to x_0}{x \\neq x_0}} f(x) \\to y" }, { "math_id": 280, "text": "\\mathcal{B} \\vartriangleleft \\mathcal{C}" }, { "math_id": 281, "text": "\\mathcal{C} \\vartriangleright \\mathcal{B}," }, { "math_id": 282, "text": "C \\in \\mathcal{C}." }, { "math_id": 283, "text": "B \\subseteq C." }, { "math_id": 284, "text": "(X \\setminus \\mathcal{B}) \\leq (X \\setminus \\mathcal{C})." }, { "math_id": 285, "text": "\\Xi \\subseteq \\wp(Y)." }, { "math_id": 286, "text": "\\Xi_f := \\{I \\subseteq X ~:~ f(I) \\in \\Xi\\}" }, { "math_id": 287, "text": "\\Xi" }, { "math_id": 288, "text": "\\Xi_f" }, { "math_id": 289, "text": "\\Xi_f." }, { "math_id": 290, "text": "\\Xi := Y \\setminus \\mathcal{B}" }, { "math_id": 291, "text": "X \\not\\in \\Xi_f" }, { "math_id": 292, "text": "X \\setminus \\Xi_f" }, { "math_id": 293, "text": "\\mathcal{B}_{\\operatorname{Open}}" }, { "math_id": 294, "text": "\\mathcal{B}_{\\operatorname{Open}}." }, { "math_id": 295, "text": "x_{\\bull} = \\left(x_i\\right)_{i \\in I} \\text{ in } X" }, { "math_id": 296, "text": "\\operatorname{Tails}\\left(x_{\\bull}\\right)." }, { "math_id": 297, "text": "\\operatorname{Tails}\\left(f\\left(x_{\\bull}\\right)\\right) = f\\left(\\operatorname{Tails}\\left(x_{\\bull}\\right)\\right)." }, { "math_id": 298, "text": "(S, s)" }, { "math_id": 299, "text": "s \\in S." }, { "math_id": 300, "text": "\\operatorname{PointedSets}(\\mathcal{B}) := \\left\\{(B, b) ~:~ B \\in \\mathcal{B} \\text{ and } b \\in B \\right\\}." }, { "math_id": 301, "text": "(R, r) \\leq (S, s) \\quad \\text{ if and only if } \\quad R \\supseteq S." }, { "math_id": 302, "text": "s_0, s_1 \\in S \\text{ then } \\left(S, s_0\\right) \\leq \\left(S, s_1\\right) \\text{ and } \\left(S, s_1\\right) \\leq \\left(S, s_0\\right)" }, { "math_id": 303, "text": "s_0 \\neq s_1," }, { "math_id": 304, "text": "(\\operatorname{PointedSets}(\\mathcal{B}), \\leq)" }, { "math_id": 305, "text": "\\{x\\} \\in \\mathcal{B} \\text{ then } (\\{x\\}, x)" }, { "math_id": 306, "text": "\\operatorname{PointedSets}(\\mathcal{B})" }, { "math_id": 307, "text": "\\left(B, b_0\\right) \\in \\operatorname{PointedSets}(\\mathcal{B}) \\text{ then } \\left(B, b_0\\right)" }, { "math_id": 308, "text": "B = \\ker \\mathcal{B}," }, { "math_id": 309, "text": "\\{(B, b) ~:~ b \\in B\\}" }, { "math_id": 310, "text": "(B, b)" }, { "math_id": 311, "text": "B = \\{b\\} = \\ker \\mathcal{B}," }, { "math_id": 312, "text": "\\operatorname{Point}_{\\mathcal{B}} ~:~ \\operatorname{PointedSets}(\\mathcal{B}) \\to X" }, { "math_id": 313, "text": "(B, b) \\mapsto b." 
}, { "math_id": 314, "text": "i_0 = \\left(B_0, b_0\\right) \\in \\operatorname{PointedSets}(\\mathcal{B})" }, { "math_id": 315, "text": "\\operatorname{Point}_{\\mathcal{B}}" }, { "math_id": 316, "text": "i_0" }, { "math_id": 317, "text": "\\left\\{c ~:~ (C, c) \\in \\operatorname{PointedSets}(\\mathcal{B}) \\text{ and } \\left(B_0, b_0\\right) \\leq (C, c) \\right\\} = B_0." }, { "math_id": 318, "text": "(B, b) \\mapsto b" }, { "math_id": 319, "text": "\\begin{alignat}{4}\n\\operatorname{Net}_{\\mathcal{B}} :\\;&& (\\operatorname{PointedSets}(\\mathcal{B}), \\leq) &&\\,\\to \\;& X \\\\\n && (B, b) &&\\,\\mapsto\\;& b \\\\\n\\end{alignat}" }, { "math_id": 320, "text": "\\operatorname{Net}_{\\mathcal{B}}(B, b) := b." }, { "math_id": 321, "text": "X \\text{ then } \\operatorname{Net}_{\\mathcal{B}}" }, { "math_id": 322, "text": "\\operatorname{Net}_{\\mathcal{B}}" }, { "math_id": 323, "text": "\\operatorname{Tails}\\left(\\operatorname{Net}_{\\mathcal{B}}\\right) = \\mathcal{B}." }, { "math_id": 324, "text": "\\operatorname{PointedSets}(\\mathcal{B})." }, { "math_id": 325, "text": "\\mathcal{B} := \\{X\\}" }, { "math_id": 326, "text": "D := \\{(X, x)\\}," }, { "math_id": 327, "text": "D" }, { "math_id": 328, "text": "\\operatorname{Net}_D : D \\to X," }, { "math_id": 329, "text": "\\operatorname{Net}_D : D \\to X" }, { "math_id": 330, "text": "\\{\\, \\{x\\} \\,\\}" }, { "math_id": 331, "text": "\\operatorname{Tails}\\left(\\operatorname{Net}_D\\right) = \\mathcal{B}" }, { "math_id": 332, "text": "\\operatorname{Net}_{\\mathcal{B}}," }, { "math_id": 333, "text": "\\operatorname{Net}_D." }, { "math_id": 334, "text": "\\operatorname{Tails}\\left(\\operatorname{Net}_D\\right) = \\{\\{x\\}\\}" }, { "math_id": 335, "text": "\\operatorname{Net}_{\\operatorname{Tails}\\left(x_{\\bull}\\right)}" }, { "math_id": 336, "text": "\\operatorname{Net}_{\\operatorname{Tails}\\left(x_{\\bull}\\right)}," }, { "math_id": 337, "text": "x_{\\bull} \\text{ in } X" }, { "math_id": 338, "text": "S \\subseteq X, x_{\\bull}" }, { "math_id": 339, "text": "X \\setminus S" }, { "math_id": 340, "text": "\\mathcal{B} \\times \\N \\times X" }, { "math_id": 341, "text": "\\mathcal{B} \\times \\N" }, { "math_id": 342, "text": "(\\mathcal{B}, \\supsetneq) \\text{ and } (\\N, <)." }, { "math_id": 343, "text": "i = (B, m, b) \\text{ and } j = (C, n, c)" }, { "math_id": 344, "text": "\\mathcal{B} \\times \\N \\times X," }, { "math_id": 345, "text": "B \\supseteq C \\text{ and either: } \\text{(1) } B \\neq C \\text{ or else (2) } B = C \\text{ and } m < n," }, { "math_id": 346, "text": "\\text{(1) } B \\supseteq C, \\text{ and (2) if } B = C \\text{ then } m < n." }, { "math_id": 347, "text": "\\,<," }, { "math_id": 348, "text": "i \\leq j\\, \\text{ if and only if } i < j \\text{ or } i = j." 
}, { "math_id": 349, "text": "\\text{(1) } B \\supseteq C, \\text{ and (2) if } B = C \\text{ then } m \\leq n," }, { "math_id": 350, "text": "\\text{(3) if } B = C \\text{ and } m = n \\text{ then } b = c," }, { "math_id": 351, "text": "(\\mathcal{B}, \\supseteq), \\,(\\N, \\leq), \\text{ and } (X, =)," }, { "math_id": 352, "text": "\\,=.\\," }, { "math_id": 353, "text": "\\,< \\text{ and } \\leq\\," }, { "math_id": 354, "text": "\\begin{alignat}{4}\n\\operatorname{Poset}_{\\mathcal{B}} \n\\;&:=\\; \\{\\, (B, m, b) \\;\\in\\; \\mathcal{B} \\times \\N \\times X ~:~ b \\in B \\,\\}, \\\\\n\\end{alignat}" }, { "math_id": 355, "text": "i = (B, m, b) \\mapsto b" }, { "math_id": 356, "text": "\\begin{alignat}{4}\n\\operatorname{PosetNet}_{\\mathcal{B}}\\ :\\ &&\\ \\operatorname{Poset}_{\\mathcal{B}}\\ &&\\,\\to \\;& X \\\\[0.5ex]\n &&\\ (B, m, b) \\ &&\\,\\mapsto\\;& b \\\\[0.5ex]\n\\end{alignat}" }, { "math_id": 357, "text": "i_0 = \\left(B_0, m_0, b_0\\right) \\in \\operatorname{Poset}_{\\mathcal{B}}" }, { "math_id": 358, "text": "\\operatorname{PosetNet}_{\\mathcal{B}}" }, { "math_id": 359, "text": "B_0." }, { "math_id": 360, "text": "\\operatorname{Poset}_{\\mathcal{B}}" }, { "math_id": 361, "text": "\\operatorname{Tails}\\left(\\operatorname{PosetNet}_{\\mathcal{B}}\\right) = \\mathcal{B}." }, { "math_id": 362, "text": "\\operatorname{PosetNet}_{\\mathcal{B}} \\text{ and } \\operatorname{Net}_{\\mathcal{B}}" }, { "math_id": 363, "text": "<" }, { "math_id": 364, "text": "\\mathcal{B} \\vdash \\mathcal{C}" }, { "math_id": 365, "text": "x_{n_{\\bull}} = \\left(x_{n_i}\\right)_{i=1}^{\\infty}" }, { "math_id": 366, "text": "x_{\\bull} = \\left(x_i\\right)_{i=1}^{\\infty}" }, { "math_id": 367, "text": "\\operatorname{Tails}\\left(x_{\\bull}\\right) = \\left\\{x_{\\geq i} : i \\in \\N \\right\\}" }, { "math_id": 368, "text": "\\operatorname{Tails}\\left(x_{n_{\\bull}}\\right) = \\left\\{x_{n_{\\geq i}} : i \\in \\N \\right\\}" }, { "math_id": 369, "text": "x_{n_{\\bull}}" }, { "math_id": 370, "text": "x_{n_{\\geq i}} := \\left\\{x_{n_i} ~:~ i \\in \\N \\right\\}" }, { "math_id": 371, "text": "\\operatorname{Tails}\\left(x_{n_{\\bull}}\\right) ~\\vdash~ \\operatorname{Tails}\\left(x_{\\bull}\\right)" }, { "math_id": 372, "text": "\\operatorname{Tails}\\left(x_{\\bull}\\right) \\leq \\operatorname{Tails}\\left(x_{n_{\\bull}}\\right)" }, { "math_id": 373, "text": "\\operatorname{Tails}\\left(x_{\\bull}\\right) ~\\vdash~ \\operatorname{Tails}\\left(x_{n_{\\bull}}\\right)" }, { "math_id": 374, "text": "R \\subseteq I" }, { "math_id": 375, "text": "r \\in R \\text{ such that } i \\leq r." }, { "math_id": 376, "text": "i \\in I \\text{ such that } I_{\\geq i} \\subseteq R" }, { "math_id": 377, "text": "j \\in R \\text{ for all } j \\in I \\text{ satisfying } i \\leq j" }, { "math_id": 378, "text": "h : A \\to I" }, { "math_id": 379, "text": "a, b \\in A \\text{ satisfy } a \\leq b, \\text{ then } h(a) \\leq h(b)." }, { "math_id": 380, "text": "S = S_{\\bull} ~:~ (A, \\leq) \\to X \\text{ and } N = N_{\\bull} ~:~ (I, \\leq) \\to X" }, { "math_id": 381, "text": "h" }, { "math_id": 382, "text": "y_{\\bull} = \\left(y_a\\right)_{a \\in A}" }, { "math_id": 383, "text": "\\operatorname{Tails}\\left(x_{\\bull}\\right) \\leq \\operatorname{Tails}\\left(y_{\\bull}\\right)." }, { "math_id": 384, "text": "\\mathcal{B} \\leq \\mathcal{F} \\text{ if and only if } \\operatorname{Net}_{\\mathcal{F}}" }, { "math_id": 385, "text": "\\;\\operatorname{Net}_{\\mathcal{B}}." 
}, { "math_id": 386, "text": "\\mathcal{B} \\leq \\mathcal{F} \\text{ then } \\operatorname{Net}_{\\mathcal{F}}" } ]
https://en.wikipedia.org/wiki?curid=12646549
126474
Symmetric matrix
Matrix equal to its transpose In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, formula_0 Because equal matrices have equal dimensions, only square matrices can be symmetric. The entries of a symmetric matrix are symmetric with respect to the main diagonal. So if formula_1 denotes the entry in the formula_2th row and formula_3th column then formula_4 for all indices formula_2 and formula_5 Every square diagonal matrix is symmetric, since all off-diagonal elements are zero. Similarly in characteristic different from 2, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative. In linear algebra, a real symmetric matrix represents a self-adjoint operator expressed in an orthonormal basis over a real inner product space. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Therefore, in linear algebra over the complex numbers, it is often assumed that a symmetric matrix refers to one which has real-valued entries. Symmetric matrices appear naturally in a variety of applications, and typical numerical linear algebra software makes special accommodations for them. Example. The following formula_6 matrix is symmetric: formula_7 since formula_8. Properties. Decomposition into symmetric and skew-symmetric. Any square matrix can uniquely be written as the sum of a symmetric and a skew-symmetric matrix. This decomposition is known as the Toeplitz decomposition. Let formula_16 denote the space of formula_17 matrices. If formula_18 denotes the space of formula_17 symmetric matrices and formula_19 the space of formula_17 skew-symmetric matrices then formula_20 and formula_21, i.e. formula_22 where formula_23 denotes the direct sum. Let formula_24; then formula_25 Notice that formula_26 and formula_27. This is true for every square matrix formula_28 with entries from any field whose characteristic is different from 2. A symmetric formula_17 matrix is determined by formula_29 scalars (the number of entries on or above the main diagonal). Similarly, a skew-symmetric matrix is determined by formula_30 scalars (the number of entries above the main diagonal). Matrix congruent to a symmetric matrix. Any matrix congruent to a symmetric matrix is again symmetric: if formula_28 is a symmetric matrix, then so is formula_31 for any matrix formula_9. Symmetry implies normality. A (real-valued) symmetric matrix is necessarily a normal matrix. Real symmetric matrices. Denote by formula_32 the standard inner product on formula_33. The real formula_17 matrix formula_9 is symmetric if and only if formula_34 Since this definition is independent of the choice of basis, symmetry is a property that depends only on the linear operator A and a choice of inner product. This characterization of symmetry is useful, for example, in differential geometry, for each tangent space to a manifold may be endowed with an inner product, giving rise to what is called a Riemannian manifold. Another area where this formulation is used is in Hilbert spaces. The finite-dimensional spectral theorem says that any symmetric matrix whose entries are real can be diagonalized by an orthogonal matrix. More explicitly: For every real symmetric matrix formula_9 there exists a real orthogonal matrix formula_35 such that formula_36 is a diagonal matrix. Every real symmetric matrix is thus, up to choice of an orthonormal basis, a diagonal matrix. 
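As a quick numerical illustration of the spectral theorem (a sketch only; the randomly generated test matrix is an assumption, not part of the text), NumPy's numpy.linalg.eigh is designed for symmetric input and returns real eigenvalues together with an orthogonal matrix of eigenvectors, so the relation formula_36 can be verified directly:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                    # symmetrize an arbitrary random matrix

eigenvalues, Q = np.linalg.eigh(A)   # eigh is intended for symmetric input
D = np.diag(eigenvalues)

print(np.allclose(Q @ Q.T, np.eye(4)))   # Q is orthogonal
print(np.allclose(Q.T @ A @ Q, D))       # Q^T A Q is diagonal
print(np.allclose(A, Q @ D @ Q.T))       # A is recovered from its spectral data
```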
If formula_9 and formula_10 are formula_17 real symmetric matrices that commute, then they can be simultaneously diagonalized by an orthogonal matrix: there exists a basis of formula_33 such that every element of the basis is an eigenvector for both formula_9 and formula_10. Every real symmetric matrix is Hermitian, and therefore all its eigenvalues are real. (In fact, the eigenvalues are the entries in the diagonal matrix formula_37 (above), and therefore formula_37 is uniquely determined by formula_9 up to the order of its entries.) Essentially, the property of being symmetric for real matrices corresponds to the property of being Hermitian for complex matrices. Complex symmetric matrices. A complex symmetric matrix can be 'diagonalized' using a unitary matrix: thus if formula_9 is a complex symmetric matrix, there is a unitary matrix formula_38 such that formula_39 is a real diagonal matrix with non-negative entries. This result is referred to as the Autonne–Takagi factorization. It was originally proved by Léon Autonne (1915) and Teiji Takagi (1925) and rediscovered with different proofs by several other mathematicians. In fact, the matrix formula_40 is Hermitian and positive semi-definite, so there is a unitary matrix formula_41 such that formula_42 is diagonal with non-negative real entries. Thus formula_43 is complex symmetric with formula_44 real. Writing formula_45 with formula_28 and formula_46 real symmetric matrices, formula_47. Thus formula_48. Since formula_28 and formula_46 commute, there is a real orthogonal matrix formula_49 such that both formula_50 and formula_51 are diagonal. Setting formula_52 (a unitary matrix), the matrix formula_53 is complex diagonal. Pre-multiplying formula_38 by a suitable diagonal unitary matrix (which preserves unitarity of formula_38), the diagonal entries of formula_53 can be made to be real and non-negative as desired. To construct this matrix, we express the diagonal matrix as formula_54. The matrix we seek is simply given by formula_55. Clearly formula_56 as desired, so we make the modification formula_57. Since the squares of these diagonal entries are the eigenvalues of formula_58, they coincide with the singular values of formula_9. (Note that, regarding the eigendecomposition of a complex symmetric matrix formula_9, the Jordan normal form of formula_9 may not be diagonal, and therefore formula_9 may not be diagonalizable by any similarity transformation.) Decomposition. Using the Jordan normal form, one can prove that every square real matrix can be written as a product of two real symmetric matrices, and every square complex matrix can be written as a product of two complex symmetric matrices. Every real non-singular matrix can be uniquely factored as the product of an orthogonal matrix and a symmetric positive definite matrix, which is called a polar decomposition. Singular matrices can also be factored, but not uniquely. Cholesky decomposition states that every real positive-definite symmetric matrix formula_9 is a product of a lower-triangular matrix formula_59 and its transpose, formula_60 If the matrix is symmetric indefinite, it may still be decomposed as formula_61 where formula_62 is a permutation matrix (arising from the need to pivot), formula_59 a lower unit triangular matrix, and formula_37 is a direct sum of symmetric formula_63 and formula_64 blocks, which is called the Bunch–Kaufman decomposition. A general (complex) symmetric matrix may be defective and thus not be diagonalizable. 
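Before turning to the general (possibly defective) case, the Cholesky factorization mentioned above can be checked numerically. The short sketch below (the test matrix is an arbitrary illustrative choice, not from the text) builds a symmetric positive-definite matrix with NumPy and verifies the factorization formula_60

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)         # symmetric positive definite by construction

L = np.linalg.cholesky(A)           # lower-triangular Cholesky factor
print(np.allclose(L, np.tril(L)))   # True: L is lower triangular
print(np.allclose(A, L @ L.T))      # True: A = L L^T
```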
If formula_9 is diagonalizable it may be decomposed as formula_65 where formula_35 is an orthogonal matrix formula_66, and formula_67 is a diagonal matrix of the eigenvalues of formula_9. In the special case that formula_9 is real symmetric, then formula_35 and formula_67 are also real. To see orthogonality, suppose formula_68 and formula_69 are eigenvectors corresponding to distinct eigenvalues formula_70, formula_71. Then formula_72 Since formula_70 and formula_71 are distinct, we have formula_73. Hessian. Symmetric formula_17 matrices of real functions appear as the Hessians of twice differentiable functions of formula_13 real variables (the continuity of the second derivative is not needed, despite common belief to the opposite). Every quadratic form formula_74 on formula_33 can be uniquely written in the form formula_75 with a symmetric formula_17 matrix formula_9. Because of the above spectral theorem, one can then say that every quadratic form, up to the choice of an orthonormal basis of formula_76, "looks like" formula_77 with real numbers formula_78. This considerably simplifies the study of quadratic forms, as well as the study of the level sets formula_79 which are generalizations of conic sections. This is important partly because the second-order behavior of every smooth multi-variable function is described by the quadratic form belonging to the function's Hessian; this is a consequence of Taylor's theorem. Symmetrizable matrix. An formula_17 matrix formula_9 is said to be symmetrizable if there exists an invertible diagonal matrix formula_37 and symmetric matrix formula_80 such that formula_81 The transpose of a symmetrizable matrix is symmetrizable, since formula_82 and formula_83 is symmetric. A matrix formula_84 is symmetrizable if and only if the following conditions are met: See also. Other types of symmetry or pattern in square matrices have special names; see for example: &lt;templatestyles src="Div col/styles.css"/&gt; See also symmetry in mathematics. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "A \\text{ is symmetric} \\iff A = A^\\textsf{T}." }, { "math_id": 1, "text": "a_{ij}" }, { "math_id": 2, "text": "i" }, { "math_id": 3, "text": "j" }, { "math_id": 4, "text": "A \\text{ is symmetric} \\iff \\text{ for every }i,j,\\quad a_{ji} = a_{ij}" }, { "math_id": 5, "text": "j." }, { "math_id": 6, "text": "3 \\times 3" }, { "math_id": 7, "text": "A =\n \\begin{bmatrix}\n 1 & 7 & 3 \\\\\n 7 & 4 & 5 \\\\\n 3 & 5 & 2\n \\end{bmatrix}" }, { "math_id": 8, "text": "A=A^\\textsf{T}" }, { "math_id": 9, "text": "A" }, { "math_id": 10, "text": "B" }, { "math_id": 11, "text": "AB" }, { "math_id": 12, "text": "AB=BA" }, { "math_id": 13, "text": "n" }, { "math_id": 14, "text": "A^n" }, { "math_id": 15, "text": "A^{-1}" }, { "math_id": 16, "text": "\\mbox{Mat}_n" }, { "math_id": 17, "text": "n \\times n" }, { "math_id": 18, "text": "\\mbox{Sym}_n" }, { "math_id": 19, "text": "\\mbox{Skew}_n" }, { "math_id": 20, "text": "\\mbox{Mat}_n = \\mbox{Sym}_n + \\mbox{Skew}_n" }, { "math_id": 21, "text": "\\mbox{Sym}_n \\cap \\mbox{Skew}_n = \\{0\\}" }, { "math_id": 22, "text": "\\mbox{Mat}_n = \\mbox{Sym}_n \\oplus \\mbox{Skew}_n , " }, { "math_id": 23, "text": "\\oplus" }, { "math_id": 24, "text": "X \\in \\mbox{Mat}_n" }, { "math_id": 25, "text": "X = \\frac{1}{2}\\left(X + X^\\textsf{T}\\right) + \\frac{1}{2}\\left(X - X^\\textsf{T}\\right)." }, { "math_id": 26, "text": "\\frac{1}{2}\\left(X + X^\\textsf{T}\\right) \\in \\mbox{Sym}_n" }, { "math_id": 27, "text": "\\frac{1}{2} \\left(X - X^\\textsf{T}\\right) \\in \\mathrm{Skew}_n" }, { "math_id": 28, "text": "X" }, { "math_id": 29, "text": "\\tfrac{1}{2}n(n+1)" }, { "math_id": 30, "text": "\\tfrac{1}{2}n(n-1)" }, { "math_id": 31, "text": "A X A^{\\mathrm T}" }, { "math_id": 32, "text": "\\langle \\cdot,\\cdot \\rangle" }, { "math_id": 33, "text": "\\mathbb{R}^n" }, { "math_id": 34, "text": "\\langle Ax, y \\rangle = \\langle x, Ay \\rangle \\quad \\forall x, y \\in \\mathbb{R}^n." }, { "math_id": 35, "text": "Q" }, { "math_id": 36, "text": "D = Q^{\\mathrm T} A Q" }, { "math_id": 37, "text": "D" }, { "math_id": 38, "text": "U" }, { "math_id": 39, "text": "U A U^{\\mathrm T}" }, { "math_id": 40, "text": "B=A^{\\dagger} A" }, { "math_id": 41, "text": "V" }, { "math_id": 42, "text": "V^{\\dagger} B V" }, { "math_id": 43, "text": "C=V^{\\mathrm T} A V" }, { "math_id": 44, "text": "C^{\\dagger}C" }, { "math_id": 45, "text": "C=X+iY" }, { "math_id": 46, "text": "Y" }, { "math_id": 47, "text": "C^{\\dagger}C=X^2+Y^2+i(XY-YX)" }, { "math_id": 48, "text": "XY=YX" }, { "math_id": 49, "text": "W" }, { "math_id": 50, "text": "W X W^{\\mathrm T}" }, { "math_id": 51, "text": "W Y W^{\\mathrm T}" }, { "math_id": 52, "text": "U=W V^{\\mathrm T}" }, { "math_id": 53, "text": "UAU^{\\mathrm T}" }, { "math_id": 54, "text": "UAU^\\mathrm T = \\operatorname{diag}(r_1 e^{i\\theta_1},r_2 e^{i\\theta_2}, \\dots, r_n e^{i\\theta_n})" }, { "math_id": 55, "text": "D = \\operatorname{diag}(e^{-i\\theta_1/2},e^{-i\\theta_2/2}, \\dots, e^{-i\\theta_n/2})" }, { "math_id": 56, "text": "DUAU^\\mathrm TD = \\operatorname{diag}(r_1, r_2, \\dots, r_n)" }, { "math_id": 57, "text": "U' = DU" }, { "math_id": 58, "text": "A^{\\dagger} A" }, { "math_id": 59, "text": "L" }, { "math_id": 60, "text": "A = LL^\\textsf{T}." 
}, { "math_id": 61, "text": "PAP^\\textsf{T} = LDL^\\textsf{T}" }, { "math_id": 62, "text": "P" }, { "math_id": 63, "text": "1 \\times 1" }, { "math_id": 64, "text": "2 \\times 2" }, { "math_id": 65, "text": "A = Q \\Lambda Q^\\textsf{T}" }, { "math_id": 66, "text": "Q Q^\\textsf{T} = I" }, { "math_id": 67, "text": "\\Lambda" }, { "math_id": 68, "text": "\\mathbf x" }, { "math_id": 69, "text": "\\mathbf y" }, { "math_id": 70, "text": "\\lambda_1" }, { "math_id": 71, "text": "\\lambda_2" }, { "math_id": 72, "text": "\\lambda_1 \\langle \\mathbf x, \\mathbf y \\rangle = \\langle A \\mathbf x, \\mathbf y \\rangle = \\langle \\mathbf x, A \\mathbf y \\rangle = \\lambda_2 \\langle \\mathbf x, \\mathbf y \\rangle." }, { "math_id": 73, "text": "\\langle \\mathbf x, \\mathbf y \\rangle = 0" }, { "math_id": 74, "text": "q" }, { "math_id": 75, "text": "q(\\mathbf{x}) = \\mathbf{x}^\\textsf{T} A \\mathbf{x}" }, { "math_id": 76, "text": "\\R^n" }, { "math_id": 77, "text": "q\\left(x_1, \\ldots, x_n\\right) = \\sum_{i=1}^n \\lambda_i x_i^2" }, { "math_id": 78, "text": "\\lambda_i" }, { "math_id": 79, "text": "\\left\\{ \\mathbf{x} : q(\\mathbf{x}) = 1 \\right\\}" }, { "math_id": 80, "text": "S" }, { "math_id": 81, "text": "A = DS." }, { "math_id": 82, "text": "A^{\\mathrm T}=(DS)^{\\mathrm T}=SD=D^{-1}(DSD)" }, { "math_id": 83, "text": "DSD" }, { "math_id": 84, "text": "A=(a_{ij})" }, { "math_id": 85, "text": "a_{ij} = 0" }, { "math_id": 86, "text": "a_{ji} = 0" }, { "math_id": 87, "text": "1 \\le i \\le j \\le n." }, { "math_id": 88, "text": "a_{i_1 i_2} a_{i_2 i_3} \\dots a_{i_k i_1} = a_{i_2 i_1} a_{i_3 i_2} \\dots a_{i_1 i_k}" }, { "math_id": 89, "text": "\\left(i_1, i_2, \\dots, i_k\\right)." } ]
https://en.wikipedia.org/wiki?curid=126474
12648196
Neutral vector
In statistics, and specifically in the study of the Dirichlet distribution, a neutral vector of random variables is one that exhibits a particular type of statistical independence amongst its elements. In particular, when elements of the random vector must add up to a certain sum, then an element in the vector is neutral with respect to the others if the distribution of the vector created by expressing the remaining elements as proportions of their total is independent of the element that was omitted. Definition. A single element formula_0 of a random vector formula_1 is neutral if the "relative" proportions of all the other elements are independent of formula_0. Formally, consider the vector of random variables formula_2 where formula_3 The values formula_0 are interpreted as lengths whose sum is unity. In a variety of contexts, it is often desirable to eliminate a proportion, say formula_4, and consider the distribution of the remaining intervals within the remaining length. The first element of formula_5, viz. formula_4, is defined as "neutral" if formula_4 is statistically independent of the vector formula_6 Variable formula_7 is neutral if formula_8 is independent of the remaining interval: that is, if formula_8 is independent of formula_9 Thus formula_7, viewed as the first element of formula_10, is neutral. In general, variable formula_11 is neutral if formula_12 is independent of formula_13 Complete neutrality. A vector for which each element is neutral is completely neutral. If formula_14 is drawn from a Dirichlet distribution, then formula_5 is completely neutral. In 1980, James and Mosimann showed that the Dirichlet distribution is characterised by neutrality. References.
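A small simulation sketch of the definition, assuming NumPy; the Dirichlet parameters, sample size, and random seed are arbitrary. It draws a Dirichlet sample (which is completely neutral), forms X_1 and the remaining proportions, and checks the necessary consequence that their sample correlations are near zero.

import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([2.0, 3.0, 4.0])         # arbitrary Dirichlet parameters
X = rng.dirichlet(alpha, size=100_000)    # each row sums to 1

x1 = X[:, 0]
# Remaining elements expressed as proportions of the remaining length 1 - X_1.
proportions = X[:, 1:] / (1.0 - x1)[:, None]

# If X_1 is neutral, x1 is independent of these proportions; near-zero sample
# correlation is a necessary (not sufficient) check of that independence.
for k in range(proportions.shape[1]):
    print(round(np.corrcoef(x1, proportions[:, k])[0, 1], 4))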
[ { "math_id": 0, "text": "X_i" }, { "math_id": 1, "text": "X_1,X_2,\\ldots,X_k" }, { "math_id": 2, "text": "X=(X_1,\\ldots,X_k)" }, { "math_id": 3, "text": "\\sum_{i=1}^k X_i=1." }, { "math_id": 4, "text": "X_1" }, { "math_id": 5, "text": "X" }, { "math_id": 6, "text": "X^*_1 = \\left( \\frac{X_2}{1-X_1}, \\frac{X_3}{1-X_1}, \\ldots, \\frac{X_k}{1-X_1} \\right)." }, { "math_id": 7, "text": "X_2" }, { "math_id": 8, "text": "X_2/(1-X_1)" }, { "math_id": 9, "text": "X^*_{1,2} = \\left( \\frac{X_3}{1-X_1-X_2}, \\frac{X_4}{1-X_1-X_2}, \\ldots, \\frac{X_k}{1-X_1-X_2} \\right)." }, { "math_id": 10, "text": " Y = (X_2,X_3,\\ldots,X_k) " }, { "math_id": 11, "text": "X_j" }, { "math_id": 12, "text": "X_1,\\ldots X_{j-1}" }, { "math_id": 13, "text": "X^*_{1,\\ldots,j} = \\left( \\frac{X_{j+1}}{1-X_1-\\cdots -X_j}, \\ldots, \\frac{X_k}{1-X_1-\\cdots - X_j} \\right)." }, { "math_id": 14, "text": "X = (X_1, \\ldots, X_K)\\sim\\operatorname{Dir}(\\alpha)" } ]
https://en.wikipedia.org/wiki?curid=12648196
12648537
Generalized Dirichlet distribution
In statistics, the generalized Dirichlet distribution (GD) is a generalization of the Dirichlet distribution with a more general covariance structure and almost twice the number of parameters. Random vectors with a GD distribution are completely neutral. The density function of formula_0 is formula_1 where we define formula_2. Here formula_3 denotes the Beta function. This reduces to the standard Dirichlet distribution if formula_4 for formula_5 (formula_6 is arbitrary). For example, if "k=4", then the density function of formula_7 is formula_8 where formula_9 and formula_10. Connor and Mosimann define the PDF as they did for the following reason. Define random variables formula_11 with formula_12. Then formula_13 have the generalized Dirichlet distribution as parametrized above, if the formula_14 are independent beta with parameters formula_15, formula_16. Alternative form given by Wong. Wong gives the slightly more concise form for formula_17 formula_18 where formula_19 for formula_20 and formula_21. Note that Wong defines a distribution over a formula_22 dimensional space (implicitly defining formula_23) while Connor and Mosiman use a formula_24 dimensional space with formula_25. General moment function. If formula_26, then formula_27 where formula_28 for formula_29 and formula_30. Thus formula_31 Reduction to standard Dirichlet distribution. As stated above, if formula_4 for formula_32 then the distribution reduces to a standard Dirichlet. This condition is different from the usual case, in which setting the additional parameters of the generalized distribution to zero results in the original distribution. However, in the case of the GDD, this results in a very complicated density function. Bayesian analysis. Suppose formula_26 is generalized Dirichlet, and that formula_33 is multinomial with formula_34 trials (here formula_35). Writing formula_36 for formula_37 and formula_38 the joint posterior of formula_39 is a generalized Dirichlet distribution with formula_40 where formula_41 and formula_42 for formula_43 Sampling experiment. Wong gives the following system as an example of how the Dirichlet and generalized Dirichlet distributions differ. He posits that a large urn contains balls of formula_44 different colours. The proportion of each colour is unknown. Write formula_45 for the proportion of the balls with colour formula_46 in the urn. Experiment 1. Analyst 1 believes that formula_47 (ie, formula_48 is Dirichlet with parameters formula_49). The analyst then makes formula_44 glass boxes and puts formula_49 marbles of colour formula_50 in box formula_50 (it is assumed that the formula_49 are integers formula_51). Then analyst 1 draws a ball from the urn, observes its colour (say colour formula_46) and puts it in box formula_46. He can identify the correct box because they are transparent and the colours of the marbles within are visible. The process continues until formula_34 balls have been drawn. The posterior distribution is then Dirichlet with parameters being the number of marbles in each box. Experiment 2. Analyst 2 believes that formula_52 follows a generalized Dirichlet distribution: formula_53. All parameters are again assumed to be positive integers. The analyst makes formula_44 wooden boxes. The boxes have two areas: one for balls and one for marbles. The balls are coloured but the marbles are not coloured. Then for formula_54, he puts formula_55 balls of colour formula_46, and formula_56 marbles, in to box formula_46. 
He then puts a ball of colour formula_44 in box formula_44. The analyst then draws a ball from the urn. Because the boxes are wood, the analyst cannot tell which box to put the ball in (as he could in experiment 1 above); he also has a poor memory and cannot remember which box contains which colour balls. He has to discover which box is the correct one to put the ball in. He does this by opening box 1 and comparing the balls in it to the drawn ball. If the colours differ, the box is the wrong one. The analyst places a marble in box 1 and proceeds to box 2. He repeats the process until the balls in the box match the drawn ball, at which point he places the ball in the box with the other balls of matching colour. The analyst then draws another ball from the urn and repeats until formula_34 balls are drawn. The posterior is then generalized Dirichlet with parameters formula_57 being the number of balls, and formula_58 the number of marbles, in each box. Note that in experiment 2, changing the order of the boxes has a non-trivial effect, unlike experiment 1. References.
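A sampling sketch based on the Connor and Mosimann construction quoted above (independent beta variables z_i combined into proportions), assuming NumPy; the parameter values, seed, and sample size are arbitrary. The sample means are compared with the closed-form moment E(X_j) given earlier.

import numpy as np

rng = np.random.default_rng(1)
a = np.array([1.5, 2.0, 3.0])      # arbitrary a_1, ..., a_{k-1}
b = np.array([4.0, 3.0, 2.0])      # arbitrary b_1, ..., b_{k-1}
n = 200_000

# Connor-Mosimann construction: independent z_i ~ Beta(a_i, b_i),
# p_1 = z_1,  p_i = z_i * prod_{j<i} (1 - z_j),  p_k = prod_j (1 - z_j).
z = rng.beta(a, b, size=(n, len(a)))
stick = np.cumprod(1.0 - z, axis=1)
p = np.empty((n, len(a) + 1))
p[:, 0] = z[:, 0]
p[:, 1:len(a)] = z[:, 1:] * stick[:, :-1]
p[:, -1] = stick[:, -1]
assert np.allclose(p.sum(axis=1), 1.0)

# Compare sample means with E(X_j) = a_j/(a_j+b_j) * prod_{m<j} b_m/(a_m+b_m).
theory = (a / (a + b)) * np.concatenate(([1.0], np.cumprod(b / (a + b))[:-1]))
print(p[:, :len(a)].mean(axis=0))
print(theory)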
[ { "math_id": 0, "text": "p_1,\\ldots,p_{k-1}" }, { "math_id": 1, "text": "\n\\left[\n\\prod_{i=1}^{k-1}B(a_i,b_i)\\right]^{-1}\np_k^{b_{k-1}-1}\n\\prod_{i=1}^{k-1}\\left[\np_i^{a_i-1}\\left(\\sum_{j=i}^kp_j\\right)^{b_{i-1}-(a_i+b_i)}\\right]\n" }, { "math_id": 2, "text": "p_k= 1- \\sum_{i=1}^{k-1}p_i" }, { "math_id": 3, "text": "B(x,y)" }, { "math_id": 4, "text": "b_{i-1}=a_i+b_i" }, { "math_id": 5, "text": "2\\leqslant i\\leqslant k-1" }, { "math_id": 6, "text": "b_0" }, { "math_id": 7, "text": "p_1,p_2,p_3" }, { "math_id": 8, "text": "\n\\left[\\prod_{i=1}^{3}B(a_i,b_i)\\right]^{-1}\np_1^{a_1-1}p_2^{a_2-1}p_3^{a_3-1}p_4^{b_3-1}\\left(p_2+p_3+p_4\\right)^{b_1-\\left(a_2+b_2\\right)}\\left(p_3+p_4\\right)^{b_2-\\left(a_3+b_3\\right)} \n" }, { "math_id": 9, "text": "p_1+p_2+p_3<1" }, { "math_id": 10, "text": "p_4=1-p_1-p_2-p_3" }, { "math_id": 11, "text": "z_1,\\ldots,z_{k-1}" }, { "math_id": 12, "text": "z_1=p_1, z_2=p_2/\\left(1-p_1\\right), z_3=p_3/\\left(1-(p_1+p_2)\\right),\\ldots,z_i = p_i/\\left(1-\\left(p_1+\\cdots+p_{i-1}\\right)\\right)" }, { "math_id": 13, "text": "p_1,\\ldots,p_k" }, { "math_id": 14, "text": "z_i" }, { "math_id": 15, "text": "a_i,b_i" }, { "math_id": 16, "text": "i=1,\\ldots,k-1" }, { "math_id": 17, "text": "x_1+\\cdots +x_k \\leq 1" }, { "math_id": 18, "text": "\n\\prod_{i=1}^k\\frac{x_i^{\\alpha_i-1}\\left(1-x_1-\\cdots-x_i\\right)^{\\gamma_i}}{B(\\alpha_i,\\beta_i)}\n" }, { "math_id": 19, "text": "\\gamma_j=\\beta_j-\\alpha_{j+1}-\\beta_{j+1}" }, { "math_id": 20, "text": " 1 \\leq j \\leq k-1" }, { "math_id": 21, "text": "\\gamma_k=\\beta_k-1" }, { "math_id": 22, "text": "k" }, { "math_id": 23, "text": "x_{k+1} = 1 - \\sum_{i=1}^k x_i" }, { "math_id": 24, "text": "k-1" }, { "math_id": 25, "text": "x_k = 1 - \\sum_{i=1}^{k-1} x_i" }, { "math_id": 26, "text": "X=\\left(X_1,\\ldots,X_k\\right)\\sim GD_k\\left(\\alpha_1,\\ldots,\\alpha_k;\\beta_1,\\ldots,\\beta_k\\right)" }, { "math_id": 27, "text": "\nE\\left[X_1^{r_1}X_2^{r_2}\\cdots X_k^{r_k}\\right]=\n\\prod_{j=1}^k\n\\frac{\n \\Gamma\\left(\\alpha_j+\\beta_j\\right)\n \\Gamma\\left(\\alpha_j+r_j\\right)\n \\Gamma\\left(\\beta_j+\\delta_j\\right)\n}{\n \\Gamma\\left(\\alpha_j\\right)\n \\Gamma\\left(\\beta_j\\right)\n \\Gamma\\left(\\alpha_j+\\beta_j+r_j+\\delta_j\\right)\n}\n" }, { "math_id": 28, "text": "\\delta_j=r_{j+1}+r_{j+2}+\\cdots +r_k" }, { "math_id": 29, "text": "j=1,2,\\cdots,k-1" }, { "math_id": 30, "text": "\\delta_k=0" }, { "math_id": 31, "text": "\nE\\left(X_j\\right)=\\frac{\\alpha_j}{\\alpha_j+\\beta_j}\\prod_{m=1}^{j-1}\\frac{\\beta_m}{\\alpha_m+\\beta_m}.\n" }, { "math_id": 32, "text": "2 \\leq i \\leq k" }, { "math_id": 33, "text": "Y\\mid X" }, { "math_id": 34, "text": "n" }, { "math_id": 35, "text": "Y=\\left(Y_1,\\ldots,Y_k\\right)" }, { "math_id": 36, "text": "Y_j=y_j" }, { "math_id": 37, "text": " 1 \\leq j \\leq k" }, { "math_id": 38, "text": "y_{k+1}=n-\\sum_{i=1}^ky_i" }, { "math_id": 39, "text": "X|Y" }, { "math_id": 40, "text": "\nX\\mid Y\\sim GD_k\\left(\n{\\alpha'}_1,\\ldots,{\\alpha'}_k;\n{\\beta'}_1,\\ldots,{\\beta'}_k\n\\right)\n" }, { "math_id": 41, "text": "{\\alpha'}_j=\\alpha_j+y_j" }, { "math_id": 42, "text": "{\\beta'}_j=\\beta_j+\\sum_{i=j+1}^{k+1}y_i" }, { "math_id": 43, "text": "1\\leqslant j\\leqslant k." 
}, { "math_id": 44, "text": "k+1" }, { "math_id": 45, "text": "X=(X_1,\\ldots,X_k)" }, { "math_id": 46, "text": "j" }, { "math_id": 47, "text": "X\\sim D(\\alpha_1,\\ldots,\\alpha_k,\\alpha_{k+1})" }, { "math_id": 48, "text": " X" }, { "math_id": 49, "text": "\\alpha_i" }, { "math_id": 50, "text": "i" }, { "math_id": 51, "text": "\\geq 1" }, { "math_id": 52, "text": "X" }, { "math_id": 53, "text": "X\\sim GD(\\alpha_1,\\ldots,\\alpha_k;\\beta_1,\\ldots,\\beta_k)" }, { "math_id": 54, "text": "j=1,\\ldots,k" }, { "math_id": 55, "text": "\\alpha_j" }, { "math_id": 56, "text": "\\beta_j" }, { "math_id": 57, "text": "\\alpha" }, { "math_id": 58, "text": "\\beta" } ]
https://en.wikipedia.org/wiki?curid=12648537
12649450
Peripheral cycle
Graph cycle which does not separate remaining elements In graph theory, a peripheral cycle (or peripheral circuit) in an undirected graph is, intuitively, a cycle that does not separate any part of the graph from any other part. Peripheral cycles (or, as they were initially called, peripheral polygons, because Tutte called cycles "polygons") were first studied by , and play important roles in the characterization of planar graphs and in generating the cycle spaces of nonplanar graphs. Definitions. A peripheral cycle formula_0 in a graph formula_1 can be defined formally in one of several equivalent ways: The equivalence of these definitions is not hard to see: a connected subgraph of formula_4 (together with the edges linking it to formula_0), or a chord of a cycle that causes it to fail to be induced, must in either case be a bridge, and must also be an equivalence class of the binary relation on edges in which two edges are related if they are the ends of a path with no interior vertices in formula_0. Properties. Peripheral cycles appear in the theory of polyhedral graphs, that is, 3-vertex-connected planar graphs. For every planar graph formula_1, and every planar embedding of formula_1, the faces of the embedding that are induced cycles must be peripheral cycles. In a polyhedral graph, all faces are peripheral cycles, and every peripheral cycle is a face. It follows from this fact that (up to combinatorial equivalence, the choice of the outer face, and the orientation of the plane) every polyhedral graph has a unique planar embedding. In planar graphs, the cycle space is generated by the faces, but in non-planar graphs peripheral cycles play a similar role: for every 3-vertex-connected finite graph, the cycle space is generated by the peripheral cycles. The result can also be extended to locally-finite but infinite graphs. In particular, it follows that 3-connected graphs are guaranteed to contain peripheral cycles. There exist 2-connected graphs that do not contain peripheral cycles (an example is the complete bipartite graph formula_7, for which every cycle has two bridges) but if a 2-connected graph has minimum degree three then it contains at least one peripheral cycle. Peripheral cycles in 3-connected graphs can be computed in linear time and have been used for designing planarity tests. They were also extended to the more general notion of non-separating ear decompositions. In some algorithms for testing planarity of graphs, it is useful to find a cycle that is not peripheral, in order to partition the problem into smaller subproblems. In a biconnected graph of circuit rank less than three (such as a cycle graph or theta graph) every cycle is peripheral, but every biconnected graph with circuit rank three or more has a non-peripheral cycle, which may be found in linear time. Generalizing chordal graphs, define a strangulated graph to be a graph in which every peripheral cycle is a triangle. They characterize these graphs as being the clique-sums of chordal graphs and maximal planar graphs. Related concepts. Peripheral cycles have also been called non-separating cycles, but this term is ambiguous, as it has also been used for two related but distinct concepts: simple cycles the removal of which would disconnect the remaining graph, and cycles of a topologically embedded graph such that cutting along the cycle would not disconnect the surface on which the graph is embedded. 
In matroids, a non-separating circuit is a circuit of the matroid (that is, a minimal dependent set) such that deleting the circuit leaves a smaller matroid that is connected (that is, that cannot be written as a direct sum of matroids). These are analogous to peripheral cycles, but not the same even in graphic matroids (the matroids whose circuits are the simple cycles of a graph). For example, in the complete bipartite graph formula_8, every cycle is peripheral (it has only one bridge, a two-edge path) but the graphic matroid formed by this bridge is not connected, so no circuit of the graphic matroid of formula_8 is non-separating. References.
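A small sketch of one of the equivalent characterisations described above (an induced cycle whose vertex deletion leaves the rest of the graph connected), assuming the NetworkX library; the helper function and the convention that an empty remainder counts as connected are illustrative choices. The two test graphs reproduce the examples mentioned in the article.

import networkx as nx

def is_peripheral_cycle(G, cycle):
    """Check that `cycle` (a vertex list of a simple cycle in G) is induced
    (chordless) and non-separating: deleting its vertices must leave a
    connected (here, possibly empty) graph."""
    nodes = set(cycle)
    induced = G.subgraph(nodes)
    if induced.number_of_edges() != len(nodes):   # a chordless k-cycle has exactly k edges
        return False
    rest = G.subgraph(set(G.nodes) - nodes)
    return rest.number_of_nodes() == 0 or nx.is_connected(rest)

# K_{2,4} has no peripheral cycle, while every cycle of K_{2,3} is peripheral.
K24 = nx.complete_bipartite_graph(2, 4)
K23 = nx.complete_bipartite_graph(2, 3)
print(is_peripheral_cycle(K24, [0, 2, 1, 3]))   # False
print(is_peripheral_cycle(K23, [0, 2, 1, 3]))   # True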
[ { "math_id": 0, "text": "C" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "e_1" }, { "math_id": 3, "text": "e_2" }, { "math_id": 4, "text": "G\\setminus C" }, { "math_id": 5, "text": "B" }, { "math_id": 6, "text": "G\\setminus B" }, { "math_id": 7, "text": "K_{2,4}" }, { "math_id": 8, "text": "K_{2,3}" } ]
https://en.wikipedia.org/wiki?curid=12649450
1265440
Prime Obsession
Book by John Derbyshire Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics (2003) is a historical book on mathematics by John Derbyshire, detailing the history of the Riemann hypothesis, named for Bernhard Riemann, and some of its applications. The book was awarded the Mathematical Association of America's inaugural Euler Book Prize in 2007. Overview. The book is written such that even-numbered chapters present historical elements related to the development of the conjecture, and odd-numbered chapters deal with the mathematical and technical aspects. Despite the title, the book provides biographical information on many iconic mathematicians including Euler, Gauss, and Lagrange. In chapter 1, "Card Trick", Derbyshire introduces the idea of an infinite series and the ideas of convergence and divergence of these series. He imagines that there is a deck of cards stacked neatly together, and that one pulls off the top card so that it overhangs from the deck. Explaining that it can overhang only as far as the center of gravity allows, the card is pulled so that exactly half of it is overhanging. Then, without moving the top card, he slides the second card so that it is overhanging too at equilibrium. As he does this more and more, the fractional amount of overhanging cards as they accumulate becomes less and less. He explores various types of series such as the harmonic series. In chapter 2, Bernhard Riemann is introduced and a brief historical account of Eastern Europe in the 18th Century is discussed. In chapter 3, the Prime Number Theorem (PNT) is introduced. The function which mathematicians use to describe the number of primes in "N" numbers, π("N"), is shown to behave in a logarithmic manner, as so: formula_0 where "log" is the natural logarithm. In chapter 4, Derbyshire gives a short biographical history of Carl Friedrich Gauss and Leonhard Euler, setting up their involvement in the Prime Number Theorem. In chapter 5, the Riemann Zeta Function is introduced: formula_1 In chapter 7, the sieve of Eratosthenes is shown to be able to be simulated using the Zeta function. With this, the following statement which becomes the cornerstone of the book is asserted: formula_2 Following the derivation of this finding, the book delves into how this is manipulated to expose the PNT's nature. Audience and reception. According to reviewer S. W. Graham, the book is written at a level that is suitable for advanced undergraduate students of mathematics. In contrast, James V. Rauff recommends it to "anyone interested in the history and mathematics of the Riemann hypothesis". Reviewer Don Redmond writes that, while the even-numbered chapters explain the history well, the odd-numbered chapters present the mathematics too informally to be useful, failing to provide insight to readers who do not already understand the mathematics, and failing even to explain the importance of the Riemann hypothesis. Graham adds that the level of mathematics is inconsistent, with detailed explanations of basics and sketchier explanations of material that is more advanced. But for those who do already understand the mathematics, he calls the book "a familiar story entertainingly told". Notes.
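A short numerical sketch of the two displayed relations, in plain Python; the cutoffs (10^6 for the prime count, 200 primes and 10^5 terms for the product and series at s = 2) are arbitrary illustration values.

import math

def primes_up_to(n):
    """Sieve of Eratosthenes (the sieve chapter 7 relates to the zeta function)."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

N = 1_000_000
primes = primes_up_to(N)
print(len(primes), N / math.log(N))        # pi(N) = 78498 versus N/log N ~ 72382

# Euler's product over primes equals the zeta series; check numerically at s = 2.
s = 2
product = 1.0
for p in primes[:200]:
    product *= 1.0 / (1.0 - p ** -s)
series = sum(1.0 / k ** s for k in range(1, 100_000))
print(product, series, math.pi ** 2 / 6)   # all ~ 1.6449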
[ { "math_id": 0, "text": " \\pi(N) \\approx \\frac{N}{\\log(N)} " }, { "math_id": 1, "text": " \\zeta(s) = 1 + \\frac{1}{2^s} + \\frac{1}{3^s} + \\frac{1}{4^s} + \\cdots = \\sum_{n = 1}^\\infty \\frac{1}{n^s} " }, { "math_id": 2, "text": " \\zeta(s) = \\prod_{p\\ \\mathrm{prime}} \\frac{1}{1 - {p^{-s}}}" } ]
https://en.wikipedia.org/wiki?curid=1265440
12656637
Indexed grammar
Indexed grammars are a generalization of context-free grammars in that nonterminals are equipped with lists of "flags", or "index symbols". The language produced by an indexed grammar is called an indexed language. Definition. Modern definition by Hopcroft and Ullman. In contemporary publications following Hopcroft and Ullman (1979), an indexed grammar is formally defined a 5-tuple "G" = ⟨"N","T","F","P","S"⟩ where In productions as well as in derivations of indexed grammars, a string ("stack") "σ" ∈ "F"* of index symbols is attached to every nonterminal symbol "A" ∈ "N", denoted by "A"["σ"]. Terminal symbols may not be followed by index stacks. For an index stack "σ" ∈ "F"* and a string "α" ∈ ("N" ∪ "T")* of nonterminal and terminal symbols, "α"["σ"] denotes the result of attaching ["σ"] to every nonterminal in "α"; for example if "α" equals "a" "B" "C" "d" "E" with "a","d" ∈ "T" terminal, and "B","C","E" ∈ "N" nonterminal symbols, then "α"["σ"] denotes "a" "B"["σ"] "C"["σ"] "d" "E"["σ"]. Using this notation, each production in "P" has to be of the form where "A", "B" ∈ "N" are nonterminal symbols, "f" ∈ "F" is an index, "σ" ∈ "F"* is a string of index symbols, and "α" ∈ ("N" ∪ "T")* is a string of nonterminal and terminal symbols. Some authors write ".." instead of "σ" for the index stack in production rules; the rule of type 1, 2, and 3 then reads "A"[..]→"α"[..],   "A"[..]→"B"["f"..], and "A"["f"..]→"α"[..], respectively. Derivations are similar to those in a context-free grammar except for the index stack attached to each nonterminal symbol. When a production like e.g. "A"["σ"] → "B"["σ"]"C"["σ"] is applied, the index stack of "A" is copied to both "B" and "C". Moreover, a rule can push an index symbol onto the stack, or pop its "topmost" (i.e., leftmost) index symbol. Formally, the relation ⇒ ("direct derivation") is defined on the set ("N"["F"*]∪"T")* of "sentential forms" as follows: As usual, the derivation relation ∗⇒ is defined as the reflexive transitive closure of direct derivation ⇒. The language "L"("G") = { "w" ∈ "T"*: "S" ∗⇒ "w" } is the set of all strings of terminal symbols derivable from the start symbol. Original definition by Aho. Historically, the concept of indexed grammars was first introduced by Alfred Aho (1968) using a different formalism. Aho defined an indexed grammar to be a 5-tuple ("N","T","F","P","S") where Direct derivations were as follows: This formalism is e.g. used by Hayashi (1973, p. 65-66). Examples. In practice, stacks of indices can count and remember what rules were applied and in which order. For example, indexed grammars can describe the context-sensitive language of word triples { "www" : "w" ∈ {"a","b"}* }: A derivation of "abbabbabb" is then "S"[] ⇒ "S"["g"] ⇒ "S"["gg"] ⇒ "S"["fgg"] ⇒ "T"["fgg"] "T"["fgg"] "T"["fgg"] ⇒ "a" "T"["gg"] "T"["fgg"] "T"["fgg"] ⇒ "ab" "T"["g"] "T"["fgg"] "T"["fgg"] ⇒ "abb" "T"[] "T"["fgg"] "T"["fgg"] ⇒ "abb" "T"["fgg"] "T"["fgg"] ⇒ ... ⇒ "abb" "abb" "T"["fgg"] ⇒ ... ⇒ "abb" "abb" "abb". As another example, the grammar "G" = ⟨ {"S","T","A","B","C"}, {"a","b","c"}, {"f","g"}, "P", "S" ⟩ produces the language { "a""n""b""n""c""n": "n" ≥ 1 }, where the production set "P" consists of An example derivation is "S"[] ⇒ "T"["g"] ⇒ "T"["fg"] ⇒ "A"["fg"] "B"["fg"] "C"["fg"] ⇒ "aA"["g"] "B"["fg"] "C"["fg"] ⇒ "aA"["g"] "bB"["g"] "C"["fg"] ⇒ "aA"["g"] "bB"["g"] "cC"["g"] ⇒ "aa" "bB"["g"] "cC"["g"] ⇒ "aa" "bb" "cC"["g"] ⇒ "aa" "bb" "cc". Both example languages are not context-free by the pumping lemma. Properties. 
Hopcroft and Ullman tend to consider indexed languages as a "natural" class, since they are generated by several formalisms other than indexed grammars, viz. Hayashi generalized the pumping lemma to indexed grammars. Conversely, Gilman gives a "shrinking lemma" for indexed languages. Linear indexed grammars. Gerald Gazdar has defined a second class, the linear indexed grammars (LIG), by requiring that at most one nonterminal in each production be specified as receiving the stack, whereas in an ordinary indexed grammar, all nonterminals receive copies of the stack. Formally, a linear indexed grammar is defined similar to an ordinary indexed grammar, but the production's form requirements are modified to: where "A", "B", "f", "σ", "α" are used as above, and "β" ∈ ("N" ∪ "T")* is a string of nonterminal and terminal symbols like "α". Also, the direct derivation relation ⇒ is defined similar to above. This new class of grammars defines a strictly smaller class of languages, which belongs to the mildly context-sensitive classes. The language { "www" : "w" ∈ {"a","b"}* } is generable by an indexed grammar, but not by a linear indexed grammar, while both { "ww" : "w" ∈ {"a","b"}* } and { "a""n" "b""n" "c""n" : n ≥ 1 } are generable by a linear indexed grammar. If both the original and the modified production rules are admitted, the language class remains the indexed languages. Example. Letting σ denote an arbitrary sequence of stack symbols, we can define a grammar for the language "L" = {"a""n" "b""n" "c""n" | "n" ≥ 1 } as To derive the string "abc" we have the steps: "S"[] ⇒ "aS"["f"]"c" ⇒ "aT"["f"]"c" ⇒ "aT"[]"bc" ⇒ "abc" Similarly: "S"[] ⇒ "aS"["f"]"c" ⇒ "aaS"["ff"]"cc" ⇒ "aaT"["ff"]"cc" ⇒ "aaT"["f"]"bcc" ⇒ "aaT"[]"bbcc" ⇒ "aabbcc" Computational power. The linearly indexed languages are a subset of the indexed languages, and thus all LIGs can be recoded as IGs, making the LIGs strictly less powerful than the IGs. A conversion from a LIG to an IG is relatively simple. LIG rules in general look approximately like formula_0, modulo the push/pop part of a rewrite rule. The symbols formula_1 and formula_2 represent strings of terminal and/or non-terminal symbols, and any non-terminal symbol in either must have an empty stack, by the definition of a LIG. This is, of course, counter to how IGs are defined: in an IG, the non-terminals whose stacks are not being pushed to or popped from must have exactly the same stack as the rewritten non-terminal. Thus, somehow, we need to have non-terminals in formula_1 and formula_2 which, despite having non-empty stacks, behave as if they had empty stacks. Consider the rule formula_3 as an example case. In converting this to an IG, the replacement for formula_4 must be some formula_5 that behaves exactly like formula_4 regardless of what formula_6 is. To achieve this, we can simply have a pair of rules that takes any formula_5 where formula_6 is not empty, and pops symbols from the stack. Then, when the stack is empty, it can be rewritten as formula_4. formula_7 formula_8 We can apply this in general to derive an IG from an LIG. 
So for example if the LIG for the language formula_9 is as follows: formula_10 formula_11 formula_12 formula_13 formula_14 The sentential rule here is not an IG rule, but using the above conversion algorithm, we can define new rules for formula_15, changing the grammar to: formula_16 formula_17 formula_18 formula_11 formula_12 formula_13 formula_14 Each rule now fits the definition of an IG, in which all the non-terminals in the right hand side of a rewrite rule receive a copy of the rewritten symbol's stack. The indexed grammars are therefore able to describe all the languages that linearly indexed grammars can describe. Relation to other formalisms. Vijay-Shanker and Weir (1994) demonstrates that Linear Indexed Grammars, Combinatory Categorial Grammars, Tree-adjoining Grammars, and Head Grammars all define the same class of string languages. Their formal definition of linear indexed grammars differs from the above. LIGs (and their weakly equivalents) are strictly less expressive (meaning they generate a proper subset) than the languages generated by another family of weakly equivalent formalism, which include: LCFRS, MCTAG, MCFG and minimalist grammars (MGs). The latter family can (also) be parsed in polynomial time. Distributed index grammars. Another form of indexed grammars, introduced by Staudacher (1993), is the class of Distributed Index grammars (DIGs). What distinguishes DIGs from Aho's Indexed Grammars is the propagation of indexes. Unlike Aho's IGs, which distribute the whole symbol stack to all non-terminals during a rewrite operation, DIGs divide the stack into substacks and distributes the substacks to selected non-terminals. The general rule schema for a binarily distributing rule of DIG is the form "X"["f"1..."f""i""f""i"+1..."f""n"] → "α" "Y"[f1..."f""i"] "β" "Z"["f""i"+1..."f""n"] γ Where α, β, and γ are arbitrary terminal strings. For a ternarily distributing string: "X"["f"1..."f""i""f""i"+1..."f""j""f""j"+1..."f""n"] → "α" "Y"[f1..."f""i"] "β" "Z"["f""i"+1..."f""j"] "γ" "W"["f""j"+1..."f""n"] "η" And so forth for higher numbers of non-terminals in the right hand side of the rewrite rule. In general, if there are "m" non-terminals in the right hand side of a rewrite rule, the stack is partitioned "m" ways and distributed amongst the new non-terminals. Notice that there is a special case where a partition is empty, which effectively makes the rule a LIG rule. The Distributed Index languages are therefore a superset of the Linearly Indexed languages. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
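A small Python sketch that replays the indexed-grammar derivation of a^n b^n c^n shown earlier; the production set is reconstructed from that example derivation (push g, push f repeatedly, then distribute the index stack to A, B, C and pop), so the exact rules written in the comments are an inference rather than a quotation of the article's grammar.

def derive_anbncn(n):
    """Replay the derivation S[] => T[g] => ... => a^n b^n c^n for the
    indexed grammar sketched above.  Nonterminals carry an index stack,
    written here as a tuple; the rules used are inferred from the example:
    S[] -> T[g],  T[s] -> T[fs] | A[s]B[s]C[s],
    A[fs] -> aA[s], A[g] -> a   (and likewise for B/b and C/c)."""
    assert n >= 1
    stack = ("f",) * (n - 1) + ("g",)          # S[] => T[g] => ... => T[f^(n-1) g]
    sentential = [("A", stack), ("B", stack), ("C", stack)]
    out = []
    for name, s in sentential:
        terminal = {"A": "a", "B": "b", "C": "c"}[name]
        while s and s[0] == "f":               # X[f s] -> x X[s]: pop one f, emit one terminal
            out.append(terminal)
            s = s[1:]
        out.append(terminal)                   # X[g] -> x: consume the final g
    return "".join(out)

for n in range(1, 5):
    word = derive_anbncn(n)
    assert word == "a" * n + "b" * n + "c" * n
    print(word)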
[ { "math_id": 0, "text": "X[\\sigma] \\to \\alpha Y[\\sigma] \\beta" }, { "math_id": 1, "text": "\\alpha" }, { "math_id": 2, "text": "\\beta" }, { "math_id": 3, "text": "X[\\sigma] \\to Y[] Z[\\sigma f]" }, { "math_id": 4, "text": "Y[]" }, { "math_id": 5, "text": "Y^{\\prime}[\\sigma]" }, { "math_id": 6, "text": "\\sigma" }, { "math_id": 7, "text": "Y^{\\prime}[\\sigma f] \\to Y^{\\prime}[\\sigma]" }, { "math_id": 8, "text": "Y^{\\prime}[] \\to Y[]" }, { "math_id": 9, "text": "\\{a^n b^n c^n d^m | n \\geq 1, m \\geq 1\\}" }, { "math_id": 10, "text": "S[\\sigma] \\to T[\\sigma]V[]" }, { "math_id": 11, "text": "V[] \\to d ~|~ dV[]" }, { "math_id": 12, "text": "T[\\sigma] \\to aT[\\sigma f]c ~|~ U[\\sigma]" }, { "math_id": 13, "text": "U[\\sigma f] \\to bU[\\sigma]" }, { "math_id": 14, "text": "U[] \\to \\epsilon" }, { "math_id": 15, "text": "V^{\\prime}" }, { "math_id": 16, "text": "S[\\sigma] \\to T[\\sigma]V^{\\prime}[\\sigma]" }, { "math_id": 17, "text": "V^{\\prime}[\\sigma f] \\to V^{\\prime}[\\sigma]" }, { "math_id": 18, "text": "V^{\\prime}[] \\to V[]" } ]
https://en.wikipedia.org/wiki?curid=12656637
12656919
Program structure tree
A program structure tree (PST) is a hierarchical diagram that displays the nesting relationship of single-entry single-exit (SESE) fragments/regions, showing the organization of a computer program. Nodes in this tree represent SESE regions of the program, while edges represent the nesting of regions. The PST is defined for all control flow graphs. Bibliographical notes. These notes list important works which fueled research on parsing of programs and/or (work)flow graphs (adapted from Section 3.5 in ). References.
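A minimal sketch of the data structure itself (not of any construction algorithm), using hypothetical region names chosen for illustration; each node records the entry and exit of a SESE region and the regions nested inside it.

from dataclasses import dataclass, field

@dataclass
class SESERegion:
    """A single-entry single-exit region of a control flow graph; children
    are the SESE regions properly nested inside it."""
    entry: str
    exit: str
    children: list = field(default_factory=list)

    def dump(self, depth=0):
        print("  " * depth + f"{self.entry} -> {self.exit}")
        for child in self.children:
            child.dump(depth + 1)

# Hypothetical PST of a routine containing a loop with a conditional inside it.
pst = SESERegion("proc.entry", "proc.exit", [
    SESERegion("loop.entry", "loop.exit", [
        SESERegion("if.entry", "if.exit"),
    ]),
])
pst.dump()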
[ { "math_id": 0, "text": "O(|V|+|E|)" }, { "math_id": 1, "text": "|V|" }, { "math_id": 2, "text": "|E|" }, { "math_id": 3, "text": "O(|E|)" }, { "math_id": 4, "text": "E" } ]
https://en.wikipedia.org/wiki?curid=12656919
12659803
(2,3,7) triangle group
In the theory of Riemann surfaces and hyperbolic geometry, the triangle group (2,3,7) is particularly important for its connection to Hurwitz surfaces, namely Riemann surfaces of genus "g" with the largest possible order, 84("g" − 1), of its automorphism group. The term "(2,3,7) triangle group" most often refers not to the full triangle group Δ(2,3,7) (the Coxeter group with Schwarz triangle (2,3,7) or a realization as a hyperbolic reflection group), but rather to the ordinary triangle group (the von Dyck group) "D"(2,3,7) of orientation-preserving maps (the rotation group), which is index 2. Torsion-free normal subgroups of the (2,3,7) triangle group are Fuchsian groups associated with Hurwitz surfaces, such as the Klein quartic, Macbeath surface and First Hurwitz triplet. Constructions. Hyperbolic construction. To construct the triangle group, start with a hyperbolic triangle with angles π/2, π/3, and π/7. This triangle, the smallest hyperbolic Schwarz triangle, tiles the plane by reflections in its sides. Consider then the group generated by reflections in the sides of the triangle, which (since the triangle tiles) is a non-Euclidean crystallographic group (a discrete subgroup of hyperbolic isometries) with this triangle for fundamental domain; the associated tiling is the order-3 bisected heptagonal tiling. The (2,3,7) triangle group is defined as the index 2 subgroups consisting of the orientation-preserving isometries, which is a Fuchsian group (orientation-preserving NEC group). Group presentation. It has a presentation in terms of a pair of generators, "g"2, "g"3, modulo the following relations: formula_0 Geometrically, these correspond to rotations by formula_1, and formula_2 about the vertices of the Schwarz triangle. Quaternion algebra. The (2,3,7) triangle group admits a presentation in terms of the group of quaternions of norm 1 in a suitable order in a quaternion algebra. More specifically, the triangle group is the quotient of the group of quaternions by its center ±1. Let η = 2cos(2π/7). Then from the identity formula_3 we see that Q(η) is a totally real cubic extension of Q. The (2,3,7) hyperbolic triangle group is a subgroup of the group of norm 1 elements in the quaternion algebra generated as an associative algebra by the pair of generators "i","j" and relations "i"2 = "j"2 = "η", "ij" = −"ji". One chooses a suitable Hurwitz quaternion order formula_4 in the quaternion algebra. Here the order formula_4 is generated by elements formula_5 formula_6 In fact, the order is a free Z[η]-module over the basis formula_7. Here the generators satisfy the relations formula_8 which descend to the appropriate relations in the triangle group, after quotienting by the center. Relation to SL(2,R). Extending the scalars from Q(η) to R (via the standard imbedding), one obtains an isomorphism between the quaternion algebra and the algebra M(2,R) of real 2 by 2 matrices. Choosing a concrete isomorphism allows one to exhibit the (2,3,7) triangle group as a specific Fuchsian group in SL(2,R), specifically as a quotient of the modular group. This can be visualized by the associated tilings, as depicted at right: the (2,3,7) tiling on the Poincaré disc is a quotient of the modular tiling on the upper half-plane. For many purposes, explicit isomorphisms are unnecessary. 
Thus, traces of group elements (and hence also translation lengths of hyperbolic elements acting in the upper half-plane, as well as systoles of Fuchsian subgroups) can be calculated by means of the reduced trace in the quaternion algebra, and the formula formula_9 References. Further reading.
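Two quick numerical checks in Python of the relations quoted above: the cubic identity satisfied by eta = 2cos(2*pi/7), and the conversion from a trace to a translation length; the sample trace value of 3 is purely illustrative.

import math

# eta = 2cos(2*pi/7); check the identity (2 - eta)^3 = 7(eta - 1)^2 numerically.
eta = 2 * math.cos(2 * math.pi / 7)
print((2 - eta) ** 3, 7 * (eta - 1) ** 2)    # both ~ 0.427

# tr(gamma) = 2 cosh(l/2), so the translation length is l = 2 arccosh(tr/2).
def translation_length(trace):
    return 2 * math.acosh(trace / 2)

print(translation_length(3.0))               # length for a hypothetical element of trace 3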
[ { "math_id": 0, "text": "g_2^2=g_3^3= (g_2g_3)^7=1." }, { "math_id": 1, "text": " \\frac{2\\pi}{2},\\frac{2\\pi}{3}" }, { "math_id": 2, "text": "\\frac{2\\pi}{7}" }, { "math_id": 3, "text": "(2-\\eta)^3= 7(\\eta-1)^2." }, { "math_id": 4, "text": "\\mathcal Q_{\\mathrm{Hur}}" }, { "math_id": 5, "text": "g_2 = \\tfrac{1}{\\eta}ij" }, { "math_id": 6, "text": "g_3 = \\tfrac{1}{2}(1+(\\eta^2-2)j+(3-\\eta^2)ij)." }, { "math_id": 7, "text": "1,g_2,g_3, g_2g_3" }, { "math_id": 8, "text": "g_2^2=g_3^3= (g_2g_3)^7=-1, \\, " }, { "math_id": 9, "text": "\\operatorname{tr}(\\gamma)= 2\\cosh(\\ell_{\\gamma}/2)." } ]
https://en.wikipedia.org/wiki?curid=12659803
1266110
Smoothed-particle hydrodynamics
Method of hydrodynamics simulation Smoothed-particle hydrodynamics (SPH) is a computational method used for simulating the mechanics of continuum media, such as solid mechanics and fluid flows. It was developed by Gingold and Monaghan and Lucy in 1977, initially for astrophysical problems. It has been used in many fields of research, including astrophysics, ballistics, volcanology, and oceanography. It is a meshfree Lagrangian method (where the co-ordinates move with the fluid), and the resolution of the method can easily be adjusted with respect to variables such as density. Examples. Fluid dynamics. Smoothed-particle hydrodynamics is being increasingly used to model fluid motion as well. This is due to several benefits over traditional grid-based techniques. First, SPH guarantees conservation of mass without extra computation since the particles themselves represent mass. Second, SPH computes pressure from weighted contributions of neighboring particles rather than by solving linear systems of equations. Finally, unlike grid-based techniques, which must track fluid boundaries, SPH creates a free surface for two-phase interacting fluids directly since the particles represent the denser fluid (usually water) and empty space represents the lighter fluid (usually air). For these reasons, it is possible to simulate fluid motion using SPH in real time. However, both grid-based and SPH techniques still require the generation of renderable free surface geometry using a polygonization technique such as metaballs and marching cubes, point splatting, or 'carpet' visualization. For gas dynamics it is more appropriate to use the kernel function itself to produce a rendering of gas column density (e.g., as done in the SPLASH visualisation package). One drawback over grid-based techniques is the need for large numbers of particles to produce simulations of equivalent resolution. In the typical implementation of both uniform grids and SPH particle techniques, many voxels or particles will be used to fill water volumes that are never rendered. However, accuracy can be significantly higher with sophisticated grid-based techniques, especially those coupled with particle methods (such as particle level sets), since it is easier to enforce the incompressibility condition in these systems. SPH for fluid simulation is being used increasingly in real-time animation and games where accuracy is not as critical as interactivity. Recent work in SPH for fluid simulation has increased performance, accuracy, and areas of application: Astrophysics. Smoothed-particle hydrodynamics's adaptive resolution, numerical conservation of physically conserved quantities, and ability to simulate phenomena covering many orders of magnitude make it ideal for computations in theoretical astrophysics. Simulations of galaxy formation, star formation, stellar collisions, supernovae and meteor impacts are some of the wide variety of astrophysical and cosmological uses of this method. SPH is used to model hydrodynamic flows, including possible effects of gravity. Incorporating other astrophysical processes which may be important, such as radiative transfer and magnetic fields is an active area of research in the astronomical community, and has had some limited success. Solid mechanics. Libersky and Petschek extended SPH to Solid Mechanics. The main advantage of SPH in this application is the possibility of dealing with larger local distortion than grid-based methods. 
This feature has been exploited in many applications in Solid Mechanics: metal forming, impact, crack growth, fracture, fragmentation, etc. Another important advantage of meshfree methods in general, and of SPH in particular, is that mesh dependence problems are naturally avoided given the meshfree nature of the method. In particular, mesh alignment is related to problems involving cracks and it is avoided in SPH due to the isotropic support of the kernel functions. However, classical SPH formulations suffer from tensile instabilities and lack of consistency. Over the past years, different corrections have been introduced to improve the accuracy of the SPH solution, leading to the RKPM by Liu et al. Randles and Libersky and Johnson and Beissel tried to solve the consistency problem in their study of impact phenomena. Dyka et al. and Randles and Libersky introduced the stress-point integration into SPH and Ted Belytschko et al. showed that the stress-point technique removes the instability due to spurious singular modes, while tensile instabilities can be avoided by using a Lagrangian kernel. Many other recent studies can be found in the literature devoted to improve the convergence of the SPH method. Recent improvements in understanding the convergence and stability of SPH have allowed for more widespread applications in Solid Mechanics. Other examples of applications and developments of the method include: Numerical tools. Interpolations. The Smoothed-Particle Hydrodynamics (SPH) method works by dividing the fluid into a set of discrete moving elements formula_0, referred to as particles. Their Lagrangian nature allows setting their position formula_1 by integration of their velocity formula_2 as: formula_3 These particles interact through a kernel function with characteristic radius known as the "smoothing length", typically represented in equations by formula_4. This means that the physical quantity of any particle can be obtained by summing the relevant properties of all the particles that lie within the range of the kernel, the latter being used as a weighting function formula_5. This can be understood in two steps. First an arbitrary field formula_6 is written as a convolution with formula_5: formula_7 The error in making the above approximation is order formula_8. Secondly, the integral is approximated using a Riemann summation over the particles: formula_9 where the summation over formula_10 includes all particles in the simulation. formula_11 is the volume of particle formula_10, formula_12 is the value of the quantity formula_6 for particle formula_10 and formula_13 denotes position. For example, the density formula_14 of particle formula_15 can be expressed as: formula_16 where formula_17 denotes the particle mass and formula_18 the particle density, while formula_19 is a short notation for formula_20. The error done in approximating the integral by a discrete sum depends on formula_4, on the particle size (i.e. formula_21, formula_22 being the space dimension), and on the particle arrangement in space. The latter effect is still poorly known. Kernel functions commonly used include the Gaussian function, the quintic spline and the Wendland formula_23 kernel. The latter two kernels are compactly supported (unlike the Gaussian, where there is a small contribution at any finite distance away), with support proportional to formula_4. This has the advantage of saving computational effort by not including the relatively minor contributions from distant particles. 
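A minimal sketch of the summation-density interpolation above, assuming NumPy, a two-dimensional setting, and the Gaussian kernel W(r,h) = exp(-r^2/h^2)/(pi h^2) (one common normalisation; the article does not fix a particular one). Particle spacing, masses, and the smoothing length are arbitrary test values.

import numpy as np

def gaussian_kernel(r, h):
    """2D Gaussian smoothing kernel, normalised so it integrates to one."""
    return np.exp(-(r / h) ** 2) / (np.pi * h ** 2)

def sph_density(positions, masses, h):
    """Summation density rho_i = sum_j m_j W(|r_i - r_j|, h) over all pairs."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    return (masses[None, :] * gaussian_kernel(dist, h)).sum(axis=1)

# A 20 x 20 lattice of equal-mass particles with spacing dx (arbitrary values).
n, dx = 20, 0.1
xs, ys = np.meshgrid(np.arange(n) * dx, np.arange(n) * dx)
positions = np.column_stack([xs.ravel(), ys.ravel()])
masses = np.full(len(positions), 1.0 * dx * dx)     # chosen so the bulk density is ~1
rho = sph_density(positions, masses, h=2 * dx)
# Interior particles recover the bulk value; edge particles come out lower
# because the kernel support is truncated there, the effect discussed later
# under boundary techniques and free surfaces.
print(rho.max(), rho.min())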
Although the size of the smoothing length can be fixed in both space and time, this does not take advantage of the full power of SPH. By assigning each particle its own smoothing length and allowing it to vary with time, the resolution of a simulation can be made to automatically adapt itself depending on local conditions. For example, in a very dense region where many particles are close together, the smoothing length can be made relatively short, yielding high spatial resolution. Conversely, in low-density regions where individual particles are far apart and the resolution is low, the smoothing length can be increased, optimising the computation for the regions of interest. Discretization of governing equations. For particles of constant mass, differentiating the interpolated density formula_14 with respect to time yields formula_24 where formula_25 is the gradient of formula_26 with respect to formula_27. Comparing this equation with the continuity equation in the Lagrangian description (using material derivatives), formula_28 it is apparent that its right-hand side is an approximation of formula_29; hence one defines a discrete divergence operator as follows: formula_30 This operator gives an SPH approximation of formula_31 at the particle formula_15 for a given set of particles with given masses formula_32, positions formula_33 and velocities formula_34. The other important equation for a compressible inviscid fluid is the Euler equation for momentum balance: formula_35 Similarly to continuity, the task is to define a discrete gradient operator in order to write formula_36 One choice is formula_37 which has the property of being skew-adjoint with the divergence operator above, in the sense that formula_38 this being a discrete version of the continuum identity formula_39 This property leads to nice conservation properties. Notice also that this choice leads to a symmetric divergence operator and antisymmetric gradient. Although there are several ways of discretizing the pressure gradient in the Euler equations, the above antisymmetric form is the most acknowledged one. It supports strict conservation of linear and angular momentum. This means that a force that is exerted on particle formula_40 by particle formula_41 equals the one that is exerted on particle formula_41 by particle formula_40 including the sign change of the effective direction, thanks to the antisymmetry property formula_25. Nevertheless, other operators have been proposed, which may perform better numerically or physically. For instance, one drawback of these operators is that while the divergence formula_42 is zero-order consistent (i.e. yields zero when applied to a constant vector field), it can be seen that the gradient formula_43 is not. Several techniques have been proposed to circumvent this issue, leading to renormalized operators (see e.g.). Variational principle. The above SPH governing equations can be derived from a least action principle, starting from the Lagrangian of a particle system: formula_44, where formula_45 is the particle specific internal energy. The Euler–Lagrange equation of variational mechanics reads, for each particle: formula_46 When applied to the above Lagrangian, it gives the following momentum equation: formula_47 where the chain rule has been used, since formula_45 depends on formula_18, and the latter, on the position of the particles. 
Using the thermodynamic property formula_48 we may write formula_49 Plugging the SPH density interpolation and differentiating explicitly formula_50 leads to formula_51 which is the SPH momentum equation already mentioned, where we recognize the formula_43 operator. This explains why linear momentum is conserved, and allows conservation of angular momentum and energy to be conserved as well. Time integration. From the work done in the 80's and 90's on numerical integration of point-like particles in large accelerators, appropriate time integrators have been developed with accurate conservation properties on the long term; they are called symplectic integrators. The most popular in the SPH literature is the leapfrog scheme, which reads for each particle formula_15: formula_52 where formula_53 is the time step, superscripts stand for time iterations while formula_54 is the particle acceleration, given by the right-hand side of the momentum equation. Other symplectic integrators exist (see the reference textbook). It is recommended to use a symplectic (even low-order) scheme instead of a high order non-symplectic scheme, to avoid error accumulation after many iterations. Integration of density has not been studied extensively (see below for more details). Symplectic schemes are conservative but explicit, thus their numerical stability requires stability conditions, analogous to the Courant-Friedrichs-Lewy condition (see below). Boundary techniques. In case the SPH convolution shall be practiced close to a boundary, i.e. closer than "s" · "h", then the integral support is truncated. Indeed, when the convolution is affected by a boundary, the convolution shall be split in 2 integrals, formula_55 where B(r) is the compact support ball centered at r, with radius "s" · "h", and Ω(r) denotes the part of the compact support inside the computational domain, Ω ∩ B(r). Hence, imposing boundary conditions in SPH is completely based on approximating the second integral on the right hand side. The same can be of course applied to the differential operators computation, formula_56 Several techniques has been introduced in the past to model boundaries in SPH. Integral neglect. The most straightforward boundary model is neglecting the integral, formula_57 such that just the bulk interactions are taken into account, formula_58 This is a popular approach when free-surface is considered in monophase simulations. The main benefit of this boundary condition is its obvious simplicity. However, several consistency issues shall be considered when this boundary technique is applied. That's in fact a heavy limitation on its potential applications. Fluid Extension. Probably the most popular methodology, or at least the most traditional one, to impose boundary conditions in SPH, is Fluid Extension technique. Such technique is based on populating the compact support across the boundary with so-called ghost particles, conveniently imposing their field values. Along this line, the integral neglect methodology can be considered as a particular case of fluid extensions, where the field, A, vanish outside the computational domain. The main benefit of this methodology is the simplicity, provided that the boundary contribution is computed as part of the bulk interactions. Also, this methodology has been deeply analyzed in the literature. On the other hand, deploying ghost particles in the truncated domain is not a trivial task, such that modelling complex boundary shapes becomes cumbersome. 
The 2 most popular approaches to populate the empty domain with ghost particles are Mirrored-Particles and Fixed-Particles. Boundary Integral. The newest Boundary technique is the Boundary Integral methodology. In this methodology, the empty volume integral is replaced by a surface integral, and a renormalization: formula_59 formula_60 with "nj" the normal of the generic "j"-th boundary element. The surface term can be also solved considering a semi-analytic expression. Modelling physics. Hydrodynamics. Weakly compressible approach. Another way to determine the density is based on the SPH smoothing operator itself. Therefore, the density is estimated from the particle distribution utilizing the SPH interpolation. To overcome undesired errors at the free surface through kernel truncation, the density formulation can again be integrated in time. The weakly compressible SPH in fluid dynamics is based on the discretization of the Navier–Stokes equations or Euler equations for compressible fluids. To close the system, an appropriate equation of state is utilized to link pressure formula_61 and density formula_62. Generally, the so-called Cole equation (sometimes mistakenly referred to as the "Tait equation") is used in SPH. It reads formula_63 where formula_64 is the reference density and formula_65 the speed of sound. For water, formula_66 is commonly used. The background pressure formula_67 is added to avoid negative pressure values. Real nearly incompressible fluids such as water are characterized by very high speeds of sound of the order formula_68. Hence, pressure information travels fast compared to the actual bulk flow, which leads to very small Mach numbers formula_69. The momentum equation leads to the following relation: formula_70 where formula_62 is the density change and formula_71 the velocity vector. In practice a value of c smaller than the real one is adopted to avoid time steps too small in the time integration scheme. Generally a numerical speed of sound is adopted such that density variation smaller than 1% are allowed. This is the so-called weak-compressibility assumption. This corresponds to a Mach number smaller than 0.1, which implies: formula_72 where the maximum velocity formula_73 needs to be estimated, for e.g. by Torricelli's law or an educated guess. Since only small density variations occur, a linear equation of state can be adopted: formula_74 Usually the weakly-compressible schemes are affected by a high-frequency spurious noise on the pressure and density fields. This phenomenon is caused by the nonlinear interaction of acoustic waves and by fact that the scheme is explicit in time and centered in space Through the years, several techniques have been proposed to get rid of this problem. They can be classified in three different groups: Density filter technique. The schemes of the first group apply a filter directly on the density field to remove the spurious numerical noise. The most used filters are the MLS (moving least squares) and the Shepard filter which can be applied at each time step or every n time steps. The more frequent is the use of the filtering procedure, the more regular density and pressure fields are obtained. On the other hand, this leads to an increase of the computational costs. In long time simulations, the use of the filtering procedure may lead to the disruption of the hydrostatic pressure component and to an inconsistency between the global volume of fluid and the density field. 
Further, it does not ensure the enforcement of the dynamic free-surface boundary condition. Diffusive term technique. A different way to smooth out the density and pressure field is to add a diffusive term inside the continuity equation (group 2) : formula_75 The first schemes that adopted such an approach were described in Ferrari and in Molteni where the diffusive term was modeled as a Laplacian of the density field. A similar approach was also used in Fatehi and Manzari In Antuono et al. a correction to the diffusive term of Molteni was proposed to remove some inconsistencies close to the free-surface. In this case the adopted diffusive term is equivalent to a high-order differential operator on the density field. The scheme is called δ-SPH and preserves all the conservation properties of the SPH without diffusion (e.g., linear and angular momenta, total energy, see ) along with a smooth and regular representation of the density and pressure fields. In the third group there are those SPH schemes which employ numerical fluxes obtained through Riemann solvers to model the particle interactions. Riemann solver technique. For an SPH method based on Riemann solvers, an inter-particle Riemann problem is constructed along a unit vector formula_76 pointing form particle formula_15 to particle formula_10. In this Riemann problem the initial left and right states are on particles formula_15 and formula_10 , respectively. The formula_77 and formula_78 states are formula_79 The solution of the Riemann problem results in three waves emanating from the discontinuity. Two waves, which can be shock or rarefaction wave, traveling with the smallest or largest wave speed. The middle wave is always a contact discontinuity and separates two intermediate states, denoted by formula_80 and formula_81. By assuming that the intermediate state satisfies formula_82 and formula_83, a linearized Riemann solver for smooth flows or with only moderately strong shocks can be written as formula_84 where formula_85 and formula_86 are inter-particle averages. With the solution of the Riemann problem, i.e. formula_87 and formula_88, the discretization of the SPH method is formula_89 formula_90 where formula_91. This indicates that the inter-particle average velocity and pressure are simply replaced by the solution of the Riemann problem. By comparing both it can be seen that the intermediate velocity and pressure from the inter-particle averages amount to implicit dissipation, i.e. density regularization and numerical viscosity, respectively. Since the above discretization is very dissipative a straightforward modification is to apply a limiter to decrease the implicit numerical dissipations introduced by limiting the intermediate pressure by formula_92 where the limiter is defined as formula_93 Note that formula_94 ensures that there is no dissipation when the fluid is under the action of an expansion wave, i.e. formula_95, and that the parameter formula_96, is used to modulate dissipation when the fluid is under the action of a compression wave, i.e. formula_97. Numerical experiments found the formula_98 is generally effective. Also note that the dissipation introduced by the intermediate velocity is not limited. Viscosity modelling. In general, the description of hydrodynamic flows require a convenient treatment of diffusive processes to model the viscosity in the Navier–Stokes equations. It needs special consideration because it involves the Laplacian differential operator. 
Since a direct SPH computation of the Laplacian does not provide satisfactory results, several approaches to model the diffusion have been proposed. Introduced by Monaghan and Gingold, the artificial viscosity was used to deal with high Mach number fluid flows. It reads formula_99 Here, formula_100 controls a volume viscosity while formula_94 acts similarly to the von Neumann–Richtmyer artificial viscosity. The formula_101 is defined by formula_102 where "ηh" is a small fraction of "h" (e.g. 0.01"h") to prevent possible numerical infinities at close distances. The artificial viscosity has also been shown to improve the overall stability of general flow simulations. Therefore, it is applied to inviscid problems in the following form: formula_103 With this approach it is possible not only to stabilize inviscid simulations but also to model the physical viscosity. To do so, formula_104 is substituted in the equation above, where formula_105 is the number of spatial dimensions of the model. This approach introduces the bulk viscosity formula_106. For low Reynolds numbers, the viscosity model by Morris was proposed: formula_107 Astrophysics. Often in astrophysics, one wishes to model self-gravity in addition to pure hydrodynamics. The particle-based nature of SPH makes it ideal to combine with a particle-based gravity solver, for instance a tree gravity code, particle mesh, or particle-particle particle-mesh. Solid mechanics and fluid-structure interaction (FSI). Total Lagrangian formulation for solid mechanics. To discretize the governing equations of solid dynamics, a correction matrix formula_108 is first introduced to reproduce rigid-body rotation, where formula_109 stands for the gradient of the kernel function evaluated at the initial reference configuration. Note that subscripts formula_110 and formula_111 are used to denote solid particles, and the smoothing length formula_4 is identical to that in the discretization of the fluid equations. Using the initial configuration as the reference, the solid density is directly evaluated in terms of formula_112, the Jacobian determinant of the deformation tensor formula_113. We can now discretize the momentum equation in a form in which the inter-particle averaged first Piola–Kirchhoff stress formula_114 appears. Also, formula_115 and formula_116 correspond to the fluid pressure and viscous forces acting on the solid particle formula_110, respectively. Fluid-structure coupling. In fluid-structure coupling, the surrounding solid structure behaves as a moving boundary for the fluid, and the no-slip boundary condition is imposed at the fluid-structure interface. The interaction forces formula_117 and formula_118 acting on a fluid particle formula_15, due to the presence of the neighboring solid particle formula_110, can be obtained accordingly. Here, the imaginary pressure formula_119 and velocity formula_120 are defined using the surface normal direction formula_121 of the solid structure, and the imaginary particle density formula_122 is calculated through the equation of state. The interaction forces formula_115 and formula_116 acting on a solid particle formula_110 are given analogously. The anti-symmetric property of the derivative of the kernel function ensures momentum conservation for each pair of interacting particles formula_15 and formula_110. Others. The discrete element method, used for simulating granular materials, is related to SPH. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
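As a further supplementary sketch for this section, the Monaghan–Gingold artificial viscosity term introduced above can be written out for a single particle pair as follows (Python; the vectors and pair-averaged quantities are hypothetical inputs, and the default parameter values are only commonly quoted choices, not prescriptions).

import numpy as np

def artificial_viscosity(v_ij, r_ij, h, c_bar, rho_bar, alpha=1.0, beta=2.0, eta_factor=0.01):
    # Pi_ij = (-alpha c_bar phi_ij + beta phi_ij^2) / rho_bar  if v_ij . r_ij < 0, else 0,
    # with phi_ij = h (v_ij . r_ij) / (|r_ij|^2 + eta_h^2).
    vr = float(np.dot(v_ij, r_ij))
    if vr >= 0.0:
        return 0.0                        # receding particles: no artificial viscosity
    eta_h = eta_factor * h                # small fraction of h, prevents singularities at close range
    phi = h * vr / (float(np.dot(r_ij, r_ij)) + eta_h ** 2)
    return (-alpha * c_bar * phi + beta * phi ** 2) / rho_bar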
[ { "math_id": 0, "text": " i,j " }, { "math_id": 1, "text": " \\mathbf{r}_i " }, { "math_id": 2, "text": " \\mathbf{v}_i " }, { "math_id": 3, "text": "\n\\frac{\\mathrm{d}\\boldsymbol{r}_i}{\\mathrm{d}t}=\\boldsymbol{v}_i.\n" }, { "math_id": 4, "text": " h " }, { "math_id": 5, "text": " W " }, { "math_id": 6, "text": " A " }, { "math_id": 7, "text": "\nA(\\boldsymbol{r}) = \\int A\\left(\\boldsymbol{r^{\\prime}}\\right) W(| \\boldsymbol{r}-\\boldsymbol{r^{\\prime}} |,h) \\, \\mathrm{d}V \\! \\left(\\boldsymbol{r'}\\right).\n" }, { "math_id": 8, "text": " h^2 " }, { "math_id": 9, "text": "\nA(\\boldsymbol{r}) = \\sum_j V_j A_j W(| \\boldsymbol{r}-\\boldsymbol{r}_{j} |,h),\n" }, { "math_id": 10, "text": " j " }, { "math_id": 11, "text": " V_j " }, { "math_id": 12, "text": " A_j " }, { "math_id": 13, "text": "\\boldsymbol{r}" }, { "math_id": 14, "text": " \\rho_i " }, { "math_id": 15, "text": " i " }, { "math_id": 16, "text": "\n\\rho_i = \\rho(\\boldsymbol{r}_i) = \\sum_j m_j W_{ij},\n" }, { "math_id": 17, "text": " m_j = \\rho_j V_j " }, { "math_id": 18, "text": " \\rho_j " }, { "math_id": 19, "text": " W_{ij}=W_{ji} " }, { "math_id": 20, "text": " W(| \\boldsymbol{r}_i-\\boldsymbol{r}_j |,h) " }, { "math_id": 21, "text": " V_j^{1/d} " }, { "math_id": 22, "text": " d " }, { "math_id": 23, "text": " C^2 " }, { "math_id": 24, "text": "\n\\frac{d\\rho_i}{dt} = \\sum_j m_j \\left(\\boldsymbol{v}_i - \\boldsymbol{v}_j\\right) \\cdot \\nabla W_{ij},\n" }, { "math_id": 25, "text": " \\nabla W_{ij}=-\\nabla W_{ji} " }, { "math_id": 26, "text": " W_{ij} " }, { "math_id": 27, "text": " \\boldsymbol{r}_i " }, { "math_id": 28, "text": "\n\\frac{d\\rho}{dt} = -\\rho \\nabla \\cdot \\boldsymbol{v} ,\n" }, { "math_id": 29, "text": " -\\rho \\nabla \\cdot \\mathbf{v} " }, { "math_id": 30, "text": "\n\\operatorname{D}_i\\left\\{ \\boldsymbol{v}_j \\right\\} = -\\frac{1}{\\rho_i} \\sum_j m_j \\left(\\boldsymbol{v}_i - \\boldsymbol{v}_j\\right) \\cdot \\nabla W_{ij}.\n" }, { "math_id": 31, "text": " \\nabla \\cdot \\mathbf{v} " }, { "math_id": 32, "text": " m_j " }, { "math_id": 33, "text": " \\left\\{ \\mathbf{r}_j \\right\\} " }, { "math_id": 34, "text": " \\left\\{ \\mathbf{v}_j \\right\\} " }, { "math_id": 35, "text": "\n\\frac{d\\boldsymbol{v}}{dt} = -\\frac{1}{\\rho}\\nabla p + \\boldsymbol{g}\n" }, { "math_id": 36, "text": "\n\\frac{d\\boldsymbol{v}_i}{dt} = -\\frac{1}{\\rho} \\operatorname{\\mathbf{G}}_i \\left\\{ p_j \\right\\} + \\boldsymbol{g}\n" }, { "math_id": 37, "text": "\n\\operatorname{\\mathbf{G}}_i\\left\\{ p_j \\right\\} = \\rho_i \\sum_j m_j \\left(\\frac{p_i}{\\rho_i^2} + \\frac{p_j}{\\rho_j^2}\\right) \\nabla W_{ij},\n" }, { "math_id": 38, "text": "\n\\sum_i V_i \\boldsymbol{v}_i \\cdot \\operatorname{\\mathbf{G}}_i \\left\\{ p_j \\right\\} = \n- \\sum_i V_i p_i \\operatorname{D}_i\\left\\{ \\boldsymbol{v}_j \\right\\} ,\n" }, { "math_id": 39, "text": "\n\\int \\boldsymbol{v} \\cdot \\operatorname{grad} p = \n- \\int p \\operatorname{div} \\cdot \\boldsymbol{v} .\n" }, { "math_id": 40, "text": "i" }, { "math_id": 41, "text": "j" }, { "math_id": 42, "text": " \\operatorname{D} " }, { "math_id": 43, "text": " \\operatorname{\\mathbf{G}} " }, { "math_id": 44, "text": "\n\\mathcal{L} = \\sum_j m_j \\left( \\tfrac{1}{2}\\boldsymbol{v}_j^2 -e_j +\\boldsymbol{g}\\cdot\\boldsymbol{r}_j \\right)\n" }, { "math_id": 45, "text": " e_j " }, { "math_id": 46, "text": "\n\\frac{\\mathrm{d}}{\\mathrm{d}t} \\frac{\\partial\\mathcal{L}}{\\partial\\boldsymbol{v}_i} = 
\\frac{\\partial\\mathcal{L}}{\\partial\\boldsymbol{r}_i}.\n" }, { "math_id": 47, "text": "\nm_i \\frac{\\mathrm{d}\\boldsymbol{v}_i}{\\mathrm{d}t} =\n -\\sum_j m_j \\frac{\\partial e_j}{\\partial\\boldsymbol{r}_i} + m_i \\boldsymbol{g} =\n -\\sum_j m_j \\frac{\\partial e_j}{\\partial\\rho_j}\\frac{\\partial\\rho_j}{\\partial\\boldsymbol{r}_i} + m_i \\boldsymbol{g}\n" }, { "math_id": 48, "text": " \\mathrm{d}e = \\left(p/\\rho^2\\right)\\mathrm{d}\\rho " }, { "math_id": 49, "text": "\nm_i \\frac{\\mathrm{d}\\boldsymbol{v}_i}{\\mathrm{d}t} =\n -\\sum_j m_j \\frac{p_j}{\\rho_j^2}\\frac{\\partial\\rho_j}{\\partial\\boldsymbol{r}_i} + m_i \\boldsymbol{g} ,\n" }, { "math_id": 50, "text": " \\tfrac{\\partial\\rho_j}{\\partial\\boldsymbol{r}_i} " }, { "math_id": 51, "text": "\n\\frac{\\mathrm{d}\\boldsymbol{v}_i}{\\mathrm{d}t} = - \\sum_j m_j \\left(\\frac{p_i}{\\rho_i^2} + \\frac{p_j}{\\rho_j^2}\\right) \\nabla W_{ij} + \\boldsymbol{g} ,\n" }, { "math_id": 52, "text": "\\begin{align}\n \\boldsymbol{v}_i^{n+1/2} &= \\boldsymbol{v}_i^n + \\boldsymbol{a}_i^n \\frac{\\Delta t}{2}, \\\\\n \\boldsymbol{r}_i^{n+1} &= \\boldsymbol{r}_i^n + \\boldsymbol{v}_i^{i+1/2}\\Delta t,\\\\\n \\boldsymbol{v}_i^{n+1} &= \\boldsymbol{v}_i^{n+1/2} + \\boldsymbol{a}_i^{i+1} \\frac{\\Delta t}{2},\n\\end{align}" }, { "math_id": 53, "text": " \\Delta t " }, { "math_id": 54, "text": " \\boldsymbol{a}_i " }, { "math_id": 55, "text": "\nA(\\boldsymbol{r}) = \\int_{\\Omega(\\boldsymbol{r})} A\\left(\\boldsymbol{r^{\\prime}}\\right) W(| \\boldsymbol{r}-\\boldsymbol{r^{\\prime}} |,h) d\\boldsymbol{r^{\\prime}} + \\int_{B(\\boldsymbol{r}) - \\Omega(\\boldsymbol{r})} A\\left(\\boldsymbol{r^{\\prime}}\\right) W(| \\boldsymbol{r}-\\boldsymbol{r^{\\prime}} |,h) d\\boldsymbol{r^{\\prime}},\n" }, { "math_id": 56, "text": "\n\\nabla A(\\boldsymbol{r}) = \\int_{\\Omega(\\boldsymbol{r})} A\\left(\\boldsymbol{r^{\\prime}}\\right) \\nabla W(\\boldsymbol{r}-\\boldsymbol{r^{\\prime}},h) d\\boldsymbol{r^{\\prime}} + \\int_{B(\\boldsymbol{r}) - \\Omega(\\boldsymbol{r})} A\\left(\\boldsymbol{r^{\\prime}}\\right) \\nabla W(\\boldsymbol{r}-\\boldsymbol{r^{\\prime}},h) d\\boldsymbol{r^{\\prime}}.\n" }, { "math_id": 57, "text": "\n\\int_{B(\\boldsymbol{r}) - \\Omega(\\boldsymbol{r})} A\\left(\\boldsymbol{r^{\\prime}}\\right) \\nabla W(\\boldsymbol{r}-\\boldsymbol{r^{\\prime}},h) d\\boldsymbol{r^{\\prime}} \\simeq \\boldsymbol{0},\n" }, { "math_id": 58, "text": "\n\\nabla A_i = \\sum_{j \\in \\Omega_i} V_j A_j \\nabla W_{ij}.\n" }, { "math_id": 59, "text": "\n\\nabla A_i = \\frac{1}{\\gamma_i} \\left( \\sum_{j \\in \\Omega_i} V_j A_j \\nabla W_{ij} + \\sum_{j \\in \\partial \\Omega_i} S_j A_j \\boldsymbol{n}_j W_{ij} \\right),\n" }, { "math_id": 60, "text": "\n\\gamma_i = \\sum_{j \\in \\Omega_i} V_j W_{ij},\n" }, { "math_id": 61, "text": "p" }, { "math_id": 62, "text": "\\rho" }, { "math_id": 63, "text": "\np = \\frac{\\rho_0c^2}{\\gamma}\\left(\\left(\\frac{\\rho}{\\rho_0}\\right)^{\\gamma}-1\\right) + p_0 ,\n" }, { "math_id": 64, "text": "\\rho_0" }, { "math_id": 65, "text": "c" }, { "math_id": 66, "text": "\\gamma = 7" }, { "math_id": 67, "text": "p_0" }, { "math_id": 68, "text": "10^3\\mathrm{m/s}" }, { "math_id": 69, "text": "M" }, { "math_id": 70, "text": "\n\\frac{\\delta\\rho}{\\rho_0}\\approx\\frac{|\\boldsymbol{v}|^2}{c^2} = M^2\n" }, { "math_id": 71, "text": "v" }, { "math_id": 72, "text": "\nc = 10v_\\text{max}\n" }, { "math_id": 73, "text": "v_\\text{max}" }, { "math_id": 74, "text": "\np = c^2\\left(\\rho-\\rho_0\\right)\n" }, { 
"math_id": 75, "text": "\n{\\displaystyle {\\frac {d\\rho _{i}}{dt}}=\\sum _{j}m_{j}\\left({\\boldsymbol {v}}_{i}-{\\boldsymbol {v}}_{j}\\right)\\cdot \\nabla W_{ij} + \\mathcal{D}_i(\\rho),}\n" }, { "math_id": 76, "text": " \\mathbf{e}_{ij} = - \\mathbf{r}_{ij}/r_{ij} " }, { "math_id": 77, "text": " L " }, { "math_id": 78, "text": " R " }, { "math_id": 79, "text": "\n \\begin{cases}\n(\\rho_L, U_L, P_L) = (\\rho_i, \\mathbf{v}_i \\cdot \\mathbf{e}_{ij},P_i) \\\\\n(\\rho_R, U_R, P_R) = (\\rho_j, \\mathbf{v}_j \\cdot \\mathbf{e}_{ij},P_j) .\n \\end{cases}\n" }, { "math_id": 80, "text": " (\\rho_L^{\\ast}, U_L^{\\ast},P_L^{\\ast}) " }, { "math_id": 81, "text": " (\\rho_R^{\\ast}, U_R^{\\ast},P_R^{\\ast}) " }, { "math_id": 82, "text": " U_L^{\\ast} = U_R^{\\ast} =U^{\\ast} " }, { "math_id": 83, "text": " P_L^{\\ast} = P_R^{\\ast} =P^{\\ast} " }, { "math_id": 84, "text": "\n\\begin{cases}\nU^{\\ast} = \\overline{U} + \\frac{1}{2} \\frac{(P_L - P_R)}{\\bar{\\rho} c_0}\\\\\nP^{\\ast} = \\overline{P} + \\frac{1}{2} \\bar{\\rho} c_0 {(U_L - U_R)} ,\n\\end{cases} \n" }, { "math_id": 85, "text": " \\overline{U} = (U_L + U_R)/2 " }, { "math_id": 86, "text": " \\overline{P} = (P_L + P_R)/2 " }, { "math_id": 87, "text": " U^{\\ast} " }, { "math_id": 88, "text": " P^{\\ast} " }, { "math_id": 89, "text": "\\frac{d \\rho_i}{d t} = 2 \\rho_i \\sum_j \\frac{m_j}{\\rho_j} (\\mathbf{v}_i - \\mathbf{v}^{\\ast})\\cdot \\nabla_{i} W_{ij}, " }, { "math_id": 90, "text": "\\frac{d \\mathbf{v}_i}{d t} = - 2\\sum_j m_j \\left( \\frac{ P^{\\ast}}{\\rho_i \\rho_j} \\right) \\nabla_i W_{ij}." }, { "math_id": 91, "text": " \\mathbf{v}^{\\ast} = U^{\\ast} \\mathbf{e}_{ij} + ( \\overline{\\mathbf{v}}_{ij} - \\overline{U}\\mathbf{e}_{ij} ) " }, { "math_id": 92, "text": "\nP^{\\ast} = \\overline{P} + \\frac{1}{2} \\beta \\overline{\\rho} {(U_L - U_R)},\n" }, { "math_id": 93, "text": "\\beta = \\min\\big( \\eta \\max(U_L - U_R, 0), \\overline{c} \\big)." 
}, { "math_id": 94, "text": " \\beta " }, { "math_id": 95, "text": " U_L < U_R " }, { "math_id": 96, "text": "\\eta " }, { "math_id": 97, "text": " U_L \\geq U_R " }, { "math_id": 98, "text": " \\eta = 3 " }, { "math_id": 99, "text": "\n\\Pi _{ij} = \\begin{cases}\n \\dfrac{-\\alpha \\bar{c}_{ij} \\phi_{ij} + \\beta \\phi^2_{ij}}{\\bar{\\rho}_{ij}} & \\quad \\boldsymbol{v}_{ij} \\cdot \\boldsymbol{r}_{ij} < 0\\\\\n 0 & \\quad \\boldsymbol{v}_{ij} \\cdot \\boldsymbol{r}_{ij} \\geq 0\n \\end{cases}\n" }, { "math_id": 100, "text": " \\alpha" }, { "math_id": 101, "text": " \\phi_{ij} " }, { "math_id": 102, "text": "\n \\phi_{ij} = \\frac{h\\boldsymbol{v}_{ij}\\cdot \\boldsymbol{r}_{ij}}{\\Vert \\boldsymbol{r}_{ij} \\Vert^2 + \\eta_h^2},\n" }, { "math_id": 103, "text": "\n \\Pi_{ij} = \\alpha h c \\frac{\\boldsymbol{v}_{ij} \\cdot \\boldsymbol{r}_{ij}}{\\Vert \\boldsymbol{r}_{ij} \\Vert^2 +\\eta_h^2 }.\n" }, { "math_id": 104, "text": "\n \\alpha h c = 2(n+2) \\frac{\\mu}{\\rho}\n" }, { "math_id": 105, "text": " n " }, { "math_id": 106, "text": " \\zeta = \\frac{5}{3} \\mu " }, { "math_id": 107, "text": "\n [\\nu \\Delta \\boldsymbol{v}]_{ij} =\n2\\nu \\frac{ m_j}{\\rho_j} \\,\\frac{\\boldsymbol{r}_{ij} \\cdot \\nabla w_{h,ij}}{\\Vert \\boldsymbol{r}_{ij} \\Vert ^2 +\\eta_h^2} \\, \\boldsymbol{v}_{ij}.\n" }, { "math_id": 108, "text": " \\mathbb{B}^0 " }, { "math_id": 109, "text": " \\nabla_a^0 W_{a} = \\frac{\\partial W\\left( |\\mathbf{r}_{ab}^0|, h \\right)} {\\partial |\\mathbf{r}_{ab}^0|} \\mathbf{e}_{ab}^0 " }, { "math_id": 110, "text": " a " }, { "math_id": 111, "text": " b " }, { "math_id": 112, "text": " J = \\det(\\mathbb{F}) " }, { "math_id": 113, "text": " \\mathbb{F} " }, { "math_id": 114, "text": " \\tilde{\\mathbb{P}} " }, { "math_id": 115, "text": " \\mathbf{f}_a^{F:p} " }, { "math_id": 116, "text": " \\mathbf{f}_a^{F:v} " }, { "math_id": 117, "text": " \\mathbf{f}_i^{S:p} " }, { "math_id": 118, "text": " \\mathbf{f}_i^{S:v} " }, { "math_id": 119, "text": " p_a^d " }, { "math_id": 120, "text": " \\mathbf{v}_a^d " }, { "math_id": 121, "text": " \\mathbf{n}^S " }, { "math_id": 122, "text": " \\rho_a^d " } ]
https://en.wikipedia.org/wiki?curid=1266110
12661531
Hausner ratio
The Hausner ratio is a number that is correlated with the flowability of a powder or granular material. It is named after the engineer Henry H. Hausner (1900–1995). The Hausner ratio is calculated by the formula formula_0 where formula_1 is the freely settled bulk density of the powder, and formula_2 is the tapped bulk density of the powder. The Hausner ratio is not an absolute property of a material; its value can vary depending on the methodology used to determine it. The Hausner ratio is used in a wide variety of industries as an indication of the flowability of a powder. A Hausner ratio greater than about 1.25–1.4 is considered to be an indication of poor flowability. The Hausner ratio (H) is related to the Carr index (C), another indication of flowability, by the formula formula_3. Both the Hausner ratio and the Carr index are sometimes criticized, despite their relationships to flowability being established empirically, as not having a strong theoretical basis. Use of these measures persists, however, because the equipment required to perform the analysis is relatively cheap and the technique is easy to learn. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
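As a quick illustration of the definition above, a minimal plain-Python sketch (the density values are made-up examples, and the 1.25 threshold is the approximate figure quoted in the text rather than a universal standard):

def hausner_ratio(rho_bulk, rho_tapped):
    # H = rho_T / rho_B: tapped over freely settled bulk density
    return rho_tapped / rho_bulk

H = hausner_ratio(0.40, 0.56)   # e.g. 0.40 g/mL bulk, 0.56 g/mL tapped -> H = 1.4
poor_flow = H > 1.25            # above roughly 1.25-1.4 suggests poor flowability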
[ { "math_id": 0, "text": "H=\\frac{\\rho_T}{\\rho_B}" }, { "math_id": 1, "text": "\\rho_B" }, { "math_id": 2, "text": "\\rho_T" }, { "math_id": 3, "text": "H=100/(100-C)" } ]
https://en.wikipedia.org/wiki?curid=12661531
12661539
Carr index
Indicator of the compressibility of a powder The Carr index (Carr's index or Carr's Compressibility Index) is an indicator of the compressibility of a powder. It is named after the scientist Ralph J. Carr, Jr. The Carr index is calculated by the formula formula_0, where formula_1 is the freely settled bulk density of the powder, and formula_2 is the tapped bulk density of the powder after "tapping down". It can also be expressed as formula_3. The Carr index is frequently used in pharmaceutics as an indication of the compressibility of a powder. In a free-flowing powder, the bulk density and tapped density would be close in value, therefore, the Carr index would be small. On the other hand, in a poor-flowing powder where there are greater interparticle interactions, the difference between the bulk and tapped density observed would be greater, therefore, the Carr index would be larger. A Carr index greater than 25 is considered to be an indication of poor flowability, and below 15, of good flowability. Another way to measure the flow of a powder is the Hausner ratio, which can be expressed as formula_4. Both the Hausner ratio and the Carr index are sometimes criticized, despite their relationships to flowability being established empirically, as not having a strong theoretical basis. Use of these measures persists, however, because the equipment required to perform the analysis is relatively cheap and the technique is easy to learn. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
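As a small illustration of the formulas above, a plain-Python sketch that also checks the stated relation to the Hausner ratio (the density values are made-up examples):

def carr_index(rho_bulk, rho_tapped):
    # C = 100 (rho_T - rho_B) / rho_T
    return 100.0 * (rho_tapped - rho_bulk) / rho_tapped

def hausner_from_carr(C):
    # H = 100 / (100 - C)
    return 100.0 / (100.0 - C)

C = carr_index(0.30, 0.40)   # 25.0: borderline poor flowability by the criterion above
H = hausner_from_carr(C)     # 1.333..., which equals rho_T / rho_B = 0.40 / 0.30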
[ { "math_id": 0, "text": "C=100\\frac{\\rho_T-\\rho_B}{\\rho_T}" }, { "math_id": 1, "text": "\\rho_B" }, { "math_id": 2, "text": "\\rho_T" }, { "math_id": 3, "text": "C=100(1-\\rho_B/\\rho_T)" }, { "math_id": 4, "text": "H=\\rho_T/\\rho_B" } ]
https://en.wikipedia.org/wiki?curid=12661539
12662075
Enrique Loedel Palumbo
Enrique Loedel Palumbo (Montevideo, Uruguay, June 29, 1901 – La Plata, Argentina, July 31, 1962) was an Uruguayan physicist. Loedel Palumbo was born in Montevideo, Uruguay, and studied at the University of La Plata in Argentina. His doctoral advisor was the German physicist of Jewish origin Richard Gans. Loedel wrote his Ph.D. thesis in December 1925 on optical and electrical constants of sugar cane. An extract of the thesis was published in German in "Annalen der Physik" in 1926. He then began his career as a professor in La Plata. During Einstein's visit to Argentina in 1925 they had a conversation about the differential equation of a point-source gravitational field, which resulted in a paper published by Loedel in "Physikalische Zeitschrift". It is claimed that this is the first research paper on relativity ever published by a Latin American scientist. Loedel Palumbo then spent some time in Germany working with Erwin Schrödinger and Max Planck. He returned to Argentina in 1930 and from then on concentrated on teaching. He published several scientific papers during his career in international journals and wrote several books (in Spanish). Loedel diagram. Max Born (1920) and systematically Paul Gruner (1921) introduced symmetric Minkowski diagrams in German and French papers, where the ct'-axis is perpendicular to the x-axis, as well as the ct-axis perpendicular to the x'-axis (for sources and historical details, see Loedel diagram). In 1948 and in subsequent papers, Loedel independently rediscovered such diagrams. They were again rediscovered in 1955 by Henri Amar, who subsequently wrote in 1957 in "American Journal of Physics": "I regret my unfamiliarity with South American literature and wish to acknowledge the priority of Professor Loedel's work", along with a note by Loedel Palumbo citing his publications on the geometrical representation of Lorentz transformations. Those diagrams are therefore called "Loedel diagrams", and have been cited by some textbook authors on the subject. Suppose there are two collinear velocities "v" and "w". How does one find the frame of reference in which the velocities become equal speeds in opposite directions? One solution uses modern algebra to find it: Suppose formula_0 and formula_1, so that "a" and "b" are rapidities corresponding to velocities "v" and "w". Let "m" = ("a" + "b")/2, the midpoint rapidity. The transformation formula_2 of the split-complex number plane represents the required transformation since formula_3 and formula_4 As the exponents are additive inverses of each other, the images represent equal speeds in opposite directions. Notes and references. &lt;templatestyles src="Reflist/styles.css" /&gt;
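A small numerical illustration of the rapidity construction above, as a Python sketch (the two velocities are arbitrary example values, in units where c = 1):

import math

v, w = 0.6, 0.8                     # two collinear velocities (fractions of c)
a = math.atanh(v)                   # rapidity corresponding to v
b = math.atanh(w)                   # rapidity corresponding to w
m = (a + b) / 2.0                   # midpoint rapidity

# Boosting by the midpoint rapidity maps a -> (a - b)/2 and b -> (b - a)/2,
# i.e. to equal speeds in opposite directions:
u = math.tanh((a - b) / 2.0)
print(u, -u)                        # approximately -0.2 and +0.2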
[ { "math_id": 0, "text": "\\tanh \\ a \\ =\\ v/c" }, { "math_id": 1, "text": "\\tanh \\ b \\ = \\ w/c" }, { "math_id": 2, "text": "z \\mapsto z e^{-mj}" }, { "math_id": 3, "text": "e^{aj} \\mapsto e^{(a-b)j/2}" }, { "math_id": 4, "text": "e^{bj} \\mapsto e^{(b-a)j/2}." } ]
https://en.wikipedia.org/wiki?curid=12662075
12662478
Scaling pattern of occupancy
In spatial ecology and macroecology, scaling pattern of occupancy (SPO), also known as the area-of-occupancy (AOO), is the way in which species distribution changes across spatial scales. In physical geography and image analysis, it is similar to the modifiable areal unit problem. Simon A. Levin (1992) states that "the problem of relating phenomena across scales is the central problem in biology and in all of science". Understanding the SPO is thus one central theme in ecology. Pattern description. This pattern is often plotted as log-transformed grain (cell size) versus log-transformed occupancy. Kunin (1998) presented a log-log linear SPO and suggested a fractal nature for species distributions. It has since been shown to follow a logistic shape, reflecting a percolation process. Furthermore, the SPO is closely related to the intraspecific occupancy-abundance relationship. For instance, if individuals are randomly distributed in space, the number of individuals in an "α"-size cell follows a Poisson distribution, with the occupancy being "P""α" = 1 − exp(−"μα"), where "μ" is the density. Clearly, "P""α" in this Poisson model for randomly distributed individuals is also the SPO. Other probability distributions, such as the negative binomial distribution, can also be applied for describing the SPO and the occupancy-abundance relationship for non-randomly distributed individuals. Other occupancy-abundance models that can be used to describe the SPO include Nachman's exponential model, Hanski and Gyllenberg's metapopulation model, He and Gaston's improved negative binomial model obtained by applying Taylor's power law between the mean and variance of species distribution, and Hui and McGeoch's droopy-tail percolation model. One important application of the SPO in ecology is to estimate species abundance based on presence-absence data, or occupancy alone. This is appealing because obtaining presence-absence data is often cost-efficient. Using a dipswitch test consisting of 5 subtests and 15 criteria, Hui et al. confirmed that using the SPO is robust and reliable for assemblage-scale regional abundance estimation. Another application of SPOs is trend identification in populations, which is extremely valuable for biodiversity conservation. Explanation. Models providing explanations of the observed scaling pattern of occupancy include the fractal model, the cross-scale model and the Bayesian estimation model. The fractal model can be configured by dividing the landscape into quadrats of different sizes, or bisecting it into grids with a special width-to-length ratio (2:1), and yields the following SPO: formula_0 where "D" is the box-counting fractal dimension. If during each step a quadrat is divided into "q" sub-quadrats, we will find a constant portion ("f") of sub-quadrats is also present in the fractal model, i.e. "D" = 2(1 + log "ƒ"/log "q"). Since this assumption that "f" is scale independent is not always the case in nature, a more general form of "ƒ" can be assumed, "ƒ" = "q"^(−"λ") ("λ" is a constant), which yields the cross-scale model: formula_1 The Bayesian estimation model follows a different way of thinking. Instead of providing the best-fit model as above, the occupancy at different scales can be estimated by Bayes' rule based on not only the occupancy but also the spatial autocorrelation at one specific scale. For the Bayesian estimation model, Hui et al.
provide the following formula to describe the SPO and join-count statistics of spatial autocorrelation: formula_2 formula_3 where Ω = "p"("a")_0 − "q"("a")_(0/+) "p"("a")_+ and formula_4 = "p"("a")_0(1 − "p"("a")_+^2(2"q"("a")_(+/+) − 3) + "p"("a")_+("q"("a")_(+/+)^2 − 3)). "p"("a")_+ is occupancy; "q"("a")_(+/+) is the conditional probability that a randomly chosen adjacent quadrat of an occupied quadrat is also occupied. The conditional probability "q"("a")_(0/+) = 1 − "q"("a")_(+/+) is the absence probability in a quadrat adjacent to an occupied one; "a" and 4"a" are the grains. The R-code of the Bayesian estimation model has been provided elsewhere. The key point of the Bayesian estimation model is that the scaling pattern of species distribution, measured by occupancy and spatial pattern, can be extrapolated across scales. Later, Hui provided the Bayesian estimation model for continuously changing scales: formula_5 where "b", "c", and "h" are constants. This SPO becomes the Poisson model when "b" = "c" = 1. In the same paper, the scaling patterns of join-count spatial autocorrelation and multi-species association (or co-occurrence) were also provided by the Bayesian model, suggesting that "the Bayesian model can grasp the statistical essence of species scaling patterns." Implications for biological conservation. The probability of species extinction and ecosystem collapse increases rapidly as range size declines. In risk assessment protocols such as the IUCN Red List of Species or the IUCN Red List of Ecosystems, area of occupancy (AOO) is used as a standardized, complementary and widely applicable measure of risk spreading against spatially explicit threats. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
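As a supplementary illustration, the random-placement (Poisson) special case mentioned in the pattern description above lends itself to a very short Python sketch; the density and grain values are arbitrary examples, and this is not the Bayesian estimation model itself:

import math

def occupancy_poisson(mu, alpha):
    # P_alpha = 1 - exp(-mu * alpha): occupancy of alpha-sized cells at density mu
    return 1.0 - math.exp(-mu * alpha)

def density_from_occupancy(P_alpha, alpha):
    # Inverting the Poisson model to estimate abundance from presence-absence data
    return -math.log(1.0 - P_alpha) / alpha

for alpha in (1.0, 4.0):
    P = occupancy_poisson(0.5, alpha)            # occupancy at two different grains
    mu_hat = density_from_occupancy(P, alpha)    # recovers the density 0.5
    print(alpha, round(P, 3), round(mu_hat, 3))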
[ { "math_id": 0, "text": "P_a=P_0 a^{D/2-1} \\, " }, { "math_id": 1, "text": "P_{a_j}=f_0 f_1 \\cdots f_j. \\, " }, { "math_id": 2, "text": "p\\,{(4a)_+}=1 - \\frac{{\\Omega }^4}{\\mho }" }, { "math_id": 3, "text": "q\\,{(4a)_{+/+}}=\\frac{{\\Omega }^{10} - \n 2\\,{\\Omega }^4\\,\n {\\mho }^2 + {\\mho }^3}\n {{\\mho }^2\\,\n \\left( -{\\Omega }^4 + \n \\mho \\right) }" }, { "math_id": 4, "text": "\\mho" }, { "math_id": 5, "text": "P_a = 1-b c^{2a^{1/2}} h^a \\, " } ]
https://en.wikipedia.org/wiki?curid=12662478
12663171
Laplace limit
In mathematics, the Laplace limit is the maximum value of the eccentricity for which a solution to Kepler's equation, in terms of a power series in the eccentricity, converges. It is approximately 0.66274 34193 49181 58097 47420 97109 25290. Kepler's equation "M" = "E" − ε sin "E" relates the mean anomaly "M" to the eccentric anomaly "E" for a body moving in an ellipse with eccentricity ε. This equation cannot be solved for "E" in terms of elementary functions, but the Lagrange reversion theorem gives the solution as a power series in ε: formula_0 or in general formula_1 Laplace realized that this series converges for small values of the eccentricity, but diverges for any value of "M" other than a multiple of π if the eccentricity exceeds a certain value that does not depend on "M". The Laplace limit is this value. It is the radius of convergence of the power series. It is the unique real solution of the transcendental equation formula_2 No closed-form expression or infinite series is known for the Laplace limit. History. Laplace calculated the value 0.66195 in 1827. The Italian astronomer Francesco Carlini found the limit 0.66 five years before Laplace. Cauchy in 1829 gave the precise value 0.66274. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
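The transcendental equation above is straightforward to solve numerically; for example, a short Python bisection sketch (interval and iteration count chosen arbitrarily) reproduces the decimal expansion quoted at the start of the article:

import math

def f(x):
    s = math.sqrt(1.0 + x * x)
    return x * math.exp(s) / (1.0 + s) - 1.0   # zero at the Laplace limit

lo, hi = 0.6, 0.7          # f changes sign on this interval
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(mid) < 0.0:
        lo = mid
    else:
        hi = mid

print(lo)                  # approximately 0.6627434193...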
[ { "math_id": 0, "text": " E = M + \\sin(M) \\, \\varepsilon + \\tfrac12 \\sin(2M) \\, \\varepsilon^2 + \\left( \\tfrac38 \\sin(3M) - \\tfrac18 \\sin(M) \\right) \\, \\varepsilon^3 + \\cdots " }, { "math_id": 1, "text": " E = M \\;+\\; \\sum_{n=1}^{\\infty} \\frac{\\varepsilon^n}{2^{n-1}\\,n!} \\sum_{k=0}^{\\lfloor n/2\\rfloor} (-1)^k\\,\\binom{n}{k}\\,(n-2k)^{n-1}\\,\\sin((n-2k)\\,M) " }, { "math_id": 2, "text": "\\frac{x\\exp(\\sqrt{1+x^2})}{1+\\sqrt{1+x^2}}=1" } ]
https://en.wikipedia.org/wiki?curid=12663171
12666
Gluon
Elementary particle that mediates the strong force A gluon is a type of massless elementary particle that mediates the strong interaction between quarks, acting as the exchange particle for the interaction. Gluons are massless vector bosons, thereby having a spin of 1. Through the strong interaction, gluons bind quarks into groups according to quantum chromodynamics (QCD), forming hadrons such as protons and neutrons. Gluons carry the color charge of the strong interaction, thereby participating in the strong interaction as well as mediating it. Because gluons carry the color charge, QCD is more difficult to analyze than quantum electrodynamics (QED), where the photon carries no electric charge. The term was coined by Murray Gell-Mann in 1962 because of its similarity to an adhesive or glue that keeps the nucleus together. Together with the quarks, these particles were referred to as partons by Richard Feynman. Properties. The gluon is a vector boson, which means it has a spin of 1. While massive spin-1 particles have three polarization states, massless gauge bosons like the gluon have only two polarization states because gauge invariance requires the field polarization to be transverse to the direction that the gluon is traveling. In quantum field theory, unbroken gauge invariance requires that gauge bosons have zero mass. Experiments limit the gluon's rest mass (if any) to less than a few MeV/"c"². The gluon has negative intrinsic parity. Counting gluons. There are eight independent types of gluons in QCD. This is unlike the photon of QED or the three W and Z bosons of the weak interaction. Additionally, gluons are subject to the color charge phenomenon. Quarks carry three types of color charge; antiquarks carry three types of anticolor. Gluons carry both color and anticolor. This gives nine "possible" combinations of color and anticolor in gluons. The following is a list of those combinations (and their schematic names): formula_0 (red–antired), formula_1 (red–antigreen), formula_2 (red–antiblue), formula_3 (green–antired), formula_4 (green–antigreen), formula_5 (green–antiblue), formula_6 (blue–antired), formula_7 (blue–antigreen), and formula_8 (blue–antiblue). These "possible" combinations are only "effective" states, not the "actual" observed color states of gluons. To understand how they are combined, it is necessary to consider the mathematics of color charge in more detail. Color singlet states. The stable strongly interacting particles, including hadrons like the proton or the neutron, are observed to be "colorless". More precisely, they are in a "color singlet" state, mathematically analogous to a "spin" singlet state. Such states allow interaction with other color singlets, but not with other color states; because long-range gluon interactions do not exist, this illustrates that gluons in the singlet state do not exist either. The color singlet state is: formula_9 If one could measure the color of the state, there would be equal probabilities of it being red–antired, blue–antiblue, or green–antigreen. Eight color states. There are eight remaining independent color states corresponding to the "eight types" or "eight colors" of gluons. Since the states can be mixed together, there are multiple ways of presenting these states. These are known as the "color octet", and a commonly used list for each is given in terms of combinations of the color–anticolor states above. These are equivalent to the Gell-Mann matrices. The critical feature of these particular eight states is that they are linearly independent, and also independent of the singlet state, hence 3² − 1 or 2³ states. There is no way to add any combination of these states to produce any others. It is also impossible to add them to make rr̄, gḡ, or bb̄, the forbidden singlet state.
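Since the text notes that the octet states are equivalent to the Gell-Mann matrices, a small NumPy sketch can make the counting explicit: it builds the standard eight matrices and verifies that they are traceless (hence independent of the singlet, i.e. identity, direction) and mutually orthogonal, so linearly independent. This is purely illustrative and not part of any QCD software package.

import numpy as np

l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]])
l3 = np.array([[1, 0, 0], [0, -1, 0], [0, 0, 0]], dtype=complex)
l4 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)
l5 = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]])
l6 = np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)
l7 = np.array([[0, 0, 0], [0, 0, -1j], [0, 1j, 0]])
l8 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, -2]], dtype=complex) / np.sqrt(3)
octet = [l1, l2, l3, l4, l5, l6, l7, l8]

# Traceless: no overlap with the colour-singlet (identity) direction.
assert all(abs(np.trace(l)) < 1e-12 for l in octet)

# Orthogonal under the trace inner product, tr(l_a l_b) = 2 delta_ab,
# so the eight states are linearly independent.
gram = np.array([[np.trace(a @ b).real for b in octet] for a in octet])
assert np.allclose(gram, 2 * np.eye(8))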
There are many other possible choices, but all are mathematically equivalent, at least equally complicated, and give the same physical results. Group theory details. Formally, QCD is a gauge theory with SU(3) gauge symmetry. Quarks are introduced as spinors in "N"_f flavors, each in the fundamental representation (triplet, denoted 3) of the color gauge group, SU(3). The gluons are vectors in the adjoint representation (octets, denoted 8) of color SU(3). For a general gauge group, the number of force-carriers, like photons or gluons, is always equal to the dimension of the adjoint representation. For the simple case of SU("N"), the dimension of this representation is "N"² − 1. In group theory, there are no color singlet gluons because quantum chromodynamics has an SU(3) rather than a U(3) symmetry. There is no known "a priori" reason for one group to be preferred over the other, but as discussed above, the experimental evidence supports SU(3). If the group were U(3), the ninth (colorless singlet) gluon would behave like a "second photon" and not like the other eight gluons. Confinement. Since gluons themselves carry color charge, they participate in strong interactions. These gluon–gluon interactions constrain color fields to string-like objects called "flux tubes", which exert constant force when stretched. Due to this force, quarks are confined within composite particles called hadrons. This effectively limits the range of the strong interaction to about 10⁻¹⁵ meters, roughly the size of a nucleon. Beyond a certain distance, the energy of the flux tube binding two quarks increases linearly. At a large enough distance, it becomes energetically more favorable to pull a quark–antiquark pair out of the vacuum rather than increase the length of the flux tube. One consequence of the hadron-confinement property of gluons is that they are not directly involved in the nuclear forces between hadrons. The force mediators for these are other hadrons called mesons. Although in the normal phase of QCD single gluons may not travel freely, it is predicted that there exist hadrons that are formed entirely of gluons — called glueballs. There are also conjectures about other exotic hadrons in which real gluons (as opposed to virtual ones found in ordinary hadrons) would be primary constituents. Beyond the normal phase of QCD (at extreme temperatures and pressures), quark–gluon plasma forms. In such a plasma there are no hadrons; quarks and gluons become free particles. Experimental observations. Quarks and gluons (colored) manifest themselves by fragmenting into more quarks and gluons, which in turn hadronize into normal (colorless) particles, correlated in jets. As revealed at the 1978 summer conferences, the PLUTO detector at the electron-positron collider DORIS (DESY) produced the first evidence that the hadronic decays of the very narrow resonance Υ(9.46) could be interpreted as three-jet event topologies produced by three gluons. Later, published analyses by the same experiment confirmed this interpretation and also the spin = 1 nature of the gluon (see also the recollection and PLUTO experiments). In summer 1979, at higher energies at the electron-positron collider PETRA (DESY), again three-jet topologies were observed, now clearly visible and interpreted as qq̄ gluon bremsstrahlung, by the TASSO, MARK-J and PLUTO experiments (later in 1980 also by JADE). The spin = 1 property of the gluon was confirmed in 1980 by the TASSO and PLUTO experiments (see also the review).
In 1991 a subsequent experiment at the LEP storage ring at CERN again confirmed this result. The gluons play an important role in the elementary strong interactions between quarks and gluons, described by QCD and studied particularly at the electron-proton collider HERA at DESY. The number and momentum distribution of the gluons in the proton (gluon density) have been measured by two experiments, H1 and ZEUS, in the years 1996–2007. The gluon contribution to the proton spin has been studied by the HERMES experiment at HERA. The gluon density in the proton (when behaving hadronically) also has been measured. Color confinement is verified by the failure of free quark searches (searches of fractional charges). Quarks are normally produced in pairs (quark + antiquark) to compensate the quantum color and flavor numbers; however at Fermilab single production of top quarks has been shown. No glueball has been demonstrated. Deconfinement was claimed in 2000 at CERN SPS in heavy-ion collisions, and it implies a new state of matter: quark–gluon plasma, less interactive than in the nucleus, almost as in a liquid. It was found at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven in the years 2004–2010 by four contemporaneous experiments. A quark–gluon plasma state has been confirmed at the CERN Large Hadron Collider (LHC) by the three experiments ALICE, ATLAS and CMS in 2010. Jefferson Lab's Continuous Electron Beam Accelerator Facility, in Newport News, Virginia, is one of 10 Department of Energy facilities doing research on gluons. The Virginia lab was competing with another facility – Brookhaven National Laboratory, on Long Island, New York – for funds to build a new electron-ion collider. In December 2019, the US Department of Energy selected the Brookhaven National Laboratory to host the electron-ion collider. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "r\\bar{r}" }, { "math_id": 1, "text": "r\\bar{g}" }, { "math_id": 2, "text": "r\\bar{b}" }, { "math_id": 3, "text": "g\\bar{r}" }, { "math_id": 4, "text": "g\\bar{g}" }, { "math_id": 5, "text": "g\\bar{b}" }, { "math_id": 6, "text": "b\\bar{r}" }, { "math_id": 7, "text": "b\\bar{g}" }, { "math_id": 8, "text": "b\\bar{b}" }, { "math_id": 9, "text": "(r\\bar{r}+b\\bar{b}+g\\bar{g})/\\sqrt{3}." } ]
https://en.wikipedia.org/wiki?curid=12666
12666251
Mott scattering
In physics, Mott scattering, also referred to as spin-coupling inelastic Coulomb scattering, is the separation of the two spin states of an electron beam by scattering the beam off the Coulomb field of heavy atoms. It is named after Nevill Francis Mott, who first developed the theory. It is mostly used to measure the spin polarization of an electron beam. In lay terms, Mott scattering is similar to Rutherford scattering, but electrons are used instead of alpha particles because they do not interact via the strong interaction (only through the weak interaction and electromagnetism), which enables electrons to penetrate the atomic nucleus, giving valuable insight into nuclear structure. Description. The electrons are often fired at gold foil because gold has a high atomic number (Z), is non-reactive (does not form an oxide layer), and can be easily made into a thin film (reducing multiple scattering). The presence of a spin-orbit term in the scattering potential introduces a spin dependence in the scattering cross section. Two detectors at exactly the same scattering angle to the left and right of the foil count the number of scattered electrons. The asymmetry "A", given by: formula_0 is proportional to the degree of spin polarization "P" according to "A" = "SP", where "S" is the Sherman function. The Mott cross section formula is the mathematical description of the scattering of a high-energy electron beam from an atomic nucleus-sized positively charged point in space. The Mott scattering is the theoretical diffraction pattern produced by such a mathematical model. It is used as the beginning point in calculations in electron scattering diffraction studies. The equation for the Mott cross section includes an inelastic scattering term to take into account the recoil of the target proton or nucleus. It can also be corrected for relativistic effects of high-energy electrons, and for their magnetic moment. When an experimentally found diffraction pattern deviates from the mathematically derived Mott scattering, it gives clues as to the size and shape of an atomic nucleus. The reason is that the Mott cross section assumes only point-particle Coulombic and magnetic interactions between the incoming electrons and the target. When the target is a charged sphere rather than a point, additions to the Mott cross section equation (form factor terms) can be used to probe the distribution of the charge inside the sphere. The Born approximation of the diffraction of a beam of electrons by atomic nuclei is an extension of Mott scattering. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
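As a small illustration of the asymmetry relation above, a plain-Python sketch (the detector counts and the Sherman function value are invented example numbers for a hypothetical geometry):

def asymmetry(I_right, I_left):
    # A = (I_right - I_left) / (I_right + I_left)
    return (I_right - I_left) / (I_right + I_left)

def polarization(A, S):
    # From A = S * P, the beam polarization is P = A / S
    return A / S

A = asymmetry(10500, 9500)    # 0.05
P = polarization(A, S=0.25)   # 0.20, i.e. a 20% spin-polarized beam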
[ { "math_id": 0, "text": "A = \\frac{I^{\\rm right}-I^{\\rm left}}{I^{\\rm right}+I^{\\rm left}}" } ]
https://en.wikipedia.org/wiki?curid=12666251
1266658
Crystallization
Process by which a solid with a highly organized atomic or molecular structure forms Crystallization is the process by which solids form, where the atoms or molecules are highly organized into a structure known as a crystal. Some ways by which crystals form are precipitating from a solution, freezing, or more rarely deposition directly from a gas. Attributes of the resulting crystal depend largely on factors such as temperature, air pressure, cooling rate, and in the case of liquid crystals, time of fluid evaporation. Crystallization occurs in two major steps. The first is nucleation, the appearance of a crystalline phase from either a supercooled liquid or a supersaturated solvent. The second step is known as crystal growth, which is the increase in the size of particles and leads to a crystal state. An important feature of this step is that loose particles form layers at the crystal's surface and lodge themselves into open inconsistencies such as pores, cracks, etc. The majority of minerals and organic molecules crystallize easily, and the resulting crystals are generally of good quality, i.e. without visible defects. However, larger biochemical particles, like proteins, are often difficult to crystallize. The ease with which molecules will crystallize strongly depends on the intensity of either atomic forces (in the case of mineral substances), intermolecular forces (organic and biochemical substances) or intramolecular forces (biochemical substances). Crystallization is also a chemical solid–liquid separation technique, in which mass transfer of a solute from the liquid solution to a pure solid crystalline phase occurs. In chemical engineering, crystallization occurs in a crystallizer. Crystallization is therefore related to precipitation, although the result is not amorphous or disordered, but a crystal. Process. The crystallization process consists of two major events, "nucleation" and "crystal growth" which are driven by thermodynamic properties as well as chemical properties. "Nucleation" is the step where the solute molecules or atoms dispersed in the solvent start to gather into clusters, on the microscopic scale (elevating solute concentration in a small region), that become stable under the current operating conditions. These stable clusters constitute the nuclei. Therefore, the clusters need to reach a critical size in order to become stable nuclei. Such critical size is dictated by many different factors (temperature, supersaturation, etc.). It is at the stage of nucleation that the atoms or molecules arrange in a defined and periodic manner that defines the crystal structure – note that "crystal structure" is a special term that refers to the relative arrangement of the atoms or molecules, not the macroscopic properties of the crystal (size and shape), although those are a result of the internal crystal structure. The "crystal growth" is the subsequent size increase of the nuclei that succeed in achieving the critical cluster size. Crystal growth is a dynamic process occurring in equilibrium where solute molecules or atoms precipitate out of solution, and dissolve back into solution. Supersaturation is one of the driving forces of crystallization, as the solubility of a species is an equilibrium process quantified by Ksp. Depending upon the conditions, either nucleation or growth may be predominant over the other, dictating crystal size. Many compounds have the ability to crystallize with some having different crystal structures, a phenomenon called polymorphism. 
Certain polymorphs may be metastable, meaning that although they are not in thermodynamic equilibrium, they are kinetically stable and require some input of energy to initiate a transformation to the equilibrium phase. Each polymorph is in fact a different thermodynamic solid state, and crystal polymorphs of the same compound exhibit different physical properties, such as dissolution rate, shape (angles between facets and facet growth rates), melting point, etc. For this reason, polymorphism is of major importance in the industrial manufacture of crystalline products. Additionally, crystal phases can sometimes be interconverted by varying factors such as temperature, such as in the transformation of anatase to rutile phases of titanium dioxide. In nature. There are many examples of natural processes that involve crystallization. Geological time scale process examples include: Human time scale process examples include: Methods. Crystal formation can be divided into two types, where the first type of crystals is composed of a cation and an anion, also known as a salt, such as sodium acetate. The second type of crystals is composed of uncharged species, for example menthol. Crystal formation can be achieved by various methods, such as: cooling, evaporation, addition of a second solvent to reduce the solubility of the solute (technique known as antisolvent or drown-out), solvent layering, sublimation, changing the cation or anion, as well as other methods. The formation of a supersaturated solution does not guarantee crystal formation, and often a seed crystal or scratching the glass is required to form nucleation sites. A typical laboratory technique for crystal formation is to dissolve the solid in a solution in which it is partially soluble, usually at high temperatures to obtain supersaturation. The hot mixture is then filtered to remove any insoluble impurities. The filtrate is allowed to slowly cool. Crystals that form are then filtered and washed with a solvent in which they are not soluble, but which is miscible with the mother liquor. The process is then repeated to increase the purity in a technique known as recrystallization. For biological molecules in which the solvent channels continue to be present to retain the three-dimensional structure intact, microbatch crystallization under oil and vapor diffusion methods have been the common methods. Typical equipment. Equipment for the main industrial processes for crystallization. Thermodynamic view. The crystallization process appears to violate the second principle of thermodynamics. Whereas most processes that yield more orderly results are achieved by applying heat, crystals usually form at lower temperatures – especially by supercooling. However, due to the release of the heat of fusion during crystallization, the entropy of the universe increases, thus this principle remains unaltered. The molecules within a pure, "perfect crystal", when heated by an external source, will become liquid. This occurs at a sharply defined temperature (different for each type of crystal). As it liquefies, the complicated architecture of the crystal collapses. Melting occurs because the entropy ("S") gain in the system by spatial randomization of the molecules has overcome the enthalpy ("H") loss due to breaking the crystal packing forces: formula_0 formula_1 Regarding crystals, there are no exceptions to this rule. Similarly, when the molten crystal is cooled, the molecules will return to their crystalline form once the temperature falls below the turning point.
This is because the thermal randomization of the surroundings compensates for the loss of entropy that results from the reordering of molecules within the system. Such liquids that crystallize on cooling are the exception rather than the rule. The nature of a crystallization process is governed by both thermodynamic and kinetic factors, which can make it highly variable and difficult to control. Factors such as impurity level, mixing regime, vessel design, and cooling profile can have a major impact on the size, number, and shape of crystals produced. Dynamics. As mentioned above, a crystal is formed following a well-defined pattern, or structure, dictated by forces acting at the molecular level. As a consequence, during its formation process the crystal is in an environment where the solute concentration reaches a certain critical value, before changing status. Solid formation, impossible below the solubility threshold at the given temperature and pressure conditions, may then take place at a concentration higher than the theoretical solubility level. The difference between the actual value of the solute concentration at the crystallization limit and the theoretical (static) solubility threshold is called supersaturation and is a fundamental factor in crystallization. Nucleation. Nucleation is the initiation of a phase change in a small region, such as the formation of a solid crystal from a liquid solution. It is a consequence of rapid local fluctuations on a molecular scale in a homogeneous phase that is in a state of metastable equilibrium. Total nucleation is the sum effect of two categories of nucleation – primary and secondary. Primary nucleation. Primary nucleation is the initial formation of a crystal where there are no other crystals present or where, if there are crystals present in the system, they do not have any influence on the process. This can occur in two conditions. The first is homogeneous nucleation, which is nucleation that is not influenced in any way by solids. These solids include the walls of the crystallizer vessel and particles of any foreign substance. The second category, then, is heterogeneous nucleation. This occurs when solid particles of foreign substances cause an increase in the rate of nucleation that would otherwise not be seen without the existence of these foreign particles. Homogeneous nucleation rarely occurs in practice due to the high energy necessary to begin nucleation without a solid surface to catalyze the nucleation. Primary nucleation (both homogeneous and heterogeneous) has been modeled as follows: formula_2 where "B" is the number of nuclei formed per unit volume per unit time, "N" is the number of nuclei per unit volume, "kn" is a rate constant, "c" is the instantaneous solute concentration, "c"* is the solute concentration at saturation, ("c" − "c"*) is also known as supersaturation, "n" is an empirical exponent that can be as large as 10, but generally ranges between 3 and 4. Secondary nucleation. Secondary nucleation is the formation of nuclei attributable to the influence of the existing microscopic crystals in the magma. More simply put, secondary nucleation is when crystal growth is initiated with contact of other existing crystals or "seeds". The first type of known secondary crystallization is attributable to fluid shear, the other due to collisions between already existing crystals with either a solid surface of the crystallizer or with other crystals themselves. 
Fluid-shear nucleation occurs when liquid travels across a crystal at a high speed, sweeping away nuclei that would otherwise be incorporated into a crystal, causing the swept-away nuclei to become new crystals. Contact nucleation has been found to be the most effective and common method for nucleation. The benefits include the following: The following model, although somewhat simplified, is often used to model secondary nucleation: formula_3 where "k"1 is a rate constant, "MT" is the suspension density, "j" is an empirical exponent that can range up to 1.5, but is generally 1, "b" is an empirical exponent that can range up to 5, but is generally 2. Growth. Once the first small crystal, the nucleus, forms, it acts as a convergence point (if unstable due to supersaturation) for molecules of solute touching – or adjacent to – the crystal so that it increases its own dimension in successive layers. The pattern of growth resembles the rings of an onion, as shown in the picture, where each colour indicates the same mass of solute; this mass creates increasingly thin layers due to the increasing surface area of the growing crystal. The supersaturated solute mass the original nucleus may "capture" in a time unit is called the "growth rate", expressed in kg/(m²·h), and is a constant specific to the process. Growth rate is influenced by several physical factors, such as surface tension of the solution, pressure, temperature, relative crystal velocity in the solution, Reynolds number, and so forth. The main values to control are therefore: The first value is a consequence of the physical characteristics of the solution, while the others define a difference between a well- and poorly designed crystallizer. Size distribution. The appearance and size range of a crystalline product are extremely important in crystallization. If further processing of the crystals is desired, large crystals with uniform size are important for washing, filtering, transportation, and storage, because large crystals are easier to filter out of a solution than small crystals. Also, larger crystals have a smaller surface area to volume ratio, leading to a higher purity. This higher purity is due to less retention of mother liquor, which contains impurities, and a smaller loss of yield when the crystals are washed to remove the mother liquor. In special cases, for example during drug manufacturing in the pharmaceutical industry, small crystal sizes are often desired to improve drug dissolution rate and bio-availability. The theoretical crystal size distribution can be estimated as a function of operating conditions with a fairly complicated mathematical process called population balance theory (using population balance equations). Main crystallization processes. Some of the important factors influencing solubility are: So one may identify two main families of crystallization processes: This division is not really clear-cut, since hybrid systems exist, where cooling is performed through evaporation, thus obtaining at the same time a concentration of the solution. A crystallization process often referred to in chemical engineering is the fractional crystallization. This is not a different process, rather a special application of one (or both) of the above. Cooling crystallization. Application. Most chemical compounds, dissolved in most solvents, show the so-called "direct" solubility, that is, the solubility threshold increases with temperature.
So, whenever the conditions are favorable, crystal formation results from simply cooling the solution. Here "cooling" is a relative term: austenite crystals in a steel form well above 1000 °C. An example of this crystallization process is the production of Glauber's salt, a crystalline form of sodium sulfate. In the diagram, where equilibrium temperature is on the x-axis and equilibrium concentration (as mass percent of solute in saturated solution) is on the y-axis, it is clear that sulfate solubility quickly decreases below 32.5 °C. Assuming a saturated solution at 30 °C, by cooling it to 0 °C (note that this is possible thanks to the freezing-point depression), the precipitation of a mass of sulfate occurs corresponding to the change in solubility from 29% (equilibrium value at 30 °C) to approximately 4.5% (at 0 °C) – actually a larger crystal mass is precipitated, since sulfate entrains hydration water, and this has the side effect of increasing the final concentration. There are limitations in the use of cooling crystallization: Cooling crystallizers. The simplest cooling crystallizers are tanks provided with a mixer for internal circulation, where temperature decrease is obtained by heat exchange with an intermediate fluid circulating in a jacket. These simple machines are used in batch processes, as in the processing of pharmaceuticals, and are prone to scaling. Batch processes normally provide a relatively variable quality of the product along with the batch. The "Swenson-Walker" crystallizer is a model, specifically conceived by Swenson Co. around 1920, having a semicylindric horizontal hollow trough in which a hollow screw conveyor or some hollow discs, in which a refrigerating fluid is circulated, plunge during rotation on a longitudinal axis. The refrigerating fluid is sometimes also circulated in a jacket around the trough. Crystals precipitate on the cold surfaces of the screw/discs, from which they are removed by scrapers and settle on the bottom of the trough. The screw, if provided, pushes the slurry towards a discharge port. A common practice is to cool the solutions by flash evaporation: when a liquid at a given temperature T0 is transferred into a chamber at a pressure P1 such that the liquid saturation temperature T1 at P1 is lower than T0, the liquid will release heat according to the temperature difference and a quantity of solvent, whose total latent heat of vaporization equals the difference in enthalpy. In simple words, the liquid is cooled by evaporating a part of it. In the sugar industry, vertical cooling crystallizers are used to exhaust the molasses in the last crystallization stage downstream of vacuum pans, prior to centrifugation. The massecuite enters the crystallizers at the top, and cooling water is pumped through pipes in counterflow. Evaporative crystallization. Another option is to obtain, at an approximately constant temperature, the precipitation of the crystals by increasing the solute concentration above the solubility threshold. To obtain this, the solute/solvent mass ratio is increased using the technique of evaporation. This process is insensitive to change in temperature (as long as the hydration state remains unchanged). All considerations on control of crystallization parameters are the same as for the cooling models. Evaporative crystallizers. Most industrial crystallizers are of the evaporative type, such as the very large sodium chloride and sucrose units, whose production accounts for more than 50% of the total world production of crystals.
The most common type is the "forced circulation" (FC) model (see evaporator). A pumping device (a pump or an axial flow mixer) keeps the crystal slurry in homogeneous suspension throughout the tank, including the exchange surfaces; by controlling pump flow, control of the contact time of the crystal mass with the supersaturated solution is achieved, together with reasonable velocities at the exchange surfaces. The Oslo crystallizer, mentioned above, is a refinement of the evaporative forced circulation crystallizer, now equipped with a large settling zone for crystals to increase the retention time (usually low in the FC) and to roughly separate heavy slurry zones from clear liquid. Evaporative crystallizers tend to yield a larger average crystal size and to narrow the crystal size distribution curve. DTB crystallizer. Whatever the form of the crystallizer, to achieve effective process control it is important to control the retention time and the crystal mass, to obtain the optimum conditions in terms of crystal specific surface and the fastest possible growth. This is achieved by a separation – to put it simply – of the crystals from the liquid mass, in order to manage the two flows in a different way. The practical way is to perform a gravity settling to be able to extract (and possibly recycle separately) the (almost) clear liquid, while managing the mass flow around the crystallizer to obtain a precise slurry density elsewhere. A typical example is the DTB ("Draft Tube and Baffle") crystallizer, an idea of Richard Chisum Bennett (a Swenson engineer and later President of Swenson) at the end of the 1950s. The DTB crystallizer (see images) has an internal circulator, typically an axial flow mixer – yellow – pushing upwards in a draft tube, while outside the crystallizer there is a settling area in an annulus; in it the exhaust solution moves upwards at a very low velocity, so that large crystals settle – and return to the main circulation – while only the fines, below a given grain size, are extracted and eventually destroyed by increasing or decreasing temperature, thus creating additional supersaturation. A quasi-perfect control of all parameters is achieved, as DTB crystallizers offer superior control over crystal size and characteristics. This crystallizer, and the derivative models (Krystal, CSC, etc.), could be the ultimate solution if not for a major limitation in the evaporative capacity, due to the limited diameter of the vapor head and the relatively low external circulation, which does not allow large amounts of energy to be supplied to the system. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "T(S_\\text{liquid} - S_\\text{solid}) > H_\\text{liquid} - H_\\text{solid}," }, { "math_id": 1, "text": "G_\\text{liquid} < G_\\text{solid}." }, { "math_id": 2, "text": "B = \\dfrac{dN}{dt} = k_n(c - c^*)^n," }, { "math_id": 3, "text": "B = \\dfrac{dN}{dt} = k_1 M_T^j(c - c^*)^b," } ]
https://en.wikipedia.org/wiki?curid=1266658
1266713
Cayley's formula
Number of spanning trees of a complete graph In mathematics, Cayley's formula is a result in graph theory named after Arthur Cayley. It states that for every positive integer formula_0, the number of trees on formula_0 labeled vertices is formula_1. The formula equivalently counts the number of spanning trees of a complete graph with labeled vertices (sequence in the OEIS). Proof. Many proofs of Cayley's tree formula are known. One classical proof of the formula uses Kirchhoff's matrix tree theorem, a formula for the number of spanning trees in an arbitrary graph involving the determinant of a matrix. Prüfer sequences yield a bijective proof of Cayley's formula. Another bijective proof, by André Joyal, finds a one-to-one transformation between "n"-node trees with two distinguished nodes and maximal directed pseudoforests. A proof by double counting due to Jim Pitman counts in two different ways the number of different sequences of directed edges that can be added to an empty graph on "n" vertices to form from it a rooted tree. History. The formula was first discovered by Carl Wilhelm Borchardt in 1860, and proved via a determinant. In a short 1889 note, Cayley extended the formula in several directions, by taking into account the degrees of the vertices. Although he referred to Borchardt's original paper, the name "Cayley's formula" became standard in the field. Other properties. Cayley's formula immediately gives the number of labelled rooted forests on "n" vertices, namely ("n" + 1)^("n" − 1). Each labelled rooted forest can be turned into a labelled tree with one extra vertex, by adding a vertex with label "n" + 1 and connecting it to all roots of the trees in the forest. There is a close connection with rooted forests and parking functions, since the number of parking functions on "n" cars is also ("n" + 1)^("n" − 1). A bijection between rooted forests and parking functions was given by M. P. Schützenberger in 1968. Generalizations. The following generalizes Cayley's formula to labelled forests: Let "T""n","k" be the number of labelled forests on "n" vertices with "k" connected components, such that vertices 1, 2, ..., "k" all belong to different connected components. Then "T""n","k" = "k" "n"^("n" − "k" − 1). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
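As a concrete check of the formula, the following Python sketch counts labelled trees by brute force for small n and compares the count with n^(n−2). The enumeration is exponential and is meant only as an illustration, not as a practical algorithm.

# Brute-force check of Cayley's formula: for each small n, count the labelled
# trees on vertices {0, ..., n-1} by enumerating (n-1)-edge subsets of the
# complete graph and testing connectivity, then compare with n^(n-2).
from itertools import combinations

def count_labelled_trees(n):
    vertices = range(n)
    all_edges = list(combinations(vertices, 2))
    count = 0
    # A tree on n vertices has exactly n - 1 edges, so only those subsets matter.
    for edges in combinations(all_edges, n - 1):
        parent = list(vertices)          # union-find to test connectivity
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        components = n
        for u, v in edges:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                components -= 1
        if components == 1:              # n-1 edges and connected => a tree
            count += 1
    return count

for n in range(2, 7):
    assert count_labelled_trees(n) == n ** (n - 2)
    print(n, n ** (n - 2))   # 1, 3, 16, 125, 1296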
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "n^{n-2}" } ]
https://en.wikipedia.org/wiki?curid=1266713
126688
Prevalence
Number of disease cases in a given population at a specific time In epidemiology, prevalence is the proportion of a particular population found to be affected by a medical condition (typically a disease or a risk factor such as smoking or seatbelt use) at a specific time. It is derived by comparing the number of people found to have the condition with the total number of people studied and is usually expressed as a fraction, a percentage, or the number of cases per 10,000 or 100,000 people. Prevalence is most often used in questionnaire studies. Difference between prevalence and incidence. Prevalence is the number of disease cases "present "in a particular population at a given time, whereas incidence is the number of new cases that "develop "during a specified time period. Prevalence answers "How many people have this disease right now?" or "How many people have had this disease during this time period?". Incidence answers "How many people acquired the disease [during a specified time period]?". However, mathematically, prevalence is proportional to the product of the incidence and the average duration of the disease. In particular, when the prevalence is low (&lt;10%), the relationship can be expressed as: formula_0 Caution must be practiced as this relationship is only applicable when the following two conditions are met: 1) prevalence is low and 2) the duration is constant (or an average can be taken). A general formulation requires differential equations. Examples and utility. In science, "prevalence" describes a proportion (typically expressed as a percentage). For example, the prevalence of obesity among American adults in 2001 was estimated by the U. S. Centers for Disease Control (CDC) at approximately 20.9%. Prevalence is a term that means being widespread and it is distinct from incidence. Prevalence is a measurement of "all" individuals affected by the disease at a particular time, whereas incidence is a measurement of the number of "new" individuals who contract a disease during a particular period of time. Prevalence is a useful parameter when talking about long-lasting diseases, such as HIV, but incidence is more useful when talking about diseases of short duration, such as chickenpox. Uses. Lifetime prevalence. Lifetime prevalence (LTP) is the proportion of individuals in a population that at some point in their life (up to the time of assessment) have experienced a "case", e.g., a disease; a traumatic event; or a behavior, such as committing a crime. Often, a 12-month prevalence (or some other type of "period prevalence") is provided in conjunction with lifetime prevalence. "Point prevalence" is the prevalence of disorder at a specific point in time (a month or less). "Lifetime morbid risk" is "the proportion of a population that might become afflicted with a given disease at any point in their lifetime." Period prevalence. Period prevalence is the proportion of the population with a given disease or condition over a specific period of time. It could describe how many people in a population had a cold over the cold season in 2006, for example. It is expressed as a percentage of the population and can be described by the following formula: Period prevalence (proportion) = Number of cases that existed in a given period ÷ Number of people in the population during this period The relationship between incidence (rate), point prevalence (ratio) and period prevalence (ratio) is easily explained via an analogy with photography. 
Point prevalence is akin to a flashlit photograph: what is happening at this instant frozen in time. Period prevalence is analogous to a long exposure (seconds, rather than an instant) photograph: the number of events recorded in the photo whilst the camera shutter was open. In a movie each frame records an instant (point prevalence); by looking from frame to frame one notices new events (incident events) and can relate the number of such events to a period (number of frames); see incidence rate. Point prevalence. Point prevalence is a measure of the proportion of people in a population who have a disease or condition at a particular time, such as a particular date. It is like a snapshot of the disease in time. It can be used for statistics on the occurrence of chronic diseases. This is in contrast to period prevalence which is a measure of the proportion of people in a population who have a disease or condition over a specific period of time, say a season, or a year. Point prevalence can be described by the formula: Prevalence = Number of existing cases on a specific date ÷ Number of people in the population on this date Limitations. It can be said that a very small error applied over a very large number of individuals (that is, those who are "not affected" by the condition in the general population during their lifetime; for example, over 95%) produces a relevant, non-negligible number of subjects who are incorrectly classified as having the condition or any other condition which is the object of a survey study: these subjects are the so-called false positives; such reasoning applies to the 'false positive' but not the 'false negative' problem where we have an error applied over a relatively very small number of individuals to begin with (that is, those who are "affected" by the condition in the general population; for example, less than 5%). Hence, a very high percentage of subjects who seem to have a history of a disorder at interview are false positives for such a medical condition and apparently never developed a fully clinical syndrome. A different but related problem in evaluating the public health significance of psychiatric conditions has been highlighted by Robert Spitzer of Columbia University: fulfillment of diagnostic criteria and the resulting diagnosis do not necessarily imply need for treatment. A well-known statistical problem arises when ascertaining rates for disorders and conditions with a relatively low population prevalence or base rate. Even assuming that lay interview diagnoses are highly accurate in terms of sensitivity and specificity and their corresponding area under the ROC curve (that is, AUC, or area under the receiver operating characteristic curve), a condition with a relatively low prevalence or base-rate is bound to yield high false positive rates, which exceed false negative rates; in such a circumstance a limited positive predictive value, PPV, yields high false positive rates even in presence of a specificity which is very close to 100%. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
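The base-rate problem described above can be illustrated numerically. The following Python sketch uses invented figures (2% prevalence, 95% sensitivity, 95% specificity) to show how a low-prevalence condition yields far more false positives than true positives, and hence a modest positive predictive value, even though both sensitivity and specificity look high.

# Illustration of the base-rate problem: all figures below are invented,
# not taken from any particular study.

def screening_outcomes(prevalence, sensitivity, specificity, population=100_000):
    affected = population * prevalence
    unaffected = population - affected
    true_pos = affected * sensitivity
    false_neg = affected - true_pos
    false_pos = unaffected * (1.0 - specificity)
    true_neg = unaffected - false_pos
    ppv = true_pos / (true_pos + false_pos)   # positive predictive value
    return true_pos, false_pos, false_neg, true_neg, ppv

# 2% prevalence, 95% sensitivity, 95% specificity:
tp, fp, fn, tn, ppv = screening_outcomes(0.02, 0.95, 0.95)
print(round(tp), round(fp), round(ppv, 2))
# 1900 true positives, 4900 false positives, PPV of roughly 0.28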
[ { "math_id": 0, "text": "Prevalence = incidence \\times duration" } ]
https://en.wikipedia.org/wiki?curid=126688
12669
GM
GM or Gm may refer to: &lt;templatestyles src="Template:TOC_right/styles.css" /&gt; See also. Topics referred to by the same term &lt;templatestyles src="Dmbox/styles.css" /&gt; This disambiguation page lists articles associated with the title GM.
[ { "math_id": 0, "text": "\\mu=GM \\ " } ]
https://en.wikipedia.org/wiki?curid=12669
12669621
Nanosyntax
Approach to linguistic syntax Nanosyntax is an approach to syntax where the terminal nodes of syntactic parse trees may be reduced to units smaller than a morpheme. Each unit may stand as an irreducible element and not be required to form a further "subtree." Due to its reduction to the smallest terminal possible, the terminals are smaller than morphemes. Therefore, morphemes and words cannot be itemised as a single terminal, and instead are composed by several terminals. As a result, nanosyntax can serve as a solution to phenomena that are inadequately explained by other theories of syntax. Some recent work in theoretical linguistics suggests that the "atoms" of syntax are much smaller than words or morphemes. It then follows that the responsibility of syntax is not limited to ordering "preconstructed" words. Instead, within the framework of nanosyntax, the words are derived entities built into syntax, rather than primitive elements supplied by a lexicon. History. Theoretical context. Nanosyntax arose within the context of other syntactic theories, primarily Cartography and Distributed Morphology. Cartographic syntax theories were highly influential for the thought behind nanosyntax, and the theories share many commonalities. Cartography seeks to provide a syntactic theory that fits within Universal Grammar by charting building blocks and structures of syntax present in all languages. Because Cartography is grounded in empirical evidence, smaller and more detailed syntactic units and structures were being developed to accommodate new linguistic data. Cartography also syntacticizes various domains of grammar, particularly semantics, to varying degrees in different frameworks. For example, elements of semantics which serve grammatical functions, such as features conveying number, tense, or case, are viewed as being a part of semantics. This trend towards including other grammatical domains within syntax is also reflected in nanosyntax. Other elements of cartography that are present in nanosyntax include a universal merge order syntactic categories and right branching trees/leftward movement exclusively. However, cartographic syntax conceptualizes the lexicon as a pre-syntactic repository, which contrasts with the Nanosyntactic view of the lexicon/syntax. Distributed Morphology provides an alternative to Lexicalist approaches to how the lexicon and syntax interact, that is, with words independently created in the lexicon and then organized using syntax. In Distributed Morphology, the lexicon does not function independently and is instead distributed across many linguistic processes. Both Distributed Morphology and nanosyntax are late insertion models, meaning that syntax is viewed as a pre-lexical/phonological process, with syntactic categories as abstract concepts. Additionally, both theories see syntax as responsible for both sentence- and word-level structure. Despite their many similarities, nanosyntax and Distributed Morphology still differ in a few key areas, particularly with regards to the architecture of how they theorize grammatical domains interacting. Distributed Morphology makes use of a presyntactic list of abstracted roots, functional morphemes, and vocabulary insertion which follows syntactic processes. 
In contrast, nanosyntax has syntax, morphology, and semantics working simultaneously as part of one domain which interacts throughout the syntactic process to apply lexical elements (the lexicon is a single domain in nanosyntax, while it is spread over multiple domains in Distributed Morphology). See the section Tools of nanosyntax below for more information. Nanosyntactic theory is in direct conflict with theories that adopt views of the lexicon as an independent domain which generates lexical entries apart from any other grammatical domain. An example of such a theory is the Lexical Integrity Hypothesis, which states that syntax has no access to the internal structure of lexical items. Reasoning. By adopting a theoretic architecture of grammar which does not separate syntactic, morphological, and semantic processes and by allowing terminals to represent sub-morphemic information, nanosyntax is equipped to address various failings and areas of uncertainty in previous theories. One example that supports these tools of nanosyntax is idioms, in which a single lexical item is represented using multiple words whose meaning cannot be determined cumulatively. Because terminals in nanosyntax represent sub-morphemic information, a single morpheme is able to span several terminals, thus creating a subtree. This accommodates the structure of idioms, which are best represented as a subtree representing one morpheme. More evidence for the need for a Nanosyntactic analysis include analyzing irregular plural noun forms and irregular verb inflection (described in more detail in the Nanosyntactic Operations section) and analyzing morphemes which contain multiple grammatical functions (described in more detail in the Tools section). Nanosyntactic operations. Nanosyntax is a theory that seeks to fill in holes left by other theories when they seek to explain phenomena in language. The most notable phenomena that nanosyntax tackles is that of irregular conjugation. For example, "goose" is irregular in that its plural form is not "gooses", but rather, "geese". This poses a problem to simple syntax, as without additional rules and allowances, "geese" should be found to be a suboptimal candidate for the plural of "goose" in comparison to "gooses". Possible solutions. There are three manners by which syntacticians may attempt to resolve this. The first is a word-based treatment. In the above examples, “duck”, “ducks”, “goose”, and “geese” are all counted as separate heads under the category of nouns. Whether a word is singular or plural, is then marked in the lexical entry, and there does not exist a Number head with which affixes can be included to modify the root word. This theory requires significant work on the part of the speaker to retrieve the correct word. It is also considered lacking in the face of morphological concepts such as that displayed by the Wug test wherein children are able to correctly conjugate a previously unheard nonsense noun from its singular to its plural. Distributed Morphology attempts to tackle the question through the process of fusion. Fusion is the process in which a noun head and its numeral head may fuse together under certain parameters to derive an irregular plural. In the above example, the plural of “duck” would simply select its plural allomorph “ducks”, and the plural of “goose” would select its plural allomorph “geese”, created through the fusion of “goose” and “-s”. In this way, distributed morphology is head-based. 
However, this theory still does not provide a reason as to why "geese" is a preferable, more optimal candidate for the plural of "goose" than "gooses". Nanosyntax approaches this dilemma by suggesting that rather than each word being a head, it is instead a phrase and can therefore be made into a subtree. Within the tree, heads can be assigned to override other heads in specific contexts. For example, if there is a head that says "-s" is added to a noun to turn it from a singular noun to a plural noun, but a head overrides it in the case of an irregularly conjugated plural noun such as "goose", it will select for the operation of the superseding head. Since it uses a formula and not rote memorisation of lexical items, it bypasses the challenges brought forth by a word-based treatment, and due to the arrangement of heads and their precedence, also provides a solution to the optimality concerns of Distributed Morphology. Nanosyntax functions based on two principles: phrasal lexicalisation and the Elsewhere Principle. Phrasal lexicalisation. Phrasal lexicalisation is the concept that proposes that only lexical items can constitute terminal nodes. When this principle is applied, we can say that in regular plural nouns, there is no special lexicalisation (denoted in the example below using X) that needs to apply, and so standard pluralisation rules apply. The following is an example using "duck", where, because there is no additional lexicalisation of the plural noun, an -s is added to pluralise the noun: X ↔ [PlP [NP DUCK] Pl]; duck ↔ [NP DUCK]; s ↔ Pl. This principle also allows for a word like "geese" to lexicalise [goose [Pl0]]. When an additional lexicalisation is present, instead of following the standard addition of -s to pluralise the noun, the lexicalisation rule takes over in the following manner: geese ↔ [PlP [NP GOOSE] Pl]. Elsewhere Principle. The Elsewhere Principle seeks to provide a solution to the question of which lexicalisation applies to the noun in question. In simple terms, the lexicalisation that is more specific will always take precedence over a more general lexicalisation. As illustrated, if a syntactic structure S can be lexicalised either by A ↔ [XP X [YP Y [ZP Z ]]] or by B ↔ [YP Y [ZP Z ]], B will win out over A because B lexicalises in a more specific situation whereas A lexicalises more generally. This solves the problem that Distributed Morphology faces when determining the optimal pluralisation for irregular nouns. Observable consequences. Caha proposed that there is a hierarchy in case, as follows from broadest to narrowest: Dative, Genitive, Accusative, Nominative. Caha also suggested that each of these cases could be broken down into its most basic structures, each of which is a syntactic terminal, as follows: Dative = [WP W [XP X [YP Y [ZP Z ]]]] Genitive = [XP X [YP Y [ZP Z ]]] Accusative = [YP Y [ZP Z ]] Nominative = [ZP Z ] This is further outlined below, in the section Morphological Containment/Nesting. As each case is formed from nested sets, it is possible for portions of the case structure to be lexicalised separately. Therefore, there are several possibilities in syncretism patterns, namely AAAA, AAAB, AABB, ABBB, AABC, ABBC and ABCC. Some arrangements do not appear as possibilities because of the constraints laid down by the Elsewhere Principle. Notably, once there has been a switch to a separate lexicalisation, the earlier lexicalisations cannot return.
In other words, there are no occurrences where once A turns to B or B to C, A or B reappears respectively. The Elsewhere Principle says that narrower lexicalisations win over broader lexicalisations, and once a narrower lexicalisation has been selected for, the broader lexicalisation will not reappear. Nanosyntax of specific categories. Nanosyntactic analyses have been developed for specific lexical categories, including nouns and prepositions. Nouns. Nanosyntax has been found to be a useful analysis for explaining properties of nouns, more specifically patterns in their affixes. Across languages, syntacticians have used principles of nanosyntax such as Spellout (or Phrasal Lexicalisation) and the Elsewhere Principle (described above), and Cyclic Override (i.e. multiple spellout processes) to show that the structure of nouns are smaller than individual morphemes. These structures are underlying in our syntax, and it is the structure which dictates the morphemes as opposed to morphemes dictating structure. Descriptive analyses like those below, where morphemes are broken down into smaller units and given their own sub-morphemic structure, are possible using a nanosyntactic approach where morphemes are able to lexicalize multiple syntactic tree terminals. English plural morpheme. The structure of irregular plural nouns in English, and why they surface in the lexicon can be explained using a nanosyntactic approach. For example, the plural of "mouse" is not "*mouses" (* indicates ungrammaticality), it is "mice". The structure of a plural noun can be broken down into the following morphemes: [Noun Plural] The lexical item "mice" is able to lexicalize both the noun and plural morphemes in one singular morpheme because it is both a larger and more specific structure than "*mouse-s". This means that is more favourable because it spells out an entire syntactic sub-tree rather than an individual daughter node. Nguni noun class prefixes. Research has been done on Nguni languages to develop a structure for the noun class prefixes. Each of the prefixes has a layered structure whereby different layers interact with other morphemes in the language. As a result, different syntactic parts of the morpheme will surface in different environments. The complete Class 2 prefix "aba-" surfaces in the noun "aba-fundi" ('the/some students'). However in the vocative case, the initial vowel is not present: "Molweni, (*a)ba-fundi" ('Hi students'). The initial vowel is syntactically independent from the rest of the morpheme, able to appear in some environments and not others. Nanosyntax provides an explanation by positing that the initial vowel has a separate node from the rest of the morpheme in a subtree. Prepositions. There lies some uncertainty in deriving prepositions in nanosyntax. Prepositions must precede the item with which they combine and not be moved due to Spellout (i.e., the need to replace a portion of the syntactic tree with a lexical item). Three proposed approaches that aim to justify this are spanning, head movement, and the existence of an additional workspace for complex heads. Spanning refers to the operation in which the categorical features of a lexical item can associate it with various heads (i.e., a head and its related heads, determined by mutually selected maximal projections). The approach restricts that all associated heads must be contiguous, but the item need not form a constituent. 
Under this approach, prepositions may be antecedently lexicalized even if they do not hold constituent status. Head movement has been suggested to be responsible for the order of morphemes within a word. The Nanosyntactic approach results in an ordering of affixes that poses a problem for head movement, which has caused some debate. This motivates the argument posited by some researchers that phrasal movements are responsible for specific orderings of morphemes, meaning that head movement is dispensable and only need be understood as a special case of phrasal movement called “roll-up.” Nonetheless, other researchers abide by movement restrictions as they only apply movement for constituents containing a head. Such attempts to keep movement operations simple make it possible for those following a Nanosyntactic approach to use the same operations as conventional minimalist syntax (in which terminal nodes are lexical items). The suggestion of an additional workspace for complex heads conceptualizes that the prefixal element is created in a separate space. Thereafter, it combines antecedently with the remaining structure in the primary workspace, allowing the assembled item to maintain its internal ordering of features. Tools. Nanosyntax uses a handful of tools in order to map out fine-grained elements of the language being analyzed. Beyond Spellout Principles, there are three main tools for this system based on the writings of Baunaz, Haegeman, De Clercq, and Lander in "Exploring Nanosyntax". Semantics. The universal structure of compositionality is used in order for mapping within sentences semantically. This deals with mapping which feature words are composed of which structures a given word is semantically "constructed on". Semantic considerations impact the parameters of structural seize of a sentence, based on semantic categories of things such as verbs. This is an important guiding feature on what elements of syntax need to be aligned with semantic markers. Syncretism. Syncretism has played a central role in the development of nanosyntax. This system combines two distinct morphosyntactic structures on the surface of a sentence: such as two grammar functions that are contained within a single lexical form. An example of this could be something like the French "à" which can be used to indicate a location or a goal; this is therefore a Location-Goal syncretism. This observation of a syncretism comes from work investigating patterns of the readings of words such as goal "to", route "via", and location "at" cross-linguistically performed by linguistics as suggested by Svenonius. Case syncretism have been determined to only be possible with adjacent cases, based on the ABA theorem. This therefore can be used to target adjacent elements in the ordering of cases, such as nominative and accusative cases in languages such as English. Through using syncretism in nanosyntax, a universal order of cases can be identified, through determining which cases sit beside one another. This finding allows linguists to understand which features are present, as well as their order. Morphological containment. formula_0The nesting of cases assumed in nanosyntax. Morphological containment relates to the hierarchy of linear order in syntactic structures. Syncretism may reveal linear order, but is unable to determine in which direction the linear order occurs. This is where morphological containment is required. It is used in this context to posit the hierarchy of cases. 
Syncretism can determine the linear order of cases is COM &gt; INS &gt; DAT &gt; GEN &gt; ACC &gt; NOM "or" NOM &gt; ACC &gt; GEN &gt; DAT &gt; INS &gt; COM, but morphological containment decides whether it is nominative or comitative initial. These case features can be understood as sets of each other, where features build on top of one another, with the first feature being a singleton, but the next feature is the first and second nested within itself- and so on. These sets can be referred to as the above mentioned features. Alternatively, to simplify the nesting of the features, one can label them as K1/etc as proposed by Pavel Caha. Arguments for nominative case being the simplest and first case can be linked to its simplicity in structure and features. Examples are found in natural language which suggest an order beginning with NOM and ending with COM, such as in West Tocharian, where the ACC plural ending -m is found nested in the GEN/DAT ending -mts. This is a surface representation of the ordering of case through use of nesting in nanosyntax. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n\\mathsf{\n\\underbrace{\\qquad K_6 \\qquad >\n\\underbrace{\\qquad K_5 \\qquad >\n\\underbrace{\\qquad K_4 \\qquad >\n\\underbrace{\\qquad K_3 \\qquad >\n\\underbrace{\\qquad K_2 \\qquad >\n\\underbrace{\\qquad K_1 \\qquad}\n_{NOM}}\n_{ACC}}\n_{GEN}}\n_{DAT}}\n_{INS}}\n_{COM}}\n" } ]
https://en.wikipedia.org/wiki?curid=12669621
1267
Alpha decay
Type of radioactive decay Alpha decay or α-decay is a type of radioactive decay in which an atomic nucleus emits an alpha particle (helium nucleus) and thereby transforms or "decays" into a different atomic nucleus, with a mass number that is reduced by four and an atomic number that is reduced by two. An alpha particle is identical to the nucleus of a helium-4 atom, which consists of two protons and two neutrons. It has a charge of and a mass of . For example, uranium-238 decays to form thorium-234. While alpha particles have a charge , this is not usually shown because a nuclear equation describes a nuclear reaction without considering the electrons – a convention that does not imply that the nuclei necessarily occur in neutral atoms. Alpha decay typically occurs in the heaviest nuclides. Theoretically, it can occur only in nuclei somewhat heavier than nickel (element 28), where the overall binding energy per nucleon is no longer a maximum and the nuclides are therefore unstable toward spontaneous fission-type processes. In practice, this mode of decay has only been observed in nuclides considerably heavier than nickel, with the lightest known alpha emitter being the second lightest isotope of antimony, 104Sb. Exceptionally, however, beryllium-8 decays to two alpha particles. Alpha decay is by far the most common form of cluster decay, where the parent atom ejects a defined daughter collection of nucleons, leaving another defined product behind. It is the most common form because of the combined extremely high nuclear binding energy and relatively small mass of the alpha particle. Like other cluster decays, alpha decay is fundamentally a quantum tunneling process. Unlike beta decay, it is governed by the interplay between both the strong nuclear force and the electromagnetic force. Alpha particles have a typical kinetic energy of 5 MeV (or ≈ 0.13% of their total energy, 110 TJ/kg) and have a speed of about 15,000,000 m/s, or 5% of the speed of light. There is surprisingly small variation around this energy, due to the strong dependence of the half-life of this process on the energy produced. Because of their relatively large mass, the electric charge of and relatively low velocity, alpha particles are very likely to interact with other atoms and lose their energy, and their forward motion can be stopped by a few centimeters of air. Approximately 99% of the helium produced on Earth is the result of the alpha decay of underground deposits of minerals containing uranium or thorium. The helium is brought to the surface as a by-product of natural gas production. History. Alpha particles were first described in the investigations of radioactivity by Ernest Rutherford in 1899, and by 1907 they were identified as He2+ ions. By 1928, George Gamow had solved the theory of alpha decay via tunneling. The alpha particle is trapped inside the nucleus by an attractive nuclear potential well and a repulsive electromagnetic potential barrier. Classically, it is forbidden to escape, but according to the (then) newly discovered principles of quantum mechanics, it has a tiny (but non-zero) probability of "tunneling" through the barrier and appearing on the other side to escape the nucleus. Gamow solved a model potential for the nucleus and derived, from first principles, a relationship between the half-life of the decay, and the energy of the emission, which had been previously discovered empirically and was known as the Geiger–Nuttall law. Mechanism. 
The nuclear force holding an atomic nucleus together is very strong, in general much stronger than the repulsive electromagnetic forces between the protons. However, the nuclear force is also short-range, dropping quickly in strength beyond about 3 femtometers, while the electromagnetic force has an unlimited range. The strength of the attractive nuclear force keeping a nucleus together is thus proportional to the number of the nucleons, but the total disruptive electromagnetic force of proton-proton repulsion trying to break the nucleus apart is roughly proportional to the square of its atomic number. A nucleus with 210 or more nucleons is so large that the strong nuclear force holding it together can just barely counterbalance the electromagnetic repulsion between the protons it contains. Alpha decay occurs in such nuclei as a means of increasing stability by reducing size. One curiosity is why alpha particles, helium nuclei, should be preferentially emitted as opposed to other particles like a single proton or neutron or other atomic nuclei. Part of the reason is the high binding energy of the alpha particle, which means that its mass is less than the sum of the masses of two free protons and two free neutrons. This increases the disintegration energy. Computing the total disintegration energy given by the equation formula_0 where "m"i is the initial mass of the nucleus, "m"f is the mass of the nucleus after particle emission, and "m"p is the mass of the emitted (alpha-)particle, one finds that in certain cases it is positive and so alpha particle emission is possible, whereas other decay modes would require energy to be added. For example, performing the calculation for uranium-232 shows that alpha particle emission releases 5.4 MeV of energy, while a single proton emission would "require" 6.1 MeV. Most of the disintegration energy becomes the kinetic energy of the alpha particle, although to fulfill conservation of momentum, part of the energy goes to the recoil of the nucleus itself (see atomic recoil). However, since the mass numbers of most alpha-emitting radioisotopes exceed 210, far greater than the mass number of the alpha particle (4), the fraction of the energy going to the recoil of the nucleus is generally quite small, less than 2%. Nevertheless, the recoil energy (on the scale of keV) is still much larger than the strength of chemical bonds (on the scale of eV), so the daughter nuclide will break away from the chemical environment the parent was in. The energies and ratios of the alpha particles can be used to identify the radioactive parent via alpha spectrometry. These disintegration energies, however, are substantially smaller than the repulsive potential barrier created by the interplay between the strong nuclear and the electromagnetic force, which prevents the alpha particle from escaping. The energy needed to bring an alpha particle from infinity to a point near the nucleus just outside the range of the nuclear force's influence is generally in the range of about 25 MeV. An alpha particle within the nucleus can be thought of as being inside a potential barrier whose walls are 25 MeV above the potential at infinity. However, decay alpha particles only have energies of around 4 to 9 MeV above the potential at infinity, far less than the energy needed to overcome the barrier and escape. Quantum tunneling. Quantum mechanics, however, allows the alpha particle to escape via quantum tunneling. 
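Before turning to the tunneling picture in detail, the disintegration-energy calculation described in the Mechanism section can be illustrated numerically. The following Python sketch evaluates the formula above for the decay of uranium-238 into thorium-234, using standard tabulated atomic masses (quoted approximately), and splits the released energy between the alpha particle and the recoiling daughter nucleus by momentum conservation.

# Numerical illustration of the disintegration-energy formula for the decay
# U-238 -> Th-234 + He-4.  Atomic masses (in unified atomic mass units) are
# standard tabulated values quoted approximately; 1 u corresponds to about
# 931.494 MeV.

U_TO_MEV = 931.494

def alpha_q_value(m_parent_u, m_daughter_u, m_alpha_u=4.002602):
    """Total disintegration energy in MeV (positive means alpha decay is allowed)."""
    return (m_parent_u - m_daughter_u - m_alpha_u) * U_TO_MEV

q = alpha_q_value(m_parent_u=238.050788, m_daughter_u=234.043601)

# Momentum conservation splits Q between the alpha particle and the recoiling
# daughter nucleus roughly in inverse proportion to their masses.
e_alpha = q * 234.0 / 238.0
e_recoil = q - e_alpha

print(round(q, 2))                             # about 4.27 MeV released
print(round(e_alpha, 2), round(e_recoil, 2))   # about 4.20 MeV alpha, 0.07 MeV recoil (<2% of Q)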
The quantum tunneling theory of alpha decay, independently developed by George Gamow and by Ronald Wilfred Gurney and Edward Condon in 1928, was hailed as a very striking confirmation of quantum theory. Essentially, the alpha particle escapes from the nucleus not by acquiring enough energy to pass over the wall confining it, but by tunneling through the wall. Gurney and Condon made the following observation in their paper on it: It has hitherto been necessary to postulate some special arbitrary 'instability' of the nucleus, but in the following note, it is pointed out that disintegration is a natural consequence of the laws of quantum mechanics without any special hypothesis... Much has been written of the explosive violence with which the α-particle is hurled from its place in the nucleus. But from the process pictured above, one would rather say that the α-particle almost slips away unnoticed. The theory supposes that the alpha particle can be considered an independent particle within a nucleus, one that is in constant motion but held within the nucleus by the strong interaction. At each collision with the repulsive potential barrier of the electromagnetic force, there is a small non-zero probability that it will tunnel its way out. An alpha particle with a speed of 1.5×10⁷ m/s within a nuclear diameter of approximately 10⁻¹⁴ m will collide with the barrier more than 10²¹ times per second. However, if the probability of escape at each collision is very small, the half-life of the radioisotope will be very long, since it is the time required for the total probability of escape to reach 50%. As an extreme example, the half-life of the isotope bismuth-209 is about 2.01×10¹⁹ years. The isotopes in beta-decay stable isobars that are also stable with regards to double beta decay with mass number "A" = 5, "A" = 8, 143 ≤ "A" ≤ 155, 160 ≤ "A" ≤ 162, and "A" ≥ 165 are theorized to undergo alpha decay. All other mass numbers (isobars) have exactly one theoretically stable nuclide. Those with mass 5 decay to helium-4 and a proton or a neutron, and those with mass 8 decay to two helium-4 nuclei; their half-lives (helium-5, lithium-5, and beryllium-8) are very short, unlike the half-lives for all other such nuclides with "A" ≤ 209, which are very long. (Such nuclides with "A" ≤ 209 are primordial nuclides except 146Sm.) Working out the details of the theory leads to an equation relating the half-life of a radioisotope to the decay energy of its alpha particles, a theoretical derivation of the empirical Geiger–Nuttall law. Uses. Americium-241, an alpha emitter, is used in smoke detectors. The alpha particles ionize air in an open ion chamber and a small current flows through the ionized air. Smoke particles from the fire that enter the chamber reduce the current, triggering the smoke detector's alarm. Radium-223 is also an alpha emitter. It is used in the treatment of skeletal metastases (cancers in the bones). Alpha decay can provide a safe power source for the radioisotope thermoelectric generators used for space probes, and formerly for artificial heart pacemakers. Alpha decay is much more easily shielded against than other forms of radioactive decay. Static eliminators typically use polonium-210, an alpha emitter, to ionize the air, allowing the "static cling" to dissipate more rapidly. Toxicity. Highly charged and heavy, alpha particles lose their several MeV of energy within a small volume of material, along with a very short mean free path.
This increases the chance of double-strand breaks to the DNA in cases of internal contamination, when ingested, inhaled, injected or introduced through the skin. Otherwise, touching an alpha source is typically not harmful, as alpha particles are effectively shielded by a few centimeters of air, a piece of paper, or the thin layer of dead skin cells that make up the epidermis; however, many alpha sources are also accompanied by beta-emitting radio daughters, and both are often accompanied by gamma photon emission. Relative biological effectiveness (RBE) quantifies the ability of radiation to cause certain biological effects, notably either cancer or cell-death, for equivalent radiation exposure. Alpha radiation has a high linear energy transfer (LET) coefficient, which is about one ionization of a molecule/atom for every angstrom of travel by the alpha particle. The RBE has been set at the value of 20 for alpha radiation by various government regulations. The RBE is set at 10 for neutron irradiation, and at 1 for beta radiation and ionizing photons. However, the recoil of the parent nucleus (alpha recoil) gives it a significant amount of energy, which also causes ionization damage (see ionizing radiation). This energy is roughly the weight of the alpha () divided by the weight of the parent (typically about 200 Da) times the total energy of the alpha. By some estimates, this might account for most of the internal radiation damage, as the recoil nucleus is part of an atom that is much larger than an alpha particle, and causes a very dense trail of ionization; the atom is typically a heavy metal, which preferentially collect on the chromosomes. In some studies, this has resulted in an RBE approaching 1,000 instead of the value used in governmental regulations. The largest natural contributor to public radiation dose is radon, a naturally occurring, radioactive gas found in soil and rock. If the gas is inhaled, some of the radon particles may attach to the inner lining of the lung. These particles continue to decay, emitting alpha particles, which can damage cells in the lung tissue. The death of Marie Curie at age 66 from aplastic anemia was probably caused by prolonged exposure to high doses of ionizing radiation, but it is not clear if this was due to alpha radiation or X-rays. Curie worked extensively with radium, which decays into radon, along with other radioactive materials that emit beta and gamma rays. However, Curie also worked with unshielded X-ray tubes during World War I, and analysis of her skeleton during a reburial showed a relatively low level of radioisotope burden. The Russian defector Alexander Litvinenko's 2006 murder by radiation poisoning is thought to have been carried out with polonium-210, an alpha emitter. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "E_{di} = (m_\\text{i} - m_\\text{f} - m_\\text{p})c^2," } ]
https://en.wikipedia.org/wiki?curid=1267
126706
Pattern recognition
Automated recognition of patterns and regularities in data &lt;templatestyles src="Machine learning/styles.css"/&gt; Pattern recognition is the task of assigning a class to an observation based on patterns extracted from data. While similar, pattern recognition (PR) is not to be confused with pattern machines (PM) which may possess (PR) capabilities but their primary function is to distinguish and create emergent patterns. PR has applications in statistical data analysis, signal processing, image analysis, information retrieval, bioinformatics, data compression, computer graphics and machine learning. Pattern recognition has its origins in statistics and engineering; some modern approaches to pattern recognition include the use of machine learning, due to the increased availability of big data and a new abundance of processing power. Pattern recognition systems are commonly trained from labeled "training" data. When no labeled data are available, other algorithms can be used to discover previously unknown patterns. KDD and data mining have a larger focus on unsupervised methods and stronger connection to business use. Pattern recognition focuses more on the signal and also takes acquisition and signal processing into consideration. It originated in engineering, and the term is popular in the context of computer vision: a leading computer vision conference is named Conference on Computer Vision and Pattern Recognition. In machine learning, pattern recognition is the assignment of a label to a given input value. In statistics, discriminant analysis was introduced for this same purpose in 1936. An example of pattern recognition is classification, which attempts to assign each input value to one of a given set of "classes" (for example, determine whether a given email is "spam"). Pattern recognition is a more general problem that encompasses other types of output as well. Other examples are regression, which assigns a real-valued output to each input; sequence labeling, which assigns a class to each member of a sequence of values (for example, part of speech tagging, which assigns a part of speech to each word in an input sentence); and parsing, which assigns a parse tree to an input sentence, describing the syntactic structure of the sentence. Pattern recognition algorithms generally aim to provide a reasonable answer for all possible inputs and to perform "most likely" matching of the inputs, taking into account their statistical variation. This is opposed to "pattern matching" algorithms, which look for exact matches in the input with pre-existing patterns. A common example of a pattern-matching algorithm is regular expression matching, which looks for patterns of a given sort in textual data and is included in the search capabilities of many text editors and word processors. Overview. A modern definition of pattern recognition is: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; Pattern recognition is generally categorized according to the type of learning procedure used to generate the output value. "Supervised learning" assumes that a set of training data (the training set) has been provided, consisting of a set of instances that have been properly labeled by hand with the correct output. 
A learning procedure then generates a model that attempts to meet two sometimes conflicting objectives: Perform as well as possible on the training data, and generalize as well as possible to new data (usually, this means being as simple as possible, for some technical definition of "simple", in accordance with Occam's Razor, discussed below). Unsupervised learning, on the other hand, assumes training data that has not been hand-labeled, and attempts to find inherent patterns in the data that can then be used to determine the correct output value for new data instances. A combination of the two that has been explored is semi-supervised learning, which uses a combination of labeled and unlabeled data (typically a small set of labeled data combined with a large amount of unlabeled data). In cases of unsupervised learning, there may be no training data at all. Sometimes different terms are used to describe the corresponding supervised and unsupervised learning procedures for the same type of output. The unsupervised equivalent of classification is normally known as "clustering", based on the common perception of the task as involving no training data to speak of, and of grouping the input data into clusters based on some inherent similarity measure (e.g. the distance between instances, considered as vectors in a multi-dimensional vector space), rather than assigning each input instance into one of a set of pre-defined classes. In some fields, the terminology is different. In community ecology, the term "classification" is used to refer to what is commonly known as "clustering". The piece of input data for which an output value is generated is formally termed an "instance". The instance is formally described by a vector of features, which together constitute a description of all known characteristics of the instance. These feature vectors can be seen as defining points in an appropriate multidimensional space, and methods for manipulating vectors in vector spaces can be correspondingly applied to them, such as computing the dot product or the angle between two vectors. Features typically are either categorical (also known as nominal, i.e., consisting of one of a set of unordered items, such as a gender of "male" or "female", or a blood type of "A", "B", "AB" or "O"), ordinal (consisting of one of a set of ordered items, e.g., "large", "medium" or "small"), integer-valued (e.g., a count of the number of occurrences of a particular word in an email) or real-valued (e.g., a measurement of blood pressure). Often, categorical and ordinal data are grouped together, and this is also the case for integer-valued and real-valued data. Many algorithms work only in terms of categorical data and require that real-valued or integer-valued data be "discretized" into groups (e.g., less than 5, between 5 and 10, or greater than 10). Probabilistic classifiers. Many common pattern recognition algorithms are "probabilistic" in nature, in that they use statistical inference to find the best label for a given instance. Unlike other algorithms, which simply output a "best" label, often probabilistic algorithms also output a probability of the instance being described by the given label. In addition, many probabilistic algorithms output a list of the "N"-best labels with associated probabilities, for some value of "N", instead of simply a single best label. When the number of possible labels is fairly small (e.g., in the case of classification), "N" may be set so that the probability of all possible labels is output. 
Probabilistic algorithms have many advantages over non-probabilistic algorithms: Number of important feature variables. Feature selection algorithms attempt to directly prune out redundant or irrelevant features. A general introduction to feature selection which summarizes approaches and challenges, has been given. The complexity of feature-selection is, because of its non-monotonous character, an optimization problem where given a total of formula_0 features the powerset consisting of all formula_1 subsets of features need to be explored. The Branch-and-Bound algorithm does reduce this complexity but is intractable for medium to large values of the number of available features formula_0 Techniques to transform the raw feature vectors (feature extraction) are sometimes used prior to application of the pattern-matching algorithm. Feature extraction algorithms attempt to reduce a large-dimensionality feature vector into a smaller-dimensionality vector that is easier to work with and encodes less redundancy, using mathematical techniques such as principal components analysis (PCA). The distinction between feature selection and feature extraction is that the resulting features after feature extraction has taken place are of a different sort than the original features and may not easily be interpretable, while the features left after feature selection are simply a subset of the original features. Problem statement. The problem of pattern recognition can be stated as follows: Given an unknown function formula_2 (the "ground truth") that maps input instances formula_3 to output labels formula_4, along with training data formula_5 assumed to represent accurate examples of the mapping, produce a function formula_6 that approximates as closely as possible the correct mapping formula_7. (For example, if the problem is filtering spam, then formula_8 is some representation of an email and formula_9 is either "spam" or "non-spam"). In order for this to be a well-defined problem, "approximates as closely as possible" needs to be defined rigorously. In decision theory, this is defined by specifying a loss function or cost function that assigns a specific value to "loss" resulting from producing an incorrect label. The goal then is to minimize the expected loss, with the expectation taken over the probability distribution of formula_10. In practice, neither the distribution of formula_10 nor the ground truth function formula_2 are known exactly, but can be computed only empirically by collecting a large number of samples of formula_10 and hand-labeling them using the correct value of formula_11 (a time-consuming process, which is typically the limiting factor in the amount of data of this sort that can be collected). The particular loss function depends on the type of label being predicted. For example, in the case of classification, the simple zero-one loss function is often sufficient. This corresponds simply to assigning a loss of 1 to any incorrect labeling and implies that the optimal classifier minimizes the error rate on independent test data (i.e. counting up the fraction of instances that the learned function formula_6 labels wrongly, which is equivalent to maximizing the number of correctly classified instances). The goal of the learning procedure is then to minimize the error rate (maximize the correctness) on a "typical" test set. 
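As a minimal illustration of the zero-one loss just described, the following Python sketch computes the empirical error rate of two candidate functions on a labelled test set; the data and the two candidate classifiers are invented purely for illustration.

# Empirical zero-one loss (error rate) of a candidate labelling function h
# on hand-labelled test data.  All values below are made up for illustration.

def error_rate(h, labelled_data):
    """Fraction of instances that h labels incorrectly (zero-one loss)."""
    errors = sum(1 for x, y in labelled_data if h(x) != y)
    return errors / len(labelled_data)

# Toy task: label a message by its length (in words) as "spam" or "non-spam".
test_set = [(12, "spam"), (8, "spam"), (240, "non-spam"), (310, "non-spam"), (15, "non-spam")]

h1 = lambda x: "spam" if x < 20 else "non-spam"     # candidate 1
h2 = lambda x: "spam" if x < 300 else "non-spam"    # candidate 2

print(error_rate(h1, test_set))  # 0.2 (mislabels only the 15-word non-spam message)
print(error_rate(h2, test_set))  # 0.4 (h1 does better on this test set)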
For a probabilistic pattern recognizer, the problem is instead to estimate the probability of each possible output label given a particular input instance, i.e., to estimate a function of the form formula_12 where the feature vector input is formula_13, and the function "f" is typically parameterized by some parameters formula_14. In a discriminative approach to the problem, "f" is estimated directly. In a generative approach, however, the inverse probability formula_16 is instead estimated and combined with the prior probability formula_17 using Bayes' rule, as follows: formula_18 When the labels are continuously distributed (e.g., in regression analysis), the denominator involves integration rather than summation: formula_19 The value of formula_15 is typically learned using maximum a posteriori (MAP) estimation. This finds the best value that simultaneously meets two conflicting objects: To perform as well as possible on the training data (smallest error-rate) and to find the simplest possible model. Essentially, this combines maximum likelihood estimation with a regularization procedure that favors simpler models over more complex models. In a Bayesian context, the regularization procedure can be viewed as placing a prior probability formula_20 on different values of formula_15. Mathematically: formula_21 where formula_22 is the value used for formula_15 in the subsequent evaluation procedure, and formula_23, the posterior probability of formula_15, is given by formula_24 In the Bayesian approach to this problem, instead of choosing a single parameter vector formula_25, the probability of a given label for a new instance formula_13 is computed by integrating over all possible values of formula_15, weighted according to the posterior probability: formula_26 Frequentist or Bayesian approach to pattern recognition. The first pattern classifier – the linear discriminant presented by Fisher – was developed in the frequentist tradition. The frequentist approach entails that the model parameters are considered unknown, but objective. The parameters are then computed (estimated) from the collected data. For the linear discriminant, these parameters are precisely the mean vectors and the covariance matrix. Also the probability of each class formula_17 is estimated from the collected dataset. Note that the usage of 'Bayes rule' in a pattern classifier does not make the classification approach Bayesian. Bayesian statistics has its origin in Greek philosophy where a distinction was already made between the 'a priori' and the 'a posteriori' knowledge. Later Kant defined his distinction between what is a priori known – before observation – and the empirical knowledge gained from observations. In a Bayesian pattern classifier, the class probabilities formula_17 can be chosen by the user, which are then a priori. Moreover, experience quantified as a priori parameter values can be weighted with empirical observations – using e.g., the Beta- (conjugate prior) and Dirichlet-distributions. The Bayesian approach facilitates a seamless intermixing between expert knowledge in the form of subjective probabilities, and objective observations. Probabilistic pattern classifiers can be used according to a frequentist or a Bayesian approach. Uses. Within medical science, pattern recognition is the basis for computer-aided diagnosis (CAD) systems. CAD describes a procedure that supports the doctor's interpretations and findings. 
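A toy version of the generative approach described above, of the kind a CAD-style or spam classifier formalizes, can be sketched as follows in Python. Each class-conditional density p(x | label) is modelled as a one-dimensional Gaussian fitted to invented training values, the priors p(label) are taken from class frequencies, and Bayes' rule combines them into posterior probabilities; none of the numbers refer to real data.

# Generative classification via Bayes' rule with 1-D Gaussian class-conditionals.
# Training values and labels are invented for illustration only.
import math

def gaussian_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def fit_class(values):
    mean = sum(values) / len(values)
    var = sum((v - mean) ** 2 for v in values) / len(values)
    return mean, var

# Hand-labelled training data: one real-valued feature per instance.
training = {"spam": [2.0, 2.5, 3.1, 2.2], "non-spam": [6.0, 5.5, 7.2, 6.4]}
params = {label: fit_class(vals) for label, vals in training.items()}
priors = {label: len(vals) / sum(len(v) for v in training.values())
          for label, vals in training.items()}

def posterior(x):
    """p(label | x) for every label, combining likelihood and prior via Bayes' rule."""
    joint = {label: gaussian_pdf(x, *params[label]) * priors[label]
             for label in params}
    total = sum(joint.values())          # the evidence p(x)
    return {label: p / total for label, p in joint.items()}

print(posterior(2.8))   # posterior mass almost entirely on "spam"
print(posterior(5.0))   # posterior mass almost entirely on "non-spam"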
Other typical applications of pattern recognition techniques are automatic speech recognition, speaker identification, classification of text into several categories (e.g., spam or non-spam email messages), the automatic recognition of handwriting on postal envelopes, automatic recognition of images of human faces, or handwriting image extraction from medical forms. The last two examples form the subtopic image analysis of pattern recognition that deals with digital images as input to pattern recognition systems. Optical character recognition is an example of the application of a pattern classifier. Capture of signatures with a stylus and overlay began in 1990. The strokes, speed, relative minima, relative maxima, acceleration and pressure are used to uniquely identify and confirm identity. Banks were first offered this technology, but were content to collect from the FDIC for any bank fraud and did not want to inconvenience customers. Pattern recognition has many real-world applications in image processing. In psychology, pattern recognition is used to make sense of and identify objects, and is closely related to perception. This explains how the sensory inputs humans receive are made meaningful. Pattern recognition can be thought of in two different ways. The first concerns template matching and the second concerns feature detection. A template is a pattern used to produce items of the same proportions. The template-matching hypothesis suggests that incoming stimuli are compared with templates in the long-term memory. If there is a match, the stimulus is identified. Feature detection models, such as the Pandemonium system for classifying letters (Selfridge, 1959), suggest that the stimuli are broken down into their component parts for identification. For example, a capital E can be broken down into three horizontal lines and one vertical line. Algorithms. Algorithms for pattern recognition depend on the type of label output, on whether learning is supervised or unsupervised, and on whether the algorithm is statistical or non-statistical in nature. Statistical algorithms can further be categorized as generative or discriminative. Pattern recognition in traditional Chinese medical diagnosis. In traditional Chinese medicine, symptoms and signs are compared to previous historical patterns or cases, a process also known as syndrome or Zheng differentiation. All diagnostic and therapeutic methods in TCM are based on this differentiation of patterns. A disease or illness is recognised as the likely outcome when a known pattern fits the patient's presentation. Classification methods (methods predicting categorical labels). Parametric: Nonparametric: Multilinear subspace learning algorithms (predicting labels of multidimensional data using tensor representations). Unsupervised: See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "2^n-1" }, { "math_id": 2, "text": "g:\\mathcal{X}\\rightarrow\\mathcal{Y}" }, { "math_id": 3, "text": "\\boldsymbol{x} \\in \\mathcal{X}" }, { "math_id": 4, "text": "y \\in \\mathcal{Y}" }, { "math_id": 5, "text": "\\mathbf{D} = \\{(\\boldsymbol{x}_1,y_1),\\dots,(\\boldsymbol{x}_n, y_n)\\}" }, { "math_id": 6, "text": "h:\\mathcal{X}\\rightarrow\\mathcal{Y}" }, { "math_id": 7, "text": "g" }, { "math_id": 8, "text": "\\boldsymbol{x}_i" }, { "math_id": 9, "text": "y" }, { "math_id": 10, "text": "\\mathcal{X}" }, { "math_id": 11, "text": "\\mathcal{Y}" }, { "math_id": 12, "text": "p({\\rm label}|\\boldsymbol{x},\\boldsymbol\\theta) = f\\left(\\boldsymbol{x};\\boldsymbol{\\theta}\\right)" }, { "math_id": 13, "text": "\\boldsymbol{x}" }, { "math_id": 14, "text": "\\boldsymbol{\\theta}" }, { "math_id": 15, "text": "\\boldsymbol\\theta" }, { "math_id": 16, "text": "p({\\boldsymbol{x}|\\rm label})" }, { "math_id": 17, "text": "p({\\rm label}|\\boldsymbol\\theta)" }, { "math_id": 18, "text": "p({\\rm label}|\\boldsymbol{x},\\boldsymbol\\theta) = \\frac{p({\\boldsymbol{x}|\\rm label,\\boldsymbol\\theta}) p({\\rm label|\\boldsymbol\\theta})}{\\sum_{L \\in \\text{all labels}} p(\\boldsymbol{x}|L) p(L|\\boldsymbol\\theta)}." }, { "math_id": 19, "text": "p({\\rm label}|\\boldsymbol{x},\\boldsymbol\\theta) = \\frac{p({\\boldsymbol{x}|\\rm label,\\boldsymbol\\theta}) p({\\rm label|\\boldsymbol\\theta})}{\\int_{L \\in \\text{all labels}} p(\\boldsymbol{x}|L) p(L|\\boldsymbol\\theta) \\operatorname{d}L}." }, { "math_id": 20, "text": "p(\\boldsymbol\\theta)" }, { "math_id": 21, "text": "\\boldsymbol\\theta^* = \\arg \\max_{\\boldsymbol\\theta} p(\\boldsymbol\\theta|\\mathbf{D})" }, { "math_id": 22, "text": "\\boldsymbol\\theta^*" }, { "math_id": 23, "text": "p(\\boldsymbol\\theta|\\mathbf{D})" }, { "math_id": 24, "text": "p(\\boldsymbol\\theta|\\mathbf{D}) = \\left[\\prod_{i=1}^n p(y_i|\\boldsymbol{x}_i,\\boldsymbol\\theta) \\right] p(\\boldsymbol\\theta)." }, { "math_id": 25, "text": "\\boldsymbol{\\theta}^*" }, { "math_id": 26, "text": "p({\\rm label}|\\boldsymbol{x}) = \\int p({\\rm label}|\\boldsymbol{x},\\boldsymbol\\theta)p(\\boldsymbol{\\theta}|\\mathbf{D}) \\operatorname{d}\\boldsymbol{\\theta}." } ]
https://en.wikipedia.org/wiki?curid=126706
12670711
Scyllo-Inositol
&lt;templatestyles src="Chembox/styles.css"/&gt; Chemical compound "scyllo"-Inositol, also called scyllitol, cocositol, or quercinitol, is a chemical compound with formula , one of the nine inositols, the stereoisomers of cyclohexane-1,2,3,4,5,6-hexol. The molecule has a ring of six carbon atoms, each bound to one hydrogen atom and one hydroxyl group (–OH); if the ring is assumed horizontal, the hydroxyls lie alternatively above and below the respective hydrogens. "scyllo"-Inositol is a naturally occurring carbohydrate, specifically a sugar alcohol. It occurs in small amounts in the tissues of humans and other animals, certain bacteria, and more abundantly in some plants. Around 2000, "scyllo"-inositol attracted attention as a possible treatment for neurodegenerative disorders such as Alzheimer's. For this use it received the codes AZD-103 and ELND005. Chemical and physical properties. Crystal structure. Anhydrous "scyllo"-inositol exists in at least two polymorphs (crystal forms). In both forms the molecules have symmetry formula_0 and are in the chair conformation, that puts all the hydroxyls in nearly equatorial positions. The "A" form readily crystallizes from water. It has a lower density 1.57 g/ml and decomposes at 358 °C. It crystallizes in the monoclinic system with group is formula_1. The cell parameters are "a" = 508.9 pm, "b" = 664.5 pm, "c" = 1194.8 pm, β = 116.98°, "Z" = 2. The ring puckering parameter "Q" is 58.1 pm. The "B" form is hard to obtain in pure form, as it often crystallizes mixed with the "A" form. Its density is 1.66 g/ml and decomposes at about 360 °C. Its crystal system is triclinic with group formula_2. The cell parameters are "a" = 672.5 pm, "b" = 679.7 pm, "c" = 863.5 pm, α = 95.45°, β = 99.49°, γ = 99.19°, "Z" = 2. The puckering "Q" is 56.6 pm. The density of the "A" form is similar to that of "myo"-inositol but about 0.05 to 0.10 g/mL lower than that of the other inositol stereoisomers, and of the "B" form. The melting (decomposition) point of both forms is the highest among all inositols. Like all of them, the crystals feature infinite chains of hydrogen bonds. Synthesis. Scyllitol and other stereo isomers can be synthesized from "para"-benzoquinone via a conduritol intermediate. It can also be obtained from "myo"-inositol by the Mitsunobu reaction. Various animals, plants, insects, and bacteria have been found to convert "myo"-inositol to "scyllo"-inositol, including "Streptomyces griseus", where that conversion is part of the synthesis of streptomycin. Scyllitol was known to be a facultative intermediate in the metabolism of "myo"-inositol by the bacterium "Bacillus subtilis". In 2011 a genetically engineered strain of this organism was developed which interrupted that pathway and converted part of the "myo"-inositol in the medium to "scyllo"-inositol in 48 hours. Eventually the process was able to produce 27.6 g/L of "scyllo"-inositol in the medium, from 50 g/L of "myo"-inositol, in 48 h. In 2021 another process was developed using the bacterium "Corynebacterium glutamicum", producing 1.8 g/L of scyllitol from 20 g/L glucose and 4.4 g/L from 20 g/L sucrose in 72 h. The conversion involves NAD+-dependent oxidation of "myo"-inositol to 2-keto-"myo"-inositol ("scyllo"-inosose), followed by NADPH-dependent reduction to scyllitol. Derivatives. Several derivatives of "scyllo"-inositol have been synthesized and studied in the laboratory, such as phosphates (variants of phytic acid) and orthoformates with an adamantane structure. Biochemistry. 
Natural occurrence. Scyllitol is widely distributed in nature in fish, insects, mammalian tissues and urine, certain bacteria, and plants such as "Calycanthus occidentalis". It is particularly abundant in coconut milk. The scyllitol derivative O-methyl-"scyllo"-inositol is one of the predominant soluble carbohydrate derivatives in the root nodules of the pea plant created by the bacterium "Rhizobium leguminosarum", together with the isomer ononitol (4-O-methyl-"myo"-inositol), which are not found elsewhere in the plant. Scyllitol hexakis dihydrogenphosphate, the "scyllo" isomer of phytic acid (but not lower phosphates), has been detected in pasture soils from England and Wales at concentrations up to 130 mg of phosphorus per kg of soil, accounting for up to 15% of the soil organic phosphorus. The ratio of the "scyllo" isomer to the "myo" isomer ranged between 0.29 and 0.79. The concentration of "scyllo"-inositol in coconut milk (the fluid inside the fruit of "Cocos nucifera") is 0.5 g/L, five times that of "myo"-inositol. Physiology. The concentration of "scyllo"-inositol in human brain can be measured by NMR; typical values are 0.35 mM for white matter, 0.4 mM for grey matter and 0.5 mM for the cerebellum. Another study compared the concentrations of "myo"- and "scyllo"-inositol in the brains of 24 healthy volunteers. Averages were about 0.36 mM for "scyllo" and 4.31 mM for "myo", with large deviations. The study found a significant increase of both isomers in the older 14 (46-71 yrs) compared to the younger 10 (26-29 yrs), namely about 40% for "scyllo", 20% for "myo"; and a weak correlation between the two values. However a concentration of "scyllo"-inositol 300% higher than normal was measured in a healthy volunteer, without a corresponding increase in "myo"-inositol, suggesting that the metabolism of the two isomers is independently regulated. Researchers at the Harvard Medical School-affiliated McLean Hospital found that chronic users of anabolic steroids had lower brain "scyllo"-inositol levels than non-users. Brain concentration of "scyllo"-inositol was found to be about 75% lower than average in patients with hepatic encephalopathy, which also lowers the levels of "myo"-inositol. Scyllitol was found to inhibit in vitro the aggregation of α-synuclein into fibrils, a phenomenon implicated in Parkinson's disease. Prior intravenous administration of either "myo"- or "scyllo"-inositol was found to reduce the duration and intensity of chemically-induced seizures in rats. Since the 1940s, coconut milk at 5–20% has been used as a growth-promoting agent in formulations of plant cell culture medium. Part of its effectiveness in this application is due to its "myo"- and "scyllo"-inositol contents. Clinical evaluation. Alzheimer's disease. In the early 2000s it was reported that "scyllo"-inositol crossed the blood–brain barrier and, when given to mice (TgCRND8) that were genetically engineered to exhibit Alzheimers-like symptoms, it inhibited cognitive deficits and significantly improved the disease pathology. The compound was found to decrease the amount of insoluble amyloid proteins Aβ40, Aβ42 and amyloid plaque accumulation in the brain, without interfering with the synthesis of phosphatidylinositol lipids from "myo"-inositol. More recently, it has also been found to inhibit the binding of Aβ oligomers to plasma membranes and their interference with synaptic function.
Motivated by these and other results, in about 2008 Transition Therapeutics set out to investigate "scyllo"-inositol as a disease-modifying therapy for Alzheimer's disease, under the designation AZD-103. Transition partnered with Elan Corporation for the development of the compound, relabeled ELND005, and a patent for this use (U.S. patent 7521481) was issued on April 21, 2009. In 2014, ELND005 reverted to Transition Therapeutics, which was acquired by OPKO Health in 2016. A clinical investigation of ELND005 with approximately 353 patients, planned to take 18 months, was started in 2008 and received fast track designation from the U.S. Food and Drug Administration. The study initially used daily doses of 500, 2000, and 4000 mg; however, the last two were discontinued by December 2009, due to suspected adverse effects, including 9 deaths. The results of this trial were not positive but were considered inconclusive. A new 12-week fast-track trial with 296 patients with moderate to advanced Alzheimer's disease was started in November 2012, to investigate the effect of a single dose of ELND005 on the NPI-C agitation and aggression scores. In June 2015, the results of this trial were reported as negative, and the company abandoned plans to extend the trial further. Bipolar disorder. In 2012, Elan started a Phase 2 study of AZD-103 as an add-on therapy in 400 patients with bipolar disorder; this program was discontinued in 2014. Down's syndrome. In 2013, a four-week Phase 2 trial began evaluating 250 and 500 mg daily of AZD-103 in 23 young adults with Down's syndrome. This trial was completed in November 2014, without significant positive results, and was considered inconclusive.
[ { "math_id": 0, "text": "\\bar 1" }, { "math_id": 1, "text": "P2_1/c" }, { "math_id": 2, "text": "P\\bar 1" } ]
https://en.wikipedia.org/wiki?curid=12670711
1267288
Poincaré–Hopf theorem
Counts 0s of a vector field on a differentiable manifold using its Euler characteristic In mathematics, the Poincaré–Hopf theorem (also known as the Poincaré–Hopf index formula, Poincaré–Hopf index theorem, or Hopf index theorem) is an important theorem that is used in differential topology. It is named after Henri Poincaré and Heinz Hopf. The Poincaré–Hopf theorem is often illustrated by the special case of the hairy ball theorem, which states that every smooth vector field on an even-dimensional n-sphere must have a zero. Formal statement. Let formula_0 be a differentiable manifold, of dimension formula_1, and formula_2 a vector field on formula_0. Suppose that formula_3 is an isolated zero of formula_2, and fix some local coordinates near formula_3. Pick a closed ball formula_4 centered at formula_3, so that formula_3 is the only zero of formula_2 in formula_4. Then the index of formula_2 at formula_3, formula_5, can be defined as the degree of the map formula_6 from the boundary of formula_4 to the formula_7-sphere given by formula_8. Theorem. Let formula_0 be a compact differentiable manifold. Let formula_2 be a vector field on formula_0 with isolated zeroes. If formula_0 has boundary, then we insist that formula_2 be pointing in the outward normal direction along the boundary. Then we have the formula formula_9 where the sum of the indices is over all the isolated zeroes of formula_2 and formula_10 is the Euler characteristic of formula_0. A particularly useful corollary is that if formula_0 admits a non-vanishing vector field, then its Euler characteristic must be 0. The theorem was proven for two dimensions by Henri Poincaré and later generalized to higher dimensions by Heinz Hopf. Significance. The Euler characteristic of a closed surface is a purely topological concept, whereas the index of a vector field is purely analytic. Thus, this theorem establishes a deep link between two seemingly unrelated areas of mathematics. It is perhaps as interesting that the proof of this theorem relies heavily on integration, and, in particular, Stokes' theorem, which states that the integral of the exterior derivative of a differential form is equal to the integral of that form over the boundary. In the special case of a manifold without boundary, this amounts to saying that the integral is 0. But by examining vector fields in a sufficiently small neighborhood of a source or sink, we see that sources and sinks contribute integer amounts (known as the index) to the total, and they must all sum to 0. This result may be considered one of the earliest of a whole series of theorems (e.g. Atiyah–Singer index theorem, De Rham's theorem, Grothendieck–Riemann–Roch theorem) establishing deep relationships between geometric and analytical or physical concepts. They play an important role in the modern study of both fields. Generalization. It is still possible to define the index for a vector field with nonisolated zeroes. A construction of this index and the extension of Poincaré–Hopf theorem for vector fields with nonisolated zeroes is outlined in Section 1.1.2 of . References. &lt;templatestyles src="Reflist/styles.css" /&gt;
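As a numerical illustration of the index defined above (this sketch is not part of the article; the function names and sample fields are purely illustrative), the degree of the boundary map can be estimated for a plane vector field by counting how many times the direction of the field turns while walking a small circle around the zero:

import numpy as np

def index_at_origin(v, radius=1e-3, samples=4096):
    # Walk a small circle around the isolated zero at the origin and count
    # how many full turns the direction of v makes (a winding number).
    t = np.linspace(0.0, 2.0 * np.pi, samples)      # closed loop of sample points
    vx, vy = v(radius * np.cos(t), radius * np.sin(t))
    angle = np.unwrap(np.arctan2(vy, vx))            # continuous angle of v along the loop
    return int(round((angle[-1] - angle[0]) / (2.0 * np.pi)))

print(index_at_origin(lambda x, y: (x, y)))                    # a source: index +1
print(index_at_origin(lambda x, y: (x, -y)))                   # a saddle: index -1
print(index_at_origin(lambda x, y: (x*x - y*y, 2*x*y)))        # index +2

On the 2-sphere, for instance, a field with one source and one sink has index sum 1 + 1 = 2, matching the Euler characteristic of the sphere.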
[ { "math_id": 0, "text": "M" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "v" }, { "math_id": 3, "text": "x" }, { "math_id": 4, "text": "D" }, { "math_id": 5, "text": "\\operatorname{index}_x(v)" }, { "math_id": 6, "text": "u : \\partial D \\to \\mathbb S^{n-1}" }, { "math_id": 7, "text": "(n-1)" }, { "math_id": 8, "text": "u(z)=v(z)/\\|v(z)\\|" }, { "math_id": 9, "text": "\\sum_i \\operatorname{index}_{x_i}(v) = \\chi(M)\\," }, { "math_id": 10, "text": "\\chi(M)" } ]
https://en.wikipedia.org/wiki?curid=1267288
12673
Galois group
Mathematical group In mathematics, in the area of abstract algebra known as Galois theory, the Galois group of a certain type of field extension is a specific group associated with the field extension. The study of field extensions and their relationship to the polynomials that give rise to them via Galois groups is called Galois theory, so named in honor of Évariste Galois, who first discovered them. For a more elementary discussion of Galois groups in terms of permutation groups, see the article on Galois theory. Definition. Suppose that formula_0 is an extension of the field formula_1 (written as formula_2 and read ""E" over "F""). An automorphism of formula_2 is defined to be an automorphism of formula_0 that fixes formula_1 pointwise. In other words, an automorphism of formula_2 is an isomorphism formula_3 such that formula_4 for each formula_5. The set of all automorphisms of formula_2 forms a group with the operation of function composition. This group is sometimes denoted by formula_6 If formula_2 is a Galois extension, then formula_7 is called the Galois group of formula_2, and is usually denoted by formula_8. If formula_2 is not a Galois extension, then the Galois group of formula_2 is sometimes defined as formula_9, where formula_10 is the Galois closure of formula_0. Galois group of a polynomial. Another definition of the Galois group comes from the Galois group of a polynomial formula_11. If there is a field formula_12 such that formula_13 factors as a product of linear polynomials formula_14 over the field formula_10, then the Galois group of the polynomial formula_13 is defined as the Galois group of formula_12 where formula_10 is minimal among all such fields. Structure of Galois groups. Fundamental theorem of Galois theory. One of the important structure theorems from Galois theory comes from the fundamental theorem of Galois theory. This states that given a finite Galois extension formula_15, there is a bijection between the set of subfields formula_16 and the subgroups formula_17 Then, formula_0 is given by the set of invariants of formula_10 under the action of formula_18, so formula_19 Moreover, if formula_18 is a normal subgroup then formula_20. And conversely, if formula_21 is a normal field extension, then the associated subgroup in formula_22 is a normal subgroup. Lattice structure. Suppose formula_23 are Galois extensions of formula_24 with Galois groups formula_25 The field formula_26 with Galois group formula_27 has an injection formula_28 which is an isomorphism whenever formula_29. Inducting. As a corollary, this can be applied inductively finitely many times. Given Galois extensions formula_30 where formula_31 there is an isomorphism of the corresponding Galois groups: formula_32 Examples. In the following examples formula_1 is a field, and formula_33 are the fields of complex, real, and rational numbers, respectively. The notation "F"("a") indicates the field extension obtained by adjoining an element "a" to the field "F". Computational tools. Cardinality of the Galois group and the degree of the field extension. One of the basic propositions required for completely determining the Galois groups of a finite field extension is the following: Given a polynomial formula_34, let formula_2 be its splitting field extension. Then the order of the Galois group is equal to the degree of the field extension; that is, formula_35 Eisenstein's criterion. A useful tool for determining the Galois group of a polynomial comes from Eisenstein's criterion.
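Eisenstein's criterion states that if some prime p divides every coefficient of an integer polynomial except the leading one, and p squared does not divide the constant term, then the polynomial is irreducible over the rational numbers. A small Python sketch of this check (the helper name and example polynomials are illustrative only, not part of the article):

def is_eisenstein(coeffs, p):
    # coeffs = [a_0, a_1, ..., a_n], integer coefficients from constant to leading term.
    a0, leading, lower = coeffs[0], coeffs[-1], coeffs[:-1]
    return (leading % p != 0
            and all(a % p == 0 for a in lower)
            and a0 % (p * p) != 0)

# x^5 - 4x + 2 (used as an example later in this article) is Eisenstein at p = 2:
print(is_eisenstein([2, -4, 0, 0, 0, 1], p=2))   # True, so it is irreducible over the rationals
# x^2 - 1 fails the test at p = 2 (and indeed factors as (x - 1)(x + 1)):
print(is_eisenstein([-1, 0, 1], p=2))            # False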
If a polynomial formula_11 factors into irreducible polynomials formula_36 the Galois group of formula_13 can be determined using the Galois groups of each formula_37 since the Galois group of formula_13 contains each of the Galois groups of the formula_38 Trivial group. formula_39 is the trivial group that has a single element, namely the identity automorphism. Another example of a Galois group which is trivial is formula_40 Indeed, it can be shown that any automorphism of formula_41 must preserve the ordering of the real numbers and hence must be the identity. Consider the field formula_42 The group formula_43 contains only the identity automorphism. This is because formula_10 is not a normal extension, since the other two cube roots of formula_44, formula_45 and formula_46 are missing from the extension—in other words "K" is not a splitting field. Finite abelian groups. The Galois group formula_47 has two elements, the identity automorphism and the complex conjugation automorphism. Quadratic extensions. The degree two field extension formula_48 has the Galois group formula_49 with two elements, the identity automorphism and the automorphism formula_50 which exchanges formula_51 and formula_52. This example generalizes for a prime number formula_53 Product of quadratic extensions. Using the lattice structure of Galois groups, for non-equal prime numbers formula_54 the Galois group of formula_55 is formula_56 Cyclotomic extensions. Another useful class of examples comes from the splitting fields of cyclotomic polynomials. These are polynomials formula_57 defined as formula_58 whose degree is formula_59, Euler's totient function at formula_60. Then, the splitting field over formula_61 is formula_62 and has automorphisms formula_63 sending formula_64 for formula_65 relatively prime to formula_60. Since the degree of the field is equal to the degree of the polynomial, these automorphisms generate the Galois group. If formula_66 then formula_67 If formula_60 is a prime formula_68, then a corollary of this is formula_69 In fact, any finite abelian group can be found as the Galois group of some subfield of a cyclotomic field extension by the Kronecker–Weber theorem. Finite fields. Another useful class of examples of Galois groups with finite abelian groups comes from finite fields. If "q" is a prime power, and if formula_70 and formula_71 denote the Galois fields of order formula_72 and formula_73 respectively, then formula_8 is cyclic of order "n" and generated by the Frobenius homomorphism. Degree 4 examples. The field extension formula_74 is an example of a degree formula_75 field extension. This has two automorphisms formula_76 where formula_77 and formula_78 Since these two generators define a group of order formula_75, the Klein four-group, they determine the entire Galois group. Another example is given from the splitting field formula_79 of the polynomial formula_80 Note because formula_81 the roots of formula_82 are formula_83 There are automorphisms formula_84 generating a group of order formula_75. Since formula_85 generates this group, the Galois group is isomorphic to formula_86. Finite non-abelian groups. Consider now formula_87 where formula_88 is a primitive cube root of unity. The group formula_89 is isomorphic to "S"3, the dihedral group of order 6, and "L" is in fact the splitting field of formula_90 over formula_91 Quaternion group. The Quaternion group can be found as the Galois group of a field extension of formula_61. 
For example, the field extension formula_92 has the prescribed Galois group. Symmetric group of prime degree. If formula_13 is an irreducible polynomial of prime degree formula_93 with rational coefficients and exactly two non-real roots, then the Galois group of formula_13 is the full symmetric group formula_94 For example, formula_95 is irreducible by Eisenstein's criterion. Plotting the graph of formula_13 with graphing software or paper shows it has three real roots, hence two complex roots, showing its Galois group is formula_96. Comparing Galois groups of field extensions of global fields. Given a global field extension formula_15 (such as formula_97) and equivalence classes of valuations formula_98 on formula_10 (such as the formula_93-adic valuation) and formula_99 on formula_24 such that their completions give a Galois field extension formula_100 of local fields, there is an induced action of the Galois group formula_101 on the set of equivalence classes of valuations such that the completions of the fields are compatible. This means if formula_102 then there is an induced isomorphism of local fields formula_103. Since we have taken the hypothesis that formula_98 lies over formula_99 (i.e. there is a Galois field extension formula_100), the field morphism formula_104 is in fact an isomorphism of formula_105-algebras. If we take the isotropy subgroup of formula_106 for the valuation class formula_98, formula_107, then there is a surjection of the global Galois group to the local Galois group such that there is an isomorphism between the local Galois group and the isotropy subgroup. Diagrammatically, this means formula_108 where the vertical arrows are isomorphisms. This gives a technique for constructing Galois groups of local fields using global Galois groups. Infinite groups. A basic example of a field extension with an infinite group of automorphisms is formula_109, since it contains every algebraic field extension formula_79. For example, the field extensions formula_110 for a square-free element formula_111 each have a unique degree formula_44 automorphism, inducing an automorphism in formula_112 One of the most studied classes of infinite Galois groups is the absolute Galois group, which is an infinite, profinite group defined as the inverse limit of all finite Galois extensions formula_2 for a fixed field. The inverse limit is denoted formula_113, where formula_114 is the separable closure of the field formula_1. Note this group is a topological group. Some basic examples include formula_115 and formula_116. Another readily computable example comes from the field extension formula_117 containing the square root of every positive prime. It has Galois group formula_118, which can be deduced from the profinite limit formula_119 and using the computation of the Galois groups. Properties. The significance of an extension being Galois is that it obeys the fundamental theorem of Galois theory: the closed (with respect to the Krull topology) subgroups of the Galois group correspond to the intermediate fields of the field extension. If formula_2 is a Galois extension, then formula_8 can be given a topology, called the Krull topology, that makes it into a profinite group. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
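A quick numerical check of the symmetric-group example above (not from the article; numpy is used only as an illustration, not as a proof) confirms that x^5 - 4x + 2 has exactly three real roots and one conjugate pair of complex roots:

import numpy as np

roots = np.roots([1, 0, 0, 0, -4, 2])              # coefficients of x^5 - 4x + 2
real_roots = roots[np.abs(roots.imag) < 1e-9].real
print(np.sort(real_roots))                          # roughly -1.52, 0.51, 1.24
print(len(real_roots))                              # 3 real roots, so two roots are non-real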
[ { "math_id": 0, "text": "E" }, { "math_id": 1, "text": "F" }, { "math_id": 2, "text": "E/F" }, { "math_id": 3, "text": "\\alpha:E\\to E" }, { "math_id": 4, "text": "\\alpha(x) = x" }, { "math_id": 5, "text": "x\\in F" }, { "math_id": 6, "text": "\\operatorname{Aut}(E/F)." }, { "math_id": 7, "text": "\\operatorname{Aut}(E/F)" }, { "math_id": 8, "text": "\\operatorname{Gal}(E/F)" }, { "math_id": 9, "text": "\\operatorname{Aut}(K/F)" }, { "math_id": 10, "text": "K" }, { "math_id": 11, "text": "f \\in F[x]" }, { "math_id": 12, "text": "K/F" }, { "math_id": 13, "text": "f" }, { "math_id": 14, "text": "f(x) = (x-\\alpha_1)\\cdots (x - \\alpha_k) \\in K[x]" }, { "math_id": 15, "text": "K/k" }, { "math_id": 16, "text": "k \\subset E \\subset K" }, { "math_id": 17, "text": "H \\subset G." }, { "math_id": 18, "text": "H" }, { "math_id": 19, "text": "E = K^H = \\{ a\\in K : ga = a \\text{ where } g \\in H \\}" }, { "math_id": 20, "text": "G/H \\cong \\operatorname{Gal}(E/k)" }, { "math_id": 21, "text": "E/k" }, { "math_id": 22, "text": "\\operatorname{Gal}(K/k)" }, { "math_id": 23, "text": "K_1,K_2" }, { "math_id": 24, "text": "k" }, { "math_id": 25, "text": "G_1,G_2." }, { "math_id": 26, "text": "K_1K_2" }, { "math_id": 27, "text": "G = \\operatorname{Gal}(K_1K_2/k)" }, { "math_id": 28, "text": "G \\to G_1 \\times G_2" }, { "math_id": 29, "text": "K_1 \\cap K_2 = k" }, { "math_id": 30, "text": "K_1,\\ldots, K_n / k" }, { "math_id": 31, "text": "K_{i+1} \\cap (K_1\\cdots K_i) = k," }, { "math_id": 32, "text": "\\operatorname{Gal}(K_1\\cdots K_n/k) \\cong \\operatorname{Gal}(K_1/k)\\times \\cdots \\times \\operatorname{Gal}(K_n/k)." }, { "math_id": 33, "text": "\\Complex, \\R, \\Q" }, { "math_id": 34, "text": "f(x) \\in F[x]" }, { "math_id": 35, "text": "\\left|\\operatorname{Gal}(E/F)\\right| = [E:F]" }, { "math_id": 36, "text": "f = f_1\\cdots f_k" }, { "math_id": 37, "text": "f_i" }, { "math_id": 38, "text": "f_i." }, { "math_id": 39, "text": "\\operatorname{Gal}(F/F)" }, { "math_id": 40, "text": "\\operatorname{Aut}(\\R/\\Q)." }, { "math_id": 41, "text": "\\R" }, { "math_id": 42, "text": "K = \\Q(\\sqrt[3]{2})." }, { "math_id": 43, "text": "\\operatorname{Aut}(K/\\Q)" }, { "math_id": 44, "text": "2" }, { "math_id": 45, "text": "\\exp \\left (\\tfrac{2\\pi i}{3} \\right ) \\sqrt[3]{2}" }, { "math_id": 46, "text": "\\exp \\left (\\tfrac{4\\pi i}{3} \\right ) \\sqrt[3]{2}," }, { "math_id": 47, "text": "\\operatorname{Gal}(\\Complex/\\R)" }, { "math_id": 48, "text": "\\Q(\\sqrt{2})/\\Q" }, { "math_id": 49, "text": "\\operatorname{Gal}(\\Q(\\sqrt{2})/\\Q)" }, { "math_id": 50, "text": "\\sigma" }, { "math_id": 51, "text": "\\sqrt2" }, { "math_id": 52, "text": "-\\sqrt2" }, { "math_id": 53, "text": "p \\in \\N." 
}, { "math_id": 54, "text": "p_1, \\ldots, p_k" }, { "math_id": 55, "text": "\\Q \\left (\\sqrt{p_1},\\ldots, \\sqrt{p_k} \\right)/\\Q" }, { "math_id": 56, "text": "\\operatorname{Gal} \\left (\\Q(\\sqrt{p_1},\\ldots, \\sqrt{p_k})/\\Q \\right ) \\cong \\operatorname{Gal}\\left (\\Q(\\sqrt{p_1})/\\Q \\right )\\times \\cdots \\times \\operatorname{Gal} \\left (\\Q(\\sqrt{p_k})/\\Q \\right ) \\cong (\\Z/2\\Z)^k" }, { "math_id": 57, "text": "\\Phi_n" }, { "math_id": 58, "text": "\\Phi_n(x) = \\prod_{\\begin{matrix} 1 \\leq k \\leq n \\\\ \\gcd(k,n) = 1\\end{matrix}} \\left(x-e^{\\frac{2ik\\pi}{n}} \\right)" }, { "math_id": 59, "text": "\\phi(n)" }, { "math_id": 60, "text": "n" }, { "math_id": 61, "text": "\\Q" }, { "math_id": 62, "text": "\\Q(\\zeta_n)" }, { "math_id": 63, "text": "\\sigma_a" }, { "math_id": 64, "text": "\\zeta_n \\mapsto \\zeta_n^a" }, { "math_id": 65, "text": "1 \\leq a < n" }, { "math_id": 66, "text": "n = p_1^{a_1}\\cdots p_k^{a_k}," }, { "math_id": 67, "text": "\\operatorname{Gal}(\\Q(\\zeta_n)/\\Q) \\cong \\prod_{a_i} \\operatorname{Gal}\\left (\\Q(\\zeta_{p_i^{a_i}})/\\Q \\right )" }, { "math_id": 68, "text": "p " }, { "math_id": 69, "text": "\\operatorname{Gal}(\\Q(\\zeta_p)/\\Q) \\cong \\Z/(p-1)\\Z" }, { "math_id": 70, "text": "F = \\mathbb{F}_q" }, { "math_id": 71, "text": "E=\\mathbb{F}_{q^n}" }, { "math_id": 72, "text": "q" }, { "math_id": 73, "text": "q^n" }, { "math_id": 74, "text": "\\Q(\\sqrt{2},\\sqrt{3})/\\Q" }, { "math_id": 75, "text": "4" }, { "math_id": 76, "text": "\\sigma, \\tau" }, { "math_id": 77, "text": "\\sigma(\\sqrt{2}) = -\\sqrt{2}" }, { "math_id": 78, "text": "\\tau(\\sqrt{3})=-\\sqrt{3}." }, { "math_id": 79, "text": "E/\\Q" }, { "math_id": 80, "text": "f(x) = x^4 + x^3 + x^2 + x + 1" }, { "math_id": 81, "text": "(x-1)f(x)= x^5-1," }, { "math_id": 82, "text": "f(x)" }, { "math_id": 83, "text": "\\exp \\left (\\tfrac{2k\\pi i}{5} \\right)." }, { "math_id": 84, "text": "\\begin{cases}\\sigma_l : E \\to E \\\\ \\exp \\left (\\frac{2\\pi i}{5} \\right) \\mapsto \\left (\\exp \\left (\\frac{2\\pi i}{5} \\right ) \\right )^l \\end{cases}" }, { "math_id": 85, "text": "\\sigma_2" }, { "math_id": 86, "text": "\\Z/4\\Z" }, { "math_id": 87, "text": "L = \\Q(\\sqrt[3]{2}, \\omega)," }, { "math_id": 88, "text": "\\omega" }, { "math_id": 89, "text": "\\operatorname{Gal}(L/\\Q)" }, { "math_id": 90, "text": "x^3-2" }, { "math_id": 91, "text": "\\Q." }, { "math_id": 92, "text": "\\Q \\left (\\sqrt{2}, \\sqrt{3}, \\sqrt{(2+\\sqrt{2})(3+\\sqrt{3})} \\right )" }, { "math_id": 93, "text": "p" }, { "math_id": 94, "text": "S_p." 
}, { "math_id": 95, "text": "f(x)=x^5-4x+2 \\in \\Q[x]" }, { "math_id": 96, "text": "S_5" }, { "math_id": 97, "text": "\\mathbb{Q}(\\sqrt[5]{3},\\zeta_5 )/\\mathbb{Q}" }, { "math_id": 98, "text": "w" }, { "math_id": 99, "text": "v" }, { "math_id": 100, "text": "K_w/k_v" }, { "math_id": 101, "text": "G = \\operatorname{Gal}(K/k)" }, { "math_id": 102, "text": "s \\in G" }, { "math_id": 103, "text": "s_w:K_w \\to K_{sw}" }, { "math_id": 104, "text": "s_w" }, { "math_id": 105, "text": "k_v" }, { "math_id": 106, "text": "G" }, { "math_id": 107, "text": "G_w = \\{s \\in G : sw = w \\}" }, { "math_id": 108, "text": "\\begin{matrix}\n\\operatorname{Gal}(K/v)& \\twoheadrightarrow & \\operatorname{Gal}(K_w/k_v) \\\\\n\\downarrow & & \\downarrow \\\\\nG & \\twoheadrightarrow & G_w\n\\end{matrix}" }, { "math_id": 109, "text": "\\operatorname{Aut}(\\Complex/\\Q)" }, { "math_id": 110, "text": "\\Q(\\sqrt{a})/\\Q" }, { "math_id": 111, "text": "a \\in \\Q" }, { "math_id": 112, "text": "\\operatorname{Aut}(\\Complex/\\Q)." }, { "math_id": 113, "text": "\\operatorname{Gal}(\\overline{F}/F) := \\varprojlim_{E/F \\text{ finite separable}}{\\operatorname{Gal}(E/F)}" }, { "math_id": 114, "text": "\\overline{F}" }, { "math_id": 115, "text": "\\operatorname{Gal}(\\overline{\\Q}/\\Q)" }, { "math_id": 116, "text": "\\operatorname{Gal}(\\overline{\\mathbb{F}}_q/\\mathbb{F}_q) \\cong \\hat{\\Z} \\cong \\prod_p \\Z_p" }, { "math_id": 117, "text": "\\Q(\\sqrt{2},\\sqrt{3},\\sqrt{5}, \\ldots)/ \\Q" }, { "math_id": 118, "text": "\\operatorname{Gal}(\\Q(\\sqrt{2},\\sqrt{3},\\sqrt{5}, \\ldots)/ \\Q) \\cong \\prod_{p} \\Z/2" }, { "math_id": 119, "text": "\\cdots \\to \\operatorname{Gal}(\\Q(\\sqrt{2},\\sqrt{3},\\sqrt{5})/\\Q) \\to \\operatorname{Gal}(\\Q(\\sqrt{2},\\sqrt{3})/\\Q) \\to \\operatorname{Gal}(\\Q(\\sqrt{2})/\\Q)" } ]
https://en.wikipedia.org/wiki?curid=12673
12674074
Arrhenius plot
Linear graph commonly used in chemical kinetics In chemical kinetics, an Arrhenius plot displays the logarithm of a reaction rate constant (formula_0, ordinate axis) plotted against the reciprocal of the temperature (formula_1, abscissa). Arrhenius plots are often used to analyze the effect of temperature on the rates of chemical reactions. For a single rate-limited thermally activated process, an Arrhenius plot gives a straight line, from which the activation energy and the pre-exponential factor can both be determined. The Arrhenius equation can be given in the form: formula_2 where formula_3 is the rate constant, formula_4 is the pre-exponential factor, formula_5 is the activation energy per mole, formula_9 is the activation energy per molecule, formula_6 is the gas constant (formula_7, where formula_8 is the Avogadro constant), formula_10 is the Boltzmann constant, and formula_11 is the absolute temperature. The only difference between the two forms of the expression is the quantity used for the activation energy: the former would have the unit joule/mole, which is common in chemistry, while the latter would have the unit joule and would be for one molecular reaction event, which is common in physics. The different units are accounted for in using either the gas constant formula_6 or the Boltzmann constant formula_10. Taking the natural logarithm of the former equation gives: formula_12 When plotted in the manner described above, the value of the y-intercept (at formula_13) will correspond to formula_14, and the slope of the line will be equal to formula_15. The values of the y-intercept and slope can be determined from the experimental points using simple linear regression with a spreadsheet. The pre-exponential factor, formula_4, is an empirical constant of proportionality which has been estimated by various theories which take into account factors such as the frequency of collision between reacting particles, their relative orientation, and the entropy of activation. The expression formula_16 represents the fraction of the molecules present in a gas which have energies equal to or in excess of the activation energy at a particular temperature. In almost all practical cases, formula_17, so that this fraction is very small and increases rapidly with formula_11. In consequence, the reaction rate constant formula_3 increases rapidly with temperature formula_11, as shown in the direct plot of formula_3 against formula_11. (Mathematically, at very high temperatures so that formula_18, formula_3 would level off and approach formula_4 as a limit, but this case does not occur under practical conditions.) Worked example. Consider as an example the decomposition of nitrogen dioxide into nitrogen monoxide and molecular oxygen, 2 NO2 → 2 NO + O2. Based on the red "line of best fit" plotted in the graph given above:
Let y = ln(k [10−4 cm3 mol−1 s−1])
Let x = 1/T [K−1]
Points read from graph:
y = 4.1 at x = 0.0015
y = 2.2 at x = 0.00165
Slope of red line = (4.1 − 2.2) / (0.0015 − 0.00165) = −12,667
Intercept ["y-value at x = 0"] of red line = 4.1 + (0.0015 × 12667) = 23.1
Inserting these values into the form above: formula_19 yields: formula_20 and hence formula_21 that is, formula_22 Substituting for the quotient in the exponent of formula_23: formula_24 where the approximate value for R is 8.31446 J K−1 mol−1. The activation energy of this reaction from these data is then:
Ea = R × 12,667 K = 105,300 J mol−1 = 105.3 kJ mol−1.
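The same arithmetic can be reproduced with a short Python sketch (purely illustrative; the two data points are the ones read off the graph above):

import numpy as np

x = np.array([0.0015, 0.00165])         # 1/T in K^-1
y = np.array([4.1, 2.2])                # ln(k / 1e-4 cm^3 mol^-1 s^-1)

slope, intercept = np.polyfit(x, y, 1)  # least-squares line (exact through two points)
R = 8.31446                             # gas constant, J K^-1 mol^-1

print(f"slope     = {slope:.0f} K")                    # about -12667 K
print(f"intercept = {intercept:.1f}")                  # about 23.1, so A = e^23.1 = 1.08e10
print(f"E_a       = {-slope * R / 1000:.1f} kJ/mol")   # about 105.3 kJ/mol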
[ { "math_id": 0, "text": "\\ln(k)" }, { "math_id": 1, "text": "1/T" }, { "math_id": 2, "text": "k = A \\exp\\left(\\frac{-E_\\text{a}}{RT}\\right) = A \\exp\\left(\\frac{-E_\\text{a}'}{k_\\text{B}T}\\right)" }, { "math_id": 3, "text": "k" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "E_\\text{a}" }, { "math_id": 6, "text": "R" }, { "math_id": 7, "text": "R=k_\\text{B} N_\\text{A}" }, { "math_id": 8, "text": "N_\\text{A}" }, { "math_id": 9, "text": "E_\\text{a}'" }, { "math_id": 10, "text": "k_\\text{B}" }, { "math_id": 11, "text": "T" }, { "math_id": 12, "text": "\\ln(k) = \\ln(A) - \\frac{E_\\text{a}}{R}\\left(\\frac{1}{T}\\right)" }, { "math_id": 13, "text": "x = 1/T = 0" }, { "math_id": 14, "text": "\\ln(A)" }, { "math_id": 15, "text": "-E_\\text{a}/R" }, { "math_id": 16, "text": "\\exp(-E_\\text{a}/RT)" }, { "math_id": 17, "text": "E_\\text{a} \\gg RT" }, { "math_id": 18, "text": "E_\\text{a} \\ll RT" }, { "math_id": 19, "text": "\\ln(k) = \\ln(A) - \\frac{E_a}{R}\\left(\\frac{1}{T}\\right)" }, { "math_id": 20, "text": "\\ln(k) = 23.1 - 12,667 (1/T)" }, { "math_id": 21, "text": "k = e^{23.1} \\cdot e^{-12,667/T}" }, { "math_id": 22, "text": "k = 1.08 \\times 10^{10} \\cdot e^{-12,667/T}" }, { "math_id": 23, "text": "e" }, { "math_id": 24, "text": "E_a / R = -12\\,667\\,\\mathrm{K}" } ]
https://en.wikipedia.org/wiki?curid=12674074
1267442
Electrotonic potential
In physiology, electrotonus refers to the passive spread of charge inside a neuron and between cardiac muscle cells or smooth muscle cells. "Passive" means that voltage-dependent changes in membrane conductance do not contribute. Neurons and other excitable cells produce two types of electrical potential: electrotonic potentials and action potentials. Electrotonic potentials represent changes to the neuron's membrane potential that do not lead to the generation of new current by action potentials. However, all action potentials are begun by electrotonic potentials depolarizing the membrane above the threshold potential, which converts the electrotonic potential into an action potential. Neurons which are small in relation to their length, such as some neurons in the brain, have only electrotonic potentials (starburst amacrine cells in the retina are believed to have these properties); longer neurons utilize electrotonic potentials to trigger the action potential. Electrotonic potentials have an amplitude that is usually 5-20 mV and they can last from 1 ms up to several seconds. In order to quantify the behavior of electrotonic potentials there are two constants that are commonly used: the membrane time constant τ, and the membrane length constant λ. The membrane time constant measures the amount of time for an electrotonic potential to passively fall to 1/e or 37% of its maximum. A typical value for neurons can be from 1 to 20 ms. The membrane length constant measures how far an electrotonic potential travels before it falls to 1/e or 37% of the amplitude it had at the place where it began. Common values for the length constant of dendrites are from 0.1 to 1 mm. Electrotonic potentials are conducted faster than action potentials, but attenuate rapidly, so they are unsuitable for long-distance signaling. The phenomenon was first discovered by Eduard Pflüger. Summation. The electrotonic potential travels via electrotonic spread, which amounts to the attraction of oppositely charged ions and the repulsion of like-charged ions within the cell. Electrotonic potentials can sum spatially or temporally. Spatial summation is the combination of multiple sources of ion influx (multiple channels within a dendrite, or channels within multiple dendrites), whereas temporal summation is a gradual increase in overall charge due to repeated influxes in the same location. Because the ionic charge enters in one location and dissipates to others, losing intensity as it spreads, electrotonic spread is a graded response. It is important to contrast this with the all-or-none propagation of the action potential down the axon of the neuron. EPSPs. Electrotonic potential can either increase the membrane potential with positive charge or decrease it with negative charge. Electrotonic potentials that increase the membrane potential are called excitatory postsynaptic potentials (EPSPs). This is because they depolarize the membrane, increasing the likelihood of an action potential. As they sum together they can depolarize the membrane sufficiently to push it above the threshold potential, which will then cause an action potential to occur. EPSPs are often caused by either Na+ or Ca2+ coming into the cell. IPSPs. Electrotonic potentials which decrease the membrane potential are called inhibitory postsynaptic potentials (IPSPs). They hyperpolarize the membrane and make it harder for a cell to have an action potential. IPSPs are associated with Cl− entering the cell or K+ leaving the cell. IPSPs can interact with EPSPs to "cancel out" their effect. Information Transfer.
The continuously varying nature of the electrotonic potential, in contrast to the all-or-none response of the action potential, has implications for how much information can be encoded by each respective potential. Electrotonic potentials are able to transfer more information within a given time period than action potentials. This difference in information rates can be up to almost an order of magnitude in favor of electrotonic potentials. Cable theory. Cable theory can be useful for understanding how currents flow through the axons of a neuron. In 1855, Lord Kelvin devised this theory as a way to describe the electrical properties of transatlantic telegraph cables. Almost a century later, in 1946, Hodgkin and Rushton discovered that cable theory could be applied to neurons as well. The theory approximates the neuron as a cable whose radius does not change, and represents it with the partial differential equation formula_0 where "V"("x", "t") is the voltage across the membrane at a time "t" and a position "x" along the length of the neuron, and where λ and τ are the characteristic length and time scales on which those voltages decay in response to a stimulus. These scales can be determined from the resistances and capacitances per unit length: formula_1 formula_2 From these equations one can understand how properties of a neuron affect the current passing through it. The length constant λ increases as the membrane resistance becomes larger and as the internal resistance becomes smaller, allowing current to travel farther down the neuron. The time constant τ increases as the resistance and capacitance of the membrane increase, which causes signals to travel more slowly through the neuron. Ribbon synapses. Ribbon synapses are a type of synapse often found in sensory neurons and are of a unique structure that specially equips them to respond dynamically to inputs from electrotonic potentials. They are so named for an organelle they contain, the synaptic ribbon. This organelle can hold thousands of synaptic vesicles close to the presynaptic membrane, enabling neurotransmitter release that can quickly react to a wide range of changes in the membrane potential. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
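As a rough numerical illustration of the constants defined above, the following Python sketch computes λ and τ and the passive steady-state decay of a potential along the cable; the per-unit-length values below are hypothetical, chosen only so that λ and τ land inside the typical ranges quoted earlier in the article:

import numpy as np

r_m = 1.0e5    # membrane resistance times unit length (ohm * cm) -- illustrative value
r_l = 1.0e7    # internal (axial) resistance per unit length (ohm / cm) -- illustrative value
c_m = 1.0e-7   # membrane capacitance per unit length (F / cm) -- illustrative value

lam = np.sqrt(r_m / r_l)     # length constant, in cm
tau = r_m * c_m              # time constant, in s

print(f"lambda = {lam * 10:.2f} mm")    # 1.00 mm
print(f"tau    = {tau * 1e3:.1f} ms")   # 10.0 ms

# Passive steady-state decay along the cable: V(x) = V0 * exp(-x/lambda),
# so at x = lambda the potential has fallen to about 37% of its starting value.
x = np.array([0.0, lam, 2 * lam])
print(np.exp(-x / lam))      # [1.0, 0.3679, 0.1353]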
[ { "math_id": 0, "text": "\n\\tau \\frac{\\partial V}{\\partial t} = \\lambda^2 \\frac{\\partial^2 V}{\\partial x^2} - V\n" }, { "math_id": 1, "text": "\n\\lambda = \\sqrt \\frac{r_m}{r_\\ell}\n" }, { "math_id": 2, "text": "\n\\tau =\\ r_m c_m \\,\n" } ]
https://en.wikipedia.org/wiki?curid=1267442
1267453
Rail fence cipher
Type of transposition cipher The rail fence cipher (also called a zigzag cipher) is a classical type of transposition cipher. It derives its name from the manner in which encryption is performed, in analogy to a fence built with horizontal rails. Encryption. In the rail fence cipher, the plaintext is written downwards diagonally on successive "rails" of an imaginary fence, then moving up when the bottom rail is reached, down again when the top rail is reached, and so on until the whole plaintext is written out. The ciphertext is then read off in rows. For example, to encrypt the message 'WE ARE DISCOVERED. RUN AT ONCE.' with 3 "rails", write the text as:
W . . . E . . . C . . . R . . . U . . . O . . .
. E . R . D . S . O . E . E . R . N . T . N . E
. . A . . . I . . . V . . . D . . . A . . . C .
(Note that spaces and punctuation are omitted.) Then read off the text horizontally to get the ciphertext:
WECRUO ERDSOEERNTNE AIVDAC
Decryption. Let formula_0 be the number of rails used during encryption. Observe that as the plaintext is written, the sequence of each letter's vertical position on the rails varies up and down in a repeating cycle. In the above example (where formula_1) the vertical position repeats with a period of 4. In general the sequence repeats with a period of formula_2. Let formula_3 be the length of the string to be decrypted. Suppose for a moment that formula_3 is a multiple of formula_2 and let formula_4. One begins by splitting the ciphertext into strings such that the length of the first and last string is formula_5 and the length of each intermediate string is formula_6. For the above example with formula_7, we have formula_8, so we split the ciphertext as follows:
WECRUO ERDSOEERNTNE AIVDAC
Write each string on a separate line, with spaces after each letter in the first and last line:
W E C R U O
E R D S O E E R N T N E
A I V D A C
Then one can read off the plaintext down the first column, diagonally up, down the next column, and so on. If formula_3 is not a multiple of formula_2, the determination of how to split up the ciphertext is slightly more complicated than described above, but the basic approach is the same. Alternatively, for simplicity in decrypting, one can pad the plaintext with extra letters to make its length a multiple of formula_2. If the ciphertext has not been padded, but you either know or are willing to brute-force the number of rails used, you can decrypt it using the following steps. As above, let formula_3 be the length of the string to be decrypted and let formula_0 be the number of rails used during encryption. We will add two variables, formula_9 and formula_10, where formula_11 is the number of diagonals in the decrypted rail fence and formula_10 is the number of empty spaces in the last diagonal: formula_12 Next, solve for formula_9 and formula_10 algebraically, taking the smallest possible values. This is easily done by incrementing formula_9 by 1 until the denominator is larger than formula_3, and then simply solving for formula_10. Consider the example cipher, modified to use 6 rails instead of 3:
W.........V.........O...
.E.......O.E.......T.N..
..A.....C...R.....A...C.
...R...S.....E...N.....E
....E.I.......D.U.......
.....D.........R........
The resulting ciphertext is:
WVO EOETN ACRAC RSENE EIDU DR
We know that formula_7, and if we use formula_13 we can solve the equation above: formula_14 Simplify the fraction: formula_15 Solve for formula_9: the smallest value that works is formula_18, giving formula_16 Solving for formula_10 then gives formula_19: formula_17 We now have formula_13, formula_18, and formula_19.
Or, 6 rails, 5 diagonals (4+1), and 2 empty spaces at the end. By blocking out the empty spaces at the end of the last diagonal, we can simply fill in the rail fence line by line using the ciphertext:
_ _ _
_ _ _ _ _
_ _ _ _ _
_ _ _ _ _
_ _ _ _ X
_ _ X
Filling the first three rails with the ciphertext gives:
W V O
E O E T N
A C R A C
_ _ _ _ _
_ _ _ _ X
_ _ X
Cryptanalysis. The cipher's key is formula_0, the number of rails. If formula_0 is known, the ciphertext can be decrypted by using the above algorithm. Values of formula_0 equal to or greater than formula_3, the length of the ciphertext, are not usable, since then the ciphertext is the same as the plaintext. Therefore the number of usable keys is low, allowing the brute-force attack of trying all possible keys. As a result, the rail-fence cipher is considered weak. Zigzag cipher. The term zigzag cipher may refer to the rail fence cipher as described above. However, it may also refer to a different type of cipher described by Fletcher Pratt in "Secret and Urgent". It is "written by ruling a sheet of paper in vertical columns, with a letter at the head of each column. A dot is made for each letter of the message in the proper column, reading from top to bottom of the sheet. The letters at the head of the columns are then cut off, the ruling erased and the message of dots sent along to the recipient, who, knowing the width of the columns and the arrangement of the letters at the top, reconstitutes the diagram and reads what it has to say." References. &lt;templatestyles src="Reflist/styles.css" /&gt;
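The zigzag procedure described above is easy to express in code. The following Python sketch (an illustration only, not part of the original article) encrypts by distributing letters over the rails and decrypts by recomputing the same zigzag pattern:

def encrypt(plaintext, rails):
    # Walk the zigzag, dropping each letter on its rail, then read the rails in order.
    fence = [[] for _ in range(rails)]
    rail, step = 0, 1
    for ch in plaintext:
        fence[rail].append(ch)
        if rail == 0:
            step = 1
        elif rail == rails - 1:
            step = -1
        rail += step
    return "".join("".join(row) for row in fence)

def decrypt(ciphertext, rails):
    # Recompute the zigzag rail pattern, hand the ciphertext back out rail by rail,
    # then read the letters off in the original zigzag order.
    pattern = []
    rail, step = 0, 1
    for _ in ciphertext:
        pattern.append(rail)
        if rail == 0:
            step = 1
        elif rail == rails - 1:
            step = -1
        rail += step
    result = [None] * len(ciphertext)
    i = 0
    for r in range(rails):
        for pos, p in enumerate(pattern):
            if p == r:
                result[pos] = ciphertext[i]
                i += 1
    return "".join(result)

msg = "WEAREDISCOVEREDRUNATONCE"
print(encrypt(msg, 3))                     # WECRUOERDSOEERNTNEAIVDAC
print(decrypt(encrypt(msg, 3), 3) == msg)  # True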
[ { "math_id": 0, "text": "N" }, { "math_id": 1, "text": "N=3" }, { "math_id": 2, "text": "2(N-1)" }, { "math_id": 3, "text": "L" }, { "math_id": 4, "text": "K = { L \\over {2(N-1)}}" }, { "math_id": 5, "text": "K" }, { "math_id": 6, "text": "2K" }, { "math_id": 7, "text": "L=24" }, { "math_id": 8, "text": "K=6" }, { "math_id": 9, "text": "x" }, { "math_id": 10, "text": "y" }, { "math_id": 11, "text": "x+1" }, { "math_id": 12, "text": "1=\\frac{L+y}{N+((N-1)*x)}" }, { "math_id": 13, "text": "N=6" }, { "math_id": 14, "text": "1=\\frac{24+y}{6+(5*x)}" }, { "math_id": 15, "text": "1=\\frac{18+y}{5*x}" }, { "math_id": 16, "text": "1=\\frac{18+y}{5*4}" }, { "math_id": 17, "text": "1=\\frac{18+2}{20}" }, { "math_id": 18, "text": "x=4" }, { "math_id": 19, "text": "y=2" } ]
https://en.wikipedia.org/wiki?curid=1267453
12675093
Chebyshev linkage
Four-bar straight-line mechanism In kinematics, Chebyshev's linkage is a four-bar linkage that converts rotational motion to approximate linear motion. It was invented by the 19th-century mathematician Pafnuty Chebyshev, who studied theoretical problems in kinematic mechanisms. One of the problems was the construction of a linkage that converts a rotary motion into an approximate straight-line motion (a straight line mechanism). This was also studied by James Watt in his improvements to the steam engine, which resulted in Watt's linkage. Equations of motion. The motion of the linkage is driven by an input angle that may be changed through velocities, forces, etc. The input angle can be either the angle of link "L"2 with the horizontal or the angle of link "L"4 with the horizontal. Regardless of which input angle is used, it is possible to compute the motion of the two end-points of link "L"3, which we will name A and B, and of their middle point P. The motion of point A is computed from the angle of link "L"2: formula_0 formula_1 while the motion of point B is computed from the other angle: formula_2 formula_3 And ultimately, we will write the output angle in terms of the input angle: formula_4 Consequently, we can write the motion of point P, using the two points defined above and the definition of the middle point: formula_5 formula_6 Input angles. The limits to the input angles, in both cases, are: formula_7 formula_8 Usage. Chebyshev linkages did not receive widespread usage in steam engines, but are commonly used as the 'Horse head' design of level luffing crane. In this application the approximate straight movement is translated away from the line's midpoint, but it is still essentially the same mechanism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
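The equations above can be evaluated numerically to see the approximate straight-line motion of point P. The Python sketch below assumes the classical Chebyshev proportions L1:L2:L3:L4 = 4:5:2:5 (these lengths are an assumption made here for illustration, consistent with the quoted input-angle limits), and it is not part of the original article:

import numpy as np

L1, L2, L3, L4 = 4.0, 5.0, 2.0, 5.0

phi1 = np.linspace(np.arccos(4 / 5), np.arccos(-1 / 5), 9)      # sweep of the input angle

xA, yA = L2 * np.cos(phi1), L2 * np.sin(phi1)                   # end point A
AO2 = np.hypot(L1 - xA, yA)                                     # distance from A to O2 = (L1, 0)
cos_arg = np.clip((L4**2 + AO2**2 - L3**2) / (2 * L4 * AO2), -1, 1)
sin_arg = np.clip(L2 * np.sin(phi1) / AO2, -1, 1)
phi2 = np.arcsin(sin_arg) - np.arccos(cos_arg)                  # output angle
xB, yB = L1 - L4 * np.cos(phi2), L4 * np.sin(phi2)              # end point B
xP, yP = (xA + xB) / 2, (yA + yB) / 2                           # midpoint P of link L3

for a, x, y in zip(np.degrees(phi1), xP, yP):
    print(f"phi1 = {a:6.2f} deg   P = ({x:+.3f}, {y:.3f})")
# Over most of the stroke y_P stays very close to 4, illustrating the approximate
# straight-line motion; the deviation grows only near the upper angle limit.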
[ { "math_id": 0, "text": " x_A = L_2\\cos(\\varphi_1) \\, " }, { "math_id": 1, "text": " y_A = L_2\\sin(\\varphi_1) \\, " }, { "math_id": 2, "text": " x_B = L_1 - L_4\\cos(\\varphi_2) \\, " }, { "math_id": 3, "text": " y_B = L_4\\sin(\\varphi_2) \\, " }, { "math_id": 4, "text": " \\varphi_2 = \\arcsin\\left[\\frac{L_2\\,\\sin(\\varphi_1)}{\\overline{A O_2}}\\right] - \\arccos\\left(\\frac{L_4^2 + \\overline{A O_2}^2 -L_3^2}{2\\,L_4\\,\\overline{A O_2}}\\right) \\, " }, { "math_id": 5, "text": " x_P = \\frac{x_A + x_B}{2} \\, " }, { "math_id": 6, "text": " y_P = \\frac{y_A + y_B}{2} \\, " }, { "math_id": 7, "text": " \\varphi_{\\text{min}} = \\arccos\\left( \\frac{4}{5}\\right) \\approx 36.8699^\\circ. \\, " }, { "math_id": 8, "text": " \\varphi_{\\text{max}} = \\arccos\\left( \\frac{-1}{5}\\right) \\approx 101.537^\\circ. \\, " } ]
https://en.wikipedia.org/wiki?curid=12675093
1267541
Check mark
Symbol often denoting 'yes' or 'correct' A check or check mark (American English), checkmark (Philippine English), tickmark (Indian English) or tick (Australian, New Zealand and British English) is a mark (✓, ✔, etc.) used in many countries, including the English-speaking world, to indicate the concept "yes" (e.g. "yes; this has been verified", "yes; that is the correct answer", "yes; this has been completed", or "yes; this [item or option] applies"). The x mark is also sometimes used for this purpose (most notably on election ballot papers, e.g. in the United Kingdom), but otherwise usually indicates "no", incorrectness, or failure. One of the earliest usages of a check mark as an indication of completion is on ancient Babylonian tablets "where small indentations were sometimes made with a stylus, usually placed at the left of a worker's name, presumably to indicate whether the listed ration has been issued." As a verb, to check (off) means to add such a mark. Printed forms, printed documents, and computer software (see checkbox) commonly include squares in which to place check marks. International differences. The check mark is a predominant affirmative symbol of convenience in the English-speaking world because of its instant and simple composition. In other language communities, there may be different conventions. It is common in Swedish schools for a ✓ to indicate that an answer is incorrect, while "R", from the Swedish "rätt", i.e., "correct", is used to indicate that an answer is correct. In Finnish, ✓ stands for "väärin", i.e., "wrong", due to its similarity to a slanted v. The opposite, "correct", is marked with formula_0, a slanted vertical line emphasized with two dots (see also commercial minus sign). In Japan, the O mark is used instead of the check mark, and the X or ✓ mark is commonly used for wrong. In the Netherlands (and former Dutch colonies) the flourish of approval (or "krul") is used for approving a section or sum. In German-speaking countries, ✓ is used for “correct” or “done”, but not usually for ticking boxes, which are crossed instead. The opposite of ✓ is ƒ (short for "falsch" “wrong”). Unicode. Unicode provides various check marks. Keyboard entry. The heavy check mark ✔ is available in the fonts Marlett and Webdings. On the QWERTY keyboard, it can be produced by striking the appropriate lower-case letter with one of these fonts in effect. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\cdot \\! / \\! \\cdot" } ]
https://en.wikipedia.org/wiki?curid=1267541
12676307
Golomb–Dickman constant
In mathematics, the Golomb–Dickman constant, named after Solomon W. Golomb and Karl Dickman, arises in the theory of random permutations and in number theory. Its value is formula_0 (sequence in the OEIS) It is not known whether this constant is rational or irrational. Definitions. Let "a""n" be the average — taken over all permutations of a set of size "n" — of the length of the longest cycle in each permutation. Then the Golomb–Dickman constant is formula_1 In the language of probability theory, formula_2 is asymptotically the expected length of the longest cycle in a uniformly distributed random permutation of a set of size "n". In number theory, the Golomb–Dickman constant appears in connection with the average size of the largest prime factor of an integer. More precisely, formula_3 where formula_4 is the largest prime factor of "k" (sequence in the OEIS). So if "k" is a "d" digit integer, then formula_5 is the asymptotic average number of digits of the largest prime factor of "k". The Golomb–Dickman constant appears in number theory in a different way. What is the probability that the second largest prime factor of "n" is smaller than the square root of the largest prime factor of "n"? Asymptotically, this probability is formula_6. More precisely, formula_7 where formula_8 is the second largest prime factor of "n". The Golomb–Dickman constant also arises when we consider the average length of the largest cycle of any function from a finite set to itself. If "X" is a finite set and we repeatedly apply a function "f": "X" → "X" to any element "x" of this set, it eventually enters a cycle, meaning that for some "k" we have formula_9 for sufficiently large "n"; the smallest "k" with this property is the length of the cycle. Let "b""n" be the average, taken over all functions from a set of size "n" to itself, of the length of the largest cycle. Then Purdom and Williams proved that formula_10 Formulae. There are several expressions for formula_6. These include: formula_11 where formula_12 is the logarithmic integral, formula_13 where formula_14 is the exponential integral, and formula_15 and formula_16 where formula_17 is the Dickman function. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
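The permutation definition above is easy to check numerically. The following sketch is an illustration, not part of the article: it estimates the average longest-cycle length over uniformly random permutations of a set of size n, divided by n, which should approach the Golomb–Dickman constant.

import random

def longest_cycle_length(perm):
    """Length of the longest cycle of a permutation given as a list p with p[i] = image of i."""
    seen = [False] * len(perm)
    longest = 0
    for start in range(len(perm)):
        if not seen[start]:
            length, j = 0, start
            while not seen[j]:
                seen[j] = True
                j = perm[j]
                length += 1
            longest = max(longest, length)
    return longest

def estimate_constant(n=1000, trials=2000):
    """Monte Carlo estimate of the mean longest-cycle length divided by n."""
    total = 0
    for _ in range(trials):
        perm = list(range(n))
        random.shuffle(perm)          # uniformly random permutation
        total += longest_cycle_length(perm)
    return total / (trials * n)

print(estimate_constant())   # roughly 0.62 to 0.63 (the constant is 0.62433...)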
[ { "math_id": 0, "text": "\\lambda = 0.62432 99885 43550 87099 29363 83100 83724\\dots" }, { "math_id": 1, "text": "\\lambda = \\lim_{n\\to\\infty} \\frac{a_n}{n}." }, { "math_id": 2, "text": "\\lambda n" }, { "math_id": 3, "text": "\\lambda = \\lim_{n\\to\\infty} \\frac1n \\sum_{k=2}^n \\frac{\\log(P_1(k))}{\\log(k)}," }, { "math_id": 4, "text": "P_1(k)" }, { "math_id": 5, "text": "\\lambda d" }, { "math_id": 6, "text": "\\lambda" }, { "math_id": 7, "text": "\\lambda = \\lim_{n\\to\\infty} \\text{Prob}\\left\\{P_2(n) \\le \\sqrt{P_1(n)}\\right\\}" }, { "math_id": 8, "text": "P_2(n)" }, { "math_id": 9, "text": "f^{n+k}(x) = f^n(x)" }, { "math_id": 10, "text": " \\lim_{n\\to\\infty} \\frac{b_n}{\\sqrt{n}} = \\sqrt{\\frac{\\pi}{2} } \\lambda. " }, { "math_id": 11, "text": "\\lambda = \\int_0^1 e^{\\mathrm{Li}(t)} \\, dt " }, { "math_id": 12, "text": "\\mathrm{Li}(t)" }, { "math_id": 13, "text": "\\lambda = \\int_0^\\infty e^{-t - E_1(t)} \\, dt " }, { "math_id": 14, "text": "E_1(t)" }, { "math_id": 15, "text": "\\lambda = \\int_0^\\infty \\frac{\\rho(t)}{t+2} \\, dt " }, { "math_id": 16, "text": "\\lambda = \\int_0^\\infty \\frac{\\rho(t)}{(t+1)^2} \\, dt " }, { "math_id": 17, "text": "\\rho(t)" } ]
https://en.wikipedia.org/wiki?curid=12676307
1267644
Ferdinand Reich
German chemist (1799–1882) Ferdinand Reich (19 February 1799 – 27 April 1882) was a German chemist who co-discovered indium in 1863 with Hieronymous Theodor Richter. Reich was born in Bernburg, Anhalt-Bernburg, Holy Roman Empire and died in Freiberg. He was color blind, able to see only in blacks and whites, which is why Theodor Richter became his scientific partner: Richter would examine the colors produced in reactions that they studied. Reich and Richter ended up isolating indium, creating a small supply, although it was later found in more regions. They isolated the element at the Freiberg University of Mining and Technology in Germany. In 1803, Laplace and Gauss both derived that, if a heavy object is dropped from a height formula_0 at latitude formula_1, and the earth rotates from west to east with angular velocity formula_2, then the object would be deflected to the east by a distance of formula_3. In 1831, Reich set out to test this prediction by actually dropping objects in a mine pit (with latitude 50° 53′ 12.5″ N) 158.5 m deep, in Freiberg, Saxony, 106 times. The average deflection was 2.84 cm to the east and 0.44 cm to the south. The eastward deflection is almost exactly equal to the theoretical value of 2.75 cm, but the southward deflection remains unexplained to this day. The experiment is published in
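The Laplace–Gauss prediction quoted above is a one-line computation. The sketch below is illustrative only; the shaft depth of 158.5 m and the latitude are taken from the text, while standard modern values for the Earth's angular velocity and for g are assumed.

import math

h = 158.5                                     # drop height in metres (depth of the shaft)
lat = math.radians(50 + 53/60 + 12.5/3600)    # latitude 50° 53' 12.5" N
omega = 7.2921e-5                             # Earth's angular velocity in rad/s (assumed standard value)
g = 9.81                                      # gravitational acceleration in m/s^2 (assumed)

d = (2/3) * omega * math.cos(lat) * math.sqrt(2 * h**3 / g)
print(f"predicted eastward deflection: {100*d:.2f} cm")   # about 2.76 cm, close to the 2.75 cm quoted above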
[ { "math_id": 0, "text": "h" }, { "math_id": 1, "text": "\\Phi" }, { "math_id": 2, "text": "\\Omega" }, { "math_id": 3, "text": "d=2 / 3 \\Omega \\cos (\\Phi) \\sqrt{\\left(2 h^3 / g\\right)}" } ]
https://en.wikipedia.org/wiki?curid=1267644
12677098
Geodesic convexity
In mathematics — specifically, in Riemannian geometry — geodesic convexity is a natural generalization of convexity for sets and functions to Riemannian manifolds. It is common to drop the prefix "geodesic" and refer simply to "convexity" of a set or function. Definitions. Let ("M", "g") be a Riemannian manifold. A subset "C" of "M" is said to be geodesically convex if, given any two points in "C", there is a minimizing geodesic contained within "C" that joins those two points. A function formula_0 defined on a geodesically convex set "C" is said to be (strictly) geodesically convex if the composition formula_1 is a (strictly) convex function in the usual sense for every unit speed geodesic arc "γ" : [0, "T"] → "M" contained within "C".
[ { "math_id": 0, "text": "f:C\\to\\mathbf{R}" }, { "math_id": 1, "text": "f \\circ \\gamma : [0, T] \\to \\mathbf{R}" } ]
https://en.wikipedia.org/wiki?curid=12677098
12677528
Liouville's equation
Equation in differential geometry "For Liouville's equation in dynamical systems, see Liouville's theorem (Hamiltonian)." "For Liouville's equation in quantum mechanics, see Von Neumann equation." "For Liouville's equation in Euclidean space, see Liouville–Bratu–Gelfand equation." In differential geometry, Liouville's equation, named after Joseph Liouville, is the nonlinear partial differential equation satisfied by the conformal factor f of a metric "f"2(d"x"2 + d"y"2) on a surface of constant Gaussian curvature K: formula_0 where ∆0 is the flat Laplace operator formula_1 Liouville's equation appears in the study of isothermal coordinates in differential geometry: the independent variables x,y are the coordinates, while f can be described as the conformal factor with respect to the flat metric. Occasionally it is the square "f"2 that is referred to as the conformal factor, instead of f itself. Liouville's equation was also taken as an example by David Hilbert in the formulation of his nineteenth problem. Other common forms of Liouville's equation. By using the change of variables log "f" ↦ "u", another commonly found form of Liouville's equation is obtained: formula_2 Two other forms of the equation, commonly found in the literature, are obtained by using the slight variant 2 log "f" ↦ "u" of the previous change of variables and Wirtinger calculus: formula_3 Note that it is exactly in the first one of the preceding two forms that Liouville's equation was cited by David Hilbert in the formulation of his nineteenth problem. A formulation using the Laplace–Beltrami operator. In a more invariant fashion, the equation can be written in terms of the "intrinsic" Laplace–Beltrami operator formula_4 as follows: formula_5 Properties. Relation to Gauss–Codazzi equations. Liouville's equation is equivalent to the Gauss–Codazzi equations for minimal immersions into the 3-space, when the metric is written in isothermal coordinates formula_6 such that the Hopf differential is formula_7. General solution of the equation. In a simply connected domain Ω, the general solution of Liouville's equation can be found by using Wirtinger calculus. Its form is given by formula_8 where "f" ("z") is any meromorphic function such that d"f"/d"z"("z") ≠ 0 for every "z" ∈ Ω and "f" ("z") has at most simple poles in Ω. Application. Liouville's equation can be used to prove the following classification results for surfaces: Theorem. A surface in the Euclidean 3-space with metric d"l"2 = "g"("z",_"z")d"z"d_"z", and with constant scalar curvature K is locally isometric to: the sphere if K &gt; 0; the Euclidean plane if K = 0; the Lobachevskian plane if K &lt; 0. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; Works cited. &lt;templatestyles src="Refbegin/styles.css" /&gt;
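The general solution can be checked symbolically for a concrete choice of f. The sketch below is an illustration, not part of the article: it takes f(z) = z and K = 1, so the conformal factor becomes u = ln(4/(1 + x^2 + y^2)^2), and verifies that the flat Laplacian of u equals -2K e^u, i.e. the first of the two transformed forms quoted above.

import sympy as sp

x, y = sp.symbols('x y', real=True)
K = 1
# general solution with f(z) = z: |df/dz|^2 = 1 and |f(z)|^2 = x**2 + y**2
u = sp.log(4 / (1 + K * (x**2 + y**2))**2)

laplacian_u = sp.diff(u, x, 2) + sp.diff(u, y, 2)          # flat Laplace operator applied to u
print(sp.simplify(laplacian_u + 2 * K * sp.exp(u)))        # prints 0, so Delta_0 u = -2 K e^u holds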
[ { "math_id": 0, "text": "\\Delta_0\\log f = -K f^2," }, { "math_id": 1, "text": "\\Delta_0 = \\frac{\\partial^2}{\\partial x^2} +\\frac{\\partial^2}{\\partial y^2}\n= 4 \\frac{\\partial}{\\partial z} \\frac{\\partial}{\\partial \\bar z}." }, { "math_id": 2, "text": "\\Delta_0 u = - K e^{2u}." }, { "math_id": 3, "text": "\\Delta_0 u = - 2K e^{u}\\quad\\Longleftrightarrow\\quad \\frac{\\partial^2 u}{{\\partial z}{\\partial \\bar z}} = - \\frac{K}{2} e^{u}." }, { "math_id": 4, "text": "\\Delta_{\\mathrm{LB}} = \\frac{1}{f^2} \\Delta_0" }, { "math_id": 5, "text": "\\Delta_{\\mathrm{LB}}\\log\\; f = -K." }, { "math_id": 6, "text": "z" }, { "math_id": 7, "text": "\\mathrm{d}z^2" }, { "math_id": 8, "text": "\nu(z,\\bar z) = \\ln \\left(\n4 \\frac{ \\left|{\\mathrm{d} f(z)}/{\\mathrm{d} z}\\right|^2 }{ ( 1+K \\left|f(z)\\right|^2)^2 }\n\\right)\n" } ]
https://en.wikipedia.org/wiki?curid=12677528
1267762
Variable speed of light
Non-mainstream theory in physics A variable speed of light (VSL) is a feature of a family of hypotheses stating that the speed of light may in some way not be constant, for example, that it varies in space or time, or depending on frequency. Accepted classical theories of physics, and in particular general relativity, predict a constant speed of light in any local frame of reference and in some situations these predict apparent variations of the speed of light depending on frame of reference, but this article does not refer to this as a variable speed of light. Various alternative theories of gravitation and cosmology, many of them non-mainstream, incorporate variations in the local speed of light. Attempts to incorporate a variable speed of light into physics were made by Robert Dicke in 1957, and by several researchers starting from the late 1980s. VSL should not be confused with faster than light theories, its dependence on a medium's refractive index or its measurement in a remote observer's frame of reference in a gravitational potential. In this context, the "speed of light" refers to the limiting speed "c" of the theory rather than to the velocity of propagation of photons. Historical proposals. Background. Einstein's equivalence principle, on which general relativity is founded, requires that in any local, freely falling reference frame, the speed of light is always the same. This leaves open the possibility, however, that an inertial observer inferring the apparent speed of light in a distant region might calculate a different value. Spatial variation of the speed of light in a gravitational potential as measured against a distant observer's time reference is implicitly present in general relativity. The apparent speed of light will change in a gravity field and, in particular, go to zero at an event horizon as viewed by a distant observer. In deriving the gravitational redshift due to a spherically symmetric massive body, a radial speed of light "dr"/"dt" can be defined in Schwarzschild coordinates, with "t" being the time recorded on a stationary clock at infinity. The result is formula_0 where "m" is "MG"/"c"2 and where natural units are used such that "c"0 is equal to one. Dicke's proposal (1957). Robert Dicke, in 1957, developed a VSL theory of gravity, a theory in which (unlike general relativity) the speed of light measured locally by a free-falling observer could vary. Dicke assumed that both frequencies and wavelengths could vary, which since formula_1 resulted in a relative change of "c". Dicke assumed a refractive index formula_2 (eqn. 5) and proved it to be consistent with the observed value for light deflection. In a comment related to Mach's principle, Dicke suggested that, while the right part of the term in eq. 5 is small, the left part, 1, could have "its origin in the remainder of the matter in the universe". Given that in a universe with an increasing horizon more and more masses contribute to the above refractive index, Dicke considered a cosmology where "c" decreased in time, providing an alternative explanation to the cosmological redshift. Subsequent proposals. Variable speed of light models, including Dicke's, have been developed which agree with all known tests of general relativity. Other models make a link to Dirac's large numbers hypothesis. Several hypotheses for varying speed of light, seemingly in contradiction to general relativity theory, have been published, including those of Giere and Tan (1986) and Sanejouand (2009). 
In 2003, Magueijo gave a review of such hypotheses. Cosmological models with varying speeds of light have been proposed independently by Jean-Pierre Petit in 1988, John Moffat in 1992, and the team of Andreas Albrecht and João Magueijo in 1998 to explain the horizon problem of cosmology and propose an alternative to cosmic inflation. Relation to other constants and their variation. Gravitational constant "G". In 1937, Paul Dirac and others began investigating the consequences of natural constants changing with time. For example, Dirac proposed a change of only 5 parts in 10^11 per year of the Newtonian constant of gravitation "G" to explain the relative weakness of the gravitational force compared to other fundamental forces. This has become known as the Dirac large numbers hypothesis. However, Richard Feynman showed that the gravitational constant most likely could not have changed this much in the past 4 billion years based on geological and solar system observations, although this may depend on assumptions about "G" varying in isolation. (See also strong equivalence principle.) Fine-structure constant "α". One group, studying distant quasars, has claimed to detect a variation of the fine-structure constant at the level of one part in 10^5. Other authors dispute these results. Other groups studying quasars claim no detectable variation at much higher sensitivities. The natural nuclear reactor of Oklo has been used to check whether the atomic fine-structure constant "α" might have changed over the past 2 billion years. That is because "α" influences the rate of various nuclear reactions. For example, samarium-149 captures a neutron to become samarium-150, and since the rate of neutron capture depends on the value of "α", the ratio of the two samarium isotopes in samples from Oklo can be used to calculate the value of "α" from 2 billion years ago. Several studies have analysed the relative concentrations of radioactive isotopes left behind at Oklo, and most have concluded that nuclear reactions then were much the same as they are today, which implies "α" was the same too. Paul Davies and collaborators have suggested that it is in principle possible to disentangle which of the dimensionful constants composing the fine-structure constant (the elementary charge, the Planck constant, and the speed of light) is responsible for the variation. However, this has been disputed by others and is not generally accepted. Criticisms of various VSL concepts. Dimensionless and dimensionful quantities. To clarify what a variation in a dimensionful quantity actually means, since any such quantity can be changed merely by changing one's choice of units, John Barrow wrote: "[An] important lesson we learn from the way that pure numbers like "α" define the world is what it really means for worlds to be different. The pure number we call the fine-structure constant and denote by "α" is a combination of the electron charge, "e", the speed of light, "c", and the Planck constant, "h". At first we might be tempted to think that a world in which the speed of light was slower would be a different world. But this would be a mistake. If "c", "h", and "e" were all changed so that the values they have in metric (or any other) units were different when we looked them up in our tables of physical constants, but the value of "α" remained the same, this new world would be "observationally indistinguishable" from our world.
The only thing that counts in the definition of worlds are the values of the dimensionless constants of Nature. If all masses were doubled in value [including the Planck mass "m"P] you cannot tell because all the pure numbers defined by the ratios of any pair of masses are unchanged." Any equation of physical law can be expressed in a form in which all dimensional quantities are normalized against like-dimensioned quantities (called "nondimensionalization"), resulting in only dimensionless quantities remaining. Physicists can "choose" their units so that the physical constants "c", "G", "ħ" = "h"/(2π), 4π"ε"0, and "k"B take the value one, resulting in every physical quantity being normalized against its corresponding Planck unit. For that, it has been claimed that specifying the evolution of a dimensional quantity is meaningless and does not make sense. When Planck units are used and such equations of physical law are expressed in this nondimensionalized form, "no" dimensional physical constants such as "c", "G", "ħ", "ε"0, nor "k"B remain, only dimensionless quantities, as predicted by the Buckingham π theorem. Short of their anthropometric unit dependence, there is no speed of light, gravitational constant, nor the Planck constant, remaining in mathematical expressions of physical reality to be subject to such hypothetical variation. For example, in the case of a hypothetically varying gravitational constant, "G", the relevant dimensionless quantities that potentially vary ultimately become the ratios of the Planck mass to the masses of the fundamental particles. Some key dimensionless quantities (thought to be constant) that are related to the speed of light (among other dimensional quantities such as "ħ", "e", "ε"0), notably the fine-structure constant or the proton-to-electron mass ratio, could in principle have meaningful variance and their possible variation continues to be studied. General critique of varying "c" cosmologies. From a very general point of view, G. F. R. Ellis and Jean-Philippe Uzan expressed concerns that a varying "c" would require a rewrite of much of modern physics to replace the current system which depends on a constant "c". Ellis claimed that any varying "c" theory (1) must redefine distance measurements; (2) must provide an alternative expression for the metric tensor in general relativity; (3) might contradict Lorentz invariance; (4) must modify Maxwell's equations; and (5) must be done consistently with respect to all other physical theories. VSL cosmologies remain out of mainstream physics. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
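To put numbers on the Background and Dicke expressions introduced earlier in this article, the following sketch is illustrative only; it assumes standard values for the constants and for the solar mass and radius, and evaluates the Schwarzschild coordinate radial light speed dr/dt = 1 - 2GM/(rc^2) (in units of c) together with Dicke's refractive index n = 1 + 2GM/(rc^2) at the surface of the Sun.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2 (assumed value)
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg (assumed)
R_sun = 6.957e8      # solar radius, m (assumed)

m_over_r = G * M_sun / (R_sun * c**2)                           # dimensionless ratio GM/(r c^2)
print("coordinate radial light speed at the solar surface:", 1 - 2 * m_over_r, "c")   # about 0.9999958 c
print("Dicke refractive index at the solar surface:", 1 + 2 * m_over_r)               # n - 1 of order 4e-6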
[ { "math_id": 0, "text": " \\frac{dr}{dt} = 1 - \\frac{2m}{r}, " }, { "math_id": 1, "text": " c = \\nu \\lambda " }, { "math_id": 2, "text": " n= \\frac{c}{c_0} = 1+\\frac{2 GM}{r c^2} " } ]
https://en.wikipedia.org/wiki?curid=1267762
12677802
7-cubic honeycomb
The 7-cubic honeycomb or hepteractic honeycomb is the only regular space-filling tessellation (or honeycomb) in Euclidean 7-space. It is analogous to the square tiling of the plane and to the cubic honeycomb of 3-space. There are many different Wythoff constructions of this honeycomb. The most symmetric form is regular, with Schläfli symbol {4,3^5,4}. Another form has two alternating 7-cube facets (like a checkerboard) with Schläfli symbol {4,3^4,3^{1,1}}. The lowest symmetry Wythoff construction has 128 types of facets around each vertex and a prismatic product Schläfli symbol {∞}^(7). Related honeycombs. The [4,3^5,4] Coxeter group generates 255 permutations of uniform tessellations, 135 with unique symmetry and 134 with unique geometry. The expanded 7-cubic honeycomb is geometrically identical to the 7-cubic honeycomb. The "7-cubic honeycomb" can be alternated into the 7-demicubic honeycomb, replacing the 7-cubes with 7-demicubes, and the alternated gaps are filled by 7-orthoplex facets. Quadritruncated 7-cubic honeycomb. A quadritruncated 7-cubic honeycomb contains all tritruncated 7-orthoplex facets and is the Voronoi tessellation of the D7* lattice. Facets can be identically colored from a doubled formula_0×2, [[4,3^5,4]] symmetry, alternately colored from formula_0, [4,3^5,4] symmetry, three colors from formula_1, [4,3^4,3^{1,1}] symmetry, and 4 colors from formula_2, [3^{1,1},3^3,3^{1,1}] symmetry.
[ { "math_id": 0, "text": "{\\tilde{C}}_7" }, { "math_id": 1, "text": "{\\tilde{B}}_7" }, { "math_id": 2, "text": "{\\tilde{D}}_7" } ]
https://en.wikipedia.org/wiki?curid=12677802
12677924
8-cubic honeycomb
In geometry, the 8-cubic honeycomb or octeractic honeycomb is the only regular space-filling tessellation (or honeycomb) in Euclidean 8-space. It is analogous to the square tiling of the plane and to the cubic honeycomb of 3-space, and the tesseractic honeycomb of 4-space. There are many different Wythoff constructions of this honeycomb. The most symmetric form is regular, with Schläfli symbol {4,3^6,4}. Another form has two alternating hypercube facets (like a checkerboard) with Schläfli symbol {4,3^5,3^{1,1}}. The lowest symmetry Wythoff construction has 256 types of facets around each vertex and a prismatic product Schläfli symbol {∞}^(8). Related honeycombs. The [4,3^6,4] Coxeter group generates 511 permutations of uniform tessellations, 271 with unique symmetry and 270 with unique geometry. The expanded 8-cubic honeycomb is geometrically identical to the 8-cubic honeycomb. The "8-cubic honeycomb" can be alternated into the 8-demicubic honeycomb, replacing the 8-cubes with 8-demicubes, and the alternated gaps are filled by 8-orthoplex facets. Quadrirectified 8-cubic honeycomb. A quadrirectified 8-cubic honeycomb contains all trirectified 8-orthoplex facets and is the Voronoi tessellation of the D8* lattice. Facets can be identically colored from a doubled formula_0×2, [[4,3^6,4]] symmetry, alternately colored from formula_0, [4,3^6,4] symmetry, three colors from formula_1, [4,3^5,3^{1,1}] symmetry, and 4 colors from formula_2, [3^{1,1},3^4,3^{1,1}] symmetry.
[ { "math_id": 0, "text": "{\\tilde{C}}_8" }, { "math_id": 1, "text": "{\\tilde{B}}_8" }, { "math_id": 2, "text": "{\\tilde{D}}_8" } ]
https://en.wikipedia.org/wiki?curid=12677924
12677988
8-demicubic honeycomb
The 8-demicubic honeycomb, or demiocteractic honeycomb, is a uniform space-filling tessellation (or honeycomb) in Euclidean 8-space. It is constructed as an alternation of the regular 8-cubic honeycomb. It is composed of two different types of facets. The 8-cubes become alternated into 8-demicubes h{4,3,3,3,3,3,3} and the alternated vertices create 8-orthoplex {3,3,3,3,3,3,4} facets. D8 lattice. The vertex arrangement of the 8-demicubic honeycomb is the D8 lattice. The 112 vertices of the rectified 8-orthoplex vertex figure of the "8-demicubic honeycomb" reflect the kissing number 112 of this lattice. The best known is 240, from the E8 lattice and the 5_21 honeycomb. formula_1 contains formula_0 as a subgroup of index 270. Both formula_1 and formula_0 can be seen as affine extensions of formula_2 from different nodes. The D8^+ lattice (also called D8^2) can be constructed by the union of two D8 lattices. This packing is only a lattice for even dimensions. The kissing number is 240 (2^(n-1) for n&lt;8, 240 for n=8, and 2n(n-1) for n&gt;8). It is identical to the E8 lattice. At 8 dimensions, the 240 contacts contain both the 2^7=128 from the lower-dimension contact progression (2^(n-1)), and 16*7=112 from higher dimensions (2n(n-1)). The D8^* lattice (also called D8^4 and C8^2) can be constructed by the union of all four D8 lattices. It is also the 8-dimensional body-centered cubic, the union of two 8-cubic honeycombs in dual positions. The kissing number of the D8^* lattice is 16 ("2n" for n≥5), and its Voronoi tessellation is a quadrirectified 8-cubic honeycomb, containing all trirectified 8-orthoplex Voronoi cells. Symmetry constructions. There are three uniform construction symmetries of this tessellation. Each symmetry can be represented by arrangements of different colors on the 256 8-demicube facets around each vertex. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
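The kissing numbers quoted above can be verified by brute-force enumeration of minimal vectors. The sketch below is illustrative only: it counts the norm-2 vectors of D8 (integer vectors with even coordinate sum), giving 112, and then adds the all-half-integer minimal vectors contributed by the glue vector (1/2, ..., 1/2), giving 240 in total, as for the E8 lattice.

from itertools import product

# minimal (norm-2) vectors of D8: integer vectors with even coordinate sum
d8_min = [v for v in product((-1, 0, 1), repeat=8)
          if sum(v) % 2 == 0 and sum(x * x for x in v) == 2]
print(len(d8_min))                       # 112, the kissing number of the D8 lattice

# extra minimal vectors in the coset D8 + (1/2, ..., 1/2): all-half-integer vectors of norm 2
# whose offset from the glue vector has even coordinate sum
half_min = [v for v in product((-0.5, 0.5), repeat=8)
            if sum(int(x - 0.5) for x in v) % 2 == 0]
print(len(d8_min) + len(half_min))       # 240, the kissing number of the D8+ (= E8) lattice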
[ { "math_id": 0, "text": "{\\tilde{D}}_8" }, { "math_id": 1, "text": "{\\tilde{E}}_8" }, { "math_id": 2, "text": "D_8" } ]
https://en.wikipedia.org/wiki?curid=12677988
12680566
Chow–Liu tree
In probability theory and statistics, a Chow–Liu tree is an efficient method for constructing a second-order product approximation of a joint probability distribution, first described in a 1968 paper by Chow and Liu. The goals of such a decomposition, as with such Bayesian networks in general, may be either data compression or inference. The Chow–Liu representation. The Chow–Liu method describes a joint probability distribution formula_0 as a product of second-order conditional and marginal distributions. For example, the six-dimensional distribution formula_1 might be approximated as formula_2 where each new term in the product introduces just one new variable, and the product can be represented as a first-order dependency tree, as shown in the figure. The Chow–Liu algorithm (below) determines which conditional probabilities are to be used in the product approximation. In general, unless there are no third-order or higher-order interactions, the Chow–Liu approximation is indeed an "approximation", and cannot capture the complete structure of the original distribution. A modern analysis of the Chow–Liu tree as a Bayesian network is also available. The Chow–Liu algorithm. Chow and Liu show how to select second-order terms for the product approximation so that, among all such second-order approximations (first-order dependency trees), the constructed approximation formula_3 has the minimum Kullback–Leibler divergence to the actual distribution formula_4, and is thus the "closest" approximation in the classical information-theoretic sense. The Kullback–Leibler divergence between a second-order product approximation and the actual distribution is shown to be formula_5 where formula_6 is the mutual information between variable formula_7 and its parent formula_8 and formula_9 is the joint entropy of variable set formula_10. Since the terms formula_11 and formula_9 are independent of the dependency ordering in the tree, only the sum of the pairwise mutual informations, formula_12, determines the quality of the approximation. Thus, if every branch (edge) on the tree is given a weight corresponding to the mutual information between the variables at its vertices, then the tree which provides the optimal second-order approximation to the target distribution is just the "maximum-weight tree". The equation above also highlights the role of the dependencies in the approximation: When no dependencies exist, and the first term in the equation is absent, we have only an approximation based on first-order marginals, and the distance between the approximation and the true distribution is due to the redundancies that are not accounted for when the variables are treated as independent. As we specify second-order dependencies, we begin to capture some of that structure and reduce the distance between the two distributions. Chow and Liu provide a simple algorithm for constructing the optimal tree; at each stage of the procedure the algorithm simply adds the maximum mutual information pair to the tree. See the original paper for full details. A more efficient tree construction algorithm for the common case of sparse data has also been proposed. Chow and Wagner proved in a later paper that the learning of the Chow–Liu tree is consistent given samples (or observations) drawn i.i.d. from a tree-structured distribution. In other words, the probability of learning an incorrect tree decays to zero as the number of samples tends to infinity. The main idea in the proof is the continuity of the mutual information in the pairwise marginal distribution.
More recently, the exponential rate of convergence of the error probability has also been determined. Variations on Chow–Liu trees. The obvious problem which occurs when the actual distribution is not in fact a second-order dependency tree can still in some cases be addressed by fusing or aggregating together densely connected subsets of variables to obtain a "large-node" Chow–Liu tree, or by extending the idea of greedy maximum branch weight selection to non-tree (multiple parent) structures. (Similar techniques of variable substitution and construction are common in the Bayes network literature, e.g., for dealing with loops.) Generalizations of the Chow–Liu tree are the so-called t-cherry junction trees. It is proved that t-cherry junction trees provide an approximation of a discrete multivariate probability distribution that is at least as good as, and often better than, the one given by the Chow–Liu tree. Constructions exist for the third-order and, more generally, the "k"th-order t-cherry junction tree. The second order t-cherry junction tree is in fact the Chow–Liu tree. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
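A minimal sketch of the algorithm described above is given below. It is an illustration, not the authors' original implementation: it assumes a small discrete data set supplied as rows of observations, uses empirical distributions throughout, and extracts the maximum-weight spanning tree over pairwise mutual informations with Kruskal's method.

from collections import Counter
from math import log

def mutual_information(data, i, j):
    """Empirical mutual information (in nats) between columns i and j of the data."""
    n = len(data)
    pij = Counter((row[i], row[j]) for row in data)
    pi = Counter(row[i] for row in data)
    pj = Counter(row[j] for row in data)
    return sum((c / n) * log((c / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), c in pij.items())

def chow_liu_tree(data, n_vars):
    """Edges of a maximum-weight spanning tree over the pairwise mutual informations."""
    edges = sorted(((mutual_information(data, i, j), i, j)
                    for i in range(n_vars) for j in range(i + 1, n_vars)), reverse=True)
    parent = list(range(n_vars))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for w, i, j in edges:                     # Kruskal: add the heaviest edges that keep a forest
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j, w))
    return tree

# toy data set: column 2 nearly copies column 0, column 1 is close to independent
data = [(0, 1, 0), (1, 0, 1), (1, 1, 1), (0, 0, 0), (1, 0, 1), (0, 1, 1)]
print(chow_liu_tree(data, 3))                 # the edge between columns 0 and 2 gets the largest weight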
[ { "math_id": 0, "text": "P(X_{1},X_{2},\\ldots,X_{n})" }, { "math_id": 1, "text": "P(X_{1},X_{2},X_{3},X_{4},X_{5},X_{6})" }, { "math_id": 2, "text": "\nP^{\\prime\n}(X_{1},X_{2},X_{3},X_{4},X_{5},X_{6})=P(X_{6}|X_{5})P(X_{5}|X_{2})P(X_{4}|X_{2})P(X_{3}|X_{2})P(X_{2}|X_{1})P(X_{1})\n" }, { "math_id": 3, "text": "P^{\\prime}" }, { "math_id": 4, "text": "P" }, { "math_id": 5, "text": "\nD(P\\parallel P^{\\prime })=-\\sum I(X_{i};X_{j(i)})+\\sum\nH(X_{i})-H(X_{1},X_{2},\\ldots ,X_{n})\n" }, { "math_id": 6, "text": "I(X_{i};X_{j(i)})" }, { "math_id": 7, "text": "X_{i}" }, { "math_id": 8, "text": "X_{j(i)}" }, { "math_id": 9, "text": "H(X_{1},X_{2},\\ldots ,X_{n})" }, { "math_id": 10, "text": "\\{X_{1},X_{2},\\ldots ,X_{n}\\}" }, { "math_id": 11, "text": "\\sum H(X_{i})" }, { "math_id": 12, "text": "\\sum I(X_{i};X_{j(i)})" } ]
https://en.wikipedia.org/wiki?curid=12680566
12683145
Elastic instability
Elastic instability is a form of instability occurring in elastic systems, such as buckling of beams and plates subject to large compressive loads. There are a lot of ways to study this kind of instability. One of them is to use the method of incremental deformations based on superposing a small perturbation on an equilibrium solution. Single degree of freedom-systems. Consider as a simple example a rigid beam of length "L", hinged in one end and free in the other, and having an angular spring attached to the hinged end. The beam is loaded in the free end by a force "F" acting in the compressive axial direction of the beam, see the figure to the right. Moment equilibrium condition. Assuming a clockwise angular deflection formula_0, the clockwise moment exerted by the force becomes formula_1. The moment equilibrium equation is given by formula_2 where formula_3 is the spring constant of the angular spring (Nm/radian). Assuming formula_0 is small enough, implementing the Taylor expansion of the sine function and keeping the two first terms yields formula_4 which has three solutions, the trivial formula_5, and formula_6 which is imaginary (i.e. not physical) for formula_7 and real otherwise. This implies that for small compressive forces, the only equilibrium state is given by formula_5, while if the force exceeds the value formula_8 there is suddenly another mode of deformation possible. Energy method. The same result can be obtained by considering energy relations. The energy stored in the angular spring is formula_9 and the work done by the force is simply the force multiplied by the vertical displacement of the beam end, which is formula_10. Thus, formula_11 The energy equilibrium condition formula_12 now yields formula_13 as before (besides from the trivial formula_5). Stability of the solutions. Any solution formula_0 is stable iff a small change in the deformation angle formula_14 results in a reaction moment trying to restore the original angle of deformation. The net clockwise moment acting on the beam is formula_15 An infinitesimal clockwise change of the deformation angle formula_0 results in a moment formula_16 which can be rewritten as formula_17 since formula_18 due to the moment equilibrium condition. Now, a solution formula_0 is stable iff a clockwise change formula_19 results in a negative change of moment formula_20 and vice versa. Thus, the condition for stability becomes formula_21 The solution formula_5 is stable only for formula_22, which is expected. By expanding the cosine term in the equation, the approximate stability condition is obtained: formula_23 for formula_24, which the two other solutions satisfy. Hence, these solutions are stable. Multiple degrees of freedom-systems. By attaching another rigid beam to the original system by means of an angular spring a two degrees of freedom-system is obtained. Assume for simplicity that the beam lengths and angular springs are equal. The equilibrium conditions become formula_25 formula_26 where formula_27 and formula_28 are the angles of the two beams. Linearizing by assuming these angles are small yields formula_29 The non-trivial solutions to the system is obtained by finding the roots of the determinant of the system matrix, i.e. for formula_30 Thus, for the two degrees of freedom-system there are two critical values for the applied force "F". These correspond to two different modes of deformation which can be computed from the nullspace of the system matrix. 
Dividing the equations by formula_27 yields formula_31 For the lower critical force the ratio is positive and the two beams deflect in the same direction while for the higher force they form a "banana" shape. These two states of deformation represent the buckling mode shapes of the system.
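The two-degree-of-freedom eigenvalue problem above can also be solved symbolically. The sketch below is illustrative: it forms the linearized system matrix (divided through by the spring constant, with x = FL/k as the dimensionless load), solves det = 0 for the critical loads, and recovers the corresponding mode-shape ratios.

import sympy as sp

x = sp.symbols('x', positive=True)            # x = F*L / k_theta, the dimensionless load
M = sp.Matrix([[x - 1, x],
               [1, x - 1]])                   # linearized system matrix divided by k_theta

critical = sp.solve(sp.Eq(M.det(), 0), x)
print(critical)                               # 3/2 - sqrt(5)/2 and 3/2 + sqrt(5)/2, about 0.382 and 2.618

for xc in critical:
    ratio = sp.simplify(1 / xc - 1)           # theta2/theta1 = k_theta/(F*L) - 1
    print(xc, ratio, float(ratio))            # about +1.618 for the lower load, -0.618 for the higher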
[ { "math_id": 0, "text": "\\theta" }, { "math_id": 1, "text": "M_F = F L \\sin\\theta" }, { "math_id": 2, "text": "\nF L \\sin \\theta = k_\\theta \\theta \n" }, { "math_id": 3, "text": "k_\\theta" }, { "math_id": 4, "text": "\nF L \\Bigg(\\theta - \\frac{1}{6} \\theta^3\\Bigg) \\approx k_\\theta \\theta\n" }, { "math_id": 5, "text": "\\theta = 0" }, { "math_id": 6, "text": "\n\\theta \\approx \\pm \\sqrt{6 \\Bigg( 1 - \\frac{k_\\theta}{F L} \\Bigg)} \n" }, { "math_id": 7, "text": "F L < k_\\theta" }, { "math_id": 8, "text": "k_\\theta/L" }, { "math_id": 9, "text": "\nE_\\mathrm{spring} = \\int k_\\theta \\theta \\mathrm{d} \\theta = \\frac{1}{2} k_\\theta \\theta^2\n" }, { "math_id": 10, "text": "L (1 - \\cos\\theta)" }, { "math_id": 11, "text": "\nE_\\mathrm{force} = \\int{F \\mathrm{d} x = F L (1 - \\cos \\theta )}\n" }, { "math_id": 12, "text": "E_\\mathrm{spring} = E_\\mathrm{force}" }, { "math_id": 13, "text": "F = k_\\theta / L" }, { "math_id": 14, "text": "\\Delta \\theta" }, { "math_id": 15, "text": "\nM(\\theta) = F L \\sin \\theta - k_\\theta \\theta\n" }, { "math_id": 16, "text": "\nM(\\theta + \\Delta \\theta) = M + \\Delta M = F L (\\sin \\theta + \\Delta \\theta \\cos \\theta ) - k_\\theta (\\theta + \\Delta \\theta) \n" }, { "math_id": 17, "text": "\n\\Delta M = \\Delta \\theta (F L \\cos \\theta - k_\\theta) \n" }, { "math_id": 18, "text": "F L \\sin \\theta = k_\\theta \\theta" }, { "math_id": 19, "text": "\\Delta \\theta > 0" }, { "math_id": 20, "text": "\\Delta M < 0" }, { "math_id": 21, "text": "\n\\frac{\\Delta M}{\\Delta \\theta} = \\frac{\\mathrm{d} M}{\\mathrm{d} \\theta} = FL \\cos \\theta - k_\\theta < 0\n" }, { "math_id": 22, "text": "FL < k_\\theta" }, { "math_id": 23, "text": "\n|\\theta| > \\sqrt{2\\Bigg( 1 - \\frac{k_\\theta}{F L} \\Bigg)}\n" }, { "math_id": 24, "text": "FL > k_\\theta" }, { "math_id": 25, "text": "\nF L ( \\sin \\theta_1 + \\sin \\theta_2 ) = k_\\theta \\theta_1\n" }, { "math_id": 26, "text": "\nF L \\sin \\theta_2 = k_\\theta ( \\theta_2 - \\theta_1 )\n" }, { "math_id": 27, "text": "\\theta_1" }, { "math_id": 28, "text": "\\theta_2" }, { "math_id": 29, "text": "\n\\begin{pmatrix}\nF L - k_\\theta & F L \\\\\nk_\\theta & F L - k_\\theta\n\\end{pmatrix}\n\\begin{pmatrix}\n\\theta_1 \\\\\n\\theta_2\n\\end{pmatrix} = \n\\begin{pmatrix}\n0 \\\\\n0\n\\end{pmatrix}\n" }, { "math_id": 30, "text": "\n\\frac{F L}{k_\\theta} = \\frac{3}{2} \\mp \\frac{\\sqrt{5}}{2} \\approx \\left\\{\\begin{matrix} 0.382\\\\2.618 \\end{matrix}\\right.\n" }, { "math_id": 31, "text": "\n\\frac{\\theta_2}{\\theta_1} \\Big|_{\\theta_1 \\ne 0} = \\frac{k_\\theta}{F L} - 1 \\approx \\left\\{\\begin{matrix} 1.618 & \\text{for } F L/k_\\theta \\approx 0.382\\\\ -0.618 & \\text{for } F L/k_\\theta \\approx 2.618 \\end{matrix}\\right.\n" } ]
https://en.wikipedia.org/wiki?curid=12683145
1268394
DR
DR, Dr, dr, may refer to: &lt;templatestyles src="Template:TOC_right/styles.css" /&gt; See also. Topics referred to by the same term &lt;templatestyles src="Dmbox/styles.css" /&gt; This page lists articles associated with the title DR.
[ { "math_id": 0, "text": "H_{\\text{dR}}^k" } ]
https://en.wikipedia.org/wiki?curid=1268394
12684962
Fisher–Yates shuffle
Algorithm for generating a random permutation of a finite set The Fisher–Yates shuffle is an algorithm for shuffling a finite sequence. The algorithm takes a list of all the elements of the sequence, and continually determines the next element in the shuffled sequence by randomly drawing an element from the list until no elements remain. The algorithm produces an unbiased permutation: every permutation is equally likely. The modern version of the algorithm takes time proportional to the number of items being shuffled and shuffles them in place. The Fisher–Yates shuffle is named after Ronald Fisher and Frank Yates, who first described it. It is also known as the Knuth shuffle after Donald Knuth. A variant of the Fisher–Yates shuffle, known as Sattolo's algorithm, may be used to generate random cyclic permutations of length "n" instead of random permutations. Fisher and Yates' original method. The Fisher–Yates shuffle, in its original form, was described in 1938 by Ronald Fisher and Frank Yates in their book "Statistical tables for biological, agricultural and medical research". Their description of the algorithm used pencil and paper; a table of random numbers provided the randomness. The basic method given for generating a random permutation of the numbers 1 through "N" goes as follows: Provided that the random numbers picked in step 2 above are truly random and unbiased, so will be the resulting permutation. Fisher and Yates took care to describe how to obtain such random numbers in any desired range from the supplied tables in a manner which avoids any bias. They also suggested the possibility of using a simpler method — picking random numbers from one to "N" and discarding any duplicates—to generate the first half of the permutation, and only applying the more complex algorithm to the remaining half, where picking a duplicate number would otherwise become frustratingly common. The modern algorithm. The modern version of the Fisher–Yates shuffle, designed for computer use, was introduced by Richard Durstenfeld in 1964 and popularized by Donald E. Knuth in "The Art of Computer Programming" as "Algorithm P (Shuffling)". Neither Durstenfeld's article nor Knuth's first edition of "The Art of Computer Programming" acknowledged the work of Fisher and Yates; they may not have been aware of it. Subsequent editions of Knuth's "The Art of Computer Programming" mention Fisher and Yates' contribution. The algorithm described by Durstenfeld is more efficient than that given by Fisher and Yates: whereas a naïve computer implementation of Fisher and Yates' method would spend needless time counting the remaining numbers in step 3 above, Durstenfeld's solution is to move the "struck" numbers to the end of the list by swapping them with the last unstruck number at each iteration. This reduces the algorithm's time complexity to formula_0 compared to formula_1 for the naïve implementation. This change gives the following algorithm (for a zero-based array). -- To shuffle an array "a" of "n" elements (indices 0.."n"-1): for "i" from "n"−1 down to 1 do "j" ← random integer such that 0 ≤ "j" ≤ "i" exchange "a"["j"] and "a"["i"] An equivalent version which shuffles the array in the opposite direction (from lowest index to highest) is: -- To shuffle an array "a" of "n" elements (indices 0.."n"-1): for "i" from 0 to "n"−2 do "j" ← random integer such that "i" ≤ "j" &lt; "n" exchange "a"["i"] and "a"["j"] Examples. Pencil-and-paper method. 
This example permutes the letters from A to H using Fisher and Yates' original method, starting with the letters in alphabetical order: A random number "j" from 1 to 8 is selected. If "j"=3, for example, then one strikes out the third letter on the scratch pad and writes it down as the result: A second random number is chosen, this time from 1 to 7. If the number is 4, then one strikes out the fourth letter not yet struck off the scratch pad and adds it to the result: The next random number is selected from 1 to 6, and then from 1 to 5, and so on, always repeating the strike-out process as above: Modern method. In Durstenfeld's version of the algorithm, instead of striking out the chosen letters and copying them elsewhere, they are swapped with the last letter not yet chosen. Starting with the letters from A to H as before: Select a random number "j" from 1 to 8, and then swap the "j"th and 8th letters. So, if the random number is 6, for example, swap the 6th and 8th letters in the list: The next random number is selected from 1 to 7, and the selected letter is swapped with the 7th letter. If it is 2, for example, swap the 2nd and 7th letters: The process is repeated until the permutation is complete: After eight steps, the algorithm is complete and the resulting permutation is G E D C A H B F. Variants. The "inside-out" algorithm. The Fisher–Yates shuffle, as implemented by Durstenfeld, is an "in-place shuffle". That is, given a preinitialized array, it shuffles the elements of the array in place, rather than producing a shuffled copy of the array. This can be an advantage if the array to be shuffled is large. To simultaneously initialize and shuffle an array, a bit more efficiency can be attained by doing an "inside-out" version of the shuffle. In this version, one successively places element number "i" into a random position among the first "i" positions in the array, after moving the element previously occupying that position to position "i". In case the random position happens to be number "i", this "move" (to the same place) involves an uninitialised value, but that does not matter, as the value is then immediately overwritten. No separate initialization is needed, and no exchange is performed. In the common case where "source" is defined by some simple function, such as the integers from 0 to "n" − 1, "source" can simply be replaced with the function since "source" is never altered during execution. To initialize an array "a" of "n" elements to a randomly shuffled copy of "source", both 0-based: for "i" from 0 to "n" − 1 do "j" ← random integer such that 0 ≤ "j" ≤ "i" if "j" ≠ "i" "a"["i"] ← "a"["j"] "a"["j"] ← "source"["i"] The inside-out shuffle can be seen to be correct by induction. Assuming a perfect random number generator, every one of the "n"! different sequences of random numbers that could be obtained from the calls of "random" will produce a different permutation of the values, so all of these are obtained exactly once. The condition that checks if "j" ≠ "i" may be omitted in languages that have no problems accessing uninitialized array values. This eliminates "n" conditional branches at an expected cost of "Hn" ≈ ln "n" + "γ" redundant assignments. Another advantage of this technique is that "n", the number of elements in the "source", does not need to be known in advance; we only need to be able to detect the end of the source data when it is reached. 
Below the array "a" is built iteratively starting from empty, and "a".length represents the "current" number of elements seen. To initialize an empty array "a" to a randomly shuffled copy of "source" whose length is not known: while "source".moreDataAvailable "j" ← random integer such that 0 ≤ "j" ≤ "a".length if j = "a".length "a".append("source".next) else "a".append("a"["j"]) "a"["j"] ← "source".next Sattolo's algorithm. A very similar algorithm was published in 1986 by Sandra Sattolo for generating uniformly distributed cycles of (maximal) length "n". The only difference between Durstenfeld's and Sattolo's algorithms is that in the latter, in step 2 above, the random number "j" is chosen from the range between 1 and "i"−1 (rather than between 1 and "i") inclusive. This simple change modifies the algorithm so that the resulting permutation always consists of a single cycle. In fact, as described below, it is quite easy to "accidentally" implement Sattolo's algorithm when the ordinary Fisher–Yates shuffle is intended. This will bias the results by causing the permutations to be picked from the smaller set of ("n"−1)! cycles of length "N", instead of from the full set of all "n"! possible permutations. The fact that Sattolo's algorithm always produces a cycle of length "n" can be shown by induction. Assume by induction that after the initial iteration of the loop, the remaining iterations permute the first "n" − 1 elements according to a cycle of length "n" − 1 (those remaining iterations are just Sattolo's algorithm applied to those first "n" − 1 elements). This means that tracing the initial element to its new position "p", then the element originally at position "p" to its new position, and so forth, one only gets back to the initial position after having visited all other positions. Suppose the initial iteration swapped the final element with the one at (non-final) position "k", and that the subsequent permutation of first "n" − 1 elements then moved it to position "l"; we compare the permutation "π" of all "n" elements with that remaining permutation "σ" of the first "n" − 1 elements. Tracing successive positions as just mentioned, there is no difference between "π" and "σ" until arriving at position "k". But then, under "π" the element originally at position "k" is moved to the final position rather than to position "l", and the element originally at the final position is moved to position "l". From there on, the sequence of positions for "π" again follows the sequence for "σ", and all positions will have been visited before getting back to the initial position, as required. As for the equal probability of the permutations, it suffices to observe that the modified algorithm involves ("n"−1)! distinct possible sequences of random numbers produced, each of which clearly produces a different permutation, and each of which occurs—assuming the random number source is unbiased—with equal probability. The ("n"−1)! different permutations so produced precisely exhaust the set of cycles of length "n": each such cycle has a unique cycle notation with the value "n" in the final position, which allows for ("n"−1)! permutations of the remaining values to fill the other positions of the cycle notation. 
A sample implementation of Sattolo's algorithm in Python is: from random import randrange def sattolo_cycle(items) -&gt; None: """Sattolo's algorithm.""" i = len(items) while i &gt; 1: i = i - 1 j = randrange(i) # 0 &lt;= j &lt;= i-1 items[j], items[i] = items[i], items[j] Comparison with other shuffling algorithms. The asymptotic time and space complexity of the Fisher–Yates shuffle are optimal. Combined with a high-quality unbiased random number source, it is also guaranteed to produce unbiased results. Compared to some other solutions, it also has the advantage that, if only part of the resulting permutation is needed, it can be stopped halfway through, or even stopped and restarted repeatedly, generating the permutation incrementally as needed. Naïve method. The naïve method of swapping each element with another element chosen randomly from all elements is biased. Different permutations will have different probabilities of being generated, for every formula_2, because the number of different permutations, formula_3, does not evenly divide the number of random outcomes of the algorithm, formula_4. In particular, by Bertrand's postulate there will be at least one prime number between formula_5 and formula_6, and this number will divide formula_3 but not divide formula_4. from random import randrange def naive_shuffle(items) -&gt; None: """A naive method. This is an example of what not to do -- use Fisher-Yates instead.""" n = len(items) for i in range(n): j = randrange(n) # 0 &lt;= j &lt;= n-1 items[j], items[i] = items[i], items[j] Sorting. An alternative method assigns a random number to each element of the set to be shuffled and then sorts the set according to the assigned numbers. The sorting method has the same asymptotic time complexity as Fisher–Yates: although general sorting is "O"("n" log "n"), numbers are efficiently sorted using Radix sort in "O"("n") time. Like the Fisher–Yates shuffle, the sorting method produces unbiased results. However, care must be taken to ensure that the assigned random numbers are never duplicated, since sorting algorithms typically do not order elements randomly in case of a tie. Additionally, this method requires asymptotically larger space: "O"("n") additional storage space for the random numbers, versus "O"(1) space for the Fisher–Yates shuffle. Finally, the sorting method has a simple parallel implementation, unlike the Fisher–Yates shuffle, which is sequential. A variant of the above method that has seen some use in languages that support sorting with user-specified comparison functions is to shuffle a list by sorting it with a comparison function that returns random values. However, this is very likely to produce highly non-uniform distributions, which in addition depend heavily on the sorting algorithm used. For instance, suppose quicksort is used as the sorting algorithm, with a fixed element selected as the first pivot element. The algorithm starts comparing the pivot with all other elements to separate them into those less and those greater than it, and the relative sizes of those groups will determine the final place of the pivot element. For a uniformly distributed random permutation, each possible final position should be equally likely for the pivot element, but if each of the initial comparisons returns "less" or "greater" with equal probability, then that position will have a binomial distribution for "p" = 1/2, which gives positions near the middle of the sequence with a much higher probability than positions near the ends.
Randomized comparison functions applied to other sorting methods like merge sort may produce results that appear more uniform, but are not quite so either, since merging two sequences by repeatedly choosing one of them with equal probability (until the choice is forced by the exhaustion of one sequence) does not produce results with a uniform distribution; instead the probability to choose a sequence should be proportional to the number of elements left in it. In fact no method that uses only two-way random events with equal probability ("coin flipping"), repeated a bounded number of times, can produce permutations of a sequence (of more than two elements) with a uniform distribution, because every execution path will have as probability a rational number with as denominator a power of 2, while the required probability 1/"n"! for each possible permutation is not of that form. In principle this shuffling method can even result in program failures like endless loops or access violations, because the correctness of a sorting algorithm may depend on properties of the order relation (like transitivity) that a comparison producing random values will certainly not have. While this kind of behaviour should not occur with sorting routines that never perform a comparison whose outcome can be predicted with certainty (based on previous comparisons), there can be valid reasons for deliberately making such comparisons. For instance the fact that any element should compare equal to itself allows using elements as sentinel values for efficiency reasons, and if this is the case, a random comparison function would break the sorting algorithm. Potential sources of bias. Care must be taken when implementing the Fisher–Yates shuffle, both in the implementation of the algorithm itself and in the generation of the random numbers it is built on, otherwise the results may show detectable bias. A number of common sources of bias have been listed below. Implementation errors. A common error when implementing the Fisher–Yates shuffle is to pick the random numbers from the wrong range. The flawed algorithm may appear to work correctly, but it will not produce each possible permutation with equal probability, and it may not produce certain permutations at all. For example, a common off-by-one error would be choosing the index "j" of the entry to swap in the example above to be always strictly less than the index "i" of the entry it will be swapped with. This turns the Fisher–Yates shuffle into Sattolo's algorithm, which produces only permutations consisting of a single cycle involving all elements: in particular, with this modification, no element of the array can ever end up in its original position. Similarly, always selecting "j" from the entire range of valid array indices on "every" iteration also produces a result which is biased, albeit less obviously so. This can be seen from the fact that doing so yields "n"^"n" distinct possible sequences of swaps, whereas there are only "n"! possible permutations of an "n"-element array. Since "n"^"n" can never be evenly divisible by "n"! when "n" &gt; 2 (as the latter is divisible by "n"−1, which shares no prime factors with "n"), some permutations must be produced by more of the "n"^"n" sequences of swaps than others. As a concrete example of this bias, observe the distribution of possible outcomes of shuffling a three-element array [1, 2, 3]. There are 6 possible permutations of this array (3! = 6), but the algorithm produces 27 possible shuffles (3^3 = 27).
In this case, [1, 2, 3], [3, 1, 2], and [3, 2, 1] each result from 4 of the 27 shuffles, while each of the remaining 3 permutations occurs in 5 of the 27 shuffles. The matrix to the right shows the probability of each element in a list of length 7 ending up in any other position. Observe that for most elements, ending up in their original position (the matrix's main diagonal) has lowest probability, and moving one slot backwards has highest probability. Modulo bias. Doing a Fisher–Yates shuffle involves picking uniformly distributed random integers from various ranges. Most random number generators, however — whether true or pseudorandom — will only directly provide numbers in a fixed range from 0 to RAND_MAX, and in some libraries, RAND_MAX may be as low as 32767. A simple and commonly used way to force such numbers into a desired range is to apply the modulo operator; that is, to divide them by the size of the range and take the remainder. However, the need in a Fisher–Yates shuffle to generate random numbers in every range from 0–1 to 0–"n" almost guarantees that some of these ranges will not evenly divide the natural range of the random number generator. Thus, the remainders will not always be evenly distributed and, worse yet, the bias will be systematically in favor of small remainders. For example, assume that your random number source gives numbers from 0 to 99 (as was the case for Fisher and Yates' original tables), which is 100 values, and that you wish to obtain an unbiased random number from 0 to 15 (16 values). If you simply divide the numbers by 16 and take the remainder, you will find that the numbers 0–3 occur about 17% more often than others. This is because 16 does not evenly divide 100: the largest multiple of 16 less than or equal to 100 is 6×16 = 96, and it is the numbers in the incomplete range 96–99 that cause the bias. The simplest way to fix the problem is to discard those numbers before taking the remainder and to keep trying again until a number in the suitable range comes up. While in principle this could, in the worst case, take forever, the expected number of retries will always be less than one. A method of obtaining random numbers in the required ranges that is unbiased and nearly never performs the expensive modulo operation was described in 2018 by Daniel Lemire. A related problem occurs with implementations that first generate a random floating-point number—usually in the range [0,1]—and then multiply it by the size of the desired range and round down. The problem here is that random floating-point numbers, however carefully generated, always have only finite precision. This means that there are only a finite number of possible floating point values in any given range, and if the range is divided into a number of segments that does not divide this number evenly, some segments will end up with more possible values than others. While the resulting bias will not show the same systematic downward trend as in the previous case, it will still be there. The extra cost of making sure there is no "modulo bias" when generating random integers for a Fisher-Yates shuffle depends on the approach (classic modulo, floating-point multiplication or Lemire's integer multiplication) and the random number generator used. Pseudorandom generators.
Pseudorandom generators. An additional problem occurs when the Fisher–Yates shuffle is used with a pseudorandom number generator or PRNG: as the sequence of numbers output by such a generator is entirely determined by its internal state at the start of a sequence, a shuffle driven by such a generator cannot possibly produce more distinct permutations than the generator has distinct possible states. Even when the number of possible states exceeds the number of permutations, the irregular nature of the mapping from sequences of numbers to permutations means that some permutations will occur more often than others. Thus, to minimize bias, the number of states of the PRNG should exceed the number of permutations by at least several orders of magnitude. For example, the built-in pseudorandom number generator provided by many programming languages and/or libraries may often have only 32 bits of internal state, which means it can only produce 2^32 different sequences of numbers. If such a generator is used to shuffle a deck of 52 playing cards, it can only ever produce a very small fraction of the 52! ≈ 2^225.6 possible permutations. It is impossible for a generator with fewer than 226 bits of internal state to produce all the possible permutations of a 52-card deck. No pseudorandom number generator can produce more distinct sequences, starting from the point of initialization, than there are distinct seed values it may be initialized with. Thus, a generator that has 1024 bits of internal state but which is initialized with a 32-bit seed can still only produce 2^32 different permutations right after initialization. It can produce more permutations if one exercises the generator a great many times before starting to use it for generating permutations, but this is a very inefficient way of increasing randomness: supposing one can arrange to use the generator a random number of up to a billion, say 2^30 for simplicity, times between initialization and generating permutations, then the number of possible permutations is still only 2^62. A further problem occurs when a simple linear congruential PRNG is used with the divide-and-take-remainder method of range reduction described above. The problem here is that the low-order bits of a linear congruential PRNG with modulus 2^"e" are less random than the high-order ones: the low "n" bits of the generator themselves have a period of at most 2^"n". When the divisor is a power of two, taking the remainder essentially means throwing away the high-order bits, such that one ends up with a significantly less random value. Different rules apply if the LCG has a prime modulus, but such generators are uncommon. This is an example of the general rule that a poor-quality RNG or PRNG will produce poor-quality shuffles.
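The weakness of the low-order bits of a power-of-two-modulus LCG is easy to observe directly. The sketch below uses a small illustrative Python generator with textbook parameters (not any particular library's implementation); reducing its output modulo 8 leaves a sequence that repeats with period 2^3 = 8.

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """A power-of-two-modulus linear congruential generator; the parameters
    are a common textbook choice, used here only for illustration."""
    while True:
        seed = (a * seed + c) % m
        yield seed

gen = lcg(seed=1)
low_bits = [next(gen) % 8 for _ in range(16)]
print(low_bits)                        # the low three bits cycle with period 8
print(low_bits[:8] == low_bits[8:])    # True: "x % 8" keeps only this short cycle
```

References. &lt;templatestyles src="Reflist/styles.css" /&gt;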
[ { "math_id": 0, "text": "O(n)" }, { "math_id": 1, "text": "O(n^2)" }, { "math_id": 2, "text": "n>2" }, { "math_id": 3, "text": "n!" }, { "math_id": 4, "text": "n^n" }, { "math_id": 5, "text": "n/2" }, { "math_id": 6, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=12684962
1268560
Hankel contour
In mathematics, a Hankel contour is a path in the complex plane which extends from (+∞,δ), around the origin counterclockwise and back to (+∞,−δ), where δ is an arbitrarily small positive number. The contour thus remains arbitrarily close to the real axis but without crossing the real axis except for negative values of "x". The Hankel contour can also be represented by a path that has mirror images just above and below the real axis, connected to a circle of radius ε, centered at the origin, where ε is an arbitrarily small number. The two linear portions of the contour are said to be a distance of δ from the real axis. Thus, the total distance between the linear portions of the contour is 2δ. The contour is traversed in the positively-oriented sense, meaning that the circle around the origin is traversed counterclockwise. Use of Hankel contours is one of the methods of contour integration. This type of path for contour integrals was first used by Hermann Hankel in his investigations of the Gamma function. The Hankel contour is used to evaluate integral representations of functions such as the Gamma function, the Riemann zeta function, and the Hankel functions (which are Bessel functions of the third kind). Applications. The Hankel contour and the Gamma function. The Hankel contour is helpful in expressing and solving the Gamma function in the complex "t"-plane. The Gamma function can be defined for any complex value in the plane if we evaluate the integral along the Hankel contour. The Hankel contour is especially useful for expressing the Gamma function for any complex value because the end points of the contour vanish, which allows the fundamental property of the Gamma function, formula_0, to be satisfied. Derivation of the contour integral expression of the Gamma function. Note that the formal representation of the Gamma function is formula_1. To satisfy the fundamental property of the Gamma function, it follows that formula_2 after multiplying both sides by z. Thus, given that the endpoints of the Hankel contour vanish, the left- and right-hand sides reduce to formula_3. Solving this differential equation gives the general solution formula_4. While "A" is constant with respect to "t", it may depend on the complex number "z". Since A(z) is arbitrary, a complex exponential in z may be absorbed into the definition of A(z). Substituting f(t) into the original integral then gives formula_5. By integrating along the Hankel contour, the contour integral expression of the Gamma function becomes formula_6.
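As a rough numerical sanity check of formula_6, one can discretize a concrete Hankel contour (two rays at height ±δ and a small circle around the origin) and evaluate the integral by quadrature. The sketch below uses Python with the mpmath library purely for illustration; the values chosen for δ, the circle radius, and the truncation point of the rays are arbitrary assumptions, not part of the definition.

```python
import mpmath as mp

def hankel_gamma(z, delta=1e-8, radius=1e-3, cutoff=40):
    """Evaluate Gamma(z) = i/(2 sin(pi z)) * integral over C of e^(-t) (-t)^(z-1) dt
    on a concrete Hankel contour: in along x + i*delta, counterclockwise around a
    circle of the given radius, and back out along x - i*delta."""
    f = lambda t: mp.exp(-t) * (-t) ** (z - 1)
    # incoming ray: from x = cutoff down to x = radius, just above the real axis
    leg_in = mp.quad(lambda x: -f(x + 1j * delta), [radius, cutoff])
    # small circle around the origin, traversed counterclockwise
    leg_circle = mp.quad(lambda th: f(radius * mp.exp(1j * th)) * 1j * radius * mp.exp(1j * th),
                         [0, 2 * mp.pi])
    # outgoing ray: from x = radius back out to x = cutoff, just below the real axis
    leg_out = mp.quad(lambda x: f(x - 1j * delta), [radius, cutoff])
    return 1j / (2 * mp.sin(mp.pi * z)) * (leg_in + leg_circle + leg_out)

print(hankel_gamma(0.5))   # close to 1.7724539, i.e. sqrt(pi)
print(mp.gamma(0.5))       # reference value for comparison
```

References. &lt;templatestyles src="Reflist/styles.css" /&gt;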
[ { "math_id": 0, "text": "\\Gamma(z+1)=z\\Gamma(z)" }, { "math_id": 1, "text": "\\Gamma(z)=\\int_C f(t)t^{z-1} dt" }, { "math_id": 2, "text": "z\\int_C f(t)t^z dt =[t^{(z-1)}f(t)]-\\int_C t^{(z-1)} f'(t)dt" }, { "math_id": 3, "text": "f(t)+f'(t)=0" }, { "math_id": 4, "text": "f(t)=Ae^{-t}" }, { "math_id": 5, "text": "\\Gamma(z)=A(z)\\int_C e^{-t}(-t)^{z-1}dt" }, { "math_id": 6, "text": "\\Gamma(z)=\\frac{i}{2\\sin{\\pi z}}\\int_C e^{-t}(-t)^{z-1}dt" } ]
https://en.wikipedia.org/wiki?curid=1268560
12687840
Subnet (mathematics)
Generalization of the concept of subsequence to the case of nets In topology and related areas of mathematics, a subnet is a generalization of the concept of subsequence to the case of nets. The analogue of "subsequence" for nets is the notion of a "subnet". The definition is not completely straightforward, but is designed to allow as many theorems about subsequences to generalize to nets as possible. There are three non-equivalent definitions of "subnet". The first definition of a subnet was introduced by John L. Kelley in 1955, and later Stephen Willard introduced his own (non-equivalent) variant of Kelley's definition in 1970. Subnets in the sense of Willard and subnets in the sense of Kelley are the most commonly used definitions of "subnet" but neither is equivalent to the concept of "subordinate filter", which is the analog of "subsequence" for filters (they are not equivalent in the sense that there exist subordinate filters on formula_0 whose filter/subordinate–filter relationship cannot be described in terms of the corresponding net/subnet relationship). A third definition of "subnet" (not equivalent to those given by Kelley or Willard) that is equivalent to the concept of "subordinate filter" was introduced independently by Smiley (1957), Aarnes and Andenaes (1972), Murdeshwar (1983), and possibly others, although it is not often used. This article discusses the definition due to Willard (the other definitions are described in the article Filters in topology#Non–equivalence of subnets and subordinate filters). Definitions. There are several different non-equivalent definitions of "subnet" and this article will use the definition introduced in 1970 by Stephen Willard, which is as follows: If formula_1 and formula_2 are nets in a set formula_3 from directed sets formula_4 and formula_5 respectively, then formula_6 is said to be a subnet of formula_7 (in the sense of Willard, or a Willard–subnet) if there exists a monotone final function formula_8 such that formula_9 A function formula_10 is monotone, order-preserving, and an order homomorphism if whenever formula_11 then formula_12 and it is called final if its image formula_13 is cofinal in formula_14 The set formula_13 being cofinal in formula_4 means that for every formula_15 there exists some formula_16 such that formula_17 that is, for every formula_18 there exists an formula_19 such that formula_20 Since the net formula_7 is the function formula_22 and the net formula_6 is the function formula_23 the defining condition formula_24 may be written more succinctly and cleanly as either formula_25 or formula_26 where formula_27 denotes function composition and formula_28 is just notation for the function formula_29 Subnets versus subsequences. Importantly, a subnet is not merely the restriction of a net formula_30 to a directed subset of its domain formula_14 In contrast, by definition, a subsequence of a given sequence formula_31 is a sequence formed from the given sequence by deleting some of the elements without disturbing the relative positions of the remaining elements. Explicitly, a sequence formula_32 is said to be a subsequence of formula_33 if there exists a strictly increasing sequence of positive integers formula_34 such that formula_35 for every formula_36 (that is to say, such that formula_37).
The sequence formula_38 can be canonically identified with the function formula_39 defined by formula_40 Thus a sequence formula_41 is a subsequence of formula_42 if and only if there exists a strictly increasing function formula_43 such that formula_44 Subsequences are subnets: Every subsequence is a subnet because if formula_45 is a subsequence of formula_33 then the map formula_43 defined by formula_46 is an order-preserving map whose image is cofinal in its codomain and satisfies formula_47 for all formula_48 Sequence and subnet but not a subsequence: The sequence formula_49 is not a subsequence of formula_50 although it is a subnet because the map formula_43 defined by formula_51 is an order-preserving map whose image is formula_52 and satisfies formula_53 for all formula_54 While a sequence is a net, a sequence has subnets that are not subsequences. The key difference is that subnets can use the same point in the net multiple times and the indexing set of the subnet can have much larger cardinality. Using the more general definition where we do not require monotonicity, a sequence is a subnet of a given sequence if and only if it can be obtained from some subsequence by repeating its terms and reordering them. Subnet of a sequence that is not a sequence: A subnet of a sequence is not necessarily a sequence. For an example, let formula_55 be directed by the usual order formula_56 and define formula_57 by letting formula_58 be the ceiling of formula_59 Then formula_60 is an order-preserving map (because it is a non-decreasing function) whose image formula_61 is a cofinal subset of its codomain. Let formula_62 be any sequence (such as a constant sequence, for instance) and let formula_63 for every formula_64 (in other words, let formula_65). This net formula_66 is not a sequence since its domain formula_67 is an uncountable set. However, formula_66 is a subnet of the sequence formula_7 since (by definition) formula_68 holds for every formula_69 Thus formula_6 is a subnet of formula_7 that is not a sequence. Furthermore, the sequence formula_7 is also a subnet of formula_66 since the inclusion map formula_70 (that sends formula_71) is an order-preserving map whose image formula_72 is a cofinal subset of its codomain and formula_73 holds for all formula_48 Thus formula_7 and formula_66 are (simultaneously) subnets of each other. Subnets induced by subsets: Suppose formula_74 is an infinite set and formula_33 is a sequence. Then formula_75 is a net on formula_76 that is also a subnet of formula_33 (take formula_57 to be the inclusion map formula_77). This subnet formula_75 in turn induces a subsequence formula_45 by defining formula_78 as the formula_79 smallest value in formula_67 (that is, let formula_80 and let formula_81 for every integer formula_82). In this way, every infinite subset of formula_74 induces a canonical subnet that may be written as a subsequence. However, as demonstrated above, not every subnet of a sequence is a subsequence. Applications. The definition generalizes some key theorems about subsequences: Taking formula_21 to be the identity map in the definition of "subnet" and requiring formula_87 to be a cofinal subset of formula_4 leads to the concept of a cofinal subnet, which turns out to be inadequate since, for example, the second theorem above fails for the Tychonoff plank if we restrict ourselves to cofinal subnets. Clustering and closure.
If formula_6 is a net in a subset formula_88 and if formula_89 is a cluster point of formula_6 then formula_90 In other words, every cluster point of a net in a subset belongs to the closure of that set. If formula_1 is a net in formula_3 then the set of all cluster points of formula_7 in formula_3 is equal to formula_91 where formula_92 for each formula_93 Convergence versus clustering. If a net converges to a point formula_83 then formula_83 is necessarily a cluster point of that net. The converse is not guaranteed in general. That is, it is possible for formula_89 to be a cluster point of a net formula_7 but for formula_7 to not converge to formula_84 However, if formula_1 clusters at formula_89 then there exists a subnet of formula_7 that converges to formula_84 This subnet can be explicitly constructed from formula_94 and the neighborhood filter formula_95 at formula_83 as follows: make formula_96 into a directed set by declaring that formula_97 then formula_98 and formula_99 is a subnet of formula_1 since the map formula_100 is a monotone function whose image formula_101 is a cofinal subset of formula_102 and formula_103 Thus, a point formula_89 is a cluster point of a given net if and only if it has a subnet that converges to formula_84 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "X = \\N" }, { "math_id": 1, "text": "x_{\\bull} = \\left(x_a\\right)_{a \\in A}" }, { "math_id": 2, "text": "s_{\\bull} = \\left(s_i\\right)_{i \\in I}" }, { "math_id": 3, "text": "X" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "I," }, { "math_id": 6, "text": "s_{\\bull}" }, { "math_id": 7, "text": "x_{\\bull}" }, { "math_id": 8, "text": "h : I \\to A" }, { "math_id": 9, "text": "s_i = x_{h(i)} \\quad \\text{ for all } i \\in I." }, { "math_id": 10, "text": "h : I \\to A" }, { "math_id": 11, "text": "i \\leq j" }, { "math_id": 12, "text": "h(i) \\leq h(j)" }, { "math_id": 13, "text": "h(I)" }, { "math_id": 14, "text": "A." }, { "math_id": 15, "text": "a \\in A," }, { "math_id": 16, "text": "b \\in h(I)" }, { "math_id": 17, "text": "b \\geq a;" }, { "math_id": 18, "text": "a \\in A" }, { "math_id": 19, "text": "i \\in I" }, { "math_id": 20, "text": "h(i) \\geq a." }, { "math_id": 21, "text": "h" }, { "math_id": 22, "text": "x_{\\bull} : A \\to X" }, { "math_id": 23, "text": "s_{\\bull} : I \\to X," }, { "math_id": 24, "text": "\\left(s_i\\right)_{i \\in I} = \\left(x_{h(i)}\\right)_{i \\in I}," }, { "math_id": 25, "text": "s_{\\bull} = x_{h(\\bull)}" }, { "math_id": 26, "text": "s_{\\bull} = x_{\\bull} \\circ h," }, { "math_id": 27, "text": "\\,\\circ\\," }, { "math_id": 28, "text": "x_{h(\\bull)} := \\left(x_{h(i)}\\right)_{i \\in I}" }, { "math_id": 29, "text": "x_{\\bull} \\circ h : I \\to X." }, { "math_id": 30, "text": "\\left(x_a\\right)_{a \\in A}" }, { "math_id": 31, "text": "x_1, x_2, x_3, \\ldots" }, { "math_id": 32, "text": "\\left(s_n\\right)_{n \\in \\N}" }, { "math_id": 33, "text": "\\left(x_i\\right)_{i \\in \\N}" }, { "math_id": 34, "text": "h_1 < h_2 < h_3 < \\cdots" }, { "math_id": 35, "text": "s_n = x_{h_n}" }, { "math_id": 36, "text": "n \\in \\N" }, { "math_id": 37, "text": "\\left(s_1, s_2, \\ldots\\right) = \\left(x_{h_1}, x_{h_2}, \\ldots\\right)" }, { "math_id": 38, "text": "\\left(h_n\\right)_{n \\in \\N} = \\left(h_1, h_2, \\ldots\\right)" }, { "math_id": 39, "text": "h_{\\bull} : \\N \\to \\N" }, { "math_id": 40, "text": "n \\mapsto h_n." }, { "math_id": 41, "text": "s_{\\bull} = \\left(s_n\\right)_{n \\in \\N}" }, { "math_id": 42, "text": "x_{\\bull} = \\left(x_i\\right)_{i \\in \\N}" }, { "math_id": 43, "text": "h : \\N \\to \\N" }, { "math_id": 44, "text": "s_{\\bull} = x_{\\bull} \\circ h." }, { "math_id": 45, "text": "\\left(x_{h_n}\\right)_{n \\in \\N}" }, { "math_id": 46, "text": "n \\mapsto h_n" }, { "math_id": 47, "text": "x_{h_n} = x_{h(n)}" }, { "math_id": 48, "text": "n \\in \\N." }, { "math_id": 49, "text": "\\left(s_i\\right)_{i \\in \\N} := (1, 1, 2, 2, 3, 3, \\ldots)" }, { "math_id": 50, "text": "\\left(x_i\\right)_{i \\in \\N} := (1, 2, 3, \\ldots)" }, { "math_id": 51, "text": "h(i) := \\left\\lfloor \\tfrac{i + 1}{2} \\right\\rfloor" }, { "math_id": 52, "text": "h(\\N) = \\N" }, { "math_id": 53, "text": "s_i = x_{h(i)}" }, { "math_id": 54, "text": "i \\in \\N." }, { "math_id": 55, "text": "I = \\{r \\in \\R : r > 0\\}" }, { "math_id": 56, "text": "\\,\\leq\\," }, { "math_id": 57, "text": "h : I \\to \\N" }, { "math_id": 58, "text": "h(r) = \\lceil r \\rceil" }, { "math_id": 59, "text": "r." 
}, { "math_id": 60, "text": "h : (I, \\leq) \\to (\\N, \\leq)" }, { "math_id": 61, "text": "h(I) = \\N" }, { "math_id": 62, "text": "x_{\\bull} = \\left(x_i\\right)_{i \\in \\N} : \\N \\to X" }, { "math_id": 63, "text": "s_r := x_{h(r)}" }, { "math_id": 64, "text": "r \\in I" }, { "math_id": 65, "text": "s_{\\bull} := x_{\\bull} \\circ h" }, { "math_id": 66, "text": "\\left(s_r\\right)_{r \\in I}" }, { "math_id": 67, "text": "I" }, { "math_id": 68, "text": "s_r = x_{h(r)}" }, { "math_id": 69, "text": "r \\in I." }, { "math_id": 70, "text": "\\iota : \\N \\to I" }, { "math_id": 71, "text": "n \\mapsto n" }, { "math_id": 72, "text": "\\iota(\\N) = \\N" }, { "math_id": 73, "text": "x_n = s_{\\iota(n)}" }, { "math_id": 74, "text": "I \\subseteq \\N" }, { "math_id": 75, "text": "\\left(x_i\\right)_{i \\in I}" }, { "math_id": 76, "text": "(I, \\leq)" }, { "math_id": 77, "text": "i \\mapsto i" }, { "math_id": 78, "text": "h_n" }, { "math_id": 79, "text": "n^{\\text{th}}" }, { "math_id": 80, "text": "h_1 := \\inf I" }, { "math_id": 81, "text": "h_n := \\inf \\{i \\in I : i > h_{n-1}\\}" }, { "math_id": 82, "text": "n > 1" }, { "math_id": 83, "text": "x" }, { "math_id": 84, "text": "x." }, { "math_id": 85, "text": "y" }, { "math_id": 86, "text": "y_{\\bull}" }, { "math_id": 87, "text": "B" }, { "math_id": 88, "text": "S \\subseteq X" }, { "math_id": 89, "text": "x \\in X" }, { "math_id": 90, "text": "x \\in \\operatorname{cl}_X S." }, { "math_id": 91, "text": "\\bigcap_{a \\in A} \\operatorname{cl}_X \\left(x_{\\geq a}\\right)" }, { "math_id": 92, "text": "x_{\\geq a} := \\left\\{x_b : b \\geq a, b \\in A\\right\\}" }, { "math_id": 93, "text": "a \\in A." }, { "math_id": 94, "text": "(A, \\leq)" }, { "math_id": 95, "text": "\\mathcal{N}_x" }, { "math_id": 96, "text": "I := \\left\\{(a, U) \\in A \\times \\mathcal{N}_x : x_a \\in U\\right\\}" }, { "math_id": 97, "text": "(a, U) \\leq (b, V) \\quad \\text{ if and only if } \\quad a \\leq b \\; \\text{ and } \\; U \\supseteq V;" }, { "math_id": 98, "text": "\\left(x_a\\right)_{(a, U) \\in I} \\to x \\text{ in } X" }, { "math_id": 99, "text": "\\left(x_a\\right)_{(a, U) \\in I}" }, { "math_id": 100, "text": "\\begin{alignat}{4}\n\\alpha :\\;&& I &&\\;\\to \\;& A \\\\[0.3ex]\n && (a, U) &&\\;\\mapsto\\;& a \\\\\n\\end{alignat}" }, { "math_id": 101, "text": "\\alpha(I) = A" }, { "math_id": 102, "text": "A," }, { "math_id": 103, "text": "x_{\\alpha(\\bull)} := \\left(x_{\\alpha(i)}\\right)_{i \\in I} = \\left(x_{\\alpha(a, U)}\\right)_{(a, U) \\in I} = \\left(x_a\\right)_{(a, U) \\in I}." } ]
https://en.wikipedia.org/wiki?curid=12687840
1268823
Real gas
Non-hypothetical gases whose molecules occupy space and have interactions Real gases are nonideal gases whose molecules occupy space and have interactions; consequently, they do not adhere to the ideal gas law. To understand the behaviour of real gases, the following must be taken into account: For most applications, such a detailed analysis is unnecessary, and the ideal gas approximation can be used with reasonable accuracy. On the other hand, real-gas models have to be used near the condensation point of gases, near critical points, at very high pressures, to explain the Joule–Thomson effect, and in other less usual cases. The deviation from ideality can be described by the compressibility factor Z. Models. Van der Waals model. Real gases are often modeled by taking into account their molar weight and molar volume formula_0 or alternatively: formula_1 Where "p" is the pressure, "T" is the temperature, "R" the ideal gas constant, and "V"m the molar volume. "a" and "b" are parameters that are determined empirically for each gas, but are sometimes estimated from their critical temperature ("T"c) and critical pressure ("p"c) using these relations: formula_2 The constants at critical point can be expressed as functions of the parameters a, b: formula_3 With the reduced properties formula_4 the equation can be written in the "reduced form": formula_5 Redlich–Kwong model. The Redlich–Kwong equation is another two-parameter equation that is used to model real gases. It is almost always more accurate than the van der Waals equation, and often more accurate than some equations with more than two parameters. The equation is formula_6 or alternatively: formula_7 where "a" and "b" are two empirical parameters that are not the same parameters as in the van der Waals equation. These parameters can be determined: formula_8 The constants at critical point can be expressed as functions of the parameters a, b: formula_9 Using formula_10 the equation of state can be written in the "reduced form": formula_11 with formula_12 Berthelot and modified Berthelot model. The Berthelot equation (named after D. Berthelot) is very rarely used, formula_13 but the modified version is somewhat more accurate formula_14 Dieterici model. This model (named after C. Dieterici) fell out of usage in recent years formula_15 with parameters a, b. These can be normalized by dividing with the critical point state:formula_16which casts the equation into the reduced form:formula_17 Clausius model. The Clausius equation (named after Rudolf Clausius) is a very simple three-parameter equation used to model gases. formula_18 or alternatively: formula_19 where formula_20 where "V"c is critical volume. Virial model. The Virial equation derives from a perturbative treatment of statistical mechanics. formula_21 or alternatively formula_22 where "A", "B", "C", "A"′, "B"′, and "C"′ are temperature dependent constants. Peng–Robinson model. Peng–Robinson equation of state (named after D.-Y. Peng and D. B. Robinson) has the interesting property being useful in modeling some liquids as well as real gases. formula_23 Wohl model. The Wohl equation (named after A. Wohl) is formulated in terms of critical values, making it useful when real gas constants are not available, but it cannot be used for high densities, as for example the critical isotherm shows a drastic "decrease" of pressure when the volume is contracted beyond the critical volume. 
formula_24 or: formula_25 or, alternatively: formula_26 where formula_27 formula_28 with formula_29 formula_30, where formula_31 are (respectively) the molar volume, the pressure and the temperature at the critical point. With the reduced properties formula_10 one can write the first equation in the "reduced form": formula_32 Beattie–Bridgeman model. This equation is based on five experimentally determined constants. It is expressed as formula_33 where formula_34 This equation is known to be reasonably accurate for densities up to about 0.8 "ρ"cr, where "ρ"cr is the density of the substance at its critical point. The constants appearing in the above equation are available in the following table when "p" is in kPa, "v" is in formula_35, "T" is in K and "R" = 8.314formula_36 Benedict–Webb–Rubin model. The BWR equation, formula_37 where "d" is the molar density and where "a", "b", "c", "A", "B", "C", "α", and "γ" are empirical constants. Note that the "γ" constant is a derivative of constant "α" and therefore almost identical to 1. Thermodynamic expansion work. The expansion work of the real gas differs from that of the ideal gas by the quantity formula_38.
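As a small numerical illustration of how these equations of state depart from the ideal gas law, the sketch below evaluates the van der Waals model from the first section for carbon dioxide. The critical constants used for CO2 are approximate literature values chosen only for this example, and Python is used only as an illustration language.

```python
R = 8.314                     # J / (mol K)
T_c, p_c = 304.13, 7.377e6    # approximate critical temperature (K) and pressure (Pa) of CO2

# van der Waals parameters estimated from the critical constants
a = 27 * R**2 * T_c**2 / (64 * p_c)   # Pa m^6 / mol^2
b = R * T_c / (8 * p_c)               # m^3 / mol

def p_ideal(T, Vm):
    return R * T / Vm

def p_van_der_waals(T, Vm):
    return R * T / (Vm - b) - a / Vm**2

T, Vm = 320.0, 1.0e-3         # 320 K and one litre per mole
print(p_ideal(T, Vm))         # about 2.66 MPa
print(p_van_der_waals(T, Vm)) # roughly 9% lower, reflecting the attraction term a/Vm^2
```

References. &lt;templatestyles src="Reflist/styles.css" /&gt;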
[ { "math_id": 0, "text": "RT = \\left(p + \\frac{a}{V_\\text{m}^2}\\right)\\left(V_\\text{m} - b\\right)" }, { "math_id": 1, "text": "p = \\frac{RT}{V_m - b} - \\frac{a}{V_m^2}" }, { "math_id": 2, "text": "\\begin{align}\n a &= \\frac{27R^2 T_\\text{c}^2}{64p_\\text{c}} \\\\\n b &= \\frac{RT_\\text{c}}{8p_\\text{c}}\n\\end{align}" }, { "math_id": 3, "text": "\np_c=\\frac{a}{27b^2}, \\quad T_c=\\frac{8a}{27bR}, \\qquad V_{m,c}=3b, \\qquad Z_c=\\frac{3}{8}\n" }, { "math_id": 4, "text": "\n p_r = \\frac{p}{p_\\text{c}},\\ \n V_r = \\frac{V_\\text{m}}{V_\\text{m,c}},\\ \n T_r = \\frac{T}{T_\\text{c}}\\ \n" }, { "math_id": 5, "text": "p_r = \\frac{8}{3}\\frac{T_r}{V_r - \\frac{1}{3}} - \\frac{3}{V_r^2}" }, { "math_id": 6, "text": "RT = \\left(p + \\frac{a}{\\sqrt{T}V_\\text{m}\\left(V_\\text{m} + b\\right)}\\right)\\left(V_\\text{m} - b\\right)" }, { "math_id": 7, "text": "p = \\frac{RT}{V_\\text{m} - b} - \\frac{a}{\\sqrt{T}V_\\text{m}\\left(V_\\text{m} + b\\right)}" }, { "math_id": 8, "text": "\\begin{align}\n a &= 0.42748\\, \\frac{R^2{T_\\text{c}}^\\frac{5}{2}}{p_\\text{c}} \\\\\n b &= 0.08664\\, \\frac{RT_\\text{c}}{p_\\text{c}}\n\\end{align}" }, { "math_id": 9, "text": "\np_c=\\frac{(\\sqrt[3]{2}-1)^{7/3}}{3^{1/3}}R^{1/3}\\frac{a^{2/3}}{b^{5/3}}, \\quad T_c=3^{2/3} (\\sqrt[3]{2}-1)^{4/3} (\\frac{a}{bR})^{2/3}, \\qquad V_{m,c}=\\frac{b}{\\sqrt[3]{2}-1}, \\qquad Z_c=\\frac{1}{3}\n" }, { "math_id": 10, "text": "\\ \n p_r = \\frac{p}{p_\\text{c}},\\ \n V_r = \\frac{V_\\text{m}}{V_\\text{m,c}},\\ \n T_r = \\frac{T}{T_\\text{c}}\\ \n" }, { "math_id": 11, "text": "p_r = \\frac{3 T_r}{V_r - b'} - \\frac{1}{b'\\sqrt{T_r} V_r \\left(V_r + b'\\right)}" }, { "math_id": 12, "text": "b' = \\sqrt[3]{2} - 1 \\approx 0.26" }, { "math_id": 13, "text": "p = \\frac{RT}{V_\\text{m} - b} - \\frac{a}{TV_\\text{m}^2}" }, { "math_id": 14, "text": "p = \\frac{RT}{V_\\text{m}}\\left[1 + \\frac{9\\frac{p}{p_\\text{c}}}{128\\frac{T}{T_\\text{c}}} \\left(1 - \\frac{6}{\\frac{T^2}{T_\\text{c}^2}}\\right)\\right]" }, { "math_id": 15, "text": "p = \\frac{RT}{V_\\text{m} - b} \\exp\\left(-\\frac{a}{V_\\text{m}RT}\\right)" }, { "math_id": 16, "text": "\\tilde p = p \\frac{(2be)^2}{a}; \\quad \\tilde T =T \\frac{4bR}{a}; \\quad \\tilde V_m = V_m \\frac{1}{2b}" }, { "math_id": 17, "text": "\\tilde p(2\\tilde V_m -1) = \\tilde T e^{2-\\frac{2}{\\tilde T \\tilde V_m}}" }, { "math_id": 18, "text": "RT = \\left(p + \\frac{a}{T(V_\\text{m} + c)^2}\\right)\\left(V_\\text{m} - b\\right)" }, { "math_id": 19, "text": "p = \\frac{RT}{V_\\text{m} - b} - \\frac{a}{T\\left(V_\\text{m} + c\\right)^2}" }, { "math_id": 20, "text": "\\begin{align}\n a &= \\frac{27R^2 T_\\text{c}^3}{64p_\\text{c}} \\\\\n b &= V_\\text{c} - \\frac{RT_\\text{c}}{4p_\\text{c}} \\\\\n c &= \\frac{3RT_\\text{c}}{8p_\\text{c}} - V_\\text{c}\n\\end{align}" }, { "math_id": 21, "text": "pV_\\text{m} = RT\\left[1 + \\frac{B(T)}{V_\\text{m}} + \\frac{C(T)}{V_\\text{m}^2} + \\frac{D(T)}{V_\\text{m}^3} + \\ldots\\right]" }, { "math_id": 22, "text": "pV_\\text{m} = RT\\left[1 + B'(T)p + C'(T)p^2 + D'(T)p^3 \\ldots\\right]" }, { "math_id": 23, "text": "p = \\frac{RT}{V_\\text{m} - b} - \\frac{a(T)}{V_\\text{m}\\left(V_\\text{m} + b\\right) + b\\left(V_\\text{m} - b\\right)}" }, { "math_id": 24, "text": "p = \\frac{RT}{V_\\text{m} - b} - \\frac{a}{TV_\\text{m}\\left(V_\\text{m} - b\\right)} + \\frac{c}{T^2 V_\\text{m}^3}\\quad" }, { "math_id": 25, "text": "\\left(p - \\frac{c}{T^2 V_\\text{m}^3}\\right)\\left(V_\\text{m} - b\\right) = RT - \\frac{a}{TV_\\text{m}}" }, { 
"math_id": 26, "text": "RT = \\left(p + \\frac{a}{TV_\\text{m}(V_\\text{m} - b)} - \\frac{c}{T^2 V_\\text{m}^3}\\right)\\left(V_\\text{m} - b\\right)" }, { "math_id": 27, "text": "a = 6p_\\text{c} T_\\text{c} V_\\text{m,c}^2" }, { "math_id": 28, "text": "b = \\frac{V_\\text{m,c}}{4}" }, { "math_id": 29, "text": "V_\\text{m,c} = \\frac{4}{15}\\frac{RT_c}{p_c}" }, { "math_id": 30, "text": "c = 4p_\\text{c} T_\\text{c}^2 V_\\text{m,c}^3\\ " }, { "math_id": 31, "text": "\n V_\\text{m,c},\\ \n p_\\text{c},\\ \n T_\\text{c}\n" }, { "math_id": 32, "text": "p_r = \\frac{15}{4}\\frac{T_r}{V_r - \\frac{1}{4}} - \\frac{6}{T_r V_r\\left(V_r - \\frac{1}{4}\\right)} + \\frac{4}{T_r^2 V_r^3}" }, { "math_id": 33, "text": "p = \\frac{RT}{v^2}\\left(1 - \\frac{c}{vT^3}\\right)(v + B) - \\frac{A}{v^2}" }, { "math_id": 34, "text": "\\begin{align}\n A &= A_0 \\left(1 - \\frac{a}{v}\\right) &\n B &= B_0 \\left(1 - \\frac{b}{v}\\right)\n\\end{align}" }, { "math_id": 35, "text": "\\frac{\\text{m}^3}{\\text{k}\\,\\text{mol}}" }, { "math_id": 36, "text": "\\frac{\\text{kPa}\\cdot\\text{m}^3}{\\text{k}\\,\\text{mol}\\cdot\\text{K}}" }, { "math_id": 37, "text": "p = RTd + d^2\\left(RT(B + bd) - \\left(A + ad - a\\alpha d^4\\right) - \\frac{1}{T^2}\\left[C - cd\\left(1 + \\gamma d^2\\right) \\exp\\left(-\\gamma d^2\\right)\\right]\\right)" }, { "math_id": 38, "text": " \\int_{V_i}^{V_f} \\left(\\frac{RT}{V_m}-P_{real}\\right)dV " } ]
https://en.wikipedia.org/wiki?curid=1268823
1268958
Function of a real variable
Mathematical function In mathematical analysis, and applications in geometry, applied mathematics, engineering, and natural sciences, a function of a real variable is a function whose domain is the real numbers formula_0, or a subset of formula_0 that contains an interval of positive length. Most real functions that are considered and studied are differentiable in some interval. The most widely considered such functions are the real functions, which are the real-valued functions of a real variable, that is, the functions of a real variable whose codomain is the set of real numbers. Nevertheless, the codomain of a function of a real variable may be any set. However, it is often assumed to have a structure of formula_0-vector space over the reals. That is, the codomain may be a Euclidean space, a coordinate vector, the set of matrices of real numbers of a given size, or an formula_0-algebra, such as the complex numbers or the quaternions. The structure formula_0-vector space of the codomain induces a structure of formula_0-vector space on the functions. If the codomain has a structure of formula_0-algebra, the same is true for the functions. The image of a function of a real variable is a curve in the codomain. In this context, a function that defines curve is called a parametric equation of the curve. When the codomain of a function of a real variable is a finite-dimensional vector space, the function may be viewed as a sequence of real functions. This is often used in applications. Real function. A real function is a function from a subset of formula_1 to formula_2 where formula_1 denotes as usual the set of real numbers. That is, the domain of a real function is a subset formula_1, and its codomain is formula_3 It is generally assumed that the domain contains an interval of positive length. Basic examples. For many commonly used real functions, the domain is the whole set of real numbers, and the function is continuous and differentiable at every point of the domain. One says that these functions are defined, continuous and differentiable everywhere. This is the case of: Some functions are defined everywhere, but not continuous at some points. For example Some functions are defined and continuous everywhere, but not everywhere differentiable. For example Many common functions are not defined everywhere, but are continuous and differentiable everywhere where they are defined. For example: Some functions are continuous in their whole domain, and not differentiable at some points. This is the case of: General definition. A real-valued function of a real variable is a function that takes as input a real number, commonly represented by the variable "x", for producing another real number, the "value" of the function, commonly denoted "f"("x"). For simplicity, in this article a real-valued function of a real variable will be simply called a function. To avoid any ambiguity, the other types of functions that may occur will be explicitly specified. Some functions are defined for all real values of the variables (one says that they are everywhere defined), but some other functions are defined only if the value of the variable is taken in a subset "X" of formula_0, the domain of the function, which is always supposed to contain an interval of positive length. In other words, a real-valued function of a real variable is a function formula_5 such that its domain "X" is a subset of formula_0 that contains an interval of positive length. 
A simple example of a function in one variable could be: formula_6 formula_7 formula_8 which is the square root of "x". Image. The image of a function formula_9 is the set of all values of "f" when the variable "x" runs in the whole domain of "f". For a continuous (see below for a definition) real-valued function with a connected domain, the image is either an interval or a single value. In the latter case, the function is a constant function. The preimage of a given real number "y" is the set of the solutions of the equation "y" = "f"("x"). Domain. The domain of a function of a real variable is a subset of formula_0 that is sometimes explicitly defined. In fact, if one restricts the domain "X" of a function "f" to a subset "Y" ⊂ "X", one gets formally a different function, the "restriction" of "f" to "Y", which is denoted "f"|"Y". In practice, it is often not harmful to identify "f" and "f"|"Y", and to omit the subscript |"Y". Conversely, it is sometimes possible to naturally enlarge the domain of a given function, for example by continuity or by analytic continuation. This means that it is often not worthwhile to explicitly define the domain of a function of a real variable. Algebraic structure. The arithmetic operations may be applied to the functions in the following way: for every real number "r", the constant function formula_10 is everywhere defined; for every real number "r" and every function "f", the function formula_11 has the same domain as "f"; and if "f" and "g" are two functions whose domains have an intersection containing an open subset of formula_0, then formula_12 and formula_13 are functions whose domain contains that intersection. It follows that the functions that are everywhere defined and the functions that are defined in some neighbourhood of a given point both form commutative algebras over the reals (formula_0-algebras). One may similarly define formula_14 which is a function only if the set of the points ("x") in the domain of "f" such that "f"("x") ≠ 0 contains an open subset of formula_0. This constraint implies that the above two algebras are not fields. Continuity and limit. Until the second part of the 19th century, only continuous functions were considered by mathematicians. At that time, the notion of continuity was elaborated for the functions of one or several real variables a rather long time before the formal definition of a topological space and a continuous map between topological spaces. As continuous functions of a real variable are ubiquitous in mathematics, it is worth defining this notion without reference to the general notion of continuous maps between topological spaces. For defining continuity, it is useful to consider the distance function of formula_0, which is an everywhere defined function of 2 real variables: formula_15 A function "f" is continuous at a point formula_16 which is interior to its domain, if, for every positive real number "ε", there is a positive real number "φ" such that formula_17 for all formula_18 such that formula_19 In other words, "φ" may be chosen small enough that the image under "f" of the interval of radius "φ" centered at formula_16 is contained in the interval of length 2"ε" centered at formula_20 A function is continuous if it is continuous at every point of its domain. The limit of a real-valued function of a real variable is defined as follows. Let "a" be a point in the topological closure of the domain "X" of the function "f". The function "f" has a limit "L" when "x" tends toward "a", denoted formula_21 if the following condition is satisfied: For every positive real number "ε" > 0, there is a positive real number "δ" > 0 such that formula_22 for all "x" in the domain such that formula_23 If the limit exists, it is unique. If "a" is in the interior of the domain, the limit exists if and only if the function is continuous at "a".
In this case, we have formula_24 When "a" is in the boundary of the domain of "f", and if "f" has a limit at "a", the latter formula allows one to "extend by continuity" the domain of "f" to "a". Calculus. One can collect a number of functions each of a real variable, say formula_25 into a vector parametrized by "x": formula_26 The derivative of the vector y is the vector of derivatives of "f"i("x") for "i" = 1, 2, ..., "n": formula_27 One can also perform line integrals along a space curve parametrized by "x", with position vector r = r("x"), by integrating with respect to the variable "x": formula_28 where · is the dot product, and "x" = "a" and "x" = "b" are the start and endpoints of the curve. Theorems. With the definitions of integration and derivatives, key theorems can be formulated, including the fundamental theorem of calculus, integration by parts, and Taylor's theorem. Evaluating a mixture of integrals and derivatives can be done by using the theorem on differentiation under the integral sign. Implicit functions. A real-valued implicit function of a real variable is not written in the form ""y" = "f"("x")". Instead, the mapping is from the space formula_0^2 to the zero element in formula_0 (just the ordinary zero 0): formula_29 and formula_30 is an equation in the variables. Implicit functions are a more general way to represent functions, since if: formula_31 then we can always define: formula_32 but the converse is not always possible, i.e. not all implicit functions have the form of this equation. One-dimensional space curves in formula_0^"n". Formulation. Given the functions "r"1 = "r"1("t"), "r"2 = "r"2("t"), ..., "r""n" = "r""n"("t") all of a common variable "t", so that: formula_33 or taken together: formula_34 then the parametrized "n"-tuple, formula_35 describes a one-dimensional space curve. Tangent line to curve. At a point r("t" = "c") = a = ("a"1, "a"2, ..., "a""n") for some constant "t" = "c", the equations of the one-dimensional tangent line to the curve at that point are given in terms of the ordinary derivatives of "r"1("t"), "r"2("t"), ..., "r""n"("t"), and "r" with respect to "t": formula_36 Normal plane to curve. The equation of the "n"-dimensional hyperplane normal to the tangent line at r = a is: formula_37 or in terms of the dot product: formula_38 where p = ("p"1, "p"2, ..., "p""n") are points "in the plane", not on the space curve. Relation to kinematics. The physical and geometric interpretation of d"r"("t")/d"t" is the "velocity" of a point-like particle moving along the path r("t"), treating r as the spatial position vector coordinates parametrized by time "t", and is a vector tangent to the space curve for all "t" in the instantaneous direction of motion. At "t" = "c", the space curve has a tangent vector d"r"("t")/d"t"|"t" = "c", and the hyperplane normal to the space curve at "t" = "c" is also normal to the tangent at "t" = "c". Any vector in this plane (p − a) must be normal to d"r"("t")/d"t"|"t" = "c". Similarly, d2"r"("t")/d"t"2 is the "acceleration" of the particle, and is a vector normal to the curve directed along the radius of curvature. Matrix valued functions. A matrix can also be a function of a single variable. For example, the rotation matrix in 2d: formula_39 is a matrix valued function of the rotation angle about the origin.
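A minimal numerical sketch of such a matrix-valued function of one real variable (Python with NumPy, used only as an illustration): the entrywise finite-difference derivative of the rotation matrix agrees with its entrywise analytic derivative, mirroring the component-wise definition of the derivative given in the calculus section above.

```python
import numpy as np

def rotation(theta):
    """R(theta): a matrix-valued function of a single real variable."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

theta, h = 0.7, 1e-6
# entrywise finite-difference estimate of dR/dtheta ...
numeric = (rotation(theta + h) - rotation(theta - h)) / (2 * h)
# ... compared with the entrywise analytic derivative
analytic = np.array([[-np.sin(theta), -np.cos(theta)],
                     [ np.cos(theta), -np.sin(theta)]])
print(np.allclose(numeric, analytic))   # True
```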
Similarly, in special relativity, the Lorentz transformation matrix for a pure boost (without rotations): formula_40 is a function of the boost parameter "β" = "v"/"c", in which "v" is the relative velocity between the frames of reference (a continuous variable), and "c" is the speed of light, a constant. Banach and Hilbert spaces and quantum mechanics. Generalizing the previous section, the output of a function of a real variable can also lie in a Banach space or a Hilbert space. In these spaces, division and multiplication and limits are all defined, so notions such as derivative and integral still apply. This occurs especially often in quantum mechanics, where one takes the derivative of a ket or an operator. This occurs, for instance, in the general time-dependent Schrödinger equation: formula_41 where one takes the derivative of a wave function, which can be an element of several different Hilbert spaces. Complex-valued function of a real variable. A complex-valued function of a real variable may be defined by relaxing, in the definition of the real-valued functions, the restriction of the codomain to the real numbers, and allowing complex values. If "f"("x") is such a complex valued function, it may be decomposed as "f"("x") = "g"("x") + "ih"("x"), where "g" and "h" are real-valued functions. In other words, the study of the complex valued functions reduces easily to the study of the pairs of real valued functions. Cardinality of sets of functions of a real variable. The cardinality of the set of real-valued functions of a real variable, formula_42, is formula_43, which is strictly larger than the cardinality of the continuum (i.e., set of all real numbers). This fact is easily verified by cardinal arithmetic: formula_44 Furthermore, if formula_45 is a set such that formula_46, then the cardinality of the set formula_47 is also formula_48, since formula_49 However, the set of continuous functions formula_50 has a strictly smaller cardinality, the cardinality of the continuum, formula_51. This follows from the fact that a continuous function is completely determined by its value on a dense subset of its domain. Thus, the cardinality of the set of continuous real-valued functions on the reals is no greater than the cardinality of the set of real-valued functions of a rational variable. By cardinal arithmetic: formula_52 On the other hand, since there is a clear bijection between formula_53 and the set of constant functions formula_54, which forms a subset of formula_55, formula_56 must also hold. Hence, formula_57. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathbb{R}" }, { "math_id": 1, "text": "\\mathbb R" }, { "math_id": 2, "text": "\\mathbb R," }, { "math_id": 3, "text": "\\mathbb R." }, { "math_id": 4, "text": "\\frac\\pi 2 + k\\pi," }, { "math_id": 5, "text": "f: X \\to \\R " }, { "math_id": 6, "text": " f : X \\to \\R " }, { "math_id": 7, "text": " X = \\{ x \\in \\R \\,:\\, x \\geq 0\\} " }, { "math_id": 8, "text": " f(x) = \\sqrt{x}" }, { "math_id": 9, "text": "f(x)" }, { "math_id": 10, "text": "(x)\\mapsto r" }, { "math_id": 11, "text": "rf:(x)\\mapsto rf(x)" }, { "math_id": 12, "text": "f+g:(x)\\mapsto f(x)+g(x)" }, { "math_id": 13, "text": "f\\,g:(x)\\mapsto f(x)\\,g(x)" }, { "math_id": 14, "text": "1/f:(x)\\mapsto 1/f(x)," }, { "math_id": 15, "text": "d(x,y)=|x-y|" }, { "math_id": 16, "text": "a" }, { "math_id": 17, "text": "|f(x)-f(a)| < \\varepsilon " }, { "math_id": 18, "text": "x" }, { "math_id": 19, "text": "d(x,a)<\\varphi." }, { "math_id": 20, "text": "f(a)." }, { "math_id": 21, "text": "L = \\lim_{x \\to a} f(x), " }, { "math_id": 22, "text": "|f(x) - L| < \\varepsilon " }, { "math_id": 23, "text": "d(x, a)< \\delta." }, { "math_id": 24, "text": "f(a) = \\lim_{x \\to a} f(x). " }, { "math_id": 25, "text": "y_1 = f_1(x)\\,,\\quad y_2 = f_2(x)\\,,\\ldots, y_n = f_n(x) " }, { "math_id": 26, "text": "\\mathbf{y} = (y_1, y_2, \\ldots, y_n) = [f_1(x), f_2(x) ,\\ldots, f_n(x)] " }, { "math_id": 27, "text": "\\frac{d\\mathbf{y}}{dx} = \\left(\\frac{dy_1}{dx}, \\frac{dy_2}{dx}, \\ldots, \\frac{dy_n}{dx}\\right) " }, { "math_id": 28, "text": "\\int_a^b \\mathbf{y}(x) \\cdot d\\mathbf{r} = \\int_a^b \\mathbf{y}(x) \\cdot \\frac{d\\mathbf{r}(x)}{dx} dx " }, { "math_id": 29, "text": "\\phi: \\R^2 \\to \\{0\\} " }, { "math_id": 30, "text": "\\phi(x,y) = 0 " }, { "math_id": 31, "text": "y=f(x) " }, { "math_id": 32, "text": " \\phi(x, y) = y - f(x) = 0 " }, { "math_id": 33, "text": "\\begin{align}\nr_1 : \\mathbb{R} \\rightarrow \\mathbb{R} & \\quad r_2 : \\mathbb{R} \\rightarrow \\mathbb{R} & \\cdots & \\quad r_n : \\mathbb{R} \\rightarrow \\mathbb{R} \\\\\nr_1 = r_1(t) & \\quad r_2 = r_2(t) & \\cdots & \\quad r_n = r_n(t) \\\\\n\\end{align}" }, { "math_id": 34, "text": "\\mathbf{r} : \\mathbb{R} \\rightarrow \\mathbb{R}^n \\,,\\quad \\mathbf{r} = \\mathbf{r}(t) " }, { "math_id": 35, "text": "\\mathbf{r}(t) = [r_1(t), r_2(t), \\ldots , r_n(t)] " }, { "math_id": 36, "text": "\\frac{r_1(t) - a_1}{dr_1(t)/dt} = \\frac{r_2(t) - a_2}{dr_2(t)/dt} = \\cdots = \\frac{r_n(t) - a_n}{dr_n(t)/dt} " }, { "math_id": 37, "text": "(p_1 - a_1)\\frac{dr_1(t)}{dt} + (p_2 - a_2)\\frac{dr_2(t)}{dt} + \\cdots + (p_n - a_n)\\frac{dr_n(t)}{dt} = 0" }, { "math_id": 38, "text": "(\\mathbf{p} - \\mathbf{a})\\cdot \\frac{d\\mathbf{r}(t)}{dt} = 0" }, { "math_id": 39, "text": "\nR(\\theta) = \\begin{bmatrix}\n\\cos \\theta & -\\sin \\theta \\\\\n\\sin \\theta & \\cos \\theta \\\\\n\\end{bmatrix}" }, { "math_id": 40, "text": "\n\\Lambda(\\beta) = \\begin{bmatrix}\n\\frac{1}{\\sqrt{1-\\beta ^2}} & -\\frac{\\beta }{\\sqrt{1-\\beta ^2}} & 0 & 0 \\\\\n-\\frac{\\beta }{\\sqrt{1-\\beta ^2}} & \\frac{1}{\\sqrt{1-\\beta ^2}} & 0 & 0 \\\\\n0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 1 \\\\\n\\end{bmatrix}" }, { "math_id": 41, "text": "i \\hbar \\frac{\\partial}{\\partial t}\\Psi = \\hat H \\Psi" }, { "math_id": 42, "text": "\\mathbb{R}^\\mathbb{R}=\\{f:\\mathbb{R}\\to \\mathbb{R}\\}" }, { "math_id": 43, "text": "\\beth_2=2^\\mathfrak{c}" }, { "math_id": 44, "text": 
"\\mathrm{card}(\\R^\\R)=\\mathrm{card}(\\R)^{\\mathrm{card}(\\R)}=\n\\mathfrak{c}^\\mathfrak{c}=(2^{\\aleph_0})^\\mathfrak{c}=2^{\\aleph_0\\cdot\\mathfrak{c}}=2^\\mathfrak{c}.\n" }, { "math_id": 45, "text": "X" }, { "math_id": 46, "text": "2\\leq\\mathrm{card}(X)\\leq\\mathfrak{c}" }, { "math_id": 47, "text": "X^\\mathbb{R}=\\{f:\\mathbb{R}\\to X\\}" }, { "math_id": 48, "text": "2^\\mathfrak{c}" }, { "math_id": 49, "text": "2^\\mathfrak{c}=\\mathrm{card}(2^\\R)\\leq\\mathrm{card}(X^\\R)\\leq\\mathrm{card}(\\R^\n\\R)=2^\\mathfrak{c}." }, { "math_id": 50, "text": "C^0(\\mathbb{R})=\\{f:\\mathbb{R}\\to\\mathbb{R}:f\\ \\mathrm{continuous}\\}" }, { "math_id": 51, "text": "\\mathfrak{c}" }, { "math_id": 52, "text": "\\mathrm{card}(C^0(\\R))\\leq\\mathrm{card}(\\R^\\Q)=(2^{\\aleph_0})^{\\aleph_0}=2^{\\aleph_0\\cdot\\aleph_0}=\n2^{\\aleph_0}=\\mathfrak{c}." }, { "math_id": 53, "text": "\\R" }, { "math_id": 54, "text": "\\{f:\\R\\to\\R: f(x)\\equiv x_0\\}" }, { "math_id": 55, "text": "C^0(\\R)" }, { "math_id": 56, "text": "\\mathrm{card}(C^0(\\R)) \\geq \\mathfrak{c}" }, { "math_id": 57, "text": "\\mathrm{card}(C^0(\\R)) = \\mathfrak{c}" } ]
https://en.wikipedia.org/wiki?curid=1268958
12690616
Subscript and superscript
A character set slightly below and above the normal line of type, respectively A subscript or superscript is a character (such as a number or letter) that is set slightly below or above the normal line of type, respectively. It is usually smaller than the rest of the text. Subscripts appear at or below the baseline, while superscripts are above. Subscripts and superscripts are perhaps most often used in formulas, mathematical expressions, and specifications of chemical compounds and isotopes, but have many other uses as well. In professional typography, subscript and superscript characters are not simply ordinary characters reduced in size; to keep them visually consistent with the rest of the font, typeface designers make them slightly heavier (i.e. medium or bold typography) than a reduced-size character would be. The vertical distance that sub- or superscripted text is moved from the original baseline varies by typeface and by use. In typesetting, such types are traditionally called "superior" and "inferior" letters, figures, etc., or just "superiors" and "inferiors". In English, most nontechnical use of superiors is archaic. Superior and inferior figures on the baseline are used for fractions and most other purposes, while lowered inferior figures are needed for chemical and mathematical subscripts. Uses. A single typeface may contain sub- and superscript glyphs at different positions for different uses. The four most common positions are listed here. Because each position is used in different contexts, not all alphanumerics may be available in all positions. For example, subscript letters on the baseline are quite rare, and many typefaces provide only a limited number of superscripted letters. Despite these differences, all reduced-size glyphs go by the same generic terms "subscript" and "superscript", which are synonymous with the terms "inferior letter" (or "number") and "superior letter" (or "number"), respectively. Most fonts that contain superscript/subscript will have predetermined size and orientation that is dependent on the design of the font. Subscripts that are dropped below the baseline. Subscripts are used in chemical formulas. For example, the chemical formula for glucose is C6H12O6 (meaning that it is a molecule with 6 carbon atoms, 12 hydrogen atoms and 6 oxygen atoms). The chemical formula of the water molecule, H2O, indicates that it contains two hydrogen atoms and one oxygen atom. A subscript is also used to distinguish between different versions of a subatomic particle. Thus electron, muon, and tau neutrinos are denoted νe, νμ, and ντ, respectively. A particle may be distinguished by multiple subscripts, such as Ωbbb for the triple bottom omega particle. Similarly, subscripts are also used frequently in mathematics to define different versions of the same variable: for example, in an equation "x"0 and "x"f might indicate the initial and final value of "x", while "v"rocket and "v"observer would stand for the velocities of a rocket and an observer. Commonly, variables with a zero in the subscript are referred to as the variable name followed by "nought" (e.g. v0 would be read "v-nought"). Subscripts are often used to refer to members of a mathematical sequence or set or elements of a vector. For example, in the sequence "O" = (45, −2, 800), "O"3 refers to the third member of sequence "O", which is 800. Also in mathematics and computing, a subscript can be used to represent the radix, or base, of a written number, especially where multiple bases are used alongside each other.
For example, comparing values in hexadecimal, denary, and octal one might write Chex = 12dec = 14oct. Subscripted numbers dropped below the baseline are also used for the denominators of stacked fractions, like this: . Subscripts that are aligned with the baseline. The only common use of these subscripts is for the denominators of diagonal fractions, like ½ or the signs for percent %, permille ‰, and basis point ‱. Certain standard abbreviations are also composed as diagonal fractions, such as ℅ (care of), ℀ (account of), ℁ (addressed to the subject), or in Spanish ℆ (cada uno/una, "each one"). Superscripts that typically do not extend above the ascender line. These superscripts typically share a baseline with numerator digits, the top of which are aligned with the top of the full-height numerals of the base font; lowercase ascenders may extend above. Ordinal indicators are sometimes written as superscripts (1st, 2nd, 3rd, 4th, rather than 1st, 2nd, 3rd, 4th), although many English-language style guides recommend against this use. Romance languages use a similar convention, such as 1er or 2e in French, or 4ª and 4º in Galician and Italian, or 4.ª and 4.º in Portuguese and Spanish. In medieval manuscripts, many superscript as well as subscript signs were used to abbreviate text. From these developed modern diacritical marks (glyphs, or "accents" placed above or below the letter). Also, in early Middle High German, umlauts and other modifications to pronunciation would be indicated by superscript letters placed directly above the letter they modified. Thus the modern umlaut ü was written as uͤ. Both vowels and consonants were used in this way, as in ſheͨzze and boͮsen. In modern typefaces, these letters are usually smaller than other superscripts, and their baseline is slightly above the base font's midline, making them extend no higher than a typical ordinal indicator. Superscripts are used for the standard abbreviations for service mark (℠) and trademark (™). The signs for copyright © and registered trademark ® are also sometimes superscripted, depending on the typeface or house style. On handwritten documents and signs, a monetary amount may be written with the cents value superscripted, as in $8⁰⁰ or 8€⁵⁰. Often the superscripted numbers are underlined: $8⁰⁰, 8€⁵⁰. The currency symbol itself may also be superscripted, as in $80 or 6¢.There is no ruling whether or not these characters need to be supercript, or made smaller than the numbers, or aligned to any of the various guide lines. That of course is decided by the preference of the typographer. Superscripts that typically extend above the ascender line. Both low and high superscripts can be used to indicate the presence of a footnote in a document, like this5 or this.xi Any combination of characters can be used for this purpose; in technical writing footnotes are sometimes composed of letters and numbers together, like this.A.2 The choice of low or high alignment depends on taste, but high-set footnotes tend to be more common, as they stand out more from the text. In mathematics, high superscripts are used for exponentiation to indicate that one number or variable is raised to the power of another number or variable. Thus "y"4 is "y" raised to the fourth power, 2"x" is 2 raised to the power of "x", and the equation includes a term for the speed of light squared. This led over time to an "abuse of notation" whereby superscripts indicate iterative function composition, including derivatives. 
In an unrelated use, superscripts also indicate contravariant tensors in Ricci calculus. The charges of ions and subatomic particles are also denoted by superscripts. Cl− is a negatively charged chlorine atom, Pb4+ is an atom of lead with a positive charge of four, e− is an electron, e+ is a positron, and μ+ is an antimuon. Atomic isotopes are written using superscripts. In symbolic form, the number of nucleons is denoted as a superscripted prefix to the chemical symbol (for example He, C, C, I, and U). The letters "m" or "f" may follow the number to indicate metastable or fission isomers, as in Co or Pu. Subscripts and superscripts can also be used together to give more specific information about nuclides. For example, U denotes an atom of uranium with 235 nucleons, 92 of which are protons. A chemical symbol can be completely surrounded: C is a divalent cation of carbon with 14 nucleons, of which six are protons and eight are neutrons, and there are two atoms in this chemical compound. The numerators of stacked fractions usually use high-set superscripts, although some specially designed glyphs keep the top of the numerator aligned with the top of the full-height numerals. Alignment examples. This image shows the four common locations for subscripts and superscripts, according to their typical uses. The typeface is Minion Pro, set in Adobe Illustrator. Note that the default superscripting algorithms of most word processors would set the "th" and "lle" too high, and the weight of all the subscript and superscript glyphs would be too light. Another minor adjustment that is often omitted by renderers is the control of the direction of movement for superscripts and subscripts, when they do not lie on the baseline. Ideally this should take the font into account, e.g. italic glyphs are slanted; most renderers adjust the position only vertically and do not also shift it horizontally. This may create a collision with surrounding letters in the same italic font size. One can see an example of such a collision on the right side when rendered in HTML (see the figure on the right). To avoid this, it is often desirable to insert a small positive horizontal margin (or a thin space) on the left side of the first superscript character, or a negative margin (or a tiny backspace) before a subscript. It is more critical with glyphs from fonts in Oblique styles that are more slanted than those from fonts in Italic style, and some fonts reverse the direction of slanting, so there is no general solution except when the renderer takes into account the font metrics properties that specify the angle of slanting. However, the same problem occurs more generally between spans of normal glyphs (non-superscript and non-subscript) when slanting styles are mixed. Software support. Desktop publishing. Many text editing and word processing programs have automatic subscripting and superscripting features, although these programs usually simply use ordinary characters reduced in size and moved up or down – rather than separately designed subscript or superscript glyphs. Professional typesetting programs such as QuarkXPress or Adobe InDesign also have similar features for automatically converting regular type to subscript or superscript. These programs, however, may also offer native OpenType support for the special subscript and superscript glyphs included in many professional typeface packages (such as those shown in the image above). "See also OpenType, below." HTML.
In HTML and Wiki syntax, subscript text is produced by putting it inside the tags codice_0 and codice_1. Similarly, superscripts are produced with codice_2 and codice_3. The exact size and position of the resulting characters will vary by font and browser, but they are usually reduced to around 75% of the original size. TeX. In TeX's mathematics mode (as used in MediaWiki), subscripts are typeset with the underscore, while superscripts are made with the caret. Thus codice_4 produces formula_0, and codice_5 produces formula_1. In LaTeX text mode the math method above is inappropriate, as letters will be in math italic, so the command codice_6 will give nth and codice_7 will give Abase (textual "sub"scripts are rare, so codice_8 is not built-in, but requires the "fixltx2e" package). As in other systems, when using UTF-8 encoding, the masculine º and feminine ª ordinal indicators can be used as characters, with no need to use a command. In line with its origin as a superscripted circle, the degree symbol (°) is composed using a superscript circle operator (∘): codice_9. Superscripts and subscripts of arbitrary height can be produced with the codice_10 command: the first argument is the amount to raise, and the second is the text; a negative first argument will lower the text. In this case the text is not resized automatically, so a sizing command can be included, e.g. codice_11. Unicode. Unicode defines subscript and superscript characters in several areas; in particular, it has a full set of superscript and subscript digits. Owing to the popularity of using these characters to make fractions, most modern fonts render most or all of these as cap-height superscripts and baseline subscripts. The same font may align letters and numbers in different ways. Other than numbers, the set of super- and subscript letters and other symbols is incomplete and somewhat random, and many fonts do not contain them. Because of these inconsistencies, these glyphs may not be suitable for some purposes (see "Uses", above). OpenType. Many professionally designed OpenType typefaces include separately drawn subscript and superscript glyphs. Exactly which glyphs are included varies by typeface; some have only basic support for numerals, while others contain a full set of letters, numerals, and punctuation. They can be made available by activating the codice_12 or codice_13 feature tags, provided the software environment supports optional OpenType features. In addition, some typefaces place these glyphs in a Unicode Private Use Area. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
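As a concrete illustration of the TeX and LaTeX mechanisms just described, the following is a minimal sketch (not part of the original article): it assumes a standard LaTeX distribution, and the particular words, raise amounts, and sizing commands are only illustrative choices.

```latex
\documentclass{article}
\usepackage{fixltx2e} % provides \textsubscript on older kernels; harmless on recent ones
\begin{document}
% Math mode: underscore for subscripts, caret for superscripts.
$X_{ab}$, $X^{ab}$, and $E = mc^2$.

% Text mode: raised and lowered text without switching to math italics.
n\textsuperscript{th} term, A\textsubscript{base}.

% Arbitrary raising or lowering with \raisebox{<amount>}{<text>}; the text is
% not resized automatically, so a sizing command can be nested inside.
H\raisebox{-0.4ex}{\scriptsize 2}O and x\raisebox{0.8ex}{\scriptsize 2}.
\end{document}
```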
[ { "math_id": 0, "text": "X_{ab}" }, { "math_id": 1, "text": "X^{ab}" } ]
https://en.wikipedia.org/wiki?curid=12690616
12691877
Analytic function of a matrix
Function that maps matrices to matrices. In mathematics, every analytic function can be used to define a matrix function that maps square matrices with complex entries to square matrices of the same size. This is used for defining the exponential of a matrix, which is involved in the closed-form solution of systems of linear differential equations. Extending scalar functions to matrix functions. There are several techniques for lifting a real function to a square matrix function such that interesting properties are maintained. All of the following techniques yield the same matrix function, but the domains on which the function is defined may differ. Power series. If the analytic function f has the Taylor expansion formula_0 then a matrix function formula_1 can be defined by substituting x by a square matrix: powers become matrix powers, additions become matrix sums, and multiplications by coefficients become scalar multiplications. If the series converges for formula_2, then the corresponding matrix series converges for matrices A such that formula_3 for some matrix norm that satisfies formula_4. Diagonalizable matrices. A square matrix A is diagonalizable if there is an invertible matrix P such that formula_5 is a diagonal matrix, that is, D has the shape formula_6 As formula_7 it is natural to set formula_8 It can be verified that the matrix "f"("A") does not depend on a particular choice of P. For example, suppose one is seeking formula_9 for formula_10 One has formula_11 for formula_12 Application of the formula then simply yields formula_13 Likewise, formula_14 Jordan decomposition. All complex matrices, whether they are diagonalizable or not, have a Jordan normal form formula_15, where the matrix "J" consists of Jordan blocks. Consider these blocks separately and apply the power series to a Jordan block: formula_16 This definition can be used to extend the domain of the matrix function beyond the set of matrices with spectral radius smaller than the radius of convergence of the power series. Note that there is also a connection to divided differences. A related notion is the Jordan–Chevalley decomposition, which expresses a matrix as a sum of a diagonalizable part and a nilpotent part. Hermitian matrices. A Hermitian matrix has all real eigenvalues and can always be diagonalized by a unitary matrix P, according to the spectral theorem. In this case, the Jordan definition is natural. Moreover, this definition allows one to extend standard inequalities for real functions: if formula_17 for all eigenvalues of formula_18, then formula_19. The proof follows directly from the definition. Cauchy integral. Cauchy's integral formula from complex analysis can also be used to generalize scalar functions to matrix functions. Cauchy's integral formula states that for any analytic function f defined on a set "D" ⊂ C, one has formula_21 where C is a closed simple curve inside the domain D enclosing x. Now, replace x by a matrix A and consider a path C inside D that encloses all eigenvalues of A. One possibility to achieve this is to let C be a circle around the origin with radius larger than ‖"A"‖ for an arbitrary matrix norm ‖·‖. Then, "f"("A") is definable by formula_22 This integral can readily be evaluated numerically using the trapezium rule, which converges exponentially in this case, meaning that the precision of the result doubles when the number of nodes is doubled. In routine cases, this is bypassed by Sylvester's formula. 
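The techniques in this section translate directly into code. The following is a minimal sketch (not from the article) of the diagonalization approach: it assumes NumPy and SciPy are available, reuses the 2×2 matrix from the worked example above, and the helper name matrix_function is an illustrative choice.

```python
# Minimal sketch: f(A) = P f(D) P^{-1} for a diagonalizable matrix.
# Assumes NumPy/SciPy; the helper name and test matrix are illustrative.
import numpy as np
from scipy.special import gamma      # scalar Gamma function
from scipy.linalg import funm        # general matrix function, used as a check

def matrix_function(A, f):
    """Apply a scalar function f to a diagonalizable square matrix A."""
    d, P = np.linalg.eig(A)           # eigenvalues d, eigenvectors as columns of P
    return P @ np.diag(f(d)) @ np.linalg.inv(P)

A = np.array([[1.0, 3.0],
              [2.0, 1.0]])

# Reproduces (approximately) the Gamma(A) and A^4 examples in the text.
print(np.real_if_close(matrix_function(A, gamma)))
print(np.real_if_close(matrix_function(A, lambda x: x**4)))       # ~ [[73, 84], [56, 73]]

# Cross-check against SciPy's built-in matrix-function routine.
print(np.allclose(matrix_function(A, np.exp), funm(A, np.exp)))   # expected: True
```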
Applied to bounded linear operators on a Banach space, which can be seen as infinite matrices, the Cauchy integral idea leads to the holomorphic functional calculus. Matrix perturbations. The above Taylor power series allows the scalar formula_23 to be replaced by a matrix. This is not true in general when expanding in terms of formula_24 about formula_25 unless formula_26. A counterexample is formula_27, which has a finite-length Taylor series. We compute this in two ways: by expanding the cube directly, formula_28 and by writing out the scalar Taylor series of formula_29 and then replacing the scalars with matrices at the end, formula_30 The scalar expression assumes commutativity while the matrix expression does not, and thus they cannot be equated directly unless formula_26. For some "f"("x") this can be dealt with using the same method as scalar Taylor series. For example, formula_31. If formula_32 exists then formula_33. The expansion of the first term then follows the power series given above, formula_34 The convergence criteria of the power series then apply, requiring formula_35 to be sufficiently small under the appropriate matrix norm. For more general problems, which cannot be rewritten in such a way that the two matrices commute, the ordering of matrix products produced by repeated application of the Leibniz rule must be tracked. Arbitrary function of a 2×2 matrix. For an arbitrary function "f"("A") of a 2×2 matrix A, Sylvester's formula simplifies to formula_36 where formula_37 are the eigenvalues of A, that is, the roots of its characteristic equation, |"A" − "λI"| = 0, given by formula_38 However, if the eigenvalues are degenerate, the following formula is used, where f' is the derivative of f: formula_39 Classes of matrix functions. Using the semidefinite ordering (formula_40 is positive-semidefinite and formula_41 is positive definite), some of the classes of scalar functions can be extended to matrix functions of Hermitian matrices. Operator monotone. A function f is called operator monotone if and only if formula_42 for all self-adjoint matrices "A","H" with spectra in the domain of f. This is analogous to a monotone function in the scalar case. Operator concave/convex. A function f is called operator concave if and only if formula_43 for all self-adjoint matrices "A","H" with spectra in the domain of f and formula_44. This definition is analogous to a concave scalar function. An operator convex function can be defined by switching formula_45 to formula_46 in the definition above. Examples. The matrix logarithm is both operator monotone and operator concave. The matrix square is operator convex. The matrix exponential is none of these. Loewner's theorem states that a function on an "open" interval is operator monotone if and only if it has an analytic extension to the upper and lower complex half planes so that the upper half plane is mapped to itself. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
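The simplified 2×2 formula above is easy to verify numerically. The sketch below (not from the article) assumes NumPy, covers only the non-degenerate branch of the formula (distinct eigenvalues), and uses an illustrative test matrix and function.

```python
# Minimal sketch: f(A) for a 2x2 matrix via the simplified Sylvester formula,
# non-degenerate case. Assumes NumPy; matrix and test function are illustrative.
import numpy as np

def f_2x2(A, f):
    """Evaluate f(A) for a 2x2 matrix with distinct eigenvalues."""
    half_trace = np.trace(A) / 2.0
    disc = np.sqrt(half_trace**2 - np.linalg.det(A) + 0j)   # may be complex
    lam_plus, lam_minus = half_trace + disc, half_trace - disc
    I = np.eye(2)
    return ((f(lam_plus) + f(lam_minus)) / 2) * I \
        + ((A - half_trace * I) / disc) * ((f(lam_plus) - f(lam_minus)) / 2)

A = np.array([[1.0, 3.0],
              [2.0, 1.0]])
result = f_2x2(A, lambda x: x**4)
print(np.real_if_close(result))                             # ~ [[73, 84], [56, 73]]
print(np.allclose(result, np.linalg.matrix_power(A, 4)))    # expected: True
```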
[ { "math_id": 0, "text": "f(x) = c_0 + c_1 x + c_2 x^2 + \\cdots" }, { "math_id": 1, "text": "A\\mapsto f(A)" }, { "math_id": 2, "text": "|x| < r" }, { "math_id": 3, "text": "\\|A\\| < r" }, { "math_id": 4, "text": "\\|AB\\|\\leq \\|A\\|\\|B\\|" }, { "math_id": 5, "text": "D = P^{-1}\\,A\\,P" }, { "math_id": 6, "text": "D=\\begin{bmatrix}\nd_1 & \\cdots & 0 \\\\\n\\vdots & \\ddots & \\vdots \\\\\n0 & \\cdots & d_n\n\\end{bmatrix}." }, { "math_id": 7, "text": "A = P\\,D\\,P^{-1}," }, { "math_id": 8, "text": "f(A)=P\\, \\begin{bmatrix}\nf(d_1) & \\cdots & 0 \\\\\n\\vdots & \\ddots & \\vdots \\\\\n0 & \\cdots & f(d_n)\n\\end{bmatrix}\\,P^{-1}." }, { "math_id": 9, "text": "\\Gamma(A) = (A-1)!" }, { "math_id": 10, "text": "A = \\begin{bmatrix}\n1&3\\\\\n2&1\n\\end{bmatrix} . " }, { "math_id": 11, "text": "A = P \\begin{bmatrix}\n1-\\sqrt{6}& 0 \\\\\n0 & 1+ \\sqrt{6}\n\\end{bmatrix} P^{-1}~, " }, { "math_id": 12, "text": " P= \\begin{bmatrix}\n1/2 & 1/2 \\\\\n-\\frac{1}{\\sqrt{6}} &\\frac{1}{\\sqrt{6}}\n\\end{bmatrix} ~. " }, { "math_id": 13, "text": " \\Gamma(A) = \\begin{bmatrix}\n1/2 & 1/2 \\\\\n-\\frac{1}{\\sqrt{6} } & \\frac{1}{\\sqrt{6} }\n\\end{bmatrix} \\cdot\n\\begin{bmatrix}\n\\Gamma(1-\\sqrt{6}) & 0\\\\\n0&\\Gamma(1+\\sqrt{6})\n\\end{bmatrix} \\cdot\n\\begin{bmatrix}\n1 & -\\sqrt{6}/2 \\\\\n 1 & \\sqrt{6}/2\n\\end{bmatrix} \\approx\n\\begin{bmatrix} 2.8114 & 0.4080 \\\\\n0.2720 & 2.8114\n\\end{bmatrix} ~.\n" }, { "math_id": 14, "text": " A^4 = \\begin{bmatrix}\n1/2 & 1/2 \\\\\n-\\frac{1}{\\sqrt{6} } & \\frac{1}{\\sqrt{6} }\n\\end{bmatrix} \\cdot\n\\begin{bmatrix}\n(1-\\sqrt{6})^4 & 0\\\\\n0&(1+\\sqrt{6})^4\n\\end{bmatrix} \\cdot\n\\begin{bmatrix}\n1 & -\\sqrt{6}/2 \\\\\n 1 & \\sqrt{6}/2\n\\end{bmatrix} =\n\\begin{bmatrix} 73 & 84\\\\\n56 & 73\n\\end{bmatrix} ~.\n" }, { "math_id": 15, "text": "A = P\\,J\\,P^{-1}" }, { "math_id": 16, "text": " f \\left( \\begin{bmatrix}\n\\lambda & 1 & 0 & \\cdots & 0 \\\\\n0 & \\lambda & 1 & \\vdots & \\vdots \\\\\n0 & 0 & \\ddots & \\ddots & \\vdots \\\\\n\\vdots & \\cdots & \\ddots & \\lambda & 1 \\\\\n0 & \\cdots & \\cdots & 0 & \\lambda\n\\end{bmatrix} \\right) =\n\n\\begin{bmatrix}\n\\frac{f(\\lambda)}{0!} & \\frac{f'(\\lambda)}{1!} & \\frac{f''(\\lambda)}{2!} & \\cdots & \\frac{f^{(n-1)}(\\lambda)}{(n-1)!} \\\\\n0 & \\frac{f(\\lambda)}{0!} & \\frac{f'(\\lambda)}{1!} & \\vdots & \\frac{f^{(n-2)}(\\lambda)}{(n-2)!} \\\\\n0 & 0 & \\ddots & \\ddots & \\vdots \\\\\n\\vdots & \\cdots & \\ddots & \\frac{f(\\lambda)}{0!} & \\frac{f'(\\lambda)}{1!} \\\\\n0 & \\cdots & \\cdots & 0 & \\frac{f(\\lambda)}{0!}\n\\end{bmatrix}.\n" }, { "math_id": 17, "text": " f(a) \\leq g(a)" }, { "math_id": 18, "text": "A" }, { "math_id": 19, "text": "f(A) \\preceq g(A)" }, { "math_id": 20, "text": "X \\preceq Y \\Leftrightarrow Y - X " }, { "math_id": 21, "text": "f(x) = \\frac{1}{2\\pi i} \\oint_{C}\\! {\\frac{f(z)}{z-x}}\\, \\mathrm{d}z ~," }, { "math_id": 22, "text": "f(A) = \\frac{1}{2\\pi i} \\oint_C f(z)\\left(z I - A\\right)^{-1} \\mathrm{d}z \\,. 
" }, { "math_id": 23, "text": "x" }, { "math_id": 24, "text": "A(\\eta) = A+\\eta B" }, { "math_id": 25, "text": "\\eta = 0" }, { "math_id": 26, "text": "[A,B]=0" }, { "math_id": 27, "text": "f(x) = x^{3}" }, { "math_id": 28, "text": "f(A + \\eta B) = (A+\\eta B)^{3} = A^{3} + \\eta(A^{2}B + ABA + BA^{2}) + \\eta^{2}(AB^{2} + BAB + B^{2}A) + \\eta^{3}B^{3}" }, { "math_id": 29, "text": "f(a+\\eta b)" }, { "math_id": 30, "text": "\\begin{align}\nf(a+\\eta b) &= f(a) + f'(a)\\frac{\\eta b}{1!} + f''(a)\\frac{(\\eta b)^2}{2!} + f'''(a)\\frac{(\\eta b)^3}{3!} \\\\[.5em]\n&= a^3 + 3a^2(\\eta b) + 3a(\\eta b)^2 + (\\eta b)^3 \\\\[.5em]\n&\\to A^3 = + 3A^2(\\eta B) + 3A(\\eta B)^2 + (\\eta B)^3\n\\end{align}" }, { "math_id": 31, "text": "f(x) = \\frac{1}{x}" }, { "math_id": 32, "text": "A^{-1}" }, { "math_id": 33, "text": "f(A+\\eta B) = f(\\mathbb{I} + \\eta A^{-1}B)f(A)" }, { "math_id": 34, "text": "f(\\mathbb{I} + \\eta A^{-1}B) = \\mathbb{I} - \\eta A^{-1}B + (-\\eta A^{-1}B)^2 + \\cdots = \\sum_{n=0}^\\infty (-\\eta A^{-1}B)^n " }, { "math_id": 35, "text": "\\Vert \\eta A^{-1}B \\Vert" }, { "math_id": 36, "text": "f(A) = \\frac{f(\\lambda_+) + f(\\lambda_-)}{2} I + \\frac{A - \\left (\\frac{tr(A)}{2}\\right )I}{\\sqrt{\\left (\\frac{tr(A)}{2}\\right)^2 - |A|}} \\frac{f(\\lambda_+) - f(\\lambda_-)}{2} ~," }, { "math_id": 37, "text": "\\lambda_\\pm" }, { "math_id": 38, "text": "\\lambda_\\pm = \\frac{tr(A)}{2} \\pm \\sqrt{\\left (\\frac{tr(A)}{2}\\right )^2 - |A|} ." }, { "math_id": 39, "text": "f(A) = f \\left( \\frac{tr(A)}{2} \\right) I + \\mathrm{adj} \\left( \\frac{tr(A)}{2}I - A \\right ) f' \\left( \\frac{tr(A)}{2} \\right) ." }, { "math_id": 40, "text": "X \\preceq Y \\Leftrightarrow Y - X" }, { "math_id": 41, "text": " X \\prec Y \\Leftrightarrow Y - X " }, { "math_id": 42, "text": " 0 \\prec A \\preceq H \\Rightarrow f(A) \\preceq f(H) " }, { "math_id": 43, "text": " \\tau f(A) + (1-\\tau) f(H) \\preceq f \\left ( \\tau A + (1-\\tau)H \\right ) " }, { "math_id": 44, "text": "\\tau \\in [0,1]" }, { "math_id": 45, "text": "\\preceq" }, { "math_id": 46, "text": "\\succeq" } ]
https://en.wikipedia.org/wiki?curid=12691877
12693735
Winnow (algorithm)
The winnow algorithm is a technique from machine learning for learning a linear classifier from labeled examples. It is very similar to the perceptron algorithm. However, the perceptron algorithm uses an additive weight-update scheme, while Winnow uses a multiplicative scheme that allows it to perform much better when many dimensions are irrelevant (hence its name winnow). It is a simple algorithm that scales well to high-dimensional data. During training, Winnow is shown a sequence of positive and negative examples. From these it learns a decision hyperplane that can then be used to label novel examples as positive or negative. The algorithm can also be used in the online learning setting, where the learning phase and the classification phase are not clearly separated. Algorithm. The basic algorithm, Winnow1, is as follows. The instance space is formula_0, that is, each instance is described as a set of Boolean-valued features. The algorithm maintains non-negative weights formula_1 for formula_2, which are initially set to 1, one weight for each feature. When the learner is given an example formula_3, it applies the typical prediction rule for linear classifiers: predict 1 if formula_4, and predict 0 otherwise. Here formula_5 is a real number that is called the "threshold". Together with the weights, the threshold defines a dividing hyperplane in the instance space. Good bounds are obtained if formula_6 (see below). For each example with which it is presented, the learner applies the following update rule: if the example is classified correctly, do nothing; if the example is predicted incorrectly and the correct result was 0, then for each feature with formula_7, the corresponding weight formula_8 is set to zero (demotion step: formula_9); if the example is predicted incorrectly and the correct result was 1, then for each feature with formula_7, the corresponding weight formula_8 is multiplied by α (promotion step: formula_10). A typical value for α is 2. There are many variations to this basic approach. "Winnow2" is similar except that in the demotion step the weights are divided by α instead of being set to 0. "Balanced Winnow" maintains two sets of weights, and thus two hyperplanes. This can then be generalized for multi-label classification. Mistake bounds. In certain circumstances, it can be shown that the number of mistakes Winnow makes as it learns has an upper bound that is independent of the number of instances with which it is presented. If the Winnow1 algorithm uses formula_11 and formula_12 on a target function that is a formula_13-literal monotone disjunction given by formula_14, then for any sequence of instances the total number of mistakes is bounded by: formula_15.
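The Winnow1 procedure described above fits in a few lines of code. The following minimal sketch (not from the article) assumes NumPy; the class name, the choices α = 2 and Θ = n/2, and the toy target disjunction are all illustrative.

```python
# Minimal sketch of Winnow1 as described above (alpha = 2, Theta = n/2).
# Assumes NumPy; class/variable names and the toy target are illustrative.
import numpy as np

class Winnow1:
    def __init__(self, n_features, alpha=2.0):
        self.w = np.ones(n_features)      # all weights start at 1
        self.alpha = alpha
        self.theta = n_features / 2.0     # threshold Theta = n / 2

    def predict(self, x):
        # Linear-classifier rule: predict 1 if w . x > Theta, else 0.
        return int(np.dot(self.w, x) > self.theta)

    def update(self, x, y):
        """One online step on a Boolean example x with label y in {0, 1}."""
        if self.predict(x) == y:
            return                        # correctly classified: do nothing
        active = np.asarray(x) == 1       # features with x_i = 1
        if y == 0:
            self.w[active] = 0.0          # demotion (Winnow2 would divide by alpha)
        else:
            self.w[active] *= self.alpha  # promotion

# Toy usage: learn the monotone disjunction x_0 OR x_2 over 20 Boolean features.
rng = np.random.default_rng(0)
learner = Winnow1(n_features=20)
for _ in range(200):
    x = rng.integers(0, 2, size=20)
    y = int(x[0] == 1 or x[2] == 1)
    learner.update(x, y)
print(learner.w)  # weights of the two relevant features should stand out
```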
[ { "math_id": 0, "text": "X=\\{0,1\\}^n" }, { "math_id": 1, "text": "w_i" }, { "math_id": 2, "text": "i\\in \\{1,\\ldots,n\\}" }, { "math_id": 3, "text": "(x_1,\\ldots,x_n)" }, { "math_id": 4, "text": "\\sum_{i=1}^n w_i x_i > \\Theta " }, { "math_id": 5, "text": "\\Theta" }, { "math_id": 6, "text": "\\Theta=n/2" }, { "math_id": 7, "text": "x_{i}=1" }, { "math_id": 8, "text": "w_{i}" }, { "math_id": 9, "text": "\\forall x_{i} = 1, w_{i} = 0" }, { "math_id": 10, "text": "\\forall x_{i} = 1, w_{i} = \\alpha w_{i}" }, { "math_id": 11, "text": "\\alpha > 1" }, { "math_id": 12, "text": "\\Theta \\geq 1/\\alpha" }, { "math_id": 13, "text": "k" }, { "math_id": 14, "text": "f(x_1,\\ldots,x_n)=x_{i_1}\\cup \\cdots \\cup x_{i_k}" }, { "math_id": 15, "text": "\\alpha k ( \\log_\\alpha \\Theta+1)+\\frac{n}{\\Theta}" } ]
https://en.wikipedia.org/wiki?curid=12693735