1136901
167 (number)
Natural number 167 (one hundred [and] sixty-seven) is the natural number following 166 and preceding 168. In mathematics. 167 is the 39th prime number, an emirp, an isolated prime, a Chen prime, a Gaussian prime, a safe prime, and an Eisenstein prime with no imaginary part and a real part of the form formula_0. 167 is the smallest number which requires six terms when expressed using the greedy algorithm as a sum of squares, 167 = 144 + 16 + 4 + 1 + 1 + 1, although by Lagrange's four-square theorem its non-greedy expression as a sum of squares can be shorter, e.g. 167 = 121 + 36 + 9 + 1. 167 is a full reptend prime in base 10, since the decimal expansion of 1/167 repeats the following 166 digits: 0.005988023952095808383233532934131736526946107784431137724550898203592814371257485029940119760479041916167664670658682634730538922155688622754491017964071856287425149700... 167 is a highly cototient number, as it is the smallest number "k" with exactly 15 solutions to the equation "x" - φ("x") = "k". It is also a strictly non-palindromic number. 167 is the smallest multi-digit prime such that the product of its digits equals the number of digits times the sum of its digits, i.e., 1 × 6 × 7 = 42 = 3 × (1 + 6 + 7). 167 is the smallest positive integer "d" such that the imaginary quadratic field Q(√–"d") has class number = 11. In other fields. 167 is also: References.
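The arithmetic claims above are easy to verify mechanically. The sketch below is an illustrative sanity check, not part of the original entry: it assumes Python 3.9 or later, uses only the standard library, and the helper names are chosen here purely for clarity.

from math import isqrt

def is_prime(n: int) -> bool:
    # Trial division; more than fast enough for numbers of this size.
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))

def greedy_squares(n: int) -> list[int]:
    # Greedy decomposition of n as a sum of squares (always take the largest square).
    terms = []
    while n > 0:
        s = isqrt(n) ** 2
        terms.append(s)
        n -= s
    return terms

def multiplicative_order(a: int, m: int) -> int:
    # Smallest k > 0 with a**k congruent to 1 (mod m); this is the period of 1/m in base a.
    k, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        k += 1
    return k

assert is_prime(167) and is_prime((167 - 1) // 2)      # prime, and a safe prime since 83 is prime
assert greedy_squares(167) == [144, 16, 4, 1, 1, 1]    # six terms under the greedy algorithm
assert 121 + 36 + 9 + 1 == 167                         # a shorter four-square representation exists
assert multiplicative_order(10, 167) == 166            # full reptend prime in base 10
assert 1 * 6 * 7 == 3 * (1 + 6 + 7) == 42              # digit product = digit count times digit sum
print("all checks passed")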
[ { "math_id": 0, "text": "3n - 1" } ]
https://en.wikipedia.org/wiki?curid=1136901
1137111
Spherical cap
Section of a sphere In geometry, a spherical cap or spherical dome is a portion of a sphere or of a ball cut off by a plane. It is also a spherical segment of one base, i.e., bounded by a single plane. If the plane passes through the center of the sphere (forming a great circle), so that the height of the cap is equal to the radius of the sphere, the spherical cap is called a "hemisphere". Volume and surface area. The volume of the spherical cap and the area of the curved surface may be calculated using combinations of These variables are inter-related through the formulas formula_4, formula_5, formula_6, and formula_7. If formula_8 denotes the latitude in geographic coordinates, then formula_9, and formula_10. Deriving the surface area intuitively from the spherical sector volume. Note that aside from the calculus based argument below, the area of the spherical cap may be derived from the volume formula_11 of the spherical sector, by an intuitive argument, as formula_12 The intuitive argument is based upon summing the total sector volume from that of infinitesimal triangular pyramids. Utilizing the pyramid (or cone) volume formula of formula_13, where formula_14 is the infinitesimal area of each pyramidal base (located on the surface of the sphere) and formula_15 is the height of each pyramid from its base to its apex (at the center of the sphere). Since each formula_15, in the limit, is constant and equivalent to the radius formula_0 of the sphere, the sum of the infinitesimal pyramidal bases would equal the area of the spherical sector, and: formula_16 Deriving the volume and surface area using calculus. The volume and area formulas may be derived by examining the rotation of the function formula_17 for formula_18, using the formulas the surface of the rotation for the area and the solid of the revolution for the volume. The area is formula_19 The derivative of formula_20 is formula_21 and hence formula_22 The formula for the area is therefore formula_23 The volume is formula_24 Applications. Volumes of union and intersection of two intersecting spheres. The volume of the union of two intersecting spheres of radii formula_25 and formula_26 is formula_27 where formula_28 is the sum of the volumes of the two isolated spheres, and formula_29 the sum of the volumes of the two spherical caps forming their intersection. If formula_30 is the distance between the two sphere centers, elimination of the variables formula_31 and formula_32 leads to formula_33 Volume of a spherical cap with a curved base. The volume of a spherical cap with a curved base can be calculated by considering two spheres with radii formula_25 and formula_26, separated by some distance formula_34, and for which their surfaces intersect at formula_35. That is, the curvature of the base comes from sphere 2. The volume is thus the difference between sphere 2's cap (with height formula_36) and sphere 1's cap (with height formula_2), formula_37 This formula is valid only for configurations that satisfy formula_38 and formula_39. If sphere 2 is very large such that formula_40, hence formula_41 and formula_42, which is the case for a spherical cap with a base that has a negligible curvature, the above equation is equal to the volume of a spherical cap with a flat base, as expected. Areas of intersecting spheres. Consider two intersecting spheres of radii formula_25 and formula_26, with their centers separated by distance formula_34. 
They intersect if formula_43 From the law of cosines, the polar angle of the spherical cap on the sphere of radius formula_25 is formula_44 Using this, the surface area of the spherical cap on the sphere of radius formula_25 is formula_45 Surface area bounded by parallel disks. The curved surface area of the spherical segment bounded by two parallel disks is the difference of surface areas of their respective spherical caps. For a sphere of radius formula_0, and caps with heights formula_31 and formula_32, the area is formula_46 or, using geographic coordinates with latitudes formula_47 and formula_48, formula_49 For example, assuming the Earth is a sphere of radius 6371 km, the surface area of the arctic (north of the Arctic Circle, at latitude 66.56° as of August 2016) is 2"π" ⋅ 63712 |sin 90° − sin 66.56°| = , or 0.5 ⋅ |sin 90° − sin 66.56°| = 4.125% of the total surface area of the Earth. This formula can also be used to demonstrate that half the surface area of the Earth lies between latitudes 30° South and 30° North in a spherical zone which encompasses all of the Tropics. Generalizations. Sections of other solids. The spheroidal dome is obtained by sectioning off a portion of a spheroid so that the resulting dome is circularly symmetric (having an axis of rotation), and likewise the ellipsoidal dome is derived from the ellipsoid. Hyperspherical cap. Generally, the formula_50-dimensional volume of a hyperspherical cap of height formula_2 and radius formula_0 in formula_50-dimensional Euclidean space is given by: formula_51 where formula_52 (the gamma function) is given by formula_53. The formula for formula_54 can be expressed in terms of the volume of the unit n-ball formula_55 and the hypergeometric function formula_56 or the regularized incomplete beta function formula_57 as formula_58 and the area formula formula_59 can be expressed in terms of the area of the unit n-ball formula_60 as formula_61 where formula_62. A. Chudnov derived the following formulas: formula_63 where formula_64 formula_65 For odd formula_66: formula_67 Asymptotics. If formula_68 and formula_69, then formula_70 where formula_71 is the integral of the standard normal distribution. A more quantitative bound is formula_72. For large caps (that is when formula_73 as formula_74), the bound simplifies to formula_75. References. <templatestyles src="Reflist/styles.css" />
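As a numerical illustration of the formulas above (not part of the original article), the sketch below evaluates the cap surface area 2πrh and volume πh²(3r − h)/3, reproduces the Arctic-zone example, and checks the hyperspherical-cap expression against the ordinary n = 3 case. It assumes Python with SciPy available for the regularized incomplete beta function; r denotes the sphere radius and h the cap height, as in the text.

from math import pi, sin, radians, gamma
from scipy.special import betainc   # betainc(a, b, x) is the regularized I_x(a, b)

def cap_area(r, h):
    # Curved surface area of a spherical cap of height h on a sphere of radius r.
    return 2 * pi * r * h

def cap_volume(r, h):
    # Volume of the same cap.
    return pi * h**2 * (3 * r - h) / 3

print(cap_area(1.0, 0.5), cap_volume(1.0, 0.5))   # pi and about 0.6545 for r = 1, h = 0.5

# Zone between two latitudes: A = 2*pi*r^2 * |sin(phi_1) - sin(phi_2)|.
# Example from the text: the Arctic, from latitude 66.56 deg to the pole,
# on a spherical Earth of radius 6371 km.
r_earth = 6371.0
arctic = 2 * pi * r_earth**2 * abs(sin(radians(90)) - sin(radians(66.56)))
fraction = arctic / (4 * pi * r_earth**2)
print(f"Arctic area ~ {arctic:.3e} km^2 ({100 * fraction:.3f}% of the Earth)")  # roughly 2.1e7 km^2, ~4.125%

# Hyperspherical cap: V = (1/2) * C_n * r**n * I_x((n+1)/2, 1/2),
# where x = (2*r*h - h**2) / r**2 and C_n = pi**(n/2) / Gamma(1 + n/2)
# is the volume of the unit n-ball.
def hyper_cap_volume(n, r, h):
    c_n = pi**(n / 2) / gamma(1 + n / 2)
    x = (2 * r * h - h**2) / r**2
    return 0.5 * c_n * r**n * betainc((n + 1) / 2, 0.5, x)

# For n = 3 this must agree with the elementary cap formula above.
assert abs(hyper_cap_volume(3, 1.0, 0.5) - cap_volume(1.0, 0.5)) < 1e-9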
[ { "math_id": 0, "text": "r" }, { "math_id": 1, "text": "a" }, { "math_id": 2, "text": "h" }, { "math_id": 3, "text": "\\theta" }, { "math_id": 4, "text": " a = r \\sin \\theta" }, { "math_id": 5, "text": "h = r ( 1 - \\cos \\theta )" }, { "math_id": 6, "text": "2hr = a^2 + h^2" }, { "math_id": 7, "text": "2 h a = (a^2 + h^2)\\sin \\theta" }, { "math_id": 8, "text": "\\phi" }, { "math_id": 9, "text": "\\theta+\\phi = \\pi/2 = 90^\\circ\\," }, { "math_id": 10, "text": "\\cos \\theta = \\sin \\phi" }, { "math_id": 11, "text": "V_{sec}" }, { "math_id": 12, "text": "A = \\frac{3}{r}V_{sec} = \\frac{3}{r} \\frac{2\\pi r^2h}{3} = 2\\pi rh\\,." }, { "math_id": 13, "text": "V = \\frac{1}{3} bh'" }, { "math_id": 14, "text": "b" }, { "math_id": 15, "text": "h'" }, { "math_id": 16, "text": "V_{sec} = \\sum{V} = \\sum\\frac{1}{3} bh' = \\sum\\frac{1}{3} br = \\frac{r}{3} \\sum b = \\frac{r}{3} A" }, { "math_id": 17, "text": "f(x)=\\sqrt{r^2-(x-r)^2}=\\sqrt{2rx-x^2}" }, { "math_id": 18, "text": "x \\in [0,h]" }, { "math_id": 19, "text": "A = 2\\pi\\int_0^h f(x) \\sqrt{1+f'(x)^2} \\,dx " }, { "math_id": 20, "text": "f" }, { "math_id": 21, "text": "f'(x) = \\frac{r-x}{\\sqrt{2rx-x^2}} " }, { "math_id": 22, "text": "1+f'(x)^2 = \\frac{r^2}{2rx-x^2} " }, { "math_id": 23, "text": "A = 2\\pi\\int_0^h \\sqrt{2rx-x^2} \\sqrt{\\frac{r^2}{2rx-x^2}} \\,dx\n = 2\\pi \\int_0^h r\\,dx\n = 2\\pi r \\left[x\\right]_0^h\n = 2 \\pi r h " }, { "math_id": 24, "text": "V = \\pi \\int_0^h f(x)^2 \\,dx\n = \\pi \\int_0^h (2rx-x^2) \\,dx\n = \\pi \\left[rx^2-\\frac13x^3\\right]_0^h\n = \\frac{\\pi h^2}{3} (3r - h)" }, { "math_id": 25, "text": "r_1" }, { "math_id": 26, "text": "r_2" }, { "math_id": 27, "text": " V = V^{(1)}-V^{(2)}\\,," }, { "math_id": 28, "text": "V^{(1)} = \\frac{4\\pi}{3}r_1^3 +\\frac{4\\pi}{3}r_2^3" }, { "math_id": 29, "text": "V^{(2)} = \\frac{\\pi h_1^2}{3}(3r_1-h_1)+\\frac{\\pi h_2^2}{3}(3r_2-h_2)" }, { "math_id": 30, "text": "d \\le r_1+r_2" }, { "math_id": 31, "text": "h_1" }, { "math_id": 32, "text": "h_2" }, { "math_id": 33, "text": "V^{(2)} = \\frac{\\pi}{12d}(r_1+r_2-d)^2 \\left( d^2+2d(r_1+r_2)-3(r_1-r_2)^2 \\right)\\,." 
}, { "math_id": 34, "text": "d" }, { "math_id": 35, "text": "x=h" }, { "math_id": 36, "text": "(r_2-r_1)-(d-h)" }, { "math_id": 37, "text": "\\begin{align}\nV & = \\frac{\\pi h^2}{3}(3r_1-h) - \\frac{\\pi [(r_2-r_1)-(d-h)]^2}{3}[3r_2-((r_2-r_1)-(d-h))]\\,, \\\\\nV & = \\frac{\\pi h^2}{3}(3r_1-h) - \\frac{\\pi}{3}(d-h)^3\\left(\\frac{r_2-r_1}{d-h}-1\\right)^2\\left[\\frac{2r_2+r_1}{d-h}+1\\right]\\,.\n\\end{align}\n" }, { "math_id": 38, "text": "0<d<r_2" }, { "math_id": 39, "text": "d-(r_2-r_1)<h\\leq r_1" }, { "math_id": 40, "text": "r_2\\gg r_1" }, { "math_id": 41, "text": "d \\gg h" }, { "math_id": 42, "text": "r_2\\approx d" }, { "math_id": 43, "text": "|r_1-r_2|\\leq d \\leq r_1+r_2" }, { "math_id": 44, "text": "\\cos \\theta = \\frac{r_1^2-r_2^2+d^2}{2r_1d}" }, { "math_id": 45, "text": "A_1 = 2\\pi r_1^2 \\left( 1+\\frac{r_2^2-r_1^2-d^2}{2 r_1 d} \\right)" }, { "math_id": 46, "text": "A=2 \\pi r |h_1 - h_2|\\,," }, { "math_id": 47, "text": "\\phi_1" }, { "math_id": 48, "text": "\\phi_2" }, { "math_id": 49, "text": "A=2 \\pi r^2 |\\sin \\phi_1 - \\sin \\phi_2|\\,," }, { "math_id": 50, "text": "n" }, { "math_id": 51, "text": "V = \\frac{\\pi ^ {\\frac{n-1}{2}}\\, r^{n}}{\\,\\Gamma \\left ( \\frac{n+1}{2} \\right )} \\int_{0}^{\\arccos\\left(\\frac{r-h}{r}\\right)}\\sin^n (\\theta) \\,\\mathrm{d}\\theta" }, { "math_id": 52, "text": "\\Gamma" }, { "math_id": 53, "text": " \\Gamma(z) = \\int_0^\\infty t^{z-1} \\mathrm{e}^{-t}\\,\\mathrm{d}t " }, { "math_id": 54, "text": "V" }, { "math_id": 55, "text": "C_n = \\pi^{n/2} / \\Gamma[1+\\frac{n}{2}]" }, { "math_id": 56, "text": "{}_{2}F_{1}" }, { "math_id": 57, "text": "I_x(a,b)" }, { "math_id": 58, "text": "V = C_{n} \\, r^{n} \\left( \\frac{1}{2}\\, - \\,\\frac{r-h}{r} \\,\\frac{\\Gamma[1+\\frac{n}{2}]}{\\sqrt{\\pi}\\,\\Gamma[\\frac{n+1}{2}]}\n{\\,\\,}_{2}F_{1}\\left(\\tfrac{1}{2},\\tfrac{1-n}{2};\\tfrac{3}{2};\\left(\\tfrac{r-h}{r}\\right)^{2}\\right)\\right)\n= \\frac{1}{2}C_{n} \\, r^n I_{(2rh-h^2)/r^2} \\left(\\frac{n+1}{2}, \\frac{1}{2} \\right)," }, { "math_id": 59, "text": "A" }, { "math_id": 60, "text": "A_{n}={2\\pi^{n/2}/\\Gamma[\\frac{n}{2}]}" }, { "math_id": 61, "text": "A =\\frac{1}{2}A_{n} \\, r^{n-1} I_{(2rh-h^2)/r^2} \\left(\\frac{n-1}{2}, \\frac{1}{2} \\right)," }, { "math_id": 62, "text": "0\\le h\\le r " }, { "math_id": 63, "text": " A = A_n p_ { n-2 } (q), V = C_n p_n (q) , " }, { "math_id": 64, "text": " q = 1-h/r (0 \\le q \\le 1 ), p_n (q) =(1-G_n(q)/G_n(1))/2 , " }, { "math_id": 65, "text": " G _n(q)= \\int _0^q (1-t^2) ^ { (n-1) /2 } dt ." }, { "math_id": 66, "text": " n=2k+1 " }, { "math_id": 67, "text": " G_n(q) = \\sum_{i=0}^k (-1) ^i \\binom k i \\frac {q^{2i+1}} {2i+1} ." }, { "math_id": 68, "text": " n \\to \\infty " }, { "math_id": 69, "text": "q\\sqrt n = \\text{const.}" }, { "math_id": 70, "text": " p_n (q) \\to 1- F({q \\sqrt n}) " }, { "math_id": 71, "text": " F() " }, { "math_id": 72, "text": " A/A_n = n^{\\Theta(1)} \\cdot [(2-h/r)h/r]^{n/2} " }, { "math_id": 73, "text": "(1-h/r)^4\\cdot n = O(1)" }, { "math_id": 74, "text": "n\\to \\infty" }, { "math_id": 75, "text": "n^{\\Theta(1)} \\cdot e^{-(1-h/r)^2n/2} " } ]
https://en.wikipedia.org/wiki?curid=1137111
11371222
Reserve design
Reserve design is the process of planning and creating a nature reserve in a way that effectively accomplishes the goal of the reserve. Reserve establishment has a variety of goals, and planners must consider many factors for a reserve to be successful. These include habitat preference, migration, climate change, and public support. To accommodate these factors and fulfill the reserve's goal requires that planners create and implement a specific design. Purpose of reserves. All nature reserves have a primary goal of protecting biodiversity from harmful activities and processes, both natural and anthropogenic. To achieve this, reserves must extensively sample biodiversity at all taxonomic levels and enhance and ensure long-term survival of the organisms. As it is described in the guides to nature reserve establishment from Scottish and English governments, a nature reserve will likely contribute to enhancing local sustainability and contribute to meeting biodiversity targets. An additional goal is also included: providing controlled opportunities for study of organisms and their surroundings, where study can mean actual scientific research or use of the reserve for education, engagement and recreation of public. Various secondary benefits, such as economic contributions from enhanced tourism and opportunities for specialist training, are also mentioned. Social and ecological factors. Successful reserves incorporate important ecological and social factors into their design. Such factors include the natural range of predators. When a reserve is too small, carnivores have increased contact with humans, resulting in higher mortality rates for the carnivore. Also certain species are area sensitive. A study on song birds in Japan showed that certain birds only settle in habitats much larger than the area they actually occupy. Knowing species geographic range and preference is essential to determining the size of the reserve needed. Social factors such as the attitudes of local people should also be taken into account. If a reserve is put up in an area that people depend on for their livelihood the reserve often fails. For example, in Bolivia, the Amboró National Park was expanded in 1991 from 1,800 to 6,370 km². While this was celebrated by conservationists, local people who would be displaced by the expansion were angered. They continued to hunt and log within the park and eventually the park size had to be reduced [8]. Because local people were not considered in the design of the reserve, conservation efforts failed. Many conservationists advocate local people must be included in conservation efforts, this is known as an Integrated Conservation and Development Project. Design solutions. Reserve shape. As commonly recommended, an ideal nature reserve should obtain a shape of a perfect circle to reduce dispersal distances avoid detrimental edge effects. However, this is practically very hard to achieve, due to land use for agriculture, human settlements and natural resource extraction. Buffer zones are often suggested as a way of providing protection from human threat, promoting succession and reforestation, and reducing edge effects. English government guide to nature reserves mentions buffer zones as being useful, but not essential for biodiversity protection. Contrasting evidence suggests that shape plays little to no part in the effectiveness of the reserve. A study in 1985 explored the effects of shape and size on islands, and determined that area, rather than shape was the major factor. 
Reserve size. A complicated debate among conservation biologists (also known as the SLOSS debate) focused on whether it is better to create one large or several small reserves. The species area relationship formula_0 states that the number of species in a habitat is directly proportional to its size. So theoretically if several small reserves have a greater total area than a single large reserve, the small reserves will contain a greater total number of species. This, combined with assumptions of island biogeography theory, lead Jared Diamond to state that a single large reserve is the best method of conservation, and it is still commonly recommended. For example, a review by Ovaskainen determined that a single large reserve site is best at maximising long-term survival of the species and deferring extinction in a closed population. The nested subset theory disagrees with Diamond's conclusion. It states that several small reserves will mostly share the same species, because certain species are better adapted to living in smaller habitats and many other species only exist in larger habitats A study conducted in Illinois had shown that two small forest reserves contained a larger number of bird species than one large forest patch, but the large reserve contained a larger number of migratory birds. Ovaskainen and Fukamachi argued that several small reserve fragments are better at maximising species richness. However, it will most likely only applies to common species, as the rarest, least abundant species are found only in single large sites. As the debate had mixed evidence supporting both Single Large and Several Small reserves, some scientists questioned the practical applicability of island biogeography theory to conservation in general. However, its applicability and its role in stimulating the study of habitat fragmentation is now largely accepted. The scientific findings emerging from habitat fragmentation research are considered to be a key element of conservation biology and applicable to reserve design. Similarly, the suggestion that scientific evidence was lacking to support the hypothesis that subdividing habitat increases extinction rates (fundamentally the problem addressed by the SLOSS debate) was refuted. Habitat quality and heterogeneity. The science of reserve design has faced some recent controversy regarding species-area relationship, when it was shown that habitat heterogeneity is likely a stronger factor in determining species richness than area. The study decoupled area and habitat complexity to show that small, but heterogeneous habitats have more arthropod species than large, but homogeneous ones. Habitat diversity and quality have also been shown to influence biodiversity. It was discovered that plant species richness in Norwegian meadows is correlated with habitat diversity. Another study has found that butterfly population persistence was found to correlate with habitat quality, rather than area. Empathetic Architecture - How can we produces buildings in a reserve to allow empathy within the physical environment of the structure? The term empathy is understood primarily from sociology referring to an interrelation with another person. By Association, whether positive or negative, it is subjective to some extent. In architectural terms, empathy is understood as a positive bond with the built environment.  
The more people can associate with the built environment the better they are able to understand the world they live in and we as architects must interpret such techniques and by application when used effectively, can achieve breakthrough designs in potentially shorter cycles to create spaces of greater use. Reserve networks. Protecting species in a confined area sometimes isn’t enough to protect the biodiversity of an entire region. Life within a nature reserve does not function as an isolated unit, separate from its surroundings. Many animals engage in migration and are not guaranteed to stay within fixed reserve boundaries. So, to protect biodiversity over wide geographic ranges, reserve systems are established. Reserve systems are a series of strategically placed reserves designed to connect habitats. This allows animals to travel between protected areas through wildlife corridors. A wildlife corridor is a protected passageway where it is known that fauna migrate. The Yellowstone to Yukon Conservation Initiative is an excellent example of this type of conservation effort. Studies showed that reserve networks are extremely valuable for conservation, and can help increase migration between patches up to 50%. Reserve location. To be efficient and cost-effective, yet still effectively protect a wide range of organisms, nature reserves must be established in species rich geographic locations. This potentially includes biodiversity hotspots, ancient woodland, and unique habitats such as wetlands, bogs, ecosites or endemic islands (e.g. Madagascar). Biodiversity hotspots. According to Conservation International, the term "biodiversity hotspot" refers to "the richest and most threatened reservoirs of plant and animal life on Earth... To qualify as a hotspot, a region must meet two strict criteria: it must contain at least 1,500 species of vascular plants (&gt; 0.5 percent of the world’s total) as endemics, and it has to have lost at least 70 percent of its original habitat." These hotspots are rapidly disappearing due to human activities, but they still have a chance of being saved if conservation measures are enacted. Biodiversity hotspots could be considered the most important places to put reserves. Future habitat. Future habitat of the species we wish to protect is of utmost importance when designing reserves. There are many questions to think about when determining future species ranges: How will the climate shift in the future? Where will species move? What species will climate change benefit? What are potential barriers to these needed species range shifts? Reserves must be designed with future habitat in mind, perhaps incorporating both the current and future ranges of the species’ of concern. The fundamental question in determining future species ranges is how the Earth is changing, both in the present and how it will change in the future. According to the United States Environmental Protection Agency the average surface temperature of the Earth has raised 1.2 – 1.4 °F since 1900. 1 °F of this warming has occurred since the mid-1970s, and at present, the Earth’s surface is heating up about 0.32 °F per decade. Predicted increases in global temperature range from 1.4 °C to 5.8 °C by the year 2100. Large changes in precipitation are also predicted to occur by both the A1Fl scenario and the B1 scenario It is predicted that there will also be large changes in the atmosphere and in the sea level.. This rapid, dramatic climate change has affected and will continue to affect species ranges. 
A study by Camille Parmesan and Gary Yohe published in 2003 illustrates this point well. 434 of the species analysed were characterized as having changed their ranges. 80% of observed range changes were made polewards or upward, as predicted by global climate change, at an average of 6.1 km per decade. A more recent study in 2011 confirmed this trend and showed that the rate of range shift is at least two times higher than estimated in previous studies. With the polewards movement, species abandon their previous habitat areas in search of cooler environments. An example of this was species of sea anemones thriving in Monterey Bay that had previously had a more southerly distribution. Species of lichens, and butterflies in Europe also followed the patterns of species range shifts predicted by models of future climate change. These species were shown to be migrating northward and upward, to higher latitudes and sky islands. The data from this study also indicated "the dynamics at the range boundaries are expected to be more influenced by climate than are dynamics within the interior of a species range…[where] response to global warming predicts that southerly species should outperform northerly species at the same site." These findings are of particular interest when considering reserve design. At the edges of a reserve, presuming that the reserve is also the species range if the species is highly threatened, climate change will be far more of a factor. Northern borders and those at higher elevations will become future battlegrounds for the conservation of the species in question, as they migrate northward and upward. The borders of today may not include the habitat of tomorrow, thus defeating the purpose of preservation by instead making the species range smaller and smaller if there are barriers to migration at the Northern and higher elevation boundaries of the reserve. Reserves could be designed to keep Northern migration a possibility, with boundaries farther to the North than might be considered practical looking at the today’s species ranges and abundances. Keeping open corridors between reserves connecting them to reserves to the North and the South is another possibility. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
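Returning to the species–area relationship formula_0 from the "Reserve size" section above, a minimal numerical sketch (not part of the article) makes the SLOSS comparison concrete. The constants c and z below are illustrative assumptions only; z ≈ 0.25 is a commonly used ballpark for the exponent, not a value taken from the text.

# Species-area relationship S = c * A**z, applied to the SLOSS question.
# c and z are assumed, purely illustrative values.
c, z = 10.0, 0.25

def species(area):
    return c * area ** z

single_large = species(100.0)        # one reserve of 100 area units
several_small = 4 * species(25.0)    # four reserves of 25 area units each

print(f"one large reserve:   about {single_large:.1f} species")
print(f"four small reserves: about {several_small:.1f} species, as an upper bound")

# The naive total for the small reserves counts every species in every fragment as
# distinct. The nested-subset point in the text is that fragments largely share the
# same common species, so the realised total falls well below this upper bound,
# which is why the one-large-versus-several-small question is empirical rather than
# purely arithmetic.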
[ { "math_id": 0, "text": "S=cA^z" } ]
https://en.wikipedia.org/wiki?curid=11371222
11371560
Expectation value (quantum mechanics)
Expected value of a quantum measurement In quantum mechanics, the expectation value is the probabilistic expected value of the result (measurement) of an experiment. It can be thought of as an average of all the possible outcomes of a measurement as weighted by their likelihood, and as such it is not the "most" probable value of a measurement; indeed the expectation value may have zero probability of occurring (e.g. measurements which can only yield integer values may have a non-integer mean). It is a fundamental concept in all areas of quantum physics. Operational definition. Consider an operator formula_0. The expectation value is then formula_1 in Dirac notation with formula_2 a normalized state vector. Formalism in quantum mechanics. In quantum theory, an experimental setup is described by the observable formula_0 to be measured, and the state formula_3 of the system. The expectation value of formula_0 in the state formula_3 is denoted as formula_4. Mathematically, formula_0 is a self-adjoint operator on a separable complex Hilbert space. In the most commonly used case in quantum mechanics, formula_3 is a pure state, described by a normalized vector formula_5 in the Hilbert space. The expectation value of formula_0 in the state formula_5 is defined as If dynamics is considered, either the vector formula_5 or the operator formula_0 is taken to be time-dependent, depending on whether the Schrödinger picture or Heisenberg picture is used. The evolution of the expectation value does not depend on this choice, however. If formula_0 has a complete set of eigenvectors formula_6, with eigenvalues formula_7, then (1) can be expressed as This expression is similar to the arithmetic mean, and illustrates the physical meaning of the mathematical formalism: The eigenvalues formula_7 are the possible outcomes of the experiment, and their corresponding coefficient formula_8 is the probability that this outcome will occur; it is often called the "transition probability". A particularly simple case arises when formula_0 is a projection, and thus has only the eigenvalues 0 and 1. This physically corresponds to a "yes-no" type of experiment. In this case, the expectation value is the probability that the experiment results in "1", and it can be computed as In quantum theory, it is also possible for an operator to have a non-discrete spectrum, such as the position operator formula_9 in quantum mechanics. This operator has a completely continuous spectrum, with eigenvalues and eigenvectors depending on a continuous parameter, formula_10. Specifically, the operator formula_9 acts on a spatial vector formula_11 as formula_12. In this case, the vector formula_5 can be written as a complex-valued function formula_13 on the spectrum of formula_9 (usually the real line). This is formally achieved by projecting the state vector formula_14 onto the eigenvalues of the operator, as in the discrete case formula_15. It happens that the eigenvectors of the position operator form a complete basis for the vector space of states, and therefore obey a completeness relation in quantum mechanics: formula_16 The above may be used to derive the common, integral expression for the expected value (4), by inserting identities into the vector expression of expected value, then expanding in the position basis: formula_17 Where the orthonormality relation of the position basis vectors formula_18, reduces the double integral to a single integral. 
The last line uses the modulus of a complex-valued function to replace formula_19 with formula_20, which is a common substitution in quantum-mechanical integrals. The expectation value may then be stated, where x is unbounded, as the formula ⟨X⟩_ψ = ∫ x |ψ(x)|² dx (4). A similar formula holds for the momentum operator, in systems where it has a continuous spectrum. All the above formulas are valid for pure states formula_3 only. "Mixed states", which are prominent in thermodynamics and quantum optics, are also of importance; these are described by a positive trace-class operator formula_21, the "statistical operator" or "density matrix". The expectation value can then be obtained as ⟨A⟩ = Tr(ρA) (5). General formulation. In general, quantum states formula_3 are described by positive normalized linear functionals on the set of observables, mathematically often taken to be a C*-algebra. The expectation value of an observable formula_0 is then given by ⟨A⟩_σ = σ(A) (6). If the algebra of observables acts irreducibly on a Hilbert space, and if formula_3 is a "normal functional", that is, it is continuous in the ultraweak topology, then it can be written as formula_22 with a positive trace-class operator formula_23 of trace 1. This gives formula (5) above. In the case of a pure state, formula_24 is a projection onto a unit vector formula_5. Then formula_25, which gives formula (1) above. formula_0 is assumed to be a self-adjoint operator. In the general case, its spectrum will be neither entirely discrete nor entirely continuous. Still, one can write formula_0 in a spectral decomposition, formula_26 with a projection-valued measure formula_27. For the expectation value of formula_0 in a pure state formula_28, this means formula_29 which may be seen as a common generalization of formulas (2) and (4) above. In non-relativistic theories of finitely many particles (quantum mechanics, in the strict sense), the states considered are generally normal. However, non-normal states are also used in other areas of quantum theory: they appear, for example, in the form of KMS states in quantum statistical mechanics of infinitely extended media, and as charged states in quantum field theory. In these cases, the expectation value is determined only by the more general formula (6). Example in configuration space. As an example, consider a quantum mechanical particle in one spatial dimension, in the configuration space representation. Here the Hilbert space is formula_30, the space of square-integrable functions on the real line. Vectors formula_31 are represented by functions formula_13, called wave functions. The scalar product is given by formula_32. The wave functions have a direct interpretation as a probability distribution: formula_33 gives the probability of finding the particle in an infinitesimal interval of length formula_34 about some point formula_10. As an observable, consider the position operator formula_35, which acts on wavefunctions formula_5 by formula_36 The expectation value, or mean value of measurements, of formula_35 performed on a very large number of "identical" independent systems will be given by formula_37 The expectation value only exists if the integral converges, which is not the case for all vectors formula_5. This is because the position operator is unbounded, and formula_5 has to be chosen from its domain of definition. In general, the expectation of any observable can be calculated by replacing formula_35 with the appropriate operator.
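As a concrete numerical cross-check (not part of the original article), the short sketch below, assuming Python with NumPy, evaluates the same expectation value three ways for an arbitrary Hermitian matrix and pure state: the Dirac form ⟨ψ|A|ψ⟩, the eigenvalue-weighted sum of formula (2), and the trace form of formula (5).

import numpy as np

# The observable and state below are arbitrary illustrative choices, not taken from the article.
A = np.array([[1.0, 0.5j, 0.0],
              [-0.5j, 2.0, 0.3],
              [0.0, 0.3, -1.0]])          # a Hermitian observable on a 3-dimensional Hilbert space

psi = np.array([1.0, 1.0j, 0.5])
psi = psi / np.linalg.norm(psi)            # a normalized pure state |psi>

# (1)  <A> = <psi| A |psi>
exp_dirac = np.vdot(psi, A @ psi).real

# (2)  <A> = sum_j a_j |<phi_j|psi>|^2, using the eigenbasis of A
a, phi = np.linalg.eigh(A)                 # columns of phi are the eigenvectors phi_j
probs = np.abs(phi.conj().T @ psi) ** 2    # transition probabilities |<phi_j|psi>|^2
exp_spectral = float(np.sum(a * probs))

# (5)  <A> = Tr(rho A) with the density matrix rho = |psi><psi|
rho = np.outer(psi, psi.conj())
exp_trace = np.trace(rho @ A).real

assert np.allclose([exp_spectral, exp_trace], exp_dirac)
print(exp_dirac)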
As another example, to calculate the average momentum one uses the momentum operator "in configuration space", formula_38. Explicitly, its expectation value is formula_39 In general, not every operator provides a measurable value. An operator whose expectation value is purely real is called an observable, and its value can be measured directly in experiment. Notes. References. Further reading. The expectation value, in particular as presented in the section "Formalism in quantum mechanics", is covered in most elementary textbooks on quantum mechanics. For a discussion of conceptual aspects, see:
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "\\langle A \\rangle = \\langle \\psi | A | \\psi \\rangle " }, { "math_id": 2, "text": " |\\psi \\rangle " }, { "math_id": 3, "text": "\\sigma" }, { "math_id": 4, "text": "\\langle A \\rangle_\\sigma" }, { "math_id": 5, "text": "\\psi" }, { "math_id": 6, "text": "\\phi_j" }, { "math_id": 7, "text": "a_j" }, { "math_id": 8, "text": "|\\langle \\psi | \\phi_j \\rangle|^2" }, { "math_id": 9, "text": "X" }, { "math_id": 10, "text": "x" }, { "math_id": 11, "text": "| x \\rangle" }, { "math_id": 12, "text": "X | x \\rangle = x |x\\rangle" }, { "math_id": 13, "text": "\\psi(x)" }, { "math_id": 14, "text": "| \\psi \\rangle" }, { "math_id": 15, "text": " \\psi(x) \\equiv \\langle x | \\psi \\rangle" }, { "math_id": 16, "text": " \\int |x \\rangle \\langle x| \\, dx \\equiv \\mathbb{I}" }, { "math_id": 17, "text": "\\begin{align}\n\\langle X \\rangle_{\\psi}\n&= \\langle \\psi | X | \\psi \\rangle\n= \\langle \\psi |\\mathbb{I} X \\mathbb{I}| \\psi \\rangle \\\\\n&= \\iint \\langle \\psi | x \\rangle \\langle x | X | x' \\rangle \\langle x' | \\psi \\rangle dx\\ dx' \\\\\n&= \\iint \\langle x | \\psi \\rangle^* x' \\langle x | x' \\rangle \\langle x' | \\psi \\rangle dx\\ dx' \\\\\n&= \\iint \\langle x | \\psi \\rangle^* x' \\delta(x - x') \\langle x' | \\psi \\rangle dx\\ dx' \\\\\n&= \\int \\psi(x)^* x \\psi(x) dx\n= \\int x \\psi(x)^* \\psi(x) dx\n= \\int x |\\psi(x)|^2 dx\n\\end{align}" }, { "math_id": 18, "text": "\\langle x | x' \\rangle = \\delta(x - x')" }, { "math_id": 19, "text": "\\psi^*\\psi" }, { "math_id": 20, "text": "|\\psi|^2" }, { "math_id": 21, "text": "\\rho = \\sum_i p_i | \\psi_i \\rangle \\langle \\psi_i |" }, { "math_id": 22, "text": " \\sigma (\\cdot) = \\operatorname{Tr} (\\rho \\; \\cdot)" }, { "math_id": 23, "text": "\\rho" }, { "math_id": 24, "text": "\\rho= |\\psi\\rangle\\langle\\psi|" }, { "math_id": 25, "text": "\\sigma = \\langle \\psi |\\cdot \\; \\psi\\rangle" }, { "math_id": 26, "text": "A = \\int a \\, dP(a)" }, { "math_id": 27, "text": "P" }, { "math_id": 28, "text": "\\sigma = \\langle \\psi | \\cdot \\, \\psi \\rangle" }, { "math_id": 29, "text": "\\langle A \\rangle_\\sigma = \\int a \\; d \\langle \\psi | P(a) \\psi\\rangle ," }, { "math_id": 30, "text": "\\mathcal{H} = L^2(\\mathbb{R})" }, { "math_id": 31, "text": "\\psi\\in\\mathcal{H}" }, { "math_id": 32, "text": "\\langle \\psi_1 | \\psi_2 \\rangle = \\int \\psi_1^\\ast (x) \\psi_2(x) \\, dx" }, { "math_id": 33, "text": " \\rho(x) dx = \\psi^*(x)\\psi(x) dx" }, { "math_id": 34, "text": "dx" }, { "math_id": 35, "text": "Q" }, { "math_id": 36, "text": " (Q \\psi) (x) = x \\psi(x) ." }, { "math_id": 37, "text": " \\langle Q \\rangle_\\psi\n= \\langle \\psi | Q | \\psi \\rangle\n= \\int_{-\\infty}^{\\infty} \\psi^\\ast(x) \\, x \\, \\psi(x) \\, dx\n= \\int_{-\\infty}^{\\infty} x \\, \\rho(x) \\, dx ." }, { "math_id": 38, "text": "\\mathbf{p} = -i \\hbar \\, \\frac{d}{dx}" }, { "math_id": 39, "text": " \\langle \\mathbf{p} \\rangle_\\psi = -i\\hbar \\int_{-\\infty}^{\\infty} \\psi^\\ast(x) \\, \\frac{d\\psi(x)}{dx} \\, dx." } ]
https://en.wikipedia.org/wiki?curid=11371560
1137435
Natural kind
"Natural kind" is an intellectual grouping, or categorizing of things, in a manner that is reflective of the actual world and not just human interests. Some treat it as a classification identifying some structure of truth and reality that exists whether or not humans recognize it. Others treat it as intrinsically useful to the human mind, but not necessarily reflective of something more objective. Candidate examples of natural kinds are found in all the sciences, but the field of chemistry provides the paradigm example of elements. John Dewey held a view that belief in unconditional natural kinds is a mistake, a relic of obsolete scientific practices. Hilary Putnam rejects descriptivist approaches to natural kinds with semantic reasoning. Hasok Chang and Rasmus Winther hold the emerging view that natural kinds are useful and evolving scientific facts. John Dewey. In 1938, John Dewey published "Logic: The Theory of Inquiry." He there explained how modern scientists create kinds through induction and deduction, and why they have no use for natural kinds. The philosophical issue is how humans can dependably predict that unobserved examples of a kind will have the same traits as a few observed examples. The traditional answer grew out of Aristotle's assertion that humans describe things they know in two kinds of propositions. Existential kinds—known by observing traits—are stated in "generic" propositions. Conceptual kinds—known by intuitive recognition of groups of traits—are stated in "universal" propositions. Dewey argued that modern scientists do not follow Aristotle in treating inductive and deductive propositions as facts already known about nature's stable structure. Today, scientific propositions are intermediate steps in inquiry, hypotheses about processes displaying stable patterns. Aristotle's generic and universal propositions have become conceptual tools of inquiry warranted by inductive inclusion and exclusion of traits. They are provisional means rather than results of inquiry revealing the structure of reality.     Propositions as such are ... provisional, intermediate and instrumental. Since their subject-matter concerns two kinds of means, material and procedural, they are of two main categories: (1) Existential [generic means, known by induction], referring directly to actual conditions, as determined by experimental observation, and (2) ideational or conceptual [universal means, known by deduction], consisting of interrelated meanings, which are non-existential in content ... but which are applicable to existence through the operations they represent as possibilities. Modern induction starts with a question to be answered or a problem to be solved. It identifies problematic subject-matter and seeks potentially relevant traits and conditions. Generic existential data thus identified are reformulated—stated abstractly as if-then universal relations capable of serving as answers or solutions: If formula_0, then water. For Dewey, induction creates warranted kinds by observing constant conjunction of relevant traits.     No grounded generic propositions can be formed save as they are the products of the performance of operations indicated as possible by universal propositions. The problem of inference is, accordingly, to discriminate and conjoin those qualities [kinds] of existential material that serve as distinguishing traits (inclusively and exclusively) of a determinate kind. Dewey used the example of "morning dew" to describe these abstract steps creating scientific kinds. 
From antiquity, the common-sense belief had been that all dew is a kind of rain, meaning dew drops fall. By the early 1800s the curious absence of rain before dew and the growth of understanding led scientists to examine new traits. Functional processes changing bodies [kinds] from solid to liquid to gas at different temperatures, and operational constants of conduction and radiation, led to new inductive hypotheses "directly suggested by "this" subject-matter, not by any data [kinds] previously observable. ... There were certain [existential] conditions postulated in the content of the new [non-existential] conception about dew, and it had to be determined whether these conditions were satisfied in the "observable" facts of the case." After demonstrating that dew could be formed by these generic existential phenomena, and not by other phenomena, the universal hypothesis arose that dew forms following established laws of temperature and pressure. "The outstanding conclusion is that inductive procedures are those which "prepare" existential material so that it has convincing evidential weight with respect to an inferred generalization. Existential data are not pre-known natural kinds, but become conceptual statements of "natural" processes.     Objects and qualities [kinds] as they naturally present themselves or as they are "given," are not only not the data of science but constitute the most direct and important obstacles to formation of those ideas and hypotheses that are genuinely relevant and effective.     We are brought to the conclusion that it is modes of "active response" which are the ground of generality of logical form, not the existential immediate qualities of that which is responded to. Dewey concluded that nature is not a collection of natural kinds, but rather of reliable processes discoverable by competent induction and deduction. He replaced the ambiguous label "natural kind" by "warranted assertion" to emphasize the conditional nature of all human knowings. Assuming kinds to be given unconditional knowings leads to the error of assuming that conceptual universal propositions can serve as evidence for generic propositions; observed consequences affirm unobservable imagined causes. "For an 'inference' that is not "grounded" in the evidential nature of the material from which it is drawn is "not" an inference. It is a more or less wild guess." Modern induction is not a guess about natural kinds, but a means to create instrumental understanding. Willard Van Orman Quine. In 1969, Willard Van Orman Quine brought the term "natural kind" into contemporary analytic philosophy with an essay bearing that title. His opening paragraph laid out his approach in three parts. First, it questioned the logical and scientific legitimacy of reasoning inductively by counting a few examples posting traits imputed to all members of a kind: "What tends to confirm an induction?" For Quine, induction reveals warranted kinds by repeated observation of visible similarities. Second, it assumed that color can be a characteristic trait of natural kinds, despite some logical puzzles: hypothetical colored kinds such as non-black non-ravens and green-blue emeralds. Finally, it suggested that human psychological structure can explain the illogical success of induction: "an innate flair that we have for natural kinds. He started with the logical hypothesis that, if all ravens are black—an observable natural kind—then non-black non-ravens are equally a natural kind: "... 
each [observed] black raven tends to confirm the law [universal proposition] that all ravens are black ..." Observing shared generic traits warrants the inductive universal prediction that future experience will confirm the sharing: "And every reasonable [universal] expectation depends on resemblance of [generic] circumstances, together with our tendency to expect similar causes to have similar effects." "The notion of a kind and the notion of similarity or resemblance seem to be variants or adaptations of a single [universal] notion. Similarity is immediately definable in terms of kind; for things are similar when they are two of a kind." Quine posited an intuitive human capacity to recognize criteria for judging degrees of similarity among objects, an "innate flair for natural kinds.” These criteria work instrumentally when applied inductively: "... why does our innate subjective spacing [classification] of [existential] qualities accord so well with the functionally relevant [universal] groupings in nature as to make our inductions tend to come out right?" He admitted that generalizing after observing a few similarities is scientifically and logically unjustified. The numbers and degrees of similarities and differences humans experience are infinite. But the method is justified by its instrumental success in revealing natural kinds. The "problem of induction" is how humans "should stand better than random or coin-tossing chances of coming out right when we predict by inductions which are based on our innate, scientifically unjustified similarity standards."     A standard of similarity is in some sense innate. This point is not against empiricism; it is a commonplace of behavioral psychology. A response to a red circle, if it is rewarded, will be elicited by a pink eclipse more readily than by a blue triangle; the red circle resembles the pink ellipse more than the blue triangle. Without some such prior spacing of qualities, we could never acquire a [classification] habit; all stimuli would be equally alike and equally different. Quine credited human ability to recognize colors as natural kinds to the evolutionary function of color in human survival—distinguishing safe from poisonous kinds of food. He recognized that modern science often judges color similarities to be superficial, but denied that equating existential similarities with abstract universal similarities makes natural kinds any less permanent and important. The human brain's capacity to recognize abstract kinds joins the brain's capacity to recognize existential similarities.     Credit is due to man's inveterate ingenuity, or human sapience, for having worked around the blinding dazzle of color vision and found the more significant regularities elsewhere. Evidently natural selection has dealt with the conflict [between visible and invisible similarities] by endowing man doubly: with both a color-slanted quality space and the ingenuity to rise above it.&lt;br&gt;    He has risen above it by developing modified systems of kinds, hence modified similarity standards for scientific purposes. By the [inductive] trial-and-error process of theorizing he has regrouped things into new kinds which prove to lend themselves to many inductions better than the old.     A man's judgments of similarity do and should depend on his theory [universal propositions], on his beliefs; but similarity itself, what the man's judgments purport to be judgments of, [is] an objective relation in the world. 
It belongs in the [generic] subject matter not of our [universal] theory ... about the world, but of our [universal] theory of the [generic] world itself. Such would be the acceptable and reputable sort of similarity concept, if it could be defined. Quine argued that the success of innate and learned criteria for classifying kinds on the basis of similarities observed in small samples of kinds, constitutes evidence of the existence of natural kinds; observed consequences affirm imagined causes. His reasoning continues to provoke philosophical debates. Hilary Putnam. In 1975, Hilary Putnam rejected descriptivist ideas about natural kind by elaborating on semantic concepts in language. Putnam explains his rejection of descriptivist and traditionalist approaches to natural kinds with semantic reasoning, and insists that natural kinds can not be thought of via descriptive processes or creating endless lists of properties. In Putnam's Twin Earth thought experiment, one is asked to consider the extension of "water" when confronted with an alternate version of "water" on an imagined "Twin Earth." This "water" is composed of chemical XYZ, as opposed to H2O. However, in all other describable aspects, it is the same as Earth’s "water." Putnam argues that the mere descriptions of an object, such as "water," is insufficient in defining natural kind. There are underlying aspects, such as chemical composition, that may go unaccounted for unless experts are consulted. This information provided by experts is what Putnam argues will ultimately define natural kinds. Putnam calls the essential information used to define natural kind "core facts." This discussion arises in part in response to what he refers to as "Quine’s pessimism" of theory of meaning. Putnam claims that a natural kind can be referred to via its associated stereotype. This stereotype must be a normal member of the category, and is itself defined by core facts as determined by experts. By conveying these core facts, the essential and appropriate use of natural kind terms can be conveyed. The process of conveying core facts to communicate the essence and appropriate term of a natural kind term is shown in Putnams example of describing a lemon and a tiger. With a lemon, it is possible to communicate the stimulus-meaning of what a lemon is by simply showing someone a lemon. In the case of a tiger, on the other hand, it is considerably more complicated to show someone a tiger, but a speaker can just as readily explain what a tiger is by communicating its core facts. By conveying the core facts of a tiger (e.g. big cat, four legs, orange, black stripes, etc.), the listener can, in theory, go on to use the word "tiger" correctly and refer to its extension accurately. Hilary Kornblith. In 1993, Hilary Kornblith published a review of debates about natural kinds since Quine had launched that epistemological project a quarter-century earlier. He evaluated Quine's "picture of natural knowledge" as natural kinds, along with subsequent refinements. He found still acceptable Quine's original assumption that discovering knowledge of mind-independent reality depends on inductive generalisations based on limited observations, despite its being illogical. Equally acceptable was Quine's further assumption that instrumental success of inductive reasoning confirms both the existence of natural kinds and the legitimacy of the method.     
I argue that natural kinds make inductive knowledge of the world possible because the clustering of properties characteristic of natural kinds makes inferences from the presence of some of these properties to the presence of others reliable. Were it not for the existence of natural kinds and the causal structure they require, any attempt to infer the existence of some properties from the presence of others would be no more than quixotic; reliable inductive inference would be impossible. The [generic] causal structure of the world as exhibited in [universal] natural kinds thus provides the natural ground of inductive inference. Quine's assumption of an innate human psychological process—"standard of similarity," "subjective spacing of qualities"—also remained unquestioned. Kornbluth strengthened this assumption with new labels for the necessary cognitive qualities: "native processes of belief acquisition," "the structure of human conceptual representation," "native inferential processes," "reasonably accurate detectors of covariation." "To my mind, the primary case to be made for the view that our [universal] psychological processes dovetail with the [generic] causal structure of the world comes ... from the success of science. Kornblith denied that this logic makes human classifications the same as mind-independent classifications: "The categories of modern science, of course, are not innate." But he offered no explanation of how kinds that work conditionally can be distinguished from mind-independent unchanging kinds.     If the scientific categories of mature sciences did not correspond, at least approximately, to real kinds in nature, but instead merely grouped objects together on the basis of salient observable properties which somehow answer to our interests, it would be utterly miraculous that inductions using these scientific categories tend to issue in accurate predictions. Inductive inference can only work ... if there is something in nature binding together the [generic] properties which we use to identify kinds. ... Unobservables [universal propositions] are then postulated to explain the constant conjunction of observable properties.     We approach the world by presupposing that it contains natural kinds. Our inferences depend on this presupposition... This presupposition thus gives us a built-in advantage in understanding what the world is like, and thereby makes inductive understanding of the world a real possibility.     When a population [kind] is uniform with respect to some [generic] property, [inductive] inferences from small samples, and indeed, "from a single case", are perfectly reliable. If I note that a [generic] sample of [universal] copper conducts electricity and straightaway conclude that all copper conducts electricity, then I will do just as well as someone ... checking a very large number of copper samples for their conductivity. Kornblith didn't explain how tedious modern induction accurately generalizes from a few generic traits to all of some universal kind. He attributed such success to individual sensitivity that a single case is representative of all of a kind.     If we are sensitive to the situations in which a population is uniform with respect to some property, then making inferences on the basis of very small samples will be a reliable and efficient way to gain information about a population [natural kind].". He argued that even human infants are intuitively sensitive to natural classifications. 
:    "From the beginning, children assume that the natural world is divided into kinds on the basis of underlying features which are responsible for their superficial similarities, and that these similarities are an uncertain guide to that real underlying structure." Accepting intuition as a legitimate ground for inductive inferences from small samples, Kornblith criticized popular arguments by Amos Tversky and Daniel Kahneman that intuition is irrational. He continued to argue that traditional induction explains the success of modern science.     Our [universal] conceptual and inferential tendencies jointly conspire, at least roughly, to carve nature at its [generic] joints and project the features of a kind which are essential to it. This preestablished harmony between the [generic] causal structure of the world and the [universal] conceptual and inferential structure of our minds produces reliable inductive inference. Hasok Chang and Rasmus Winther. Hasok Chang and Rasmus Winther contributed essays to a collection entitled "Natural Kinds and Classification in Scientific Practice", published in 2016. The editor of the collection, Catherine Kendig, argued for a modern meaning of natural kinds, rejecting Aristotelian classifications of objects according to their "essences, laws, sameness relations, fundamental properties ... and how these map out the ontological space of the world." She thus dropped the traditional supposition that natural kinds exist permanently and independently of human reasoning. She collected original works examining results of discipline-specific classifications of kinds: "the empirical use of natural kinds and what I dub 'activities of natural kinding' and 'natural kinding practices'." Her natural kinds include scientific disciplines themselves, each with its own methods of inquiry and classifications or taxonomies.. Chang's contribution displayed Kendig's "natural kinding activities" or "practice turn" by reporting classifications in the mature discipline of chemistry—a field renowned for examples of timeless natural kinds: "All water is H2O;" "All gold has atomic number 79." He explicitly rejected Quine's basic assumption that natural kinds are real generic objects. "When I speak of a (natural) kind in this chapter, I am referring to a [universal] classificatory concept, rather than a collection of objects." His kinds result from humanity's continuous knowledge-seeking activities called science and philosophy. "Putting these notions more unambiguously in terms of concepts rather than objects, I maintain: if we hit upon some stable and effective classificatory concepts in our inquiry, we should cherish them (calling them 'natural kinds' would be one clear way of doing so), but without presuming that we have thereby found some eternal essences. He also rejected the position taken by Bird and Tobin in our third quote above. "Alexander Bird and Emma Tobin’s succinct characterization of natural kinds is helpful here, as a foil: ‘to say that a kind is "natural" is to say that it corresponds to a grouping or ordering that does not depend on humans’. My view is precisely the opposite, to the extent that scientific inquiry does depend on humans." For Chang, induction creates conditionally warranted kinds by "epistemic iteration"—refining classifications developmentally to reveal how constant conjunctions of relevant traits work: "fundamental classificatory concepts become refined and corrected through our practical scientific engagement with nature. 
Any considerable and lasting [instrumental] success of such engagement generates confidence in the classificatory concepts used in it, and invites us to consider them as 'natural'." Among other examples, Chang reported the inductive iterative process by which chemists gradually redefined the kind "element." The original hypothesis was that anything that cannot be decomposed by fire or acids is an element. Learning that some chemical reactions are reversible led to the discovery of weight as a constant through reactions. And then it was discovered that some reactions involve definite and invariable weight ratios, refining understanding of constant traits. "Attempts to establish and explain the combining-weight regularities led to the development of the chemical atomic theory by John Dalton and others. ... Chemical elements were later redefined in terms of atomic number (the number of protons in the nucleus)." Chang claimed his examples of classification practices in chemistry confirmed the fallacy of the traditional assumption that natural kinds exist as mind-independent reality. He attributed this belief more to imagining supernatural intervention in the world than to illogical induction. He did not consider the popular belief that innate psychological capacities enable traditional induction to work. "Much natural-kind talk has been driven by an intuitive metaphysical essentialism that concerns itself with an objective [generic] order of nature whose [universal] knowledge could, ironically, only be obtained by a supernatural being. Let us renounce such an unnatural notion of natural kinds. Instead, natural kinds should be conceived as something we humans may succeed in inventing and improving through scientific practice." Rasmus Winther's contribution to "Natural Kinds and Classification in Scientific Practice" gave new meaning to natural objects and qualities in the nascent discipline of Geographic Information Science (GIS). This "inter-discipline" engages in discovering patterns in—and displaying spatial kinds of—data, using methods that make its results unique natural kinds. But it still creates kinds using induction to identify instrumental traits. "Collecting and collating geographical data, building geographical data-bases, and engaging in spatial analysis, visualization, and map-making all require organizing, typologizing, and classifying geographic space, objects, relations, and processes. I focus on the use of natural kinds ..., showing how practices of making and using kinds are contextual, fallible, plural, and purposive. The rich family of kinds involved in these activities are here baptized mapping kinds." He later identified sub-kinds of mapping kinds as "calibrating kinds," "feature kinds," and "object kinds" of "data model types." Winther identified "inferential processes of abstraction and generalization" as methods used by GIS, and explained how they generate digital maps. He illustrated two kinds of inquiry procedures, with sub-procedures to organize data. They are reminiscent of Dewey's multiple steps in modern inductive and deductive inference. Methods for transforming generic phenomena into kinds involve reducing complexity, amplifying, joining, and separating. Methods for selecting among generic kinds involve elimination, classification, and collapse of data. He argued that these methods for mapping kinds can be practiced in other disciplines, and briefly considered how they might harmonize three conflicting philosophical perspectives on natural kinds. 
Some philosophers believe there can be a "pluralism" of kinds and classifications. They prefer to speak of "relevant" and "interesting" kinds rather than eternal "natural" kinds. They may be called social constructivists whose kinds are human products. Chang's conclusions that natural kinds are human-created and instrumentally useful would appear to put him in this group. Other philosophers, including Quine, examine the role of kinds in scientific inference. Winther does not examine Quine's commitment to traditional induction generalizing from small samples of similar objects. But he does accept Quine's willingness to call human-identified kinds that work natural. "Quine holds that kinds are "functionally relevant groupings in nature" whose recognition permits our inductions to "tend to come out right." That is, kinds ground fallible inductive inferences and predictions, so essential to scientific projects including those of GIS and cartography." Finally, Winther identified a philosophical perspective seeking to reconstruct rather than reject belief in natural kinds. He placed Dewey in this group, ignoring Dewey's rejection of the traditional label in favor of "warranted assertions." "Dewey resisted the standard view of natural kinds, inherited from the Greeks ... Instead, Dewey presents an analysis of kinds (and classes and universals) as fallible and context-specific hypotheses permitting us to address problematic situations effectively." Winther concludes that classification practices used in Geographic Information Science are able to harmonize these conflicting philosophical perspectives on natural kinds. "GIS and cartography suggest that kinds are simultaneously discovered [as pre-existing structures] and constructed [as human classifications]. Geographic features, processes, and objects are of course real. Yet we must structure them in our data models and, subsequently, select and transform them in our maps. Realism and (social) constructivism are hence not exclusive in this field." References. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "H_2 O" } ]
https://en.wikipedia.org/wiki?curid=1137435
11376
Floating-point arithmetic
Computer approximation for real numbers In computing, floating-point arithmetic (FP) is arithmetic that represents subsets of real numbers using an integer with a fixed precision, called the significand, scaled by an integer exponent of a fixed base. Numbers of this form are called floating-point numbers. For example, 12.345 is a floating-point number in base ten with five digits of precision: formula_0 However, unlike 12.345, 12.3456 is not a floating-point number in base ten with five digits of precision—it needs six digits of precision; the nearest floating-point number with only five digits is 12.346. In practice, most floating-point systems use base two, though base ten (decimal floating point) is also common. Floating-point arithmetic operations, such as addition and division, approximate the corresponding real number arithmetic operations by rounding any result that is not a floating-point number itself to a nearby floating-point number. For example, in a floating-point arithmetic with five base-ten digits of precision, the sum 12.345 + 1.0001 = 13.3451 might be rounded to 13.345. The term "floating point" refers to the fact that the number's radix point can "float" anywhere to the left, right, or between the significant digits of the number. This position is indicated by the exponent, so floating point can be considered a form of scientific notation. A floating-point system can be used to represent, with a fixed number of digits, numbers of very different orders of magnitude — such as the number of meters between galaxies or between protons in an atom. For this reason, floating-point arithmetic is often used to allow very small and very large real numbers that require fast processing times. The result of this dynamic range is that the numbers that can be represented are not uniformly spaced; the difference between two consecutive representable numbers varies with their exponent. Over the years, a variety of floating-point representations have been used in computers. In 1985, the IEEE 754 Standard for Floating-Point Arithmetic was established, and since the 1990s, the most commonly encountered representations are those defined by the IEEE. The speed of floating-point operations, commonly measured in terms of FLOPS, is an important characteristic of a computer system, especially for applications that involve intensive mathematical calculations. A floating-point unit (FPU, colloquially a math coprocessor) is a part of a computer system specially designed to carry out operations on floating-point numbers. Overview. Floating-point numbers. A number representation specifies some way of encoding a number, usually as a string of digits. There are several mechanisms by which strings of digits can represent numbers. In standard mathematical notation, the digit string can be of any length, and the location of the radix point is indicated by placing an explicit "point" character (dot or comma) there. If the radix point is not specified, then the string implicitly represents an integer and the unstated radix point would be off the right-hand end of the string, next to the least significant digit. In fixed-point systems, a position in the string is specified for the radix point. So a fixed-point scheme might use a string of 8 decimal digits with the decimal point in the middle, whereby "00012345" would represent 0001.2345. 
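The scaled-integer view above can be made concrete on an actual machine. The following is a minimal C sketch (the sample values are arbitrary illustrations, not taken from any standard) that uses the standard library function frexp to split a double into a significand and a base-2 exponent; note that frexp reports the significand as a fraction in [0.5, 1) rather than as an integer, which is an equivalent convention.

#include <stdio.h>
#include <math.h>

int main(void) {
    double values[] = { 12.345, 0.000001234, 1.234e30 };   /* very different magnitudes */
    for (int i = 0; i < 3; i++) {
        int e2;
        double frac = frexp(values[i], &e2);   /* values[i] == frac * 2^e2, 0.5 <= |frac| < 1 */
        printf("%g = %.17g * 2^%d\n", values[i], frac, e2);
    }
    return 0;
}

The exponent moves while the number of significand bits stays fixed, which is exactly the "floating" radix point described above.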
In scientific notation, the given number is scaled by a power of 10, so that it lies within a specific range—typically between 1 and 10, with the radix point appearing immediately after the first digit. As a power of ten, the scaling factor is then indicated separately at the end of the number. For example, the orbital period of Jupiter's moon Io is 152,853.5047 seconds, a value that would be represented in standard-form scientific notation as 1.528535047 × 10^5 seconds. Floating-point representation is similar in concept to scientific notation. Logically, a floating-point number consists of a signed digit string of a given length in a given base, known as the significand, and a signed integer exponent. To derive the value of the floating-point number, the "significand" is multiplied by the "base" raised to the power of the "exponent", equivalent to shifting the radix point from its implied position by a number of places equal to the value of the exponent—to the right if the exponent is positive or to the left if the exponent is negative. Using base-10 (the familiar decimal notation) as an example, the number 152,853.5047, which has ten decimal digits of precision, is represented as the significand 1,528,535,047 together with 5 as the exponent. To determine the actual value, a decimal point is placed after the first digit of the significand and the result is multiplied by 10^5 to give 1.528535047 × 10^5, or 152,853.5047. In storing such a number, the base (10) need not be stored, since it will be the same for the entire range of supported numbers, and can thus be inferred. Symbolically, this final value is: formula_1 where s is the significand (ignoring any implied decimal point), p is the precision (the number of digits in the significand), b is the base (in our example, this is the number "ten"), and e is the exponent. Historically, several number bases have been used for representing floating-point numbers, with base two (binary) being the most common, followed by base ten (decimal floating point), and other less common varieties, such as base sixteen (hexadecimal floating point), base eight (octal floating point), base four (quaternary floating point), base three (balanced ternary floating point) and even base 256 and base 65,536. A floating-point number is a rational number, because it can be represented as one integer divided by another; for example 1.45 × 10^3 is (145/100)×1000 or 145,000/100. The base determines the fractions that can be represented; for instance, 1/5 cannot be represented exactly as a floating-point number using a binary base, but 1/5 can be represented exactly using a decimal base (0.2, or 2 × 10^−1). However, 1/3 cannot be represented exactly by either binary (0.010101...) or decimal (0.333...), but in base 3, it is trivial (0.1 or 1 × 3^−1). The occasions on which infinite expansions occur depend on the base and its prime factors. The way in which the significand (including its sign) and exponent are stored in a computer is implementation-dependent. The common IEEE formats are described in detail later and elsewhere, but as an example, in the binary single-precision (32-bit) floating-point representation, formula_2, and so the significand is a string of 24 bits. For instance, the number π's first 33 bits are: formula_3 In this binary expansion, let us denote the positions from 0 (leftmost bit, or most significant bit) to 32 (rightmost bit). The 24-bit significand will stop at position 23, shown as the underlined bit above. The next bit, at position 24, is called the "round bit" or "rounding bit". It is used to round the 33-bit approximation to the nearest 24-bit number (there are specific rules for halfway values, which is not the case here). 
This bit, which is 1 in this example, is added to the integer formed by the leftmost 24 bits, yielding: formula_4 When this is stored in memory using the IEEE 754 encoding, this becomes the significand s. The significand is assumed to have a binary point to the right of the leftmost bit. So, the binary representation of π is calculated from left-to-right as follows: formula_5 where p is the precision (24 in this example), n is the position of the bit of the significand from the left (starting at 0 and finishing at 23 here) and e is the exponent (1 in this example). It can be required that the most significant digit of the significand of a non-zero number be non-zero (except when the corresponding exponent would be smaller than the minimum one). This process is called "normalization". For binary formats (which use only the digits 0 and 1), this non-zero digit is necessarily 1. Therefore, it does not need to be represented in memory, allowing the format to have one more bit of precision. This rule is variously called the "leading bit convention", the "implicit bit convention", the "hidden bit convention", or the "assumed bit convention". Alternatives to floating-point numbers. The floating-point representation is by far the most common way of representing in computers an approximation to real numbers. However, there are alternatives: History. In 1914, the Spanish engineer Leonardo Torres Quevedo published "Essays on Automatics", where he designed a special-purpose electromechanical calculator based on Charles Babbage's analytical engine and described a way to store floating-point numbers in a consistent manner. He stated that numbers will be stored in exponential format as "n" x 10formula_9, and offered three rules by which consistent manipulation of floating-point numbers by machines could be implemented. For Torres, ""n" will always be the same number of digits (e.g. six), the first digit of "n" will be of order of tenths, the second of hundredths, etc., and one will write each quantity in the form: "n"; "m"." The format he proposed shows the need for a fixed-sized significand as is presently used for floating-point data, fixing the location of the decimal point in the significand so that each representation was unique, and how to format such numbers by specifying a syntax to be used that could be entered through a typewriter, as was the case of his Electromechanical Arithmometer in 1920. In 1938, Konrad Zuse of Berlin completed the Z1, the first binary, programmable mechanical computer; it uses a 24-bit binary floating-point number representation with a 7-bit signed exponent, a 17-bit significand (including one implicit bit), and a sign bit. The more reliable relay-based Z3, completed in 1941, has representations for both positive and negative infinities; in particular, it implements defined operations with infinity, such as formula_10, and it stops on undefined operations, such as formula_11. Zuse also proposed, but did not complete, carefully rounded floating-point arithmetic that includes formula_12 and NaN representations, anticipating features of the IEEE Standard by four decades. In contrast, von Neumann recommended against floating-point numbers for the 1951 IAS machine, arguing that fixed-point arithmetic is preferable. The first "commercial" computer with floating-point hardware was Zuse's Z4 computer, designed in 1942–1945. In 1946, Bell Laboratories introduced the Model V, which implemented decimal floating-point numbers. 
The Pilot ACE has binary floating-point arithmetic, and it became operational in 1950 at National Physical Laboratory, UK. Thirty-three were later sold commercially as the English Electric DEUCE. The arithmetic is actually implemented in software, but with a one megahertz clock rate, the speed of floating-point and fixed-point operations in this machine was initially faster than that of many competing computers. The mass-produced IBM 704 followed in 1954; it introduced the use of a biased exponent. For many decades after that, floating-point hardware was typically an optional feature, and computers that had it were said to be "scientific computers", or to have "scientific computation" (SC) capability (see also Extensions for Scientific Computation (XSC)). It was not until the launch of the Intel i486 in 1989 that "general-purpose" personal computers had floating-point capability in hardware as a standard feature. The UNIVAC 1100/2200 series, introduced in 1962, supported two floating-point representations: The IBM 7094, also introduced in 1962, supported single-precision and double-precision representations, but with no relation to the UNIVAC's representations. Indeed, in 1964, IBM introduced hexadecimal floating-point representations in its System/360 mainframes; these same representations are still available for use in modern z/Architecture systems. In 1998, IBM implemented IEEE-compatible binary floating-point arithmetic in its mainframes; in 2005, IBM also added IEEE-compatible decimal floating-point arithmetic. Initially, computers used many different representations for floating-point numbers. The lack of standardization at the mainframe level was an ongoing problem by the early 1970s for those writing and maintaining higher-level source code; these manufacturer floating-point standards differed in the word sizes, the representations, and the rounding behavior and general accuracy of operations. Floating-point compatibility across multiple computing systems was in desperate need of standardization by the early 1980s, leading to the creation of the IEEE 754 standard once the 32-bit (or 64-bit) word had become commonplace. This standard was significantly based on a proposal from Intel, which was designing the i8087 numerical coprocessor; Motorola, which was designing the 68000 around the same time, gave significant input as well. In 1989, mathematician and computer scientist William Kahan was honored with the Turing Award for being the primary architect behind this proposal; he was aided by his student Jerome Coonen and a visiting professor, Harold Stone. Among the x86 innovations are these: Range of floating-point numbers. A floating-point number consists of two fixed-point components, whose range depends exclusively on the number of bits or digits in their representation. Whereas components linearly depend on their range, the floating-point range linearly depends on the significand range and exponentially on the range of the exponent component, which gives the number an outstandingly wider range. On a typical computer system, a "double-precision" (64-bit) binary floating-point number has a coefficient of 53 bits (including 1 implied bit), an exponent of 11 bits, and 1 sign bit. Since 2^10 = 1024, the complete range of the positive normal floating-point numbers in this format is from 2^−1022 ≈ 2 × 10^−308 to approximately 2^1024 ≈ 2 × 10^308. The number of normal floating-point numbers in a system ("B", "P", "L", "U"), where "B" is the base of the system, "P" is the precision of the significand (in base "B"), "L" is the smallest exponent and "U" is the largest exponent, is formula_13. 
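As a worked check of this count, the formula can be evaluated for the two most common IEEE formats; the figures below follow from B = 2, P = 24, L = −126, U = 127 (single precision) and B = 2, P = 53, L = −1022, U = 1023 (double precision). A minimal C sketch:

#include <stdio.h>

int main(void) {
    /* 2 * (B - 1) * B^(P-1) * (U - L + 1) for IEEE binary32 and binary64 */
    unsigned long long singles = 2ULL * 1 * (1ULL << 23) * (127 - (-126) + 1);
    unsigned long long doubles = 2ULL * 1 * (1ULL << 52) * (1023 - (-1022) + 1);
    printf("normal binary32 values: %llu\n", singles);   /* 4,261,412,864 */
    printf("normal binary64 values: %llu\n", doubles);   /* 18,428,729,675,200,069,632 */
    return 0;
}

The same totals can be reached by counting bit patterns directly: two signs, 254 (or 2046) usable exponent codes, and 2^23 (or 2^52) stored significand values.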
There is a smallest positive normal floating-point number, Underflow level = UFL = formula_14, which has a 1 as the leading digit and 0 for the remaining digits of the significand, and the smallest possible value for the exponent. There is a largest floating-point number, Overflow level = OFL = formula_15, which has "B" − 1 as the value for each digit of the significand and the largest possible value for the exponent. In addition, there are representable values strictly between −UFL and UFL. Namely, positive and negative zeros, as well as subnormal numbers. IEEE 754: floating point in modern computers. The IEEE standardized the computer representation for binary floating-point numbers in IEEE 754 (a.k.a. IEC 60559) in 1985. This first standard is followed by almost all modern machines. It was revised in 2008. IBM mainframes support IBM's own hexadecimal floating point format and IEEE 754-2008 decimal floating point in addition to the IEEE 754 binary format. The Cray T90 series had an IEEE version, but the SV1 still uses Cray floating-point format. The standard provides for many closely related formats, differing in only a few details. Five of these formats are called "basic formats", and others are termed "extended precision formats" and "extendable precision format". Three formats are especially widely used in computer hardware and languages: Increasing the precision of the floating-point representation generally reduces the amount of accumulated round-off error caused by intermediate calculations. Other IEEE formats include: Any integer with absolute value less than 2^24 can be exactly represented in the single-precision format, and any integer with absolute value less than 2^53 can be exactly represented in the double-precision format. Furthermore, a wide range of powers of 2 times such a number can be represented. These properties are sometimes used for purely integer data, to get 53-bit integers on platforms that have double-precision floats but only 32-bit integers. The standard specifies some special values, and their representation: positive infinity (+∞), negative infinity (−∞), a negative zero (−0) distinct from ordinary ("positive") zero, and "not a number" values (NaNs). Comparison of floating-point numbers, as defined by the IEEE standard, is a bit different from usual integer comparison. Negative and positive zero compare equal, and every NaN compares unequal to every value, including itself. All finite floating-point numbers are strictly smaller than +∞ and strictly greater than −∞, and they are ordered in the same way as their values (in the set of real numbers). Internal representation. Floating-point numbers are typically packed into a computer datum as the sign bit, the exponent field, and the significand or mantissa, from left to right. For the IEEE 754 binary formats (basic and extended) which have extant hardware implementations, they are apportioned as follows: While the exponent can be positive or negative, in binary formats it is stored as an unsigned number that has a fixed "bias" added to it. Values of all 0s in this field are reserved for the zeros and subnormal numbers; values of all 1s are reserved for the infinities and NaNs. The exponent range for normal numbers is [−126, 127] for single precision, [−1022, 1023] for double, or [−16382, 16383] for quad. Normal numbers exclude subnormal values, zeros, infinities, and NaNs. In the IEEE binary interchange formats the leading 1 bit of a normalized significand is not actually stored in the computer datum. 
It is called the "hidden" or "implicit" bit. Because of this, the single-precision format actually has a significand with 24 bits of precision, the double-precision format has 53, and quad has 113. For example, it was shown above that π, rounded to 24 bits of precision, has: sign = 0; "e" = 1; "s" = 110010010000111111011011 (including the hidden bit). The sum of the exponent bias (127) and the exponent (1) is 128, so this is represented in the single-precision format as 0 10000000 10010010000111111011011 (excluding the hidden bit), or 40490FDB as a hexadecimal number. The layout for 32-bit floating point is thus sign, exponent, fraction from left to right, and the 64-bit ("double") layout is similar. Other notable floating-point formats. In addition to the widely used IEEE 754 standard formats, other floating-point formats are used, or have been used, in certain domain-specific areas. Representable numbers, conversion and rounding. By their nature, all numbers expressed in floating-point format are rational numbers with a terminating expansion in the relevant base (for example, a terminating decimal expansion in base-10, or a terminating binary expansion in base-2). Irrational numbers, such as π or √2, or non-terminating rational numbers, must be approximated. The number of digits (or bits) of precision also limits the set of rational numbers that can be represented exactly. For example, the decimal number 123456789 cannot be exactly represented if only eight decimal digits of precision are available (it would be rounded to one of the two straddling representable values, 12345678 × 10^1 or 12345679 × 10^1); the same applies to non-terminating digits (the repeating decimal .555... would be rounded to either .55555555 or .55555556). When a number is represented in some format (such as a character string) which is not a native floating-point representation supported in a computer implementation, then it will require a conversion before it can be used in that implementation. If the number can be represented exactly in the floating-point format then the conversion is exact. If there is not an exact representation then the conversion requires a choice of which floating-point number to use to represent the original value. The representation chosen will have a different value from the original, and the value thus adjusted is called the "rounded value". Whether or not a rational number has a terminating expansion depends on the base. For example, in base-10 the number 1/2 has a terminating expansion (0.5) while the number 1/3 does not (0.333...). In base-2 only rationals with denominators that are powers of 2 (such as 1/2 or 3/16) are terminating. Any rational with a denominator that has a prime factor other than 2 will have an infinite binary expansion. This means that numbers that appear to be short and exact when written in decimal format may need to be approximated when converted to binary floating-point. For example, the decimal number 0.1 is not representable in binary floating-point of any finite precision; the exact binary representation would have a "1100" sequence continuing endlessly: "e" = −4; "s" = 1100110011001100110011001100110011..., where, as previously, "s" is the significand and "e" is the exponent. When rounded to 24 bits this becomes "e" = −4; "s" = 110011001100110011001101, which is actually 0.100000001490116119384765625 in decimal. As a further example, the real number π, represented in binary as an infinite sequence of bits is 11.0010010000111111011010101000100010000101101000110000100011010011... but is 11.0010010000111111011011 when approximated by rounding to a precision of 24 bits. In binary single-precision floating-point, this is represented as "s" = 1.10010010000111111011011 with "e" = 1. 
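On hardware that uses the IEEE 754 binary32 format (virtually all current general-purpose processors), this bit pattern can be inspected directly. A small C sketch, in which the field extraction assumes the standard 1-8-23 layout described above:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    float pi = 3.14159265358979323846f;   /* rounds to the nearest binary32 value */
    uint32_t bits;
    memcpy(&bits, &pi, sizeof bits);      /* portably reinterpret the stored bit pattern */
    printf("bit pattern : 0x%08X\n", (unsigned)bits);        /* 0x40490FDB */
    printf("sign        : %u\n", (unsigned)(bits >> 31));    /* 0 */
    printf("exponent    : %u biased, %d unbiased\n",
           (unsigned)((bits >> 23) & 0xFFu),
           (int)((bits >> 23) & 0xFFu) - 127);               /* 128 and 1 */
    printf("fraction    : 0x%06X (23 stored bits; the leading 1 is implicit)\n",
           (unsigned)(bits & 0x7FFFFFu));
    return 0;
}

The printed pattern 0x40490FDB is the hexadecimal form of the sign, biased exponent, and fraction fields given above.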
This rounded single-precision value has a decimal value of 3.1415927410125732421875, whereas a more accurate approximation of the true value of π is 3.14159265358979323846264338327950... The result of rounding differs from the true value by about 0.03 parts per million, and matches the decimal representation of π in the first 7 digits. The difference is the discretization error and is limited by the machine epsilon. The arithmetical difference between two consecutive representable floating-point numbers which have the same exponent is called a unit in the last place (ULP). For example, if there is no representable number lying between the representable numbers 1.45a70c22 and 1.45a70c24 (hexadecimal), the ULP is 2 × 16^−8, or 2^−31. For numbers with a base-2 exponent part of 0, i.e. numbers with an absolute value higher than or equal to 1 but lower than 2, an ULP is exactly 2^−23 or about 10^−7 in single precision, and exactly 2^−53 or about 10^−16 in double precision. The mandated behavior of IEEE-compliant hardware is that the result be within one-half of a ULP. Rounding modes. Rounding is used when the exact result of a floating-point operation (or a conversion to floating-point format) would need more digits than there are digits in the significand. IEEE 754 requires "correct rounding": that is, the rounded result is as if infinitely precise arithmetic was used to compute the value and then rounded (although in implementation only three extra bits are needed to ensure this). There are several different rounding schemes (or "rounding modes"). Historically, truncation was the typical approach. Since the introduction of IEEE 754, the default method ("round to nearest, ties to even", sometimes called Banker's Rounding) is more commonly used. This method rounds the ideal (infinitely precise) result of an arithmetic operation to the nearest representable value, and gives that representation as the result. In the case of a tie, the value that would make the significand end in an even digit is chosen. The IEEE 754 standard requires the same rounding to be applied to all fundamental algebraic operations, including square root and conversions, when there is a numeric (non-NaN) result. It means that the results of IEEE 754 operations are completely determined in all bits of the result, except for the representation of NaNs. ("Library" functions such as cosine and log are not mandated.) Alternative rounding options are also available. IEEE 754 specifies the following rounding modes: Alternative modes are useful when the amount of error being introduced must be bounded. Applications that require a bounded error are multi-precision floating-point, and interval arithmetic. The alternative rounding modes are also useful in diagnosing numerical instability: if the results of a subroutine vary substantially between rounding to + and − infinity then it is likely numerically unstable and affected by round-off error. Binary-to-decimal conversion with minimal number of digits. Converting a double-precision binary floating-point number to a decimal string is a common operation, but an algorithm producing results that are both accurate and minimal did not appear in print until 1990, with Steele and White's Dragon4. Some of the improvements since then include: Many modern language runtimes use Grisu3 with a Dragon4 fallback. Decimal-to-binary conversion. The problem of parsing a decimal string into a binary FP representation is complex, with an accurate parser not appearing until Clinger's 1990 work (implemented in dtoa.c). 
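The practical consequence of these two conversion problems is visible with ordinary library calls. In C, printing a double with 17 significant digits is always enough for the value to survive a round trip through a decimal string, whereas a shorter form may not be; a minimal sketch (the choice of 0.1 + 0.2 as the test value is an arbitrary illustration):

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    double x = 0.1 + 0.2;   /* not exactly 0.3 in binary64 */
    char six[32], seventeen[32];
    snprintf(six, sizeof six, "%g", x);                  /* 6 significant digits: "0.3" */
    snprintf(seventeen, sizeof seventeen, "%.17g", x);   /* enough digits to identify the value uniquely */
    printf("%s parses back equal: %s\n", six,
           strtod(six, NULL) == x ? "yes" : "no");        /* no  */
    printf("%s parses back equal: %s\n", seventeen,
           strtod(seventeen, NULL) == x ? "yes" : "no");  /* yes */
    return 0;
}

Shortest-round-trip printing in the Dragon4/Grisu line of work produces the minimal string that still parses back to the identical bit pattern.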
Further work has likewise progressed in the direction of faster parsing. Floating-point operations. For ease of presentation and understanding, decimal radix with 7 digit precision will be used in the examples, as in the IEEE 754 "decimal32" format. The fundamental principles are the same in any radix or precision, except that normalization is optional (it does not affect the numerical value of the result). Here, "s" denotes the significand and "e" denotes the exponent. Addition and subtraction. A simple method to add floating-point numbers is to first represent them with the same exponent. In the example below, the second number is shifted right by three digits, and one then proceeds with the usual addition method: 123456.7 = 1.234567 × 10^5 101.7654 = 1.017654 × 10^2 = 0.001017654 × 10^5 Hence: 123456.7 + 101.7654 = (1.234567 × 10^5) + (1.017654 × 10^2) = (1.234567 × 10^5) + (0.001017654 × 10^5) = (1.234567 + 0.001017654) × 10^5 = 1.235584654 × 10^5 In detail: e=5; s=1.234567 (123456.7) + e=2; s=1.017654 (101.7654) e=5; s=1.234567 + e=5; s=0.001017654 (after shifting) e=5; s=1.235584654 (true sum: 123558.4654) This is the true result, the exact sum of the operands. It will be rounded to seven digits and then normalized if necessary. The final result is e=5; s=1.235585 (final sum: 123558.5) The lowest three digits of the second operand (654) are essentially lost. This is round-off error. In extreme cases, the sum of two non-zero numbers may be equal to one of them: e=5; s=1.234567 + e=−3; s=9.876543 e=5; s=1.234567 + e=5; s=0.00000009876543 (after shifting) e=5; s=1.23456709876543 (true sum) e=5; s=1.234567 (after rounding and normalization) In the above conceptual examples it would appear that a large number of extra digits would need to be provided by the adder to ensure correct rounding; however, for binary addition or subtraction using careful implementation techniques only a "guard" bit, a "rounding" bit and one extra "sticky" bit need to be carried beyond the precision of the operands. Another problem of loss of significance occurs when "approximations" to two nearly equal numbers are subtracted. In the following example "e" = 5; "s" = 1.234571 and "e" = 5; "s" = 1.234567 are approximations to the rationals 123457.1467 and 123456.659. e=5; s=1.234571 − e=5; s=1.234567 e=5; s=0.000004 e=−1; s=4.000000 (after rounding and normalization) The floating-point difference is computed exactly because the numbers are close—the Sterbenz lemma guarantees this, even in case of underflow when gradual underflow is supported. Despite this, the difference of the original numbers is "e" = −1; "s" = 4.877000, which differs more than 20% from the difference "e" = −1; "s" = 4.000000 of the approximations. In extreme cases, all significant digits of precision can be lost. This "cancellation" illustrates the danger in assuming that all of the digits of a computed result are meaningful. Dealing with the consequences of these errors is a topic in numerical analysis; see also Accuracy problems. Multiplication and division. To multiply, the significands are multiplied while the exponents are added, and the result is rounded and normalized. e=3; s=4.734612 × e=5; s=5.417242 e=8; s=25.648538980104 (true product) e=8; s=25.64854 (after rounding) e=9; s=2.564854 (after normalization) Similarly, division is accomplished by subtracting the divisor's exponent from the dividend's exponent, and dividing the dividend's significand by the divisor's significand. 
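The same absorption and loss-of-significance effects shown above with 7-digit decimal significands occur in binary hardware arithmetic. A minimal C sketch using IEEE double precision (the particular constants are arbitrary illustrations):

#include <stdio.h>

int main(void) {
    /* Absorption: the small addend falls entirely below the last place of the large one. */
    double big = 1.0e16, tiny = 1.0;
    printf("1e16 + 1 == 1e16 ?  %s\n", (big + tiny == big) ? "yes" : "no");   /* yes */

    /* Loss of significance: the subtraction below is exact (Sterbenz lemma), but the
       operands were already rounded, so the small difference carries little information. */
    double a = 1.0 + 1.0e-15;   /* rounded approximation of 1 + 10^-15 */
    double b = 1.0;
    printf("a - b = %.17g (exact difference of the unrounded values is 1e-15)\n", a - b);
    return 0;
}

The printed difference is about 1.11 × 10^−15, roughly 11% away from 10^−15, even though no rounding error occurred in the subtraction itself.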
There are no cancellation or absorption problems with multiplication or division, though small errors may accumulate as operations are performed in succession. In practice, the way these operations are carried out in digital logic can be quite complex (see Booth's multiplication algorithm and Division algorithm). For a fast, simple method, see the Horner method. Literal syntax. Literals for floating-point numbers depend on languages. They typically use codice_0 or codice_1 to denote scientific notation. The C programming language and the IEEE 754 standard also define a hexadecimal literal syntax with a base-2 exponent instead of 10. In languages like C, when the decimal exponent is omitted, a decimal point is needed to differentiate them from integers. Other languages do not have an integer type (such as JavaScript), or allow overloading of numeric types (such as Haskell). In these cases, digit strings such as codice_2 may also be floating-point literals. Examples of floating-point literals are: Dealing with exceptional cases. Floating-point computation in a computer can run into three kinds of problems: Prior to the IEEE standard, such conditions usually caused the program to terminate, or triggered some kind of trap that the programmer might be able to catch. How this worked was system-dependent, meaning that floating-point programs were not portable. (The term "exception" as used in IEEE 754 is a general term meaning an exceptional condition, which is not necessarily an error, and is a different usage to that typically defined in programming languages such as a C++ or Java, in which an "exception" is an alternative flow of control, closer to what is termed a "trap" in IEEE 754 terminology.) Here, the required default method of handling exceptions according to IEEE 754 is discussed (the IEEE 754 optional trapping and other "alternate exception handling" modes are not discussed). Arithmetic exceptions are (by default) required to be recorded in "sticky" status flag bits. That they are "sticky" means that they are not reset by the next (arithmetic) operation, but stay set until explicitly reset. The use of "sticky" flags thus allows for testing of exceptional conditions to be delayed until after a full floating-point expression or subroutine: without them exceptional conditions that could not be otherwise ignored would require explicit testing immediately after every floating-point operation. By default, an operation always returns a result according to specification without interrupting computation. For instance, 1/0 returns +∞, while also setting the divide-by-zero flag bit (this default of ∞ is designed to often return a finite result when used in subsequent operations and so be safely ignored). The original IEEE 754 standard, however, failed to recommend operations to handle such sets of arithmetic exception flag bits. So while these were implemented in hardware, initially programming language implementations typically did not provide a means to access them (apart from assembler). Over time some programming language standards (e.g., C99/C11 and Fortran) have been updated to specify methods to access and change status flag bits. The 2008 version of the IEEE 754 standard now specifies a few operations for accessing and handling the arithmetic flag bits. The programming model is based on a single thread of execution and use of them by multiple threads has to be handled by a means outside of the standard (e.g. C11 specifies that the flags have thread-local storage). 
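A brief C99 sketch of this flag interface follows; the pragma and the flag macros are standard, but how faithfully they are honored depends on the compiler and its options (aggressive "fast math" settings, discussed later, can defeat them), and on some systems the program must be linked against the math library.

#include <stdio.h>
#include <fenv.h>

#pragma STDC FENV_ACCESS ON   /* C99: we intend to test and clear the status flags */

int main(void) {
    volatile double zero = 0.0, huge = 1.0e308;   /* volatile discourages compile-time folding */
    feclearexcept(FE_ALL_EXCEPT);
    double a = 1.0 / zero;     /* +infinity; raises divide-by-zero */
    double b = huge * 10.0;    /* +infinity; raises overflow (and inexact) */
    printf("a = %g, b = %g\n", a, b);
    if (fetestexcept(FE_DIVBYZERO)) printf("divide-by-zero flag is set\n");
    if (fetestexcept(FE_OVERFLOW))  printf("overflow flag is set\n");
    if (fetestexcept(FE_INEXACT))   printf("inexact flag is set\n");
    return 0;
}

Because the flags are sticky, the tests at the end report exceptions raised anywhere earlier in the computation, which is exactly the delayed-checking style described above.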
IEEE 754 specifies five arithmetic exceptions that are to be recorded in the status flags ("sticky bits"): invalid, divide-by-zero, overflow, underflow, and inexact. The default return value for each of the exceptions is designed to give the correct result in the majority of cases such that the exceptions can be ignored in the majority of codes. "inexact" returns a correctly rounded result, and "underflow" returns a value less than or equal to the smallest positive normal number in magnitude and can almost always be ignored. "divide-by-zero" returns infinity exactly, which will typically then divide a finite number and so give zero, or else will give an "invalid" exception subsequently if not, and so can also typically be ignored. For example, the effective resistance of n resistors in parallel is given by formula_17. If a short-circuit develops with formula_18 set to 0, formula_19 will return +infinity which will give a final formula_16 of 0, as expected (see the continued fraction example of IEEE 754 design rationale for another example). "Overflow" and "invalid" exceptions can typically not be ignored, but do not necessarily represent errors: for example, a root-finding routine, as part of its normal operation, may evaluate a passed-in function at values outside of its domain, returning NaN and an "invalid" exception flag to be ignored until finding a useful start point. Accuracy problems. The fact that floating-point numbers cannot accurately represent all real numbers, and that floating-point operations cannot accurately represent true arithmetic operations, leads to many surprising situations. This is related to the finite precision with which computers generally represent numbers. For example, the decimal numbers 0.1 and 0.01 cannot be represented exactly as binary floating-point numbers. In the IEEE 754 binary32 format with its 24-bit significand, the result of attempting to square the approximation to 0.1 is neither 0.01 nor the representable number closest to it. The decimal number 0.1 is represented in binary as e = −4; s = 110011001100110011001101, which is
0.100000001490116119384765625 exactly.
Squaring this number gives
0.010000000298023226097399174250313080847263336181640625 exactly.
Squaring it with rounding to the 24-bit precision gives
0.010000000707805156707763671875 exactly.
But the representable number closest to 0.01 is
0.009999999776482582092285156250 exactly.
Also, the non-representability of π (and π/2) means that an attempted computation of tan(π/2) will not yield a result of infinity, nor will it even overflow in the usual floating-point formats (assuming an accurate implementation of tan). It is simply not possible for standard floating-point hardware to attempt to compute tan(π/2), because π/2 cannot be represented exactly. This computation in C:
/* Enough digits to be sure we get the correct approximation. */
double pi = 3.1415926535897932384626433832795;
double z = tan(pi/2.0);
will give a result of 16331239353195370.0. In single precision (using the codice_8 function), the result will be −22877332.0. By the same token, an attempted computation of sin(π) will not yield zero. The result will be (approximately) 0.1225 × 10^−15 in double precision, or −0.8742 × 10^−7 in single precision. While floating-point addition and multiplication are both commutative (a + b = b + a and a × b = b × a), they are not necessarily associative. 
That is, (a + b) + c is not necessarily equal to a + (b + c). Using 7-digit significand decimal arithmetic with a = 1234.567, b = 45.67834, c = 0.0004:

(a + b) + c:
    1234.567    (a)
  +   45.67834  (b)
  ____________
    1280.24534  rounds to 1280.245
    1280.245    (a + b)
  +    0.0004   (c)
  ____________
    1280.2454   rounds to 1280.245   ← (a + b) + c

a + (b + c):
      45.67834  (b)
  +    0.0004   (c)
  ____________
      45.67874
    1234.567    (a)
  +   45.67874  (b + c)
  ____________
    1280.24574  rounds to 1280.246   ← a + (b + c)

They are also not necessarily distributive. That is, x(y + z) may not be the same as xy + xz:
    1234.567 × 3.333333 = 4115.223
    1.234567 × 3.333333 = 4.115223
    4115.223 + 4.115223 = 4119.338
but
    1234.567 + 1.234567 = 1235.802
    1235.802 × 3.333333 = 4119.340
In addition to loss of significance, inability to represent numbers such as π and 0.1 exactly, and other slight inaccuracies, the following phenomena may occur: Machine precision and backward error analysis. "Machine precision" is a quantity that characterizes the accuracy of a floating-point system, and is used in backward error analysis of floating-point algorithms. It is also known as unit roundoff or "machine epsilon". Usually denoted Εmach, its value depends on the particular rounding being used. With rounding to zero, formula_20 whereas rounding to nearest, formula_21 where "B" is the base of the system and "P" is the precision of the significand (in base "B"). This is important since it bounds the "relative error" in representing any non-zero real number within the normalized range of a floating-point system: formula_22 Backward error analysis, the theory of which was developed and popularized by James H. Wilkinson, can be used to establish that an algorithm implementing a numerical function is numerically stable. The basic approach is to show that although the calculated result, due to roundoff errors, will not be exactly correct, it is the exact solution to a nearby problem with slightly perturbed input data. If the perturbation required is small, on the order of the uncertainty in the input data, then the results are in some sense as accurate as the data "deserves". The algorithm is then defined as "backward stable". Stability is a measure of the sensitivity to rounding errors of a given numerical procedure; by contrast, the condition number of a function for a given problem indicates the inherent sensitivity of the function to small perturbations in its input and is independent of the implementation used to solve the problem. As a trivial example, consider a simple expression giving the inner product of (length two) vectors formula_23 and formula_24: then formula_25 and so formula_26, where formula_27 and formula_28 by definition, which is the sum of two slightly perturbed (on the order of Εmach) input data, and so is backward stable. For more realistic examples in numerical linear algebra, see Higham 2002 and other references below. Minimizing the effect of accuracy problems. Although individual arithmetic operations of IEEE 754 are guaranteed accurate to within half a ULP, more complicated formulae can suffer from larger errors for a variety of reasons. The loss of accuracy can be substantial if a problem or its data are ill-conditioned, meaning that the correct result is hypersensitive to tiny perturbations in its data. However, even functions that are well-conditioned can suffer from large loss of accuracy if an algorithm numerically unstable for that data is used: apparently equivalent formulations of expressions in a programming language can differ markedly in their numerical stability. 
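A concrete instance of two such equivalent formulations is the quantity 1 − cos(x) for small x, which can also be written as 2 sin²(x/2); the first form cancels catastrophically while the second does not. A minimal C sketch (the test value 10^−8 is an arbitrary illustration):

#include <stdio.h>
#include <math.h>

int main(void) {
    double x = 1.0e-8;
    double naive  = 1.0 - cos(x);                       /* cos(x) rounds to exactly 1.0 here */
    double stable = 2.0 * sin(0.5 * x) * sin(0.5 * x);  /* algebraically the same quantity */
    printf("1 - cos(x)   = %.17g\n", naive);    /* 0 -- every significant digit lost */
    printf("2*sin(x/2)^2 = %.17g\n", stable);   /* about 5e-17, accurate to full precision */
    return 0;
}

The function 1 − cos is well-conditioned near 0; only the first formulation is numerically unstable.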
One approach to remove the risk of such loss of accuracy is the design and analysis of numerically stable algorithms, which is an aim of the branch of mathematics known as numerical analysis. Another approach that can protect against the risk of numerical instabilities is the computation of intermediate (scratch) values in an algorithm at a higher precision than the final result requires, which can remove, or reduce by orders of magnitude, such risk: IEEE 754 quadruple precision and extended precision are designed for this purpose when computing at double precision. For example, the following algorithm is a direct implementation to compute the function A(x) = (x − 1) / (exp(x − 1) − 1), which is well-conditioned at 1.0; however, it can be shown to be numerically unstable and lose up to half the significant digits carried by the arithmetic when computed near 1.0.
#include <math.h>   /* exp() is declared in <math.h> */

double A(double X)
{
    double Y, Z;  // [1]
    Y = X - 1.0;
    Z = exp(Y);
    if (Z != 1.0)
        Z = Y / (Z - 1.0);  // [2]
    return Z;
}
If, however, intermediate computations are all performed in extended precision (e.g. by declaring Y and Z on line [1] with the C99 type long double), then up to full precision in the final double result can be maintained. Alternatively, a numerical analysis of the algorithm reveals that if the following non-obvious change to line [2] is made:
Z = log(Z) / (Z - 1.0);
then the algorithm becomes numerically stable and can compute to full double precision. To maintain the properties of such carefully constructed numerically stable programs, careful handling by the compiler is required. Certain "optimizations" that compilers might make (for example, reordering operations) can work against the goals of well-behaved software. There is some controversy about the failings of compilers and language designs in this area: C99 is an example of a language where such optimizations are carefully specified to maintain numerical precision. See the external references at the bottom of this article. A detailed treatment of the techniques for writing high-quality floating-point software is beyond the scope of this article, and the reader is referred to the references at the bottom of this article. Kahan suggests several rules of thumb that can substantially decrease by orders of magnitude the risk of numerical anomalies, in addition to, or in lieu of, a more careful numerical analysis. These include: as noted above, computing all expressions and intermediate results in the highest precision supported in hardware (a common rule of thumb is to carry twice the precision of the desired result, i.e. compute in double precision for a final single-precision result, or in double extended or quad precision for up to double-precision results); and rounding input data and results to only the precision required and supported by the input data (carrying excess precision in the final result beyond that required and supported by the input data can be misleading, increases storage cost and decreases speed, and the excess bits can affect convergence of numerical procedures: notably, the first form of the iterative example given below converges correctly when using this rule of thumb). Brief descriptions of several additional issues and techniques follow. 
As decimal fractions can often not be exactly represented in binary floating-point, such arithmetic is at its best when it is simply being used to measure real-world quantities over a wide range of scales (such as the orbital period of a moon around Saturn or the mass of a proton), and at its worst when it is expected to model the interactions of quantities expressed as decimal strings that are expected to be exact. An example of the latter case is financial calculations. For this reason, financial software tends not to use a binary floating-point number representation. The "decimal" data type of the C# and Python programming languages, and the decimal formats of the IEEE 754-2008 standard, are designed to avoid the problems of binary floating-point representations when applied to human-entered exact decimal values, and make the arithmetic always behave as expected when numbers are printed in decimal. Expectations from mathematics may not be realized in the field of floating-point computation. For example, it is known that formula_29, and that formula_30, however these facts cannot be relied on when the quantities involved are the result of floating-point computation. The use of the equality test (codice_9) requires care when dealing with floating-point numbers. Even simple expressions like codice_10 will, on most computers, fail to be true (in IEEE 754 double precision, for example, codice_11 is approximately equal to -4.44089209850063e-16). Consequently, such tests are sometimes replaced with "fuzzy" comparisons (codice_12, where epsilon is sufficiently small and tailored to the application, such as 1.0E−13). The wisdom of doing this varies greatly, and can require numerical analysis to bound epsilon. Values derived from the primary data representation and their comparisons should be performed in a wider, extended, precision to minimize the risk of such inconsistencies due to round-off errors. It is often better to organize the code in such a way that such tests are unnecessary. For example, in computational geometry, exact tests of whether a point lies off or on a line or plane defined by other points can be performed using adaptive precision or exact arithmetic methods. Small errors in floating-point arithmetic can grow when mathematical algorithms perform operations an enormous number of times. A few examples are matrix inversion, eigenvector computation, and differential equation solving. These algorithms must be very carefully designed, using numerical approaches such as iterative refinement, if they are to work well. Summation of a vector of floating-point values is a basic algorithm in scientific computing, and so an awareness of when loss of significance can occur is essential. For example, if one is adding a very large number of numbers, the individual addends are very small compared with the sum. This can lead to loss of significance. A typical addition would then be something like 3253.671 + 3.141276 3256.812 The low 3 digits of the addends are effectively lost. Suppose, for example, that one needs to add many numbers, all approximately equal to 3. After 1000 of them have been added, the running sum is about 3000; the lost digits are not regained. The Kahan summation algorithm may be used to reduce the errors. Round-off error can affect the convergence and accuracy of iterative numerical procedures. 
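Returning to the summation example above, compensated (Kahan) summation can be written in a few lines of C; the helper name kahan_sum below is an arbitrary choice, and the technique assumes the compiler does not reassociate the arithmetic (see the "fast math" discussion later).

#include <stdio.h>

/* Compensated (Kahan) summation: c carries the low-order part lost at each step. */
static double kahan_sum(const double *a, int n) {
    double sum = 0.0, c = 0.0;
    for (int i = 0; i < n; i++) {
        double y = a[i] - c;   /* corrected addend */
        double t = sum + y;    /* low-order digits of y are lost here ... */
        c = (t - sum) - y;     /* ... and recovered here (algebraically zero) */
        sum = t;
    }
    return sum;
}

int main(void) {
    enum { N = 1000000 };
    static double a[N];
    double naive = 0.0;
    for (int i = 0; i < N; i++) a[i] = 0.1;
    for (int i = 0; i < N; i++) naive += a[i];
    printf("naive sum : %.17g\n", naive);             /* accumulated rounding error is visible */
    printf("kahan sum : %.17g\n", kahan_sum(a, N));   /* agrees with the exact sum of the stored values to within a few ulps */
    return 0;
}

At each step the compensation variable recovers the part of the addend that the plain running sum discards.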
As an example of such an iterative procedure, Archimedes approximated π by calculating the perimeters of polygons inscribing and circumscribing a circle, starting with hexagons, and successively doubling the number of sides. As noted above, computations may be rearranged in a way that is mathematically equivalent but less prone to error (numerical analysis). Two forms of the recurrence formula for the circumscribed polygon, both starting from formula_31 and with formula_34 as formula_35, are formula_32 (first form) and formula_33 (second form). Here is a computation using IEEE "double" (a significand with 53 bits of precision) arithmetic:

 i    6 × 2^i × t_i, first form     6 × 2^i × t_i, second form
 0    3.4641016151377543863         3.4641016151377543863
 1    3.2153903091734710173         3.2153903091734723496
 2    3.1596599420974940120         3.1596599420975006733
 3    3.1460862151314012979         3.1460862151314352708
 4    3.1427145996453136334         3.1427145996453689225
 5    3.1418730499801259536         3.1418730499798241950
 6    3.1416627470548084133         3.1416627470568494473
 7    3.1416101765997805905         3.1416101766046906629
 8    3.1415970343230776862         3.1415970343215275928
 9    3.1415937488171150615         3.1415937487713536668
 10   3.1415929278733740748         3.1415929273850979885
 11   3.1415927256228504127         3.1415927220386148377
 12   3.1415926717412858693         3.1415926707019992125
 13   3.1415926189011456060         3.1415926578678454728
 14   3.1415926717412858693         3.1415926546593073709
 15   3.1415919358822321783         3.1415926538571730119
 16   3.1415926717412858693         3.1415926536566394222
 17   3.1415810075796233302         3.1415926536065061913
 18   3.1415926717412858693         3.1415926535939728836
 19   3.1414061547378810956         3.1415926535908393901
 20   3.1405434924008406305         3.1415926535900560168
 21   3.1400068646912273617         3.1415926535898608396
 22   3.1349453756585929919         3.1415926535898122118
 23   3.1400068646912273617         3.1415926535897995552
 24   3.2245152435345525443         3.1415926535897968907
 25                                 3.1415926535897962246
 26                                 3.1415926535897962246
 27                                 3.1415926535897962246
 28                                 3.1415926535897962246

The true value is 3.14159265358979323846264338327... While the two forms of the recurrence formula are clearly mathematically equivalent, the first subtracts 1 from a number extremely close to 1, leading to an increasingly problematic loss of significant digits. As the recurrence is applied repeatedly, the accuracy improves at first, but then it deteriorates. It never gets better than about 8 digits, even though 53-bit arithmetic should be capable of about 16 digits of precision. When the second form of the recurrence is used, the value converges to 15 digits of precision. "Fast math" optimization. The aforementioned lack of associativity of floating-point operations in general means that compilers cannot as effectively reorder arithmetic expressions as they could with integer and fixed-point arithmetic, presenting a roadblock in optimizations such as common subexpression elimination and auto-vectorization. The "fast math" option on many compilers (ICC, GCC, Clang, MSVC...) turns on reassociation along with unsafe assumptions such as a lack of NaN and infinite numbers in IEEE 754. Some compilers also offer more granular options to only turn on reassociation. In either case, the programmer is exposed to many of the precision pitfalls mentioned above for the portion of the program using "fast" math. In some compilers (GCC and Clang), turning on "fast" math may cause the program to disable subnormal floats at startup, affecting the floating-point behavior of not only the generated code, but also any program using such code as a library. 
In most Fortran compilers, as allowed by the ISO/IEC 1539-1:2004 Fortran standard, reassociation is the default, with breakage largely prevented by the "protect parens" setting (also on by default). This setting stops the compiler from reassociating beyond the boundaries of parentheses. Intel Fortran Compiler is a notable outlier. A common problem in "fast" math is that subexpressions may not be optimized identically from place to place, leading to unexpected differences. One interpretation of the issue is that "fast" math as implemented currently has a poorly defined semantics. One attempt at formalizing "fast" math optimizations is seen in "Icing", a verified compiler. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "12.345 = \\! \\underbrace{12345}_\\text{significand} \\! \\times \\! \\underbrace{10}_\\text{base}\\!\\!\\!\\!\\!\\!\\!\\overbrace{{}^{-3}}^{\\text{exponent}}" }, { "math_id": 1, "text": "\\frac{s}{b^{\\,p-1}} \\times b^e," }, { "math_id": 2, "text": "p = 24" }, { "math_id": 3, "text": "11001001\\ 00001111\\ 1101101\\underline{0}\\ 10100010\\ 0." }, { "math_id": 4, "text": "11001001\\ 00001111\\ 1101101\\underline{1}." }, { "math_id": 5, "text": "\\begin{align}\n &\\left(\\sum_{n=0}^{p-1} \\text{bit}_n \\times 2^{-n}\\right) \\times 2^e \\\\\n ={} &\\left(1 \\times 2^{-0} + 1 \\times 2^{-1} + 0 \\times 2^{-2} + 0 \\times 2^{-3} + 1 \\times2^{-4} + \\cdots + 1 \\times 2^{-23}\\right) \\times 2^1 \\\\\n \\approx{} &1.5707964 \\times 2 \\\\\n \\approx{} &3.1415928\n\\end{align}" }, { "math_id": 6, "text": "\\pi" }, { "math_id": 7, "text": "\\sqrt{3}" }, { "math_id": 8, "text": "\\sin (3\\pi)" }, { "math_id": 9, "text": "^m" }, { "math_id": 10, "text": "^1/_\\infty = 0" }, { "math_id": 11, "text": "0 \\times \\infty" }, { "math_id": 12, "text": "\\pm\\infty" }, { "math_id": 13, "text": "2 \\left(B - 1\\right) \\left(B^{P-1}\\right) \\left(U - L + 1\\right)" }, { "math_id": 14, "text": "B^L" }, { "math_id": 15, "text": "\\left(1 - B^{-P}\\right)\\left(B^{U + 1}\\right)" }, { "math_id": 16, "text": "R_{tot}" }, { "math_id": 17, "text": "R_\\text{tot}=1/(1/R_1+1/R_2+\\cdots+1/R_n)" }, { "math_id": 18, "text": "R_1" }, { "math_id": 19, "text": "1/R_1" }, { "math_id": 20, "text": "\\Epsilon_\\text{mach} = B^{1-P},\\," }, { "math_id": 21, "text": "\\Epsilon_\\text{mach} = \\tfrac{1}{2} B^{1-P}," }, { "math_id": 22, "text": "\\left| \\frac{\\operatorname{fl}(x) - x}{x} \\right| \\le \\Epsilon_\\text{mach}." }, { "math_id": 23, "text": "x" }, { "math_id": 24, "text": "y" }, { "math_id": 25, "text": "\\begin{align}\n \\operatorname{fl}(x \\cdot y)\n &= \\operatorname{fl}\\big(\\operatorname{fl}(x_1 \\cdot y_1) + \\operatorname{fl}(x_2 \\cdot y_2)\\big), && \\text{ where } \\operatorname{fl}() \\text{ indicates correctly rounded floating-point arithmetic} \\\\\n &= \\operatorname{fl}\\big((x_1 \\cdot y_1)(1 + \\delta_1) + (x_2 \\cdot y_2)(1 + \\delta_2)\\big), && \\text{ where } \\delta_n \\leq \\Epsilon_\\text{mach}, \\text{ from above} \\\\\n &= \\big((x_1 \\cdot y_1)(1 + \\delta_1) + (x_2 \\cdot y_2)(1 + \\delta_2)\\big)(1 + \\delta_3) \\\\\n &= (x_1 \\cdot y_1)(1 + \\delta_1)(1 + \\delta_3) + (x_2 \\cdot y_2)(1 + \\delta_2)(1 + \\delta_3),\n\\end{align}" }, { "math_id": 26, "text": "\\operatorname{fl}(x \\cdot y) = \\hat{x} \\cdot \\hat{y}," }, { "math_id": 27, "text": "\\begin{align}\n\\hat{x}_1 &= x_1(1 + \\delta_1); & \\hat{x}_2 &= x_2(1 + \\delta_2);\\\\\n\\hat{y}_1 &= y_1(1 + \\delta_3); & \\hat{y}_2 &= y_2(1 + \\delta_3),\\\\\n\\end{align}" }, { "math_id": 28, "text": "\\delta_n \\leq \\Epsilon_\\text{mach}" }, { "math_id": 29, "text": "(x+y)(x-y) = x^2-y^2\\," }, { "math_id": 30, "text": "\\sin^2{\\theta}+\\cos^2{\\theta} = 1\\," }, { "math_id": 31, "text": "t_0 = \\frac{1}{\\sqrt{3}}" }, { "math_id": 32, "text": "t_{i+1} = \\frac{\\sqrt{t_i^2+1}-1}{t_i}" }, { "math_id": 33, "text": "t_{i+1} = \\frac{t_i}{\\sqrt{t_i^2+1}+1}" }, { "math_id": 34, "text": "\\pi \\sim 6 \\times 2^i \\times t_i" }, { "math_id": 35, "text": "i \\rightarrow \\infty" } ]
https://en.wikipedia.org/wiki?curid=11376
11376019
Binomial approximation
Approximation of powers of some binomials The binomial approximation is useful for approximately calculating powers of sums of 1 and a small number "x". It states that formula_0 It is valid when formula_1 and formula_2 where formula_3 and formula_4 may be real or complex numbers. The benefit of this approximation is that formula_4 is converted from an exponent to a multiplicative factor. This can greatly simplify mathematical expressions (as in the example below) and is a common tool in physics. The approximation can be proven several ways, and is closely related to the binomial theorem. By Bernoulli's inequality, the left-hand side of the approximation is greater than or equal to the right-hand side whenever formula_5 and formula_6. Derivations. Using linear approximation. The function formula_7 is a smooth function for "x" near 0. Thus, standard linear approximation tools from calculus apply: one has formula_8 and so formula_9 Thus formula_10 By Taylor's theorem, the error in this approximation is equal to formula_11 for some value of formula_12 that lies between 0 and x. For example, if formula_13 and formula_14, the error is at most formula_15. In little o notation, one can say that the error is formula_16, meaning that formula_17. Using Taylor series. The function formula_18 where formula_3 and formula_4 may be real or complex can be expressed as a Taylor series about the point zero. formula_19 If formula_20 and formula_2, then the terms in the series become progressively smaller and it can be truncated to formula_21 This result from the binomial approximation can always be improved by keeping additional terms from the Taylor series above. This is especially important when formula_22 starts to approach one, or when evaluating a more complex expression where the first two terms in the Taylor series cancel (see example). Sometimes it is wrongly claimed that formula_23 is a sufficient condition for the binomial approximation. A simple counterexample is to let formula_24 and formula_25. In this case formula_26 but the binomial approximation yields formula_27. For small formula_28 but large formula_22, a better approximation is: formula_29 Example. The binomial approximation for the square root, formula_30, can be applied for the following expression, formula_31 where formula_32 and formula_33 are real but formula_34. The mathematical form for the binomial approximation can be recovered by factoring out the large term formula_32 and recalling that a square root is the same as a power of one half. formula_35 Evidently the expression is linear in formula_33 when formula_34 which is otherwise not obvious from the original expression. Generalization. While the binomial approximation is linear, it can be generalized to keep the quadratic term in the Taylor series: formula_36 Applied to the square root, it results in: formula_37 Quadratic example. Consider the expression: formula_38 where formula_39 and formula_40. If only the linear term from the binomial approximation is kept formula_41 then the expression unhelpfully simplifies to zero formula_42 While the expression is small, it is not exactly zero. So now, keeping the quadratic term: formula_43 This result is quadratic in formula_44 which is why it did not appear when only the linear terms in formula_44 were kept. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
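The error estimate and the counterexample discussed above can be checked directly in Python. The following snippet is an illustrative sketch, not part of the article; the sample values of "x" and "α" are chosen arbitrarily:

```python
import math

x, alpha = 1e-4, 0.5
exact  = (1 + x) ** alpha
linear = 1 + alpha * x                                # binomial approximation
quad   = linear + 0.5 * alpha * (alpha - 1) * x ** 2  # with the quadratic term kept

print(exact - linear)   # about -1.25e-09, matching the alpha*(alpha-1)*x^2/2 error term
print(exact - quad)     # about 6.2e-14, far smaller

# Counterexample from the text: |x| << 1 alone is not sufficient when |alpha*x| is large.
x, alpha = 1e-6, 1e7
print((1 + x) ** alpha)       # about 22026, i.e. greater than 22,000
print(1 + alpha * x)          # 11.0 -- the linear formula fails badly
print(math.exp(alpha * x))    # about 22026.5 -- the exp(alpha*x) form is far closer
```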
[ { "math_id": 0, "text": " (1 + x)^\\alpha \\approx 1 + \\alpha x." }, { "math_id": 1, "text": "|x|<1" }, { "math_id": 2, "text": "|\\alpha x| \\ll 1" }, { "math_id": 3, "text": "x" }, { "math_id": 4, "text": "\\alpha" }, { "math_id": 5, "text": "x>-1" }, { "math_id": 6, "text": "\\alpha \\geq 1" }, { "math_id": 7, "text": " f(x) = (1 + x)^{\\alpha}" }, { "math_id": 8, "text": " f'(x) = \\alpha (1 + x)^{\\alpha - 1}" }, { "math_id": 9, "text": " f'(0) = \\alpha." }, { "math_id": 10, "text": " f(x) \\approx f(0) + f'(0)(x - 0) = 1 + \\alpha x." }, { "math_id": 11, "text": " \\frac{\\alpha(\\alpha - 1) x^2}{2} \\cdot (1 + \\zeta)^{\\alpha - 2}" }, { "math_id": 12, "text": "\\zeta" }, { "math_id": 13, "text": " x < 0 " }, { "math_id": 14, "text": "\\alpha \\geq 2" }, { "math_id": 15, "text": " \\frac{\\alpha(\\alpha - 1) x^2}{2}" }, { "math_id": 16, "text": "o(|x|)" }, { "math_id": 17, "text": " \\lim_{x \\to 0} \\frac{\\textrm{error}}{|x|} = 0" }, { "math_id": 18, "text": " f(x) = (1+x)^\\alpha " }, { "math_id": 19, "text": "\\begin{align}\nf(x) &= \\sum_{n=0}^{\\infty} \\frac{f^{(n)}(0)}{n!} x^n\\\\\nf(x) &= f(0) + f'(0) x + \\frac{1}{2} f''(0) x^2 + \\frac{1}{6} f'''(0) x^3 + \\frac{1}{24} f^{(4)}(0) x^4 + \\cdots\\\\\n(1+x)^{\\alpha} &= 1 + \\alpha x + \\frac{1}{2} \\alpha (\\alpha-1) x^2 + \\frac{1}{6} \\alpha (\\alpha-1)(\\alpha-2)x^3 + \\frac{1}{24} \\alpha (\\alpha-1)(\\alpha-2)(\\alpha-3)x^4 + \\cdots\n\\end{align}" }, { "math_id": 20, "text": "|x| < 1" }, { "math_id": 21, "text": "(1+x)^\\alpha \\approx 1 + \\alpha x ." }, { "math_id": 22, "text": "|\\alpha x|" }, { "math_id": 23, "text": "|x| \\ll 1" }, { "math_id": 24, "text": "x=10^{-6}" }, { "math_id": 25, "text": "\\alpha=10^7" }, { "math_id": 26, "text": "(1+x)^\\alpha > 22,000" }, { "math_id": 27, "text": "1 + \\alpha x = 11" }, { "math_id": 28, "text": "|x|" }, { "math_id": 29, "text": " (1 + x)^\\alpha \\approx e^{\\alpha x} ." }, { "math_id": 30, "text": "\\sqrt{1+x} \\approx 1+x/2" }, { "math_id": 31, "text": "\t\\frac{1}{\\sqrt{a+b}} - \\frac{1}{\\sqrt{a-b}} " }, { "math_id": 32, "text": "a" }, { "math_id": 33, "text": "b" }, { "math_id": 34, "text": "a \\gg b" }, { "math_id": 35, "text": "\\begin{align}\n \\frac{1}{\\sqrt{a+b}} - \\frac{1}{\\sqrt{a-b}} &= \\frac{1}{\\sqrt{a}} \\left(\\left(1+\\frac{b}{a}\\right)^{-1/2} - \\left(1-\\frac{b}{a}\\right)^{-1/2}\\right)\\\\\n &\\approx\\frac{1}{\\sqrt{a}} \\left(\\left(1+\\left(-\\frac{1}{2}\\right)\\frac{b}{a}\\right) - \\left(1-\\left(-\\frac{1}{2}\\right)\\frac{b}{a}\\right)\\right) \\\\\n &\\approx\\frac{1}{\\sqrt{a}} \\left(1-\\frac{b}{2a} - 1 -\\frac{b}{2a}\\right) \\\\\n &\\approx -\\frac{b}{a \\sqrt{a}}\n\\end{align}" }, { "math_id": 36, "text": " (1+x)^\\alpha \\approx 1 + \\alpha x + (\\alpha/2) (\\alpha-1) x^2" }, { "math_id": 37, "text": "\\sqrt{1+x} \\approx 1 + x/2 - x^2 / 8." 
}, { "math_id": 38, "text": "\t(1 + \\epsilon)^n - (1 - \\epsilon)^{-n} " }, { "math_id": 39, "text": "|\\epsilon|<1" }, { "math_id": 40, "text": "|n \\epsilon| \\ll 1" }, { "math_id": 41, "text": "(1+x)^\\alpha \\approx 1 + \\alpha x" }, { "math_id": 42, "text": "\\begin{align}\n(1 + \\epsilon)^n - (1 - \\epsilon)^{-n} &\\approx (1+ n \\epsilon) - (1 - (-n) \\epsilon)\\\\\n&\\approx (1+ n \\epsilon) - (1 + n \\epsilon)\\\\\n&\\approx 0 .\n\\end{align}" }, { "math_id": 43, "text": "\\begin{align}\n(1+\\epsilon)^n - (1 - \\epsilon)^{-n}&\\approx \\left(1+ n \\epsilon + \\frac{1}{2} n (n-1) \\epsilon^2\\right) - \\left(1 + (-n)(-\\epsilon) + \\frac{1}{2} (-n) (-n-1) (-\\epsilon)^2\\right)\\\\\n&\\approx \\left(1+ n \\epsilon + \\frac{1}{2} n (n-1) \\epsilon^2\\right) - \\left(1 + n \\epsilon + \\frac{1}{2} n (n+1) \\epsilon^2\\right)\\\\\n&\\approx \\frac{1}{2} n (n-1) \\epsilon^2 - \\frac{1}{2} n (n+1) \\epsilon^2\\\\\n&\\approx \\frac{1}{2} n \\epsilon^2 ((n-1) - (n+1)) \\\\\n&\\approx - n \\epsilon^2 \n\\end{align}" }, { "math_id": 44, "text": "\\epsilon" } ]
https://en.wikipedia.org/wiki?curid=11376019
1137612
Generalized eigenvector
Vector satisfying some of the criteria of an eigenvector In linear algebra, a generalized eigenvector of an formula_0 matrix formula_1 is a vector which satisfies certain criteria which are more relaxed than those for an (ordinary) eigenvector. Let formula_2 be an formula_3-dimensional vector space and let formula_1 be the matrix representation of a linear map from formula_2 to formula_2 with respect to some ordered basis. There may not always exist a full set of formula_3 linearly independent eigenvectors of formula_1 that form a complete basis for formula_2. That is, the matrix formula_1 may not be diagonalizable. This happens when the algebraic multiplicity of at least one eigenvalue formula_4 is greater than its geometric multiplicity (the nullity of the matrix formula_5, or the dimension of its nullspace). In this case, formula_4 is called a defective eigenvalue and formula_1 is called a defective matrix. A generalized eigenvector formula_6 corresponding to formula_4, together with the matrix formula_5 generate a Jordan chain of linearly independent generalized eigenvectors which form a basis for an invariant subspace of formula_2. Using generalized eigenvectors, a set of linearly independent eigenvectors of formula_1 can be extended, if necessary, to a complete basis for formula_2. This basis can be used to determine an "almost diagonal matrix" formula_7 in Jordan normal form, similar to formula_1, which is useful in computing certain matrix functions of formula_1. The matrix formula_7 is also useful in solving the system of linear differential equations formula_8 where formula_1 need not be diagonalizable. The dimension of the generalized eigenspace corresponding to a given eigenvalue formula_9 is the algebraic multiplicity of formula_9. Overview and definition. There are several equivalent ways to define an ordinary eigenvector. For our purposes, an eigenvector formula_10 associated with an eigenvalue formula_9 of an formula_3 × formula_3 matrix formula_1 is a nonzero vector for which formula_11, where formula_12 is the formula_3 × formula_3 identity matrix and formula_13 is the zero vector of length formula_3. That is, formula_10 is in the kernel of the transformation formula_14. If formula_1 has formula_3 linearly independent eigenvectors, then formula_1 is similar to a diagonal matrix formula_15. That is, there exists an invertible matrix formula_16 such that formula_1 is diagonalizable through the similarity transformation formula_17. The matrix formula_15 is called a spectral matrix for formula_1. The matrix formula_16 is called a modal matrix for formula_1. Diagonalizable matrices are of particular interest since matrix functions of them can be computed easily. On the other hand, if formula_1 does not have formula_3 linearly independent eigenvectors associated with it, then formula_1 is not diagonalizable. Definition: A vector formula_18 is a generalized eigenvector of rank "m" of the matrix formula_1 and corresponding to the eigenvalue formula_9 if formula_19 but formula_20 Clearly, a generalized eigenvector of rank 1 is an ordinary eigenvector. Every formula_3 × formula_3 matrix formula_1 has formula_3 linearly independent generalized eigenvectors associated with it and can be shown to be similar to an "almost diagonal" matrix formula_7 in Jordan normal form. That is, there exists an invertible matrix formula_16 such that formula_21. The matrix formula_16 in this case is called a generalized modal matrix for formula_1. 
If formula_9 is an eigenvalue of algebraic multiplicity formula_22, then formula_1 will have formula_22 linearly independent generalized eigenvectors corresponding to formula_9. These results, in turn, provide a straightforward method for computing certain matrix functions of formula_1. Note: For an formula_23 matrix formula_1 over a field formula_24 to be expressed in Jordan normal form, all eigenvalues of formula_1 must be in formula_24. That is, the characteristic polynomial formula_25 must factor completely into linear factors. For example, if formula_1 has real-valued elements, then it may be necessary for the eigenvalues and the components of the eigenvectors to have complex values. The set spanned by all generalized eigenvectors for a given formula_26 forms the generalized eigenspace for formula_26. Examples. Here are some examples to illustrate the concept of generalized eigenvectors. Some of the details will be described later. Example 1. This example is simple but clearly illustrates the point. This type of matrix is used frequently in textbooks. Suppose formula_27 Then there is only one eigenvalue, formula_28, and its algebraic multiplicity is formula_29. Notice that this matrix is in Jordan normal form but is not diagonal. Hence, this matrix is not diagonalizable. Since there is one superdiagonal entry, there will be one generalized eigenvector of rank greater than 1 (or one could note that the vector space formula_30 is of dimension 2, so there can be at most one generalized eigenvector of rank greater than 1). Alternatively, one could compute the dimension of the nullspace of formula_31 to be formula_32, and thus there are formula_33 generalized eigenvectors of rank greater than 1. The ordinary eigenvector formula_34 is computed as usual (see the eigenvector page for examples). Using this eigenvector, we compute the generalized eigenvector formula_35 by solving formula_36 Writing out the values: formula_37 This simplifies to formula_38 The element formula_39 has no restrictions. The generalized eigenvector of rank 2 is then formula_40, where "a" can have any scalar value. The choice of "a" = 0 is usually the simplest. Note that formula_41 so that formula_35 is a generalized eigenvector, because formula_42 so that formula_43 is an ordinary eigenvector, and that formula_44 and formula_45 are linearly independent and hence constitute a basis for the vector space formula_30. Example 2. This example is more complex than Example 1. Unfortunately, it is a little difficult to construct an interesting example of low order. The matrix formula_46 has "eigenvalues" formula_47 and formula_48 with "algebraic multiplicities" formula_49 and formula_50, but "geometric multiplicities" formula_51 and formula_52. The "generalized eigenspaces" of formula_1 are calculated below. formula_53 is the ordinary eigenvector associated with formula_54. formula_55 is a generalized eigenvector associated with formula_54. formula_56 is the ordinary eigenvector associated with formula_57. formula_58 and formula_59 are generalized eigenvectors associated with formula_57. formula_60 formula_61 formula_62 formula_63 formula_64 This results in a basis for each of the "generalized eigenspaces" of formula_1. Together the two "chains" of generalized eigenvectors span the space of all 5-dimensional column vectors. 
formula_65 An "almost diagonal" matrix formula_7 in "Jordan normal form", similar to formula_1 is obtained as follows: formula_66 formula_67 where formula_16 is a generalized modal matrix for formula_1, the columns of formula_16 are a canonical basis for formula_1, and formula_68. Jordan chains. Definition: Let formula_18 be a generalized eigenvector of rank "m" corresponding to the matrix formula_1 and the eigenvalue formula_9. The chain generated by formula_18 is a set of vectors formula_69 given by where formula_71is always an ordinary eigenvector with a given eigenvalue formula_9. Thus, in general, The vector formula_72, given by (2), is a generalized eigenvector of rank "j" corresponding to the eigenvalue formula_9. A chain is a linearly independent set of vectors. Canonical basis. Definition: A set of "n" linearly independent generalized eigenvectors is a canonical basis if it is composed entirely of Jordan chains. Thus, once we have determined that a generalized eigenvector of rank "m" is in a canonical basis, it follows that the "m" − 1 vectors formula_73 that are in the Jordan chain generated by formula_74 are also in the canonical basis. Let formula_75 be an eigenvalue of formula_1 of algebraic multiplicity formula_76. First, find the ranks (matrix ranks) of the matrices formula_77. The integer formula_78 is determined to be the "first integer" for which formula_79 has rank formula_80 ("n" being the number of rows or columns of formula_1, that is, formula_1 is "n" × "n"). Now define formula_81 The variable formula_82 designates the number of linearly independent generalized eigenvectors of rank "k" corresponding to the eigenvalue formula_75 that will appear in a canonical basis for formula_1. Note that formula_83. Computation of generalized eigenvectors. In the preceding sections we have seen techniques for obtaining the formula_3 linearly independent generalized eigenvectors of a canonical basis for the vector space formula_2 associated with an formula_23 matrix formula_1. These techniques can be combined into a procedure: Solve the characteristic equation of formula_1 for eigenvalues formula_75 and their algebraic multiplicities formula_76; For each formula_84 Determine formula_85; Determine formula_78; Determine formula_86 for formula_87; Determine each Jordan chain for formula_4; Example 3. The matrix formula_88 has an eigenvalue formula_89 of algebraic multiplicity formula_90 and an eigenvalue formula_91 of algebraic multiplicity formula_92. We also have formula_93. For formula_94 we have formula_95. formula_96 formula_97 formula_98 The first integer formula_99 for which formula_100 has rank formula_101 is formula_102. We now define formula_103 formula_104 formula_105 Consequently, there will be three linearly independent generalized eigenvectors; one each of ranks 3, 2 and 1. Since formula_94 corresponds to a single chain of three linearly independent generalized eigenvectors, we know that there is a generalized eigenvector formula_106 of rank 3 corresponding to formula_94 such that but Equations (3) and (4) represent linear systems that can be solved for formula_106. Let formula_107 Then formula_108 and formula_109 Thus, in order to satisfy the conditions (3) and (4), we must have formula_110 and formula_111. No restrictions are placed on formula_112 and formula_113. By choosing formula_114, we obtain formula_115 as a generalized eigenvector of rank 3 corresponding to formula_116. 
Note that it is possible to obtain infinitely many other generalized eigenvectors of rank 3 by choosing different values of formula_112, formula_113 and formula_117, with formula_111. Our first choice, however, is the simplest. Now using equations (1), we obtain formula_55 and formula_53 as generalized eigenvectors of rank 2 and 1, respectively, where formula_118 and formula_119 The simple eigenvalue formula_91 can be dealt with using standard techniques and has an ordinary eigenvector formula_120 A canonical basis for formula_1 is formula_121 formula_122 and formula_106 are generalized eigenvectors associated with formula_54, while formula_56 is the ordinary eigenvector associated with formula_57. This is a fairly simple example. In general, the numbers formula_86 of linearly independent generalized eigenvectors of rank formula_123 will not always be equal. That is, there may be several chains of different lengths corresponding to a particular eigenvalue. Generalized modal matrix. Let formula_1 be an "n" × "n" matrix. A generalized modal matrix formula_16 for formula_1 is an "n" × "n" matrix whose columns, considered as vectors, form a canonical basis for formula_1 and appear in formula_16 according to the following rules: Jordan normal form. Let formula_2 be an "n"-dimensional vector space; let formula_124 be a linear map in "L"("V"), the set of all linear maps from formula_2 into itself; and let formula_1 be the matrix representation of formula_124 with respect to some ordered basis. It can be shown that if the characteristic polynomial formula_125 of formula_1 factors into linear factors, so that formula_125 has the form formula_126 where formula_127 are the distinct eigenvalues of formula_1, then each formula_128 is the algebraic multiplicity of its corresponding eigenvalue formula_4 and formula_1 is similar to a matrix formula_7 in Jordan normal form, where each formula_4 appears formula_128 consecutive times on the diagonal, and the entry directly above each formula_4 (that is, on the superdiagonal) is either 0 or 1: in each block the entry above the first occurrence of each formula_4 is always 0 (except in the first block); all other entries on the superdiagonal are 1. All other entries (that is, off the diagonal and superdiagonal) are 0. (But no ordering is imposed among the eigenvalues, or among the blocks for a given eigenvalue.) The matrix formula_7 is as close as one can come to a diagonalization of formula_1. If formula_1 is diagonalizable, then all entries above the diagonal are zero. Note that some textbooks have the ones on the subdiagonal, that is, immediately below the main diagonal instead of on the superdiagonal. The eigenvalues are still on the main diagonal. Every "n" × "n" matrix formula_1 is similar to a matrix formula_7 in Jordan normal form, obtained through the similarity transformation formula_129, where formula_16 is a generalized modal matrix for formula_1. (See Note above.) Example 4. Find a matrix in Jordan normal form that is similar to formula_130 Solution: The characteristic equation of formula_1 is formula_131, hence, formula_132 is an eigenvalue of algebraic multiplicity three. 
Following the procedures of the previous sections, we find that formula_133 and formula_134 Thus, formula_135 and formula_136, which implies that a canonical basis for formula_1 will contain one linearly independent generalized eigenvector of rank 2 and two linearly independent generalized eigenvectors of rank 1, or equivalently, one chain of two vectors formula_137 and one chain of one vector formula_138. Designating formula_139, we find that formula_140 and formula_141 where formula_16 is a generalized modal matrix for formula_1, the columns of formula_16 are a canonical basis for formula_1, and formula_68. Note that since generalized eigenvectors themselves are not unique, and since some of the columns of both formula_16 and formula_7 may be interchanged, it follows that both formula_16 and formula_7 are not unique. Example 5. In Example 3, we found a canonical basis of linearly independent generalized eigenvectors for a matrix formula_1. A generalized modal matrix for formula_1 is formula_142 A matrix in Jordan normal form, similar to formula_1 is formula_143 so that formula_68. Applications. Matrix functions. Three of the most fundamental operations which can be performed on square matrices are matrix addition, multiplication by a scalar, and matrix multiplication. These are exactly those operations necessary for defining a polynomial function of an "n" × "n" matrix formula_1. If we recall from basic calculus that many functions can be written as a Maclaurin series, then we can define more general functions of matrices quite easily. If formula_1 is diagonalizable, that is formula_144 with formula_145 then formula_146 and the evaluation of the Maclaurin series for functions of formula_1 is greatly simplified. For example, to obtain any power "k" of formula_1, we need only compute formula_147, premultiply formula_147 by formula_16, and postmultiply the result by formula_148. Using generalized eigenvectors, we can obtain the Jordan normal form for formula_1 and these results can be generalized to a straightforward method for computing functions of nondiagonalizable matrices. (See Matrix function#Jordan decomposition.) Differential equations. Consider the problem of solving the system of linear ordinary differential equations where formula_149 and formula_150 If the matrix formula_1 is a diagonal matrix so that formula_151 for formula_152, then the system (5) reduces to a system of "n" equations which take the form In this case, the general solution is given by formula_153 formula_154 formula_70 formula_155 In the general case, we try to diagonalize formula_1 and reduce the system (5) to a system like (6) as follows. If formula_1 is diagonalizable, we have formula_156, where formula_16 is a modal matrix for formula_1. Substituting formula_157, equation (5) takes the form formula_158, or where The solution of (7) is formula_159 formula_160 formula_70 formula_161 The solution formula_162 of (5) is then obtained using the relation (8). On the other hand, if formula_1 is not diagonalizable, we choose formula_16 to be a generalized modal matrix for formula_1, such that formula_129 is the Jordan normal form of formula_1. The system formula_163 has the form where the formula_75 are the eigenvalues from the main diagonal of formula_7 and the formula_164 are the ones and zeros from the superdiagonal of formula_7. The system (9) is often more easily solved than (5). We may solve the last equation in (9) for formula_165, obtaining formula_166. 
We then substitute this solution for formula_165 into the next to last equation in (9) and solve for formula_167. Continuing this procedure, we work through (9) from the last equation to the first, solving the entire system for formula_168. The solution formula_162 is then obtained using the relation (8). Lemma: Given the following chain of generalized eigenvectors of length formula_169 formula_170 formula_171 formula_172 formula_70 formula_173, these functions solve the system of equations, formula_174 Proof: Define formula_175 Then, formula_176. On the other hand, we have formula_177 formula_178 formula_179 formula_180 formula_181 as required. Notes. <templatestyles src="Reflist/styles.css" />
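The computations in the examples above can be checked with a computer algebra system. The following SymPy snippet is an illustrative sketch, not part of the article; it verifies the matrices quoted in Example 4 and lets SymPy produce a Jordan decomposition of its own:

```python
from sympy import Matrix, eye

A = Matrix([[0, 4, 2], [-3, 8, 3], [4, -8, -2]])     # the matrix of Example 4

# Generalized modal matrix M and Jordan form J as quoted in the example.
M = Matrix([[2, 2, 0], [1, 3, 0], [0, -4, 1]])
J = Matrix([[2, 0, 0], [0, 2, 1], [0, 0, 2]])

print(A * M == M * J)                    # True: AM = MJ, so J = M^-1 A M
print((A - 2 * eye(3)).rank())           # 1, as found in the example
print(((A - 2 * eye(3)) ** 2).rank())    # 0 = n - mu

# SymPy can also compute a modal matrix and Jordan form directly (the result
# may differ from M and J by the ordering and scaling of the chains).
P, Jp = A.jordan_form()
print(P * Jp * P.inv() == A)             # True
```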
[ { "math_id": 0, "text": "n\\times n" }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "V" }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": "\\lambda_i" }, { "math_id": 5, "text": "(A-\\lambda_i I)" }, { "math_id": 6, "text": "x_i" }, { "math_id": 7, "text": "J" }, { "math_id": 8, "text": "\\mathbf x' = A \\mathbf x," }, { "math_id": 9, "text": "\\lambda" }, { "math_id": 10, "text": "\\mathbf u" }, { "math_id": 11, "text": "(A - \\lambda I) \\mathbf u = \\mathbf 0" }, { "math_id": 12, "text": "I" }, { "math_id": 13, "text": "\\mathbf 0" }, { "math_id": 14, "text": "(A - \\lambda I)" }, { "math_id": 15, "text": "D" }, { "math_id": 16, "text": "M" }, { "math_id": 17, "text": "D = M^{-1}AM" }, { "math_id": 18, "text": "\\mathbf x_m" }, { "math_id": 19, "text": "(A - \\lambda I)^m \\mathbf x_m = \\mathbf 0" }, { "math_id": 20, "text": "(A - \\lambda I)^{m-1} \\mathbf x_m \\ne \\mathbf 0." }, { "math_id": 21, "text": "J = M^{-1}AM" }, { "math_id": 22, "text": "\\mu" }, { "math_id": 23, "text": "n \\times n" }, { "math_id": 24, "text": "F" }, { "math_id": 25, "text": "f(x)" }, { "math_id": 26, "text": " \\lambda " }, { "math_id": 27, "text": " A = \\begin{pmatrix} 1 & 1\\\\ 0 & 1 \\end{pmatrix}. " }, { "math_id": 28, "text": " \\lambda = 1" }, { "math_id": 29, "text": "m=2" }, { "math_id": 30, "text": " V " }, { "math_id": 31, "text": " A - \\lambda I " }, { "math_id": 32, "text": "p=1" }, { "math_id": 33, "text": "m-p=1" }, { "math_id": 34, "text": " \\mathbf v_1=\\begin{pmatrix}1 \\\\0 \\end{pmatrix}" }, { "math_id": 35, "text": " \\mathbf v_2 " }, { "math_id": 36, "text": " (A-\\lambda I) \\mathbf v_2 = \\mathbf v_1. " }, { "math_id": 37, "text": " \\left(\\begin{pmatrix} 1 & 1\\\\ 0 & 1 \\end{pmatrix} - 1 \\begin{pmatrix} 1 & 0\\\\ 0 & 1 \\end{pmatrix}\\right)\\begin{pmatrix}v_{21} \\\\v_{22} \\end{pmatrix} = \\begin{pmatrix} 0 & 1\\\\ 0 & 0 \\end{pmatrix} \\begin{pmatrix}v_{21} \\\\v_{22} \\end{pmatrix} =\n\\begin{pmatrix}1 \\\\0 \\end{pmatrix}." }, { "math_id": 38, "text": " v_{22}= 1. 
" }, { "math_id": 39, "text": "v_{21}" }, { "math_id": 40, "text": " \\mathbf v_2=\\begin{pmatrix}a \\\\1 \\end{pmatrix}" }, { "math_id": 41, "text": " (A-\\lambda I) \\mathbf v_2 = \\begin{pmatrix} 0 & 1\\\\ 0 & 0 \\end{pmatrix} \\begin{pmatrix}a \\\\1 \\end{pmatrix} =\n\\begin{pmatrix}1 \\\\0 \\end{pmatrix} = \\mathbf v_1," }, { "math_id": 42, "text": " (A-\\lambda I)^2 \\mathbf v_2 = (A-\\lambda I) [(A-\\lambda I)\\mathbf v_2] =(A-\\lambda I) \\mathbf v_1 = \\begin{pmatrix} 0 & 1\\\\ 0 & 0 \\end{pmatrix} \\begin{pmatrix}1 \\\\0 \\end{pmatrix} =\n\\begin{pmatrix}0 \\\\0 \\end{pmatrix} = \\mathbf 0," }, { "math_id": 43, "text": " \\mathbf v_1 " }, { "math_id": 44, "text": " \\mathbf v_1" }, { "math_id": 45, "text": " \\mathbf v_2" }, { "math_id": 46, "text": "A = \\begin{pmatrix} \n1 & 0 & 0 & 0 & 0 \\\\\n3 & 1 & 0 & 0 & 0 \\\\\n6 & 3 & 2 & 0 & 0 \\\\\n10 & 6 & 3 & 2 & 0 \\\\\n15 & 10 & 6 & 3 & 2\n\\end{pmatrix}" }, { "math_id": 47, "text": " \\lambda_1 = 1 " }, { "math_id": 48, "text": " \\lambda_2 = 2 " }, { "math_id": 49, "text": " \\mu_1 = 2 " }, { "math_id": 50, "text": " \\mu_2 = 3 " }, { "math_id": 51, "text": " \\gamma_1 = 1 " }, { "math_id": 52, "text": " \\gamma_2 = 1" }, { "math_id": 53, "text": " \\mathbf x_1 " }, { "math_id": 54, "text": " \\lambda_1 " }, { "math_id": 55, "text": " \\mathbf x_2 " }, { "math_id": 56, "text": " \\mathbf y_1 " }, { "math_id": 57, "text": " \\lambda_2 " }, { "math_id": 58, "text": " \\mathbf y_2 " }, { "math_id": 59, "text": " \\mathbf y_3 " }, { "math_id": 60, "text": "(A-1 I) \\mathbf x_1\n = \\begin{pmatrix} \n0 & 0 & 0 & 0 & 0 \\\\\n3 & 0 & 0 & 0 & 0 \\\\\n6 & 3 & 1 & 0 & 0 \\\\\n10 & 6 & 3 & 1 & 0 \\\\\n15 & 10 & 6 & 3 & 1\n\\end{pmatrix}\\begin{pmatrix}\n0 \\\\ 3 \\\\ -9 \\\\ 9 \\\\ -3\n\\end{pmatrix} = \\begin{pmatrix}\n0 \\\\ 0 \\\\ 0 \\\\ 0 \\\\ 0\n\\end{pmatrix} = \\mathbf 0 ," }, { "math_id": 61, "text": "(A - 1 I) \\mathbf x_2\n = \\begin{pmatrix} \n0 & 0 & 0 & 0 & 0 \\\\\n3 & 0 & 0 & 0 & 0 \\\\\n6 & 3 & 1 & 0 & 0 \\\\\n10 & 6 & 3 & 1 & 0 \\\\\n15 & 10 & 6 & 3 & 1\n\\end{pmatrix} \\begin{pmatrix}\n1 \\\\ -15 \\\\ 30 \\\\ -1 \\\\ -45\n\\end{pmatrix} = \\begin{pmatrix}\n0 \\\\ 3 \\\\ -9 \\\\ 9 \\\\ -3\n\\end{pmatrix} = \\mathbf x_1 ," }, { "math_id": 62, "text": "(A - 2 I) \\mathbf y_1\n = \\begin{pmatrix} \n-1 & 0 & 0 & 0 & 0 \\\\\n3 & -1 & 0 & 0 & 0 \\\\\n6 & 3 & 0 & 0 & 0 \\\\\n10 & 6 & 3 & 0 & 0 \\\\\n15 & 10 & 6 & 3 & 0\n\\end{pmatrix} \\begin{pmatrix}\n0 \\\\ 0 \\\\ 0 \\\\ 0 \\\\ 9\n\\end{pmatrix} = \\begin{pmatrix}\n0 \\\\ 0 \\\\ 0 \\\\ 0 \\\\ 0\n\\end{pmatrix} = \\mathbf 0 ," }, { "math_id": 63, "text": "(A - 2 I) \\mathbf y_2 = \\begin{pmatrix} \n-1 & 0 & 0 & 0 & 0 \\\\\n3 & -1 & 0 & 0 & 0 \\\\\n6 & 3 & 0 & 0 & 0 \\\\\n10 & 6 & 3 & 0 & 0 \\\\\n15 & 10 & 6 & 3 & 0\n\\end{pmatrix} \\begin{pmatrix}\n0 \\\\ 0 \\\\ 0 \\\\ 3 \\\\ 0\n\\end{pmatrix} = \\begin{pmatrix}\n0 \\\\ 0 \\\\ 0 \\\\ 0 \\\\ 9\n\\end{pmatrix} = \\mathbf y_1 ," }, { "math_id": 64, "text": "(A - 2 I) \\mathbf y_3 = \\begin{pmatrix} \n-1 & 0 & 0 & 0 & 0 \\\\\n3 & -1 & 0 & 0 & 0 \\\\\n6 & 3 & 0 & 0 & 0 \\\\\n10 & 6 & 3 & 0 & 0 \\\\\n15 & 10 & 6 & 3 & 0\n\\end{pmatrix} \\begin{pmatrix}\n0 \\\\ 0 \\\\ 1 \\\\ -2 \\\\ 0\n\\end{pmatrix} = \\begin{pmatrix}\n0 \\\\ 0 \\\\ 0 \\\\ 3 \\\\ 0\n\\end{pmatrix} = \\mathbf y_2 ." 
}, { "math_id": 65, "text": "\n\\left\\{ \\mathbf x_1, \\mathbf x_2 \\right\\} =\n\\left\\{\n\\begin{pmatrix} 0 \\\\ 3 \\\\ -9 \\\\ 9 \\\\ -3 \\end{pmatrix},\n\\begin{pmatrix} 1 \\\\ -15 \\\\ 30 \\\\ -1 \\\\ -45 \\end{pmatrix} \n\\right\\},\n\\left\\{ \\mathbf y_1, \\mathbf y_2, \\mathbf y_3 \\right\\} =\n\\left\\{ \n\\begin{pmatrix} 0 \\\\ 0 \\\\ 0 \\\\ 0 \\\\ 9 \\end{pmatrix},\n\\begin{pmatrix} 0 \\\\ 0 \\\\ 0 \\\\ 3 \\\\ 0 \\end{pmatrix},\n\\begin{pmatrix} 0 \\\\ 0 \\\\ 1 \\\\ -2 \\\\ 0 \\end{pmatrix}\n\\right\\}.\n" }, { "math_id": 66, "text": "\nM =\n\\begin{pmatrix} \\mathbf x_1 & \\mathbf x_2 & \\mathbf y_1 & \\mathbf y_2 & \\mathbf y_3 \\end{pmatrix} =\n\\begin{pmatrix}\n0 & 1 & 0 &0& 0 \\\\\n3 & -15 & 0 &0& 0 \\\\\n-9 & 30 & 0 &0& 1 \\\\\n9 & -1 & 0 &3& -2 \\\\\n-3 & -45 & 9 &0& 0\n\\end{pmatrix}," }, { "math_id": 67, "text": "J = \\begin{pmatrix}\n1 & 1 & 0 & 0 & 0 \\\\\n0 & 1 & 0 & 0 & 0 \\\\\n0 & 0 & 2 & 1 & 0 \\\\\n0 & 0 & 0 & 2 & 1 \\\\\n0 & 0 & 0 & 0 & 2\n\\end{pmatrix},\n" }, { "math_id": 68, "text": "AM = MJ" }, { "math_id": 69, "text": "\\left\\{ \\mathbf x_m, \\mathbf x_{m-1}, \\dots , \\mathbf x_1 \\right\\}" }, { "math_id": 70, "text": " \\vdots " }, { "math_id": 71, "text": "\\mathbf x_1 " }, { "math_id": 72, "text": "\\mathbf x_j " }, { "math_id": 73, "text": " \\mathbf x_{m-1}, \\mathbf x_{m-2}, \\ldots , \\mathbf x_1 " }, { "math_id": 74, "text": " \\mathbf x_m " }, { "math_id": 75, "text": " \\lambda_i " }, { "math_id": 76, "text": " \\mu_i " }, { "math_id": 77, "text": " (A - \\lambda_i I), (A - \\lambda_i I)^2, \\ldots , (A - \\lambda_i I)^{m_i} " }, { "math_id": 78, "text": "m_i" }, { "math_id": 79, "text": " (A - \\lambda_i I)^{m_i} " }, { "math_id": 80, "text": "n - \\mu_i " }, { "math_id": 81, "text": " \\rho_k = \\operatorname{rank}(A - \\lambda_i I)^{k-1} - \\operatorname{rank}(A - \\lambda_i I)^k \\qquad (k = 1, 2, \\ldots , m_i)." 
}, { "math_id": 82, "text": " \\rho_k " }, { "math_id": 83, "text": " \\operatorname{rank}(A - \\lambda_i I)^0 = \\operatorname{rank}(I) = n " }, { "math_id": 84, "text": " \\lambda_i :" }, { "math_id": 85, "text": "n - \\mu_i" }, { "math_id": 86, "text": "\\rho_k" }, { "math_id": 87, "text": "(k = 1, \\ldots , m_i)" }, { "math_id": 88, "text": "\nA = \n\\begin{pmatrix}\n 5 & 1 & -2 & 4 \\\\\n 0 & 5 & 2 & 2 \\\\\n 0 & 0 & 5 & 3 \\\\\n 0 & 0 & 0 & 4\n\\end{pmatrix}\n" }, { "math_id": 89, "text": "\\lambda_1 = 5" }, { "math_id": 90, "text": "\\mu_1 = 3" }, { "math_id": 91, "text": "\\lambda_2 = 4" }, { "math_id": 92, "text": "\\mu_2 = 1" }, { "math_id": 93, "text": "n=4" }, { "math_id": 94, "text": "\\lambda_1" }, { "math_id": 95, "text": "n - \\mu_1 = 4 - 3 = 1" }, { "math_id": 96, "text": "\n(A - 5I) =\n\\begin{pmatrix}\n 0 & 1 & -2 & 4 \\\\\n 0 & 0 & 2 & 2 \\\\\n 0 & 0 & 0 & 3 \\\\\n 0 & 0 & 0 & -1\n\\end{pmatrix},\n\\qquad \\operatorname{rank}(A - 5I) = 3.\n" }, { "math_id": 97, "text": "\n(A - 5I)^2 =\n\\begin{pmatrix}\n 0 & 0 & 2 & -8 \\\\\n 0 & 0 & 0 & 4 \\\\\n 0 & 0 & 0 & -3 \\\\\n 0 & 0 & 0 & 1\n\\end{pmatrix},\n\\qquad \\operatorname{rank}(A - 5I)^2 = 2.\n" }, { "math_id": 98, "text": "\n(A - 5I)^3 =\n\\begin{pmatrix}\n0 & 0 & 0 & 14 \\\\\n 0 & 0 & 0 & -4 \\\\\n 0 & 0 & 0 & 3 \\\\\n 0 & 0 & 0 & -1\n\\end{pmatrix},\n\\qquad \\operatorname{rank}(A - 5I)^3 = 1.\n" }, { "math_id": 99, "text": "m_1" }, { "math_id": 100, "text": "(A - 5I)^{m_1}" }, { "math_id": 101, "text": "n - \\mu_1 = 1" }, { "math_id": 102, "text": "m_1 = 3" }, { "math_id": 103, "text": " \\rho_3 = \\operatorname{rank}(A - 5I)^2 - \\operatorname{rank}(A - 5I)^3 = 2 - 1 = 1 ," }, { "math_id": 104, "text": " \\rho_2 = \\operatorname{rank}(A - 5I)^1 - \\operatorname{rank}(A - 5I)^2 = 3 - 2 = 1 ," }, { "math_id": 105, "text": " \\rho_1 = \\operatorname{rank}(A - 5I)^0 - \\operatorname{rank}(A - 5I)^1 = 4 - 3 = 1 ." 
}, { "math_id": 106, "text": " \\mathbf x_3 " }, { "math_id": 107, "text": "\n\\mathbf x_3 = \n\\begin{pmatrix}\nx_{31} \\\\\nx_{32} \\\\\nx_{33} \\\\\nx_{34}\n\\end{pmatrix}.\n" }, { "math_id": 108, "text": "\n(A - 5I)^3 \\mathbf x_3 = \n\\begin{pmatrix}\n 0 & 0 & 0 & 14 \\\\\n 0 & 0 & 0 & -4 \\\\\n 0 & 0 & 0 & 3 \\\\\n 0 & 0 & 0 & -1\n\\end{pmatrix}\n\\begin{pmatrix}\nx_{31} \\\\\nx_{32} \\\\\nx_{33} \\\\\nx_{34}\n\\end{pmatrix} = \n\\begin{pmatrix}\n14 x_{34} \\\\\n-4 x_{34} \\\\\n 3 x_{34} \\\\\n- x_{34}\n\\end{pmatrix} = \n\\begin{pmatrix}\n 0 \\\\\n 0 \\\\\n 0 \\\\\n 0\n\\end{pmatrix}\n" }, { "math_id": 109, "text": "\n(A - 5I)^2 \\mathbf x_3 = \n\\begin{pmatrix}\n 0 & 0 & 2 & -8 \\\\\n 0 & 0 & 0 & 4 \\\\\n 0 & 0 & 0 & -3 \\\\\n 0 & 0 & 0 & 1\n\\end{pmatrix}\n\\begin{pmatrix}\nx_{31} \\\\\nx_{32} \\\\\nx_{33} \\\\\nx_{34}\n\\end{pmatrix} = \n\\begin{pmatrix}\n 2 x_{33} - 8 x_{34} \\\\\n 4 x_{34} \\\\\n-3 x_{34} \\\\\n x_{34}\n\\end{pmatrix} \\ne \n\\begin{pmatrix}\n 0 \\\\\n 0 \\\\\n 0 \\\\\n 0\n\\end{pmatrix}.\n" }, { "math_id": 110, "text": "x_{34} = 0" }, { "math_id": 111, "text": "x_{33} \\ne 0" }, { "math_id": 112, "text": "x_{31}" }, { "math_id": 113, "text": "x_{32}" }, { "math_id": 114, "text": "x_{31} = x_{32} = x_{34} = 0, x_{33} = 1" }, { "math_id": 115, "text": "\n\\mathbf x_3 = \n\\begin{pmatrix}\n0 \\\\\n 0 \\\\\n 1 \\\\\n 0\n\\end{pmatrix}\n" }, { "math_id": 116, "text": " \\lambda_1 = 5 " }, { "math_id": 117, "text": "x_{33}" }, { "math_id": 118, "text": "\n\\mathbf x_2 = (A - 5I) \\mathbf x_3 = \n\\begin{pmatrix}\n-2 \\\\\n 2 \\\\\n 0 \\\\\n 0\n\\end{pmatrix},\n" }, { "math_id": 119, "text": "\n\\mathbf x_1 = (A - 5I) \\mathbf x_2 = \n\\begin{pmatrix}\n 2 \\\\\n 0 \\\\\n 0 \\\\\n 0\n\\end{pmatrix}.\n" }, { "math_id": 120, "text": "\n\\mathbf y_1 = \n\\begin{pmatrix}\n-14 \\\\\n 4 \\\\\n -3 \\\\\n 1\n\\end{pmatrix}.\n" }, { "math_id": 121, "text": "\n\\left\\{ \\mathbf x_3, \\mathbf x_2, \\mathbf x_1, \\mathbf y_1 \\right\\} =\n\\left\\{\n\\begin{pmatrix} 0 \\\\ 0 \\\\ 1 \\\\ 0 \\end{pmatrix}\n\\begin{pmatrix} -2 \\\\ 2 \\\\ 0 \\\\ 0 \\end{pmatrix}\n\\begin{pmatrix} 2 \\\\ 0 \\\\ 0 \\\\ 0 \\end{pmatrix}\n\\begin{pmatrix} -14 \\\\ 4 \\\\ -3 \\\\ 1 \\end{pmatrix}\n\\right\\}.\n" }, { "math_id": 122, "text": " \\mathbf x_1, \\mathbf x_2 " }, { "math_id": 123, "text": "k" }, { "math_id": 124, "text": "\\phi" }, { "math_id": 125, "text": "f(\\lambda)" }, { "math_id": 126, "text": " f(\\lambda) = \\pm (\\lambda - \\lambda_1)^{\\mu_1}(\\lambda - \\lambda_2)^{\\mu_2} \\cdots (\\lambda - \\lambda_r)^{\\mu_r} ," }, { "math_id": 127, "text": " \\lambda_1, \\lambda_2, \\ldots , \\lambda_r " }, { "math_id": 128, "text": "\\mu_i" }, { "math_id": 129, "text": " J = M^{-1}AM " }, { "math_id": 130, "text": "\nA = \n\\begin{pmatrix}\n 0 & 4 & 2 \\\\\n-3 & 8 & 3 \\\\\n 4 & -8 & -2\n\\end{pmatrix}.\n" }, { "math_id": 131, "text": "(\\lambda - 2)^3 = 0" }, { "math_id": 132, "text": "\\lambda = 2" }, { "math_id": 133, "text": " \\operatorname{rank}(A - 2I) = 1" }, { "math_id": 134, "text": "\\operatorname{rank}(A - 2I)^2 = 0 = n - \\mu ." 
}, { "math_id": 135, "text": "\\rho_2 = 1" }, { "math_id": 136, "text": "\\rho_1 = 2" }, { "math_id": 137, "text": " \\left\\{ \\mathbf x_2, \\mathbf x_1 \\right\\} " }, { "math_id": 138, "text": " \\left\\{ \\mathbf y_1 \\right\\} " }, { "math_id": 139, "text": " M = \\begin{pmatrix} \\mathbf y_1 & \\mathbf x_1 & \\mathbf x_2 \\end{pmatrix} " }, { "math_id": 140, "text": "\nM = \n\\begin{pmatrix}\n 2 & 2 & 0 \\\\\n 1 & 3 & 0 \\\\\n 0 & -4 & 1\n\\end{pmatrix},\n" }, { "math_id": 141, "text": "\nJ = \n\\begin{pmatrix}\n 2 & 0 & 0 \\\\\n 0 & 2 & 1 \\\\\n 0 & 0 & 2\n\\end{pmatrix},\n" }, { "math_id": 142, "text": "\nM =\n\\begin{pmatrix} \\mathbf y_1 & \\mathbf x_1 & \\mathbf x_2 & \\mathbf x_3 \\end{pmatrix} =\n\\begin{pmatrix}\n-14 & 2 & -2 & 0 \\\\\n 4 & 0 & 2 & 0 \\\\\n -3 & 0 & 0 & 1 \\\\\n 1 & 0 & 0 & 0\n\\end{pmatrix}." }, { "math_id": 143, "text": "J = \\begin{pmatrix}\n 4 & 0 & 0 & 0 \\\\\n 0 & 5 & 1 & 0 \\\\\n 0 & 0 & 5 & 1 \\\\\n 0 & 0 & 0 & 5\n\\end{pmatrix},\n" }, { "math_id": 144, "text": " D = M^{-1}AM ," }, { "math_id": 145, "text": "\nD = \n\\begin{pmatrix}\n \\lambda_1 & 0 & \\cdots & 0 \\\\\n 0 & \\lambda_2 & \\cdots & 0 \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\n 0 & 0 & \\cdots & \\lambda_n\n\\end{pmatrix},\n" }, { "math_id": 146, "text": "\nD^k = \n\\begin{pmatrix}\n \\lambda_1^k & 0 & \\cdots & 0 \\\\\n 0 & \\lambda_2^k & \\cdots & 0 \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n 0 & 0 & \\cdots & \\lambda_n^k\n\\end{pmatrix}\n" }, { "math_id": 147, "text": "D^k" }, { "math_id": 148, "text": "M^{-1}" }, { "math_id": 149, "text": "\n\\mathbf x = \n\\begin{pmatrix}\nx_1(t) \\\\\nx_2(t) \\\\\n\\vdots \\\\\nx_n(t)\n\\end{pmatrix}, \\quad\n\\mathbf x' = \n\\begin{pmatrix}\nx_1'(t) \\\\\nx_2'(t) \\\\\n\\vdots \\\\\nx_n'(t)\n\\end{pmatrix},\n" }, { "math_id": 150, "text": " A = (a_{ij}) ." }, { "math_id": 151, "text": " a_{ij} = 0 " }, { "math_id": 152, "text": "i \\ne j" }, { "math_id": 153, "text": " x_1 = k_1 e^{a_{11}t} " }, { "math_id": 154, "text": " x_2 = k_2 e^{a_{22}t} " }, { "math_id": 155, "text": " x_n = k_n e^{a_{nn}t} ." }, { "math_id": 156, "text": " D = M^{-1}AM " }, { "math_id": 157, "text": " A = MDM^{-1} " }, { "math_id": 158, "text": " M^{-1} \\mathbf x' = D(M^{-1} \\mathbf x) " }, { "math_id": 159, "text": " y_1 = k_1 e^{\\lambda_1 t} " }, { "math_id": 160, "text": " y_2 = k_2 e^{\\lambda_2 t} " }, { "math_id": 161, "text": " y_n = k_n e^{\\lambda_n t} ." }, { "math_id": 162, "text": " \\mathbf x " }, { "math_id": 163, "text": " \\mathbf y' = J \\mathbf y " }, { "math_id": 164, "text": " \\epsilon_i " }, { "math_id": 165, "text": "y_n" }, { "math_id": 166, "text": "y_n = k_n e^{\\lambda_n t} " }, { "math_id": 167, "text": "y_{n-1}" }, { "math_id": 168, "text": " \\mathbf y " }, { "math_id": 169, "text": "r," }, { "math_id": 170, "text": " X_1 = v_1e^{\\lambda t}" }, { "math_id": 171, "text": " X_2 = (tv_1+v_2)e^{\\lambda t}" }, { "math_id": 172, "text": " X_3 = \\left(\\frac{t^2}{2}v_1+tv_2+v_3\\right)e^{\\lambda t}" }, { "math_id": 173, "text": " X_r = \\left(\\frac{t^{r-1}}{(r-1)!}v_1+...+\\frac{t^2}{2}v_{r-2}+tv_{r-1}+v_r\\right)e^{\\lambda t}" }, { "math_id": 174, "text": " X' = AX." }, { "math_id": 175, "text": "X_j(t)=e^{\\lambda t}\\sum_{i = 1}^j\\frac{t^{j-i}}{(j-i)!} v_i." 
}, { "math_id": 176, "text": "X'_j(t)=e^{\\lambda t}\\sum_{i = 1}^j\\frac{t^{j-i-1}}{(j-i-1)!}v_i+e^{\\lambda t}\\lambda\\sum_{i = 1}^j\\frac{t^{j-i}}{(j-i)!}v_i" }, { "math_id": 177, "text": "AX_j(t)=e^{\\lambda t}\\sum_{i = 1}^j\\frac{t^{j-i}}{(j-i)!}Av_i" }, { "math_id": 178, "text": "=e^{\\lambda t}\\sum_{i = 1}^j\\frac{t^{j-i}}{(j-i)!}(v_{i-1}+\\lambda v_i)" }, { "math_id": 179, "text": "=e^{\\lambda t}\\sum_{i = 1}^j\\frac{t^{j-i}}{(j-i)!}v_{i-1}+e^{\\lambda t}\\lambda\\sum_{i = 1}^j\\frac{t^{j-i}}{(j-i)!}v_i" }, { "math_id": 180, "text": "=e^{\\lambda t}\\sum_{i = 1}^j\\frac{t^{j-i-1}}{(j-i-1)!}v_{i}+e^{\\lambda t}\\lambda\\sum_{i = 1}^j\\frac{t^{j-i}}{(j-i)!}v_i" }, { "math_id": 181, "text": "=X'_j(t)" } ]
https://en.wikipedia.org/wiki?curid=1137612
11376126
Principal axis theorem
Principal axes of an ellipsoid or hyperboloid are perpendicular In geometry and linear algebra, a principal axis is a certain line in a Euclidean space associated with an ellipsoid or hyperboloid, generalizing the major and minor axes of an ellipse or hyperbola. The principal axis theorem states that the principal axes are perpendicular, and gives a constructive procedure for finding them. Mathematically, the principal axis theorem is a generalization of the method of completing the square from elementary algebra. In linear algebra and functional analysis, the principal axis theorem is a geometrical counterpart of the spectral theorem. It has applications to the statistics of principal components analysis and the singular value decomposition. In physics, the theorem is fundamental to the studies of angular momentum and birefringence. Motivation. The equations in the Cartesian plane R2: formula_0 define, respectively, an ellipse and a hyperbola. In each case, the "x" and "y" axes are the principal axes. This is easily seen, given that there are no "cross-terms" involving products "xy" in either expression. However, the situation is more complicated for equations like formula_1 Here some method is required to determine whether this is an ellipse or a hyperbola. The basic observation is that if, by completing the square, the quadratic expression can be reduced to a sum of two squares then the equation defines an ellipse, whereas if it reduces to a difference of two squares then the equation represents a hyperbola: formula_2 Thus, in our example expression, the problem is how to absorb the coefficient of the cross-term 8"xy" into the functions "u" and "v". Formally, this problem is similar to the problem of matrix diagonalization, where one tries to find a suitable coordinate system in which the matrix of a linear transformation is diagonal. The first step is to find a matrix in which the technique of diagonalization can be applied. The trick is to write the quadratic form as formula_3 where the cross-term has been split into two equal parts. The matrix "A" in the above decomposition is a symmetric matrix. In particular, by the spectral theorem, it has real eigenvalues and is diagonalizable by an orthogonal matrix ("orthogonally diagonalizable"). To orthogonally diagonalize "A", one must first find its eigenvalues, and then find an orthonormal eigenbasis. Calculation reveals that the eigenvalues of "A" are formula_4 with corresponding eigenvectors formula_5 Dividing these by their respective lengths yields an orthonormal eigenbasis: formula_6 Now the matrix "S" = [u1 u2] is an orthogonal matrix, since it has orthonormal columns, and "A" is diagonalized by: formula_7 This applies to the present problem of "diagonalizing" the quadratic form through the observation that formula_8 Thus, the equation formula_9 is that of an ellipse, since the left side can be written as the sum of two squares. It is tempting to simplify this expression by pulling out factors of 2. However, it is important "not" to do this. The quantities formula_10 have a geometrical meaning. They determine an "orthonormal coordinate system" on R2. In other words, they are obtained from the original coordinates by the application of a rotation (and possibly a reflection). Consequently, one may use the "c"1 and "c"2 coordinates to make statements about "length and angles" (particularly length), which would otherwise be more difficult in a different choice of coordinates (by rescaling them, for instance). 
For example, the maximum distance from the origin on the ellipse "c"1² + 9"c"2² = 1 occurs when "c"2 = 0, so at the points "c"1 = ±1. Similarly, the minimum distance is where "c"2 = ±1/3. It is possible now to read off the major and minor axes of this ellipse. These are precisely the individual eigenspaces of the matrix "A", since these are where "c"2 = 0 or "c"1 = 0. Symbolically, the principal axes are formula_11 To summarize: the principal axes are the eigenspaces "E"1 (the major axis, with semi-axis length 1) and "E"2 (the minor axis, with semi-axis length 1/3). Using this information, it is possible to attain a clear geometrical picture of the ellipse: to graph it, for instance. Formal statement. The principal axis theorem concerns quadratic forms in R"n", which are homogeneous polynomials of degree 2. Any quadratic form may be represented as formula_12 where "A" is a symmetric matrix. The first part of the theorem is contained in the following statements guaranteed by the spectral theorem: the eigenvalues of "A" are all real, and eigenspaces corresponding to distinct eigenvalues are mutually orthogonal. In particular, "A" is "orthogonally diagonalizable", since one may take a basis of each eigenspace and apply the Gram-Schmidt process separately within the eigenspace to obtain an orthonormal eigenbasis. For the second part, suppose that the eigenvalues of "A" are λ1, ..., λ"n" (possibly repeated according to their algebraic multiplicities) and the corresponding orthonormal eigenbasis is u1, ..., u"n". Then, formula_13 and formula_14 where "c""i" is the "i"-th entry of c. Furthermore, the "i"-th principal axis is the line determined by equating "c""j" = 0 for all formula_15. The "i"-th principal axis is the span of the vector u"i".
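The diagonalization in the motivating example can be reproduced numerically. The following NumPy snippet is an illustrative sketch, not part of the article; the test point is arbitrary:

```python
import numpy as np

A = np.array([[5.0, 4.0],
              [4.0, 5.0]])

# eigh is the symmetric eigensolver: eigenvalues in ascending order,
# orthonormal eigenvectors as the columns of S.
eigvals, S = np.linalg.eigh(A)
print(eigvals)                       # [1. 9.]
print(np.round(S.T @ A @ S, 12))     # diag(1, 9): S orthogonally diagonalizes A

# In the rotated coordinates c = S^T x the quadratic form becomes 1*c1^2 + 9*c2^2.
x = np.array([0.3, -0.2])
c = S.T @ x
print(x @ A @ x, eigvals @ c**2)     # both equal 0.17 up to rounding
```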
[ { "math_id": 0, "text": "\\begin{align}\n \\frac{x^2}{9} + \\frac{y^2}{25} &= 1 \\\\[3pt]\n \\frac{x^2}{9} - \\frac{y^2}{25} &= 1\n\\end{align}" }, { "math_id": 1, "text": "5x^2 + 8xy + 5y^2 = 1." }, { "math_id": 2, "text": "\\begin{align}\n u(x, y)^2 + v(x, y)^2 &= 1\\qquad \\text{(ellipse)} \\\\\n u(x, y)^2 - v(x, y)^2 &= 1\\qquad \\text{(hyperbola)}.\n\\end{align}" }, { "math_id": 3, "text": "5x^2 + 8xy + 5y^2 =\n \\begin{bmatrix}\n x & y\n \\end{bmatrix}\n \\begin{bmatrix}\n 5 & 4 \\\\\n 4 & 5\n \\end{bmatrix}\n \\begin{bmatrix}\n x \\\\ y\n \\end{bmatrix} =\n \\mathbf{x}^\\textsf{T} A\\mathbf{x} \n" }, { "math_id": 4, "text": "\\lambda_1 = 1,\\quad \\lambda_2 = 9" }, { "math_id": 5, "text": "\n \\mathbf{v}_1 = \\begin{bmatrix} 1 \\\\ -1 \\end{bmatrix},\\quad\n \\mathbf{v}_2 = \\begin{bmatrix} 1 \\\\ 1 \\end{bmatrix}.\n" }, { "math_id": 6, "text": "\n \\mathbf{u}_1 = \\begin{bmatrix} 1/\\sqrt{2} \\\\ -1/\\sqrt{2} \\end{bmatrix},\\quad\n \\mathbf{u}_2 = \\begin{bmatrix} 1/\\sqrt{2} \\\\ 1/\\sqrt{2} \\end{bmatrix}.\n" }, { "math_id": 7, "text": "A = SDS^{-1} = SDS^\\textsf{T} =\n \\begin{bmatrix}\n 1/\\sqrt{2} & 1/\\sqrt{2}\\\\\n -1/\\sqrt{2} & 1/\\sqrt{2}\n \\end{bmatrix}\n \\begin{bmatrix}\n 1 & 0 \\\\\n 0 & 9\n \\end{bmatrix}\n \\begin{bmatrix}\n 1/\\sqrt{2} & -1/\\sqrt{2} \\\\\n 1/\\sqrt{2} & 1/\\sqrt{2}\n \\end{bmatrix}.\n" }, { "math_id": 8, "text": "\n 5x^2 + 8xy + 5y^2 =\n \\mathbf{x}^\\textsf{T} A\\mathbf{x} =\n \\mathbf{x}^\\textsf{T}\\left(SDS^\\textsf{T}\\right)\\mathbf{x} =\n \\left(S^\\textsf{T} \\mathbf{x}\\right)^\\textsf{T} D\\left(S^\\textsf{T} \\mathbf{x}\\right) =\n 1\\left(\\frac{x - y}{\\sqrt{2}}\\right)^2 + 9\\left(\\frac{x + y}{\\sqrt{2}}\\right)^2.\n" }, { "math_id": 9, "text": "5x^2 + 8xy + 5y^2 = 1" }, { "math_id": 10, "text": "c_1 = \\frac{x - y}{\\sqrt{2}},\\quad c_2 = \\frac{x + y}{\\sqrt{2}}" }, { "math_id": 11, "text": "\n E_1 = \\text{span}\\left(\\begin{bmatrix} 1/\\sqrt{2} \\\\ -1/\\sqrt{2} \\end{bmatrix}\\right),\\quad\n E_2 = \\text{span}\\left(\\begin{bmatrix} 1/\\sqrt{2} \\\\ 1/\\sqrt{2} \\end{bmatrix}\\right).\n" }, { "math_id": 12, "text": "Q(\\mathbf{x}) = \\mathbf{x}^\\textsf{T} A\\mathbf{x}" }, { "math_id": 13, "text": " \\mathbf{c} = [\\mathbf{u}_1, \\ldots,\\mathbf{u}_n]^\\textsf{T} \\mathbf{x}," }, { "math_id": 14, "text": "Q(\\mathbf{x}) = \\lambda_1 c_1^2 + \\lambda_2 c_2^2 + \\dots + \\lambda_n c_n^2," }, { "math_id": 15, "text": "j = 1,\\ldots, i-1, i+1,\\ldots, n" } ]
https://en.wikipedia.org/wiki?curid=11376126
1137672
Canonical ensemble
Ensemble of states at a constant temperature In statistical mechanics, a canonical ensemble is the statistical ensemble that represents the possible states of a mechanical system in thermal equilibrium with a heat bath at a fixed temperature. The system can exchange energy with the heat bath, so that the states of the system will differ in total energy. The principal thermodynamic variable of the canonical ensemble, determining the probability distribution of states, is the absolute temperature (symbol: "T"). The ensemble typically also depends on mechanical variables such as the number of particles in the system (symbol: "N") and the system's volume (symbol: "V"), each of which influence the nature of the system's internal states. An ensemble with these three parameters, which are assumed constant for the ensemble to be considered canonical, is sometimes called the "NVT" ensemble. The canonical ensemble assigns a probability "P" to each distinct microstate given by the following exponential: formula_0 where "E" is the total energy of the microstate, and "k" is the Boltzmann constant. The number "F" is the free energy (specifically, the Helmholtz free energy) and is assumed to be a constant for a specific ensemble to be considered canonical. However, the probabilities and "F" will vary if different "N", "V", "T" are selected. The free energy "F" serves two roles: first, it provides a normalization factor for the probability distribution (the probabilities, over the complete set of microstates, must add up to one); second, many important ensemble averages can be directly calculated from the function "F"("N", "V", "T"). An alternative but equivalent formulation for the same concept writes the probability as formula_1 using the canonical partition function formula_2 rather than the free energy. The equations below (in terms of free energy) may be restated in terms of the canonical partition function by simple mathematical manipulations. Historically, the canonical ensemble was first described by Boltzmann (who called it a "holode") in 1884 in a relatively unknown paper. It was later reformulated and extensively investigated by Gibbs in 1902. Applicability of canonical ensemble. The canonical ensemble is the ensemble that describes the possible states of a system that is in thermal equilibrium with a heat bath (the derivation of this fact can be found in Gibbs). The canonical ensemble applies to systems of any size; while it is necessary to assume that the heat bath is very large (i.e., take a macroscopic limit), the system itself may be small or large. The condition that the system is mechanically isolated is necessary in order to ensure it does not exchange energy with any external object besides the heat bath. In general, it is desirable to apply the canonical ensemble to systems that are in direct contact with the heat bath, since it is that contact that ensures the equilibrium. In practical situations, the use of the canonical ensemble is usually justified either 1) by assuming that the contact is mechanically weak, or 2) by incorporating a suitable part of the heat bath connection into the system under analysis, so that the connection's mechanical influence on the system is modeled within the system. When the total energy is fixed but the internal state of the system is otherwise unknown, the appropriate description is not the canonical ensemble but the microcanonical ensemble. 
For systems where the particle number is variable (due to contact with a particle reservoir), the correct description is the grand canonical ensemble. In statistical physics textbooks for interacting particle systems the three ensembles are assumed to be thermodynamically equivalent: the fluctuations of macroscopic quantities around their average value become small and, as the number of particles tends to infinity, they tend to vanish. In the latter limit, called the thermodynamic limit, the average constraints effectively become hard constraints. The assumption of ensemble equivalence dates back to Gibbs and has been verified for some models of physical systems with short-range interactions and subject to a small number of macroscopic constraints. Despite the fact that many textbooks still convey the message that ensemble equivalence holds for all physical systems, over the last decades various examples of physical systems have been found for which breaking of ensemble equivalence occurs. Example ensembles. ""We may imagine a great number of systems of the same nature, but differing in the configurations and velocities which they have at a given instant, and differing in not merely infinitesimally, but it may be so as to embrace every conceivable combination of configuration and velocities..." J. W. Gibbs" (1903) Boltzmann distribution (separable system). If a system described by a canonical ensemble can be separated into independent parts (this happens if the different parts do not interact), and each of those parts has a fixed material composition, then each part can be seen as a system unto itself and is described by a canonical ensemble having the same temperature as the whole. Moreover, if the system is made up of multiple "similar" parts, then each part has exactly the same distribution as the other parts. In this way, the canonical ensemble provides exactly the Boltzmann distribution (also known as Maxwell–Boltzmann statistics) for systems of "any number" of particles. In comparison, the justification of the Boltzmann distribution from the microcanonical ensemble only applies for systems with a large number of parts (that is, in the thermodynamic limit). The Boltzmann distribution itself is one of the most important tools in applying statistical mechanics to real systems, as it massively simplifies the study of systems that can be separated into independent parts (e.g., particles in a gas, electromagnetic modes in a cavity, molecular bonds in a polymer). Ising model (strongly interacting system). In a system composed of pieces that interact with each other, it is usually not possible to find a way to separate the system into independent subsystems as done in the Boltzmann distribution. In these systems it is necessary to resort to using the full expression of the canonical ensemble in order to describe the thermodynamics of the system when it is thermostatted to a heat bath. The canonical ensemble is generally the most straightforward framework for studies of statistical mechanics and even allows one to obtain exact solutions in some interacting model systems. A classic example of this is the Ising model, which is a widely discussed toy model for the phenomena of ferromagnetism and of self-assembled monolayer formation, and is one of the simplest models that shows a phase transition. Lars Onsager famously calculated exactly the free energy of an infinite-sized square-lattice Ising model at zero magnetic field, in the canonical ensemble. Precise expressions for the ensemble. 
The precise mathematical expression for a statistical ensemble depends on the kind of mechanics under consideration—quantum or classical—since the notion of a "microstate" is considerably different in these two cases. In quantum mechanics, the canonical ensemble affords a simple description since diagonalization provides a discrete set of microstates with specific energies. The classical mechanical case is more complex as it involves instead an integral over canonical phase space, and the size of microstates in phase space can be chosen somewhat arbitrarily. Quantum mechanical. A statistical ensemble in quantum mechanics is represented by a density matrix, denoted by formula_9. In basis-free notation, the canonical ensemble is the density matrix formula_10 where "Ĥ" is the system's total energy operator (Hamiltonian), and exp() is the matrix exponential operator. The free energy "F" is determined by the probability normalization condition that the density matrix has a trace of one, formula_11: formula_12 The canonical ensemble can alternatively be written in a simple form using bra–ket notation, if the system's energy eigenstates and energy eigenvalues are known. Given a complete basis of energy eigenstates |"ψ""i"⟩, indexed by "i", the canonical ensemble is: formula_13 formula_14 where the "E""i" are the energy eigenvalues determined by "Ĥ"|"ψ""i"⟩ = "E""i"|"ψ""i"⟩. In other words, a set of microstates in quantum mechanics is given by a complete set of stationary states. The density matrix is diagonal in this basis, with the diagonal entries each directly giving a probability. Classical mechanical. In classical mechanics, a statistical ensemble is instead represented by a joint probability density function in the system's phase space, "ρ"("p"1, … "p""n", "q"1, … "q""n"), where the "p"1, … "p""n" and "q"1, … "q""n" are the canonical coordinates (generalized momenta and generalized coordinates) of the system's internal degrees of freedom. In a system of particles, the number of degrees of freedom "n" depends on the number of particles "N" in a way that depends on the physical situation. For a three-dimensional gas of monoatoms (not molecules), "n" = 3"N". In diatomic gases there will also be rotational and vibrational degrees of freedom. The probability density function for the canonical ensemble is: formula_15 where "E" is the total energy of the system (a function of the phase-space coordinates), "h" is a constant with units of energy × time that sets the extent of one microstate, and "C" is an overcounting correction factor used for systems of identical particles. Again, the value of "F" is determined by demanding that "ρ" is a normalized probability density function: formula_16 This integral is taken over the entire phase space. In other words, a microstate in classical mechanics is a phase space region, and this region has volume "h"ⁿ"C". This means that each microstate spans a range of energy; however, this range can be made arbitrarily narrow by choosing "h" to be very small. The phase space integral can be converted into a summation over microstates, once phase space has been finely divided to a sufficient degree. Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
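The defining probability formula can be made concrete with a small discrete example. The following Python snippet is an illustrative sketch, not part of the article; the three energy levels and the temperature are arbitrary, and units with "k" = 1 are assumed:

```python
import numpy as np

k, T = 1.0, 2.0                         # units with Boltzmann's constant k = 1
E = np.array([0.0, 1.0, 3.0])           # energies of three microstates (arbitrary values)

Z = np.sum(np.exp(-E / (k * T)))        # canonical partition function, sum of exp(-E/kT)
F = -k * T * np.log(Z)                  # Helmholtz free energy, F = -kT ln Z

P = np.exp((F - E) / (k * T))           # canonical probability of each microstate
print(P, P.sum())                       # the probabilities sum to 1, as F guarantees

E_avg = np.sum(P * E)                   # ensemble average energy <E>
S = -k * np.sum(P * np.log(P))          # Gibbs entropy, S = -k <log P>
print(np.isclose(E_avg, F + T * S))     # True: consistent with <E> = F + TS
```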
[ { "math_id": 0, "text": "P = e^{(F - E)/(k T)}," }, { "math_id": 1, "text": "\\textstyle P = \\frac{1}{Z} e^{-E/(k T)}," }, { "math_id": 2, "text": "\\textstyle Z = e^{-F/(k T)}" }, { "math_id": 3, "text": " \\langle p \\rangle = -\\frac{\\partial F} {\\partial V}, " }, { "math_id": 4, "text": " S = -k \\langle \\log P \\rangle = - \\frac{\\partial F} {\\partial T}, " }, { "math_id": 5, "text": " \\langle E \\rangle = F + ST." }, { "math_id": 6, "text": " dF = - S \\, dT - \\langle p\\rangle \\, dV ." }, { "math_id": 7, "text": " d\\langle E \\rangle = T \\, dS - \\langle p\\rangle \\, dV ." }, { "math_id": 8, "text": " \\langle E^2 \\rangle - \\langle E \\rangle^2 = k T^2 \\frac{\\partial \\langle E \\rangle}{\\partial T}." }, { "math_id": 9, "text": "\\hat \\rho" }, { "math_id": 10, "text": "\\hat \\rho = \\exp\\left(\\tfrac{1}{kT}(F - \\hat H)\\right)," }, { "math_id": 11, "text": "\\operatorname{Tr} \\hat \\rho=1" }, { "math_id": 12, "text": "e^{-\\frac{F}{k T}} = \\operatorname{Tr} \\exp\\left(-\\tfrac{1}{kT} \\hat H\\right)." }, { "math_id": 13, "text": "\\hat \\rho = \\sum_i e^{\\frac{F - E_i}{k T}} |\\psi_i\\rangle \\langle \\psi_i | " }, { "math_id": 14, "text": "e^{-\\frac{F}{k T}} = \\sum_i e^{\\frac{- E_i}{k T}}." }, { "math_id": 15, "text": "\\rho = \\frac{1}{h^n C} e^{\\frac{F - E}{k T}}," }, { "math_id": 16, "text": "e^{-\\frac{F}{k T}} = \\int \\ldots \\int \\frac{1}{h^n C} e^{\\frac{- E}{k T}} \\, dp_1 \\ldots dq_n " } ]
https://en.wikipedia.org/wiki?curid=1137672
1137736
Principle of sufficient reason
Everything has a cause; an axiom The principle of sufficient reason states that everything must have a reason or a cause. The principle was articulated and made prominent by Gottfried Wilhelm Leibniz, with many antecedents, and was further used and developed by Arthur Schopenhauer and William Hamilton. History. The modern formulation of the principle is usually ascribed to the early Enlightenment philosopher Gottfried Leibniz. Leibniz formulated it, but he was not its originator. The idea was conceived of and utilized by various philosophers who preceded him, including Anaximander, Parmenides, Archimedes, Plato and Aristotle, Cicero, Avicenna, Thomas Aquinas, and Spinoza. One antecedent often pointed to is in Anselm of Canterbury: his phrase "quia Deus nihil sine ratione facit" (because God does nothing without reason) and the formulation of the ontological argument for the existence of God. A clearer connection is with the cosmological argument for the existence of God. The principle can be seen in both Thomas Aquinas and William of Ockham. Notably, the post-Kantian philosopher Arthur Schopenhauer elaborated the principle, and used it as the foundation of his system. Some philosophers have associated the principle of sufficient reason with the maxim "ex nihilo nihil fit" (nothing comes from nothing). William Hamilton identified the laws of inference "modus ponens" with the "Law of Sufficient Reason, or of Reason and Consequent" and modus tollens with its contrapositive expression. Formulation. The principle has a variety of expressions, all of which are perhaps best summarized by the following: formula_0 A sufficient explanation may be understood either in terms of "reasons" or "causes," for like many philosophers of the period, Leibniz did not carefully distinguish between the two. The resulting principle is very different, however, depending on which interpretation is given (see Payne's summary of Schopenhauer's "Fourfold Root"). It is an open question whether the principle of sufficient reason can be applied to axioms within a logic construction like a mathematical or a physical theory, because axioms are propositions accepted as having no justification possible within the system. The principle declares that all propositions considered to be true within a system should be deducible from the set of axioms at the base of the construction (i.e., that they ensue necessarily if we assume the system's axioms to be true). However, Gödel has shown that for every sufficiently expressive deductive system a proposition exists that can neither be proved nor disproved (see Gödel's incompleteness theorems). Different Views. Leibniz's view. Leibniz identified two kinds of truth, necessary and contingent truths. And he claimed that all truths are based upon two principles: (1) non-contradiction, and (2) sufficient reason. In the "Monadology", he says, Our reasonings are grounded upon two great principles, that of contradiction, in virtue of which we judge false that which involves a contradiction, and true that which is opposed or contradictory to the false; And that of sufficient reason, in virtue of which we hold that there can be no fact real or existing, no statement true, unless there be a sufficient reason, why it should be so and not otherwise, although these reasons usually cannot be known by us (paragraphs 31 and 32). 
Necessary truths can be derived from the law of identity (and the principle of non-contradiction): "Necessary truths are those that can be demonstrated through an analysis of terms, so that in the end they become identities, just as in Algebra an equation expressing an identity ultimately results from the substitution of values [for variables]. That is, necessary truths depend upon the principle of contradiction." The sufficient reason for a necessary truth is that its negation is a contradiction. Leibniz admitted contingent truths, that is, facts in the world that are not necessarily true, but that are nonetheless true. Even these contingent truths, according to Leibniz, can only exist on the basis of sufficient reasons. Since the sufficient reasons for contingent truths are largely unknown to humans, Leibniz made appeal to infinitary sufficient reasons, to which God uniquely has access: In contingent truths, even though the predicate is in the subject, this can never be demonstrated, nor can a proposition ever be reduced to an equality or to an identity, but the resolution proceeds to infinity, God alone seeing, not the end of the resolution, of course, which does not exist, but the connection of the terms or the containment of the predicate in the subject, since he sees whatever is in the series. Without this qualification, the principle can be seen as a description of a certain notion of closed system, in which there is no 'outside' to provide unexplained events with causes. It is also in tension with the paradox of Buridan's ass, because although the facts supposed in the paradox would present a counterexample to the claim that all contingent truths are determined by sufficient reasons, the key premise of the paradox must be rejected when one considers Leibniz's typical infinitary conception of the world. In consequence of this, the case also of Buridan's ass between two meadows, impelled equally towards both of them, is a fiction that cannot occur in the universe...For the universe cannot be halved by a plane drawn through the middle of the ass, which is cut vertically through its length, so that all is equal and alike on both sides...Neither the parts of the universe nor the viscera of the animal are alike nor are they evenly placed on both sides of this vertical plane. There will therefore always be many things in the ass and outside the ass, although they be not apparent to us, which will determine him to go on one side rather than the other. And although man is free, and the ass is not, nevertheless for the same reason it must be true that in man likewise the case of a perfect equipoise between two courses is impossible. ("Theodicy", pg. 150) Leibniz also used the principle of sufficient reason to refute the idea of absolute space: I say then, that if space is an absolute being, there would be something for which it would be impossible there should be a sufficient reason. Which is against my axiom. And I prove it thus. Space is something absolutely uniform; and without the things placed in it, one point in space does not absolutely differ in any respect whatsoever from another point of space. 
Now from hence it follows, (supposing space to be something in itself, beside the order of bodies among themselves,) that 'tis impossible that there should be a reason why God, preserving the same situation of bodies among themselves, should have placed them in space after one particular manner, and not otherwise; why everything was not placed the quite contrary way, for instance, by changing East into West. Hamilton's fourth law: "Infer nothing without ground or reason". Here is how Hamilton, circa 1837–1838, expressed his "fourth law" in his LECT. V. LOGIC. 60–61: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;I now go on to the fourth law. Par. XVII. Law of Sufficient Reason, or of Reason and Consequent: XVII. The thinking of an object, as actually characterized by positive or by negative attributes, is not left to the caprice of Understanding – the faculty of thought; but that faculty must be necessitated to this or that determinate act of thinking by a knowledge of something different from, and independent of; the process of thinking itself. This condition of our understanding is expressed by the law, as it is called, of Sufficient Reason (principium Rationis Sufficientis); but it is more properly denominated the law of Reason and Consequent (principium Rationis et Consecutionis). That knowledge by which the mind is necessitated to affirm or posit something else, is called the "logical reason ground," or "antecedent"; that something else which the mind is necessitated to affirm or posit, is called the "logical consequent"; and the relation between the reason and consequent, is called the "logical connection or consequence". This law is expressed in the formula – Infer nothing without a ground or reason. "Relations between Reason and Consequent": The relations between Reason and Consequent, when comprehended in a pure thought, are the following: "The logical significance of this law": The logical significance of the law of Reason and Consequent lies in this, – That in virtue of it, thought is constituted into a series of acts all indissolubly connected; each necessarily inferring the other. Thus it is that the distinction and opposition of possible, actual and necessary matter, which has been introduced into Logic, is a doctrine wholly extraneous to this science." Schopenhauer's Four Forms. According to Schopenhauer's "On the Fourfold Root of the Principle of Sufficient Reason", there are four distinct forms of the principle. First Form: The Principle of Sufficient Reason of Becoming (principium rationis sufficientis fiendi); appears as the law of causality in the understanding. Second Form: The Principle of Sufficient Reason of Knowing (principium rationis sufficientis cognoscendi); asserts that if a judgment is to express a piece of knowledge, it must have a sufficient ground or reason, in which case it receives the predicate true. Third Form: The Principle of Sufficient Reason of Being (principium rationis sufficientis essendi); the law whereby the parts of space and time determine one another as regards those relations. Example in arithmetic: Each number presupposes the preceding numbers as grounds or reasons of its being; "I can reach ten only by going through all the preceding numbers; and only by virtue of this insight into the ground of being, do I know that where there are ten, so are there eight, six, four." 
"Now just as the subjective correlative to the first class of representations is the understanding, that to the second the faculty of reason, and that to the third pure sensibility, so is the subjective correlative to this fourth class found to be the inner sense, or generally self-consciousness." Fourth Form: The Principle of Sufficient Reason of Acting (principium rationis sufficientis agendi); briefly known as the law of motivation. "Any judgment that does not follow its previously existing ground or reason" or any state that cannot be explained away as falling under the three previous headings "must be produced by an act of will which has a motive." As his proposition in 43 states, "Motivation is causality seen from within." As a law of thought. The principle was one of the four recognised laws of thought, that held a place in European pedagogy of logic and reasoning (and, to some extent, philosophy in general) in the 18th and 19th centuries. It was influential in the thinking of Leo Tolstoy, amongst others, in the elevated form that history could not be accepted as random. A sufficient reason is sometimes described as the coincidence of every single thing that is needed for the occurrence of an effect (i.e. of the so-called "necessary conditions"). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\forall P \\exist Q (Q \\rightarrow P)" } ]
https://en.wikipedia.org/wiki?curid=1137736
11380117
Skew-Hamiltonian matrix
In linear algebra, skew-Hamiltonian matrices are special matrices which correspond to skew-symmetric bilinear forms on a symplectic vector space. Let "V" be a vector space, equipped with a symplectic form formula_0. Such a space must be even-dimensional. A linear map formula_1 is called a skew-Hamiltonian operator with respect to formula_0 if the form formula_2 is skew-symmetric. Choose a basis formula_3 in "V", such that formula_0 is written as formula_4. Then a linear operator is skew-Hamiltonian with respect to formula_0 if and only if its matrix "A" satisfies formula_5, where "J" is the skew-symmetric matrix formula_6 and "In" is the formula_7 identity matrix. Such matrices are called skew-Hamiltonian. The square of a Hamiltonian matrix is skew-Hamiltonian. The converse is also true: every skew-Hamiltonian matrix can be obtained as the square of a Hamiltonian matrix.
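A minimal numerical check of the defining identity, assuming the standard symplectic matrix J and a random symmetric matrix S, so that H = JS is Hamiltonian; the sketch then verifies that A = H² satisfies A^T J = J A:

```python
import numpy as np

# Sketch: the square of a Hamiltonian matrix is skew-Hamiltonian.
# Size and random seed are arbitrary choices for illustration.
n = 3
rng = np.random.default_rng(0)

I_n = np.eye(n)
J = np.block([[np.zeros((n, n)), I_n],
              [-I_n, np.zeros((n, n))]])

# H = J S with S symmetric is Hamiltonian, since J H = J J S = -S is symmetric.
M = rng.standard_normal((2 * n, 2 * n))
S = M + M.T                  # symmetric matrix
H = J @ S                    # Hamiltonian matrix

A = H @ H                    # square of a Hamiltonian matrix
print(np.allclose(A.T @ J, J @ A))   # True: A is skew-Hamiltonian
```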
[ { "math_id": 0, "text": "\\Omega" }, { "math_id": 1, "text": "A:\\; V \\mapsto V" }, { "math_id": 2, "text": "x, y \\mapsto \\Omega(A(x), y)" }, { "math_id": 3, "text": " e_1, ... e_{2n}" }, { "math_id": 4, "text": "\\sum_i e_i \\wedge e_{n+i}" }, { "math_id": 5, "text": "A^T J = J A" }, { "math_id": 6, "text": "J=\n\\begin{bmatrix}\n0 & I_n \\\\\n-I_n & 0 \\\\\n\\end{bmatrix}" }, { "math_id": 7, "text": "n\\times n" } ]
https://en.wikipedia.org/wiki?curid=11380117
1138222
Complex multiplication of abelian varieties
In mathematics, an abelian variety "A" defined over a field "K" is said to have CM-type if it has a large enough commutative subring in its endomorphism ring End("A"). The terminology here is from complex multiplication theory, which was developed for elliptic curves in the nineteenth century. One of the major achievements in algebraic number theory and algebraic geometry of the twentieth century was to find the correct formulations of the corresponding theory for abelian varieties of dimension "d" &gt; 1. The problem is at a deeper level of abstraction, because it is much harder to manipulate analytic functions of several complex variables. The formal definition is that formula_0 the tensor product of End("A") with the rational number field Q, should contain a commutative subring of dimension 2"d" over Q. When "d" = 1 this can only be a quadratic field, and one recovers the cases where End("A") is an order in an imaginary quadratic field. For "d" &gt; 1 there are comparable cases for CM-fields, the complex quadratic extensions of totally real fields. There are other cases that reflect that "A" may not be a simple abelian variety (it might be a cartesian product of elliptic curves, for example). Another name for abelian varieties of CM-type is abelian varieties with sufficiently many complex multiplications. It is known that if "K" is the complex numbers, then any such "A" has a field of definition which is in fact a number field. The possible types of endomorphism ring have been classified, as rings with involution (the Rosati involution), leading to a classification of CM-type abelian varieties. To construct such varieties in the same style as for elliptic curves, starting with a lattice Λ in C"d", one must take into account the Riemann relations of abelian variety theory. The CM-type is a description of the action of a (maximal) commutative subring "L" of EndQ("A") on the holomorphic tangent space of "A" at the identity element. Spectral theory of a simple kind applies, to show that "L" acts via a basis of eigenvectors; in other words "L" has an action that is via diagonal matrices on the holomorphic vector fields on "A". In the simple case, where "L" is itself a number field rather than a product of some number of fields, the CM-type is then a list of complex embeddings of "L". There are 2"d" of those, occurring in complex conjugate pairs; the CM-type is a choice of one out of each pair. It is known that all such possible CM-types can be realised. Basic results of Goro Shimura and Yutaka Taniyama compute the Hasse–Weil L-function of "A", in terms of the CM-type and a Hecke L-function with Hecke character, having infinity-type derived from it. These generalise the results of Max Deuring for the elliptic curve case.
[ { "math_id": 0, "text": " \\operatorname{End}_\\mathbb{Q}(A) " } ]
https://en.wikipedia.org/wiki?curid=1138222
1138322
Argument principle
Theorem in complex analysis In complex analysis, the argument principle (or Cauchy's argument principle) is a theorem relating the difference between the number of zeros and poles of a meromorphic function to a contour integral of the function's logarithmic derivative. Formulation. If "f"("z") is a meromorphic function inside and on some closed contour "C", and "f" has no zeros or poles on "C", then formula_0 where "Z" and "P" denote respectively the number of zeros and poles of "f"("z") inside the contour "C", with each zero and pole counted as many times as its multiplicity and order, respectively, indicate. This statement of the theorem assumes that the contour "C" is simple, that is, without self-intersections, and that it is oriented counter-clockwise. More generally, suppose that "f"("z") is a meromorphic function on an open set Ω in the complex plane and that "C" is a closed curve in Ω which avoids all zeros and poles of "f" and is contractible to a point inside Ω. For each point "z" ∈ Ω, let "n"("C","z") be the winding number of "C" around "z". Then formula_1 where the first summation is over all zeros "a" of "f" counted with their multiplicities, and the second summation is over the poles "b" of "f" counted with their orders. Interpretation of the contour integral. The contour integral formula_2 can be interpreted as 2π"i" times the winding number of the path "f"("C") around the origin, using the substitution "w" = "f"("z"): formula_3 That is, it is "i" times the total change in the argument of "f"("z") as "z" travels around "C", explaining the name of the theorem; this follows from formula_4 and the relation between arguments and logarithms. Proof of the argument principle. Let "z""Z" be a zero of "f". We can write "f"("z") = ("z" − "z""Z")"k""g"("z") where "k" is the multiplicity of the zero, and thus "g"("z""Z") ≠ 0. We get formula_5 and formula_6 Since "g"("z""Z") ≠ 0, it follows that "g' "("z")/"g"("z") has no singularities at "z""Z", and thus is analytic at "z"Z, which implies that the residue of "f"′("z")/"f"("z") at "z""Z" is "k". Let "z"P be a pole of "f". We can write "f"("z") = ("z" − "z"P)−"m""h"("z") where "m" is the order of the pole, and "h"("z"P) ≠ 0. Then, formula_7 and formula_8 similarly as above. It follows that "h"′("z")/"h"("z") has no singularities at "z"P since "h"("z"P) ≠ 0 and thus it is analytic at "z"P. We find that the residue of "f"′("z")/"f"("z") at "z"P is −"m". Putting these together, each zero "z""Z" of multiplicity "k" of "f" creates a simple pole for "f"′("z")/"f"("z") with the residue being "k", and each pole "z"P of order "m" of "f" creates a simple pole for "f"′("z")/"f"("z") with the residue being −"m". (Here, by a simple pole we mean a pole of order one.) In addition, it can be shown that "f"′("z")/"f"("z") has no other poles, and so no other residues. By the residue theorem we have that the integral about "C" is the product of 2"πi" and the sum of the residues. Together, the sum of the "k"'s for each zero "z""Z" is the number of zeros counting multiplicities of the zeros, and likewise for the poles, and so we have our result. Applications and consequences. The argument principle can be used to efficiently locate zeros or poles of meromorphic functions on a computer. Even with rounding errors, the expression formula_9 will yield results close to an integer; by determining these integers for different contours "C" one can obtain information about the location of the zeros and poles. 
Numerical tests of the Riemann hypothesis use this technique to get an upper bound for the number of zeros of Riemann's formula_10 function inside a rectangle intersecting the critical line. The argument principle can also be used to prove Rouché's theorem, which in turn can be used to bound the roots of polynomials. A consequence of the more general formulation of the argument principle is that, under the same hypothesis, if "g" is an analytic function in Ω, then formula_11 For example, if "f" is a polynomial having zeros "z"1, ..., "z"p inside a simple contour "C", and "g"("z") = "z"k, then formula_12 is the power sum symmetric polynomial of the roots of "f". Another consequence is that if we compute the complex integral: formula_13 for an appropriate choice of "g" and "f", we obtain the Abel–Plana formula: formula_14 which expresses the relationship between a discrete sum and its integral. The argument principle is also applied in control theory. In modern books on feedback control theory, it is commonly used as the theoretical foundation for the Nyquist stability criterion. Moreover, a more generalized form of the argument principle can be employed to derive Bode's sensitivity integral and other related integral relationships. Generalized argument principle. There is an immediate generalization of the argument principle. Suppose that g is analytic in the region formula_15. Then formula_16 where the first summation is again over all zeros "a" of "f" counted with their multiplicities, and the second summation is again over the poles "b" of "f" counted with their orders. History. According to the book by Frank Smithies ("Cauchy and the Creation of Complex Function Theory", Cambridge University Press, 1997, p. 177), Augustin-Louis Cauchy presented a theorem similar to the above on 27 November 1831, during his self-imposed exile in Turin (then the capital of the Kingdom of Piedmont-Sardinia), away from France. However, according to this book, only zeroes were mentioned, not poles. This theorem by Cauchy was only published many years later, in 1874, in hand-written form, and so is quite difficult to read. Cauchy published a paper with a discussion of both zeroes and poles in 1855, two years before his death. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
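A minimal numerical sketch of the statement, assuming the illustrative function f(z) = z^2 (z - 2)/(z - 1) and the contour |z| = 1.5; the contour encloses a double zero at 0 and a simple pole at 1 but not the zero at 2, so the integral should return Z - P = 1:

```python
import numpy as np

# (1/2*pi*i) * contour integral of f'(z)/f(z) for f(z) = z^2 (z - 2)/(z - 1)
# on |z| = 1.5. Inside: a double zero at 0 and a simple pole at 1, so Z - P = 1.
num = np.poly1d([1.0, -2.0, 0.0, 0.0])   # z^3 - 2 z^2, zeros at 0, 0, 2
den = np.poly1d([1.0, -1.0])             # z - 1, pole at 1

def f(z):
    return num(z) / den(z)

def fprime(z):
    # quotient rule, using the polynomial derivatives
    return (num.deriv()(z) * den(z) - num(z) * den.deriv()(z)) / den(z) ** 2

m = 20000
theta = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
z = 1.5 * np.exp(1j * theta)             # contour C, counter-clockwise
dz_dtheta = 1.5j * np.exp(1j * theta)

integral = np.sum(fprime(z) / f(z) * dz_dtheta) * (2.0 * np.pi / m)
print(integral / (2j * np.pi))           # approximately (1 + 0j), i.e. Z - P = 1
```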
[ { "math_id": 0, "text": "\\frac{1}{2\\pi i}\\oint_{C} {f'(z) \\over f(z)}\\, dz=Z-P" }, { "math_id": 1, "text": "\\frac{1}{2\\pi i}\\oint_{C} \\frac{f'(z)}{f(z)}\\, dz = \\sum_a n(C,a) - \\sum_b n(C,b)\\," }, { "math_id": 2, "text": "\\oint_{C} \\frac{f'(z)}{f(z)}\\, dz" }, { "math_id": 3, "text": "\\oint_{C} \\frac{f'(z)}{f(z)}\\, dz = \\oint_{f(C)} \\frac{1}{w}\\, dw" }, { "math_id": 4, "text": "\\frac{d}{dz}\\log(f(z))=\\frac{f'(z)}{f(z)}" }, { "math_id": 5, "text": "f'(z)=k(z-z_Z)^{k-1}g(z)+(z-z_Z)^kg'(z)\\,\\!" }, { "math_id": 6, "text": "{f'(z)\\over f(z)}={k \\over z-z_Z}+{g'(z)\\over g(z)}." }, { "math_id": 7, "text": "f'(z)=-m(z-z_P)^{-m-1}h(z)+(z-z_P)^{-m}h'(z)\\,\\!." }, { "math_id": 8, "text": "{f'(z)\\over f(z)}={-m \\over z-z_P}+{h'(z)\\over h(z)}" }, { "math_id": 9, "text": "{1\\over 2\\pi i}\\oint_{C} {f'(z) \\over f(z)}\\, dz" }, { "math_id": 10, "text": "\\xi(s)" }, { "math_id": 11, "text": " \\frac{1}{2\\pi i} \\oint_C g(z)\\frac{f'(z)}{f(z)}\\, dz = \\sum_a n(C,a)g(a) - \\sum_b n(C,b)g(b)." }, { "math_id": 12, "text": " \\frac{1}{2\\pi i} \\oint_C z^k\\frac{f'(z)}{f(z)}\\, dz = z_1^k+z_2^k+\\cdots+z_p^k," }, { "math_id": 13, "text": "\\oint_C f(z){g'(z) \\over g(z)}\\, dz" }, { "math_id": 14, "text": " \\sum_{n=0}^{\\infty}f(n)-\\int_{0}^{\\infty}f(x)\\,dx= f(0)/2+i\\int_{0}^{\\infty}\\frac{f(it)-f(-it)}{e^{2\\pi t}-1}\\, dt\\, " }, { "math_id": 15, "text": "\\Omega" }, { "math_id": 16, "text": "\\frac{1}{2\\pi i}\\oint_{C} {f'(z) \\over f(z)} g(z) \\, dz = \\sum_a g(a) n(C,a) - \\sum_b g(b) n(C,b)\\," } ]
https://en.wikipedia.org/wiki?curid=1138322
11383510
Fluid-attenuated inversion recovery
Fluid-attenuated inversion recovery (FLAIR) is a magnetic resonance imaging (MRI) sequence with an inversion recovery set to null fluids. For example, it can be used in brain imaging to suppress cerebrospinal fluid (CSF) effects on the image, so as to bring out the periventricular hyperintense lesions, such as multiple sclerosis (MS) plaques. It was invented by Graeme Bydder, Joseph Hajnal, and Ian Young in the early 1990s. FLAIR can be used with either three-dimensional imaging (3D FLAIR) or two-dimensional imaging (2D FLAIR). Technique. By carefully choosing the inversion time (TI), the signal from any particular tissue can be nulled. The appropriate TI depends on the tissue via the formula: formula_0 in other words, one should typically use a TI of around 70% of the "T1" value. In the case of CSF suppression, one aims for "T1"-weighted images, which prioritize the signal of fat over that of water. Therefore, if the long TI (inversion time) is adjusted to the zero-crossing point of water (where none of its signal is visible), the signal of the CSF is theoretically "erased" from the derived image. Clinical applications. The FLAIR sequence analysis has been especially useful in the evaluation and study of CNS disorders, involving: References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
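A minimal numerical sketch of the nulling condition, assuming a literature-typical CSF T1 of about 4000 ms at 3 T (an illustrative value, not taken from the text):

```python
import math

# Inversion time that nulls a tissue with longitudinal relaxation time T1,
# using TI = ln(2) * T1 (valid when the repetition time is long enough for
# essentially full relaxation between inversions).
def nulling_ti(t1_ms: float) -> float:
    """Inversion time (ms) at which the tissue's longitudinal signal crosses zero."""
    return math.log(2) * t1_ms

t1_csf_ms = 4000.0                       # assumed CSF T1 at 3 T, in ms
print(f"TI to null CSF: {nulling_ti(t1_csf_ms):.0f} ms")   # about 2773 ms
```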
[ { "math_id": 0, "text": "\\textrm{TI} = \\ln(2) \\cdot T_1,\\," } ]
https://en.wikipedia.org/wiki?curid=11383510
11384086
Spin–spin relaxation
Magnetic phenomenon In physics, spin–spin relaxation is the mechanism by which Mxy, the transverse component of the magnetization vector, exponentially decays towards its equilibrium value in nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI). It is characterized by the spin–spin relaxation time, known as T2, a time constant that characterizes the signal decay. It is named in contrast to T1, the spin–lattice relaxation time. It is the time it takes for the magnetic resonance signal to irreversibly decay to 37% (1/e) of its initial value after its generation by tipping the longitudinal magnetization towards the magnetic transverse plane. Hence the relation formula_0. T2 relaxation generally proceeds more rapidly than T1 recovery, and different samples and different biological tissues have different T2. For example, fluids have the longest T2, and water-based tissues are in the 40–200 ms range, while fat-based tissues are in the 10–100 ms range. Amorphous solids have T2 in the range of milliseconds, while the transverse magnetization of crystalline samples decays in around 1/20 ms. Origin. When excited nuclear spins—i.e., those lying partially in the transverse plane—interact with each other by sampling local magnetic field inhomogeneities on the micro- and nanoscales, their respective accumulated phases deviate from expected values. While the slow- or non-varying component of this deviation is reversible, some net signal will inevitably be lost due to short-lived interactions such as collisions and random processes such as diffusion through heterogeneous space. "T2" decay does not occur due to the tilting of the magnetization vector away from the transverse plane. Rather, it is observed due to the interactions of an ensemble of spins dephasing from each other; unlike spin-lattice relaxation, spin-spin relaxation cannot usefully be described in terms of a single isochromat. Determining parameters. Like spin-lattice relaxation, spin-spin relaxation can be studied using a molecular tumbling autocorrelation framework. The relaxation rate experienced by a spin, which is the inverse of "T"2, is proportional to a spin's tumbling energy at the frequency "difference" between one spin and another; in less mathematical terms, energy is transferred between two spins when they rotate at a similar frequency to their beat frequency, formula_1 in the figure at right. Because the beat frequency range is very small relative to the average rotation rate formula_2, spin-spin relaxation is not heavily dependent on magnetic field strength. This directly contrasts with spin-lattice relaxation, which occurs at tumbling frequencies equal to the Larmor frequency formula_3. Some frequency shifts, such as the NMR chemical shift, occur at frequencies proportional to the Larmor frequency, and the related but distinct parameter "T"2* can be heavily dependent on field strength due to the difficulty of correcting for inhomogeneity in stronger magnet bores. Assuming isothermal conditions, spins tumbling faster through space will generally have a longer "T"2. 
Since slower tumbling displaces the spectral energy at high tumbling frequencies to lower frequencies, the relatively low beat frequency will experience a monotonically increasing amount of energy as formula_4 increases, decreasing relaxation time. The figure at the left illustrates this relationship. It is worth noting again that fast tumbling spins, such as those in pure water, have similar "T"1 and "T"2 relaxation times, while slow tumbling spins, such as those in crystal lattices, have very distinct relaxation times. Measurement. A spin echo experiment can be used to reverse time-invariant dephasing phenomena such as millimeter-scale magnetic inhomogeneities. The resulting signal decays exponentially as the echo time (TE), i.e., the time after excitation at which readout occurs, increases. In more complicated experiments, multiple echoes can be acquired simultaneously in order to quantitatively evaluate one or more superimposed "T2" decay curves. In MRI, "T2"-weighted images can be obtained by selecting an echo time on the order of the various tissues' "T2"s. In order to reduce the amount of "T1" information and therefore contamination in the image, excited spins are allowed to return to near-equilibrium on a "T1" scale before being excited again. (In MRI parlance, this waiting time is called the "repetition time" and is abbreviated TR). Pulse sequences other than the conventional spin echo can also be used to measure "T2"; gradient echo sequences such as steady-state free precession (SSFP) and multiple spin echo sequences can be used to accelerate image acquisition or inform on additional parameters. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
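A minimal numerical sketch of T2 estimation from multi-echo data, assuming an illustrative true T2 of 80 ms, evenly spaced echo times, and additive noise:

```python
import numpy as np

# Fit M_xy(TE) = M_xy(0) * exp(-TE / T2) to simulated multi-echo magnitudes
# by a log-linear least-squares fit. All numbers are illustrative.
rng = np.random.default_rng(1)
true_t2_ms = 80.0
echo_times_ms = np.arange(10.0, 161.0, 10.0)           # 16 echoes, 10-160 ms

signal = 1000.0 * np.exp(-echo_times_ms / true_t2_ms)  # ideal decay
signal += rng.normal(scale=5.0, size=signal.size)      # measurement noise

# Linearise: ln S = ln S0 - TE / T2, then fit a straight line.
slope, intercept = np.polyfit(echo_times_ms, np.log(signal), 1)
t2_estimate_ms = -1.0 / slope
print(f"estimated T2: {t2_estimate_ms:.1f} ms (true value {true_t2_ms} ms)")
```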
[ { "math_id": 0, "text": "M_{xy}(t) = M_{xy}(0) e^{-t/T_2} \\," }, { "math_id": 1, "text": "\\omega_1" }, { "math_id": 2, "text": "(1/\\tau_c)" }, { "math_id": 3, "text": "\\omega_0" }, { "math_id": 4, "text": "\\tau_c" } ]
https://en.wikipedia.org/wiki?curid=11384086
11384459
Spin–lattice relaxation
Physical phenomenon During nuclear magnetic resonance observations, spin–lattice relaxation is the mechanism by which the longitudinal component of the total nuclear magnetic moment vector (parallel to the constant magnetic field) exponentially relaxes from a higher energy, non-equilibrium state to thermodynamic equilibrium with its surroundings (the "lattice"). It is characterized by the spin–lattice relaxation time, a time constant known as T1. There is a different parameter, "T2", the spin–spin relaxation time, which concerns the exponential relaxation of the transverse component of the nuclear magnetization vector (perpendicular to the external magnetic field). Measuring the variation of "T1" and "T2" in different materials is the basis for some magnetic resonance imaging techniques. Nuclear physics. "T1" characterizes the rate at which the longitudinal "Mz" component of the magnetization vector recovers exponentially towards its thermodynamic equilibrium, according to equation formula_0 Or, for the specific case that formula_1 formula_2 It is thus the time it takes for the longitudinal magnetization to recover approximately 63% [1-(1/e)] of its initial value after being flipped into the magnetic transverse plane by a 90° radiofrequency pulse. Nuclei are contained within a molecular structure, and are in constant vibrational and rotational motion, creating a complex magnetic field. The magnetic field caused by thermal motion of nuclei within the lattice is called the lattice field. The lattice field of a nucleus in a lower energy state can interact with nuclei in a higher energy state, causing the energy of the higher energy state to distribute itself between the two nuclei. Therefore, the energy gained by nuclei from the RF pulse is dissipated as increased vibration and rotation within the lattice, which can slightly increase the temperature of the sample. The name "spin–lattice relaxation" refers to the process in which the spins give the energy they obtained from the RF pulse back to the surrounding lattice, thereby restoring their equilibrium state. The same process occurs after the spin energy has been altered by a change of the surrounding static magnetic field (e.g. pre-polarization by or insertion into high magnetic field) or if the nonequilibrium state has been achieved by other means (e.g., hyperpolarization by optical pumping). The relaxation time, "T1" (the average lifetime of nuclei in the higher energy state) is dependent on the gyromagnetic ratio of the nucleus and the mobility of the lattice. As mobility increases, the vibrational and rotational frequencies increase, making it more likely for a component of the lattice field to be able to stimulate the transition from high to low energy states. However, at extremely high mobilities, the probability decreases as the vibrational and rotational frequencies no longer correspond to the energy gap between states. Different tissues have different "T1" values. For example, fluids have long "T1"s (1500-2000 ms), and water-based tissues are in the 400-1200 ms range, while fat based tissues are in the shorter 100-150 ms range. The presence of strongly magnetic ions or particles (e.g., ferromagnetic or paramagnetic) also strongly alter "T1" values and are widely used as MRI contrast agents. "T"1 weighted images. Magnetic resonance imaging uses the resonance of the protons to generate images. 
Protons are excited by a radio frequency pulse at an appropriate frequency (Larmor frequency) and then the excess energy is released in the form of a minuscule amount of heat to the surroundings as the spins return to their thermal equilibrium. The magnetization of the proton ensemble goes back to its equilibrium value with an exponential curve characterized by a time constant "T1" (see Relaxation (NMR)). "T1" weighted images can be obtained by setting short repetition time (TR) such as &lt; 750 ms and echo time (TE) such as &lt; 40 ms in conventional spin echo sequences, while in Gradient Echo Sequences they can be obtained by using flip angles of larger than 50o while setting TE values to less than 15 ms. "T1" is significantly different between grey matter and white matter and is used when undertaking brain scans. A strong "T1" contrast is present between fluid and more solid anatomical structures, making "T1" contrast suitable for morphological assessment of the normal or pathological anatomy, e.g., for musculoskeletal applications. In the rotating frame. Spin–lattice relaxation in the rotating frame is the mechanism by which "Mxy", the transverse component of the magnetization vector, exponentially decays towards its equilibrium value of zero, under the influence of a radio frequency (RF) field in nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI). It is characterized by the spin–lattice relaxation time constant in the rotating frame, "T"1ρ. It is named in contrast to "T"1, the spin-lattice relaxation time. "T"1ρ MRI is an alternative to conventional "T"1 and "T"2 MRI by its use of a long-duration, low-power radio frequency referred to as spin-lock (SL) pulse applied to the magnetization in the transverse plane. The magnetization is effectively spin-locked around an effective "B"1 field created by the vector sum of the applied "B"1 and any off-resonant component. The spin-locked magnetization will relax with a time constant "T"1ρ, which is the time it takes for the magnetic resonance signal to reach 37% (1/e) of its initial value, formula_3. Hence the relation: formula_4 , where "t"SL is the duration of the RF field. Measurement. "T"1ρ can be quantified (relaxometry) by curve fitting the signal expression above as a function of the duration of the spin-lock pulse while the amplitude of spin-lock pulse ("γB"1~0.1-few kHz) is fixed. Quantitative "T"1ρ MRI relaxation maps reflect the biochemical composition of tissues. Imaging. "T"1ρ MRI has been used to image tissues such as cartilage, intervertebral discs, brain, and heart, as well as certain types of cancers. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
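A minimal numerical sketch of the recovery expressions above, assuming an illustrative T1 of 1000 ms; it reproduces the roughly 63% recovery at t = T1 after a 90° pulse and the zero crossing at t = T1·ln 2 after a 180° inversion (the nulling condition used in inversion-recovery sequences such as FLAIR):

```python
import numpy as np

# Longitudinal recovery, M_z(t) = M_eq - [M_eq - M_z(0)] exp(-t/T1),
# evaluated for two standard starting conditions. T1 is illustrative.
t1_ms = 1000.0
m_eq = 1.0
t_ms = np.array([0.0, 1000.0, 2000.0, 3000.0])

mz_after_90 = m_eq * (1.0 - np.exp(-t_ms / t1_ms))         # M_z(0) = 0
mz_after_180 = m_eq * (1.0 - 2.0 * np.exp(-t_ms / t1_ms))  # M_z(0) = -M_eq

print(mz_after_90[1])          # at t = T1: about 0.632 * M_eq
print(np.log(2.0) * t1_ms)     # zero crossing of the inverted curve, about 693 ms
```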
[ { "math_id": 0, "text": "M_z(t) = M_{z,\\mathrm{eq}} - \\left [ M_{z,\\mathrm{eq}} - M_{z}(0) \\right ] e^{-t/T_1}" }, { "math_id": 1, "text": "M_z(0)=-M_{z,\\mathrm{eq}}" }, { "math_id": 2, "text": "M_z(t) = M_{z,\\mathrm{eq}} \\left ( 1 - 2e^{-t/T_1}\\right ) " }, { "math_id": 3, "text": "M_{xy}(0)" }, { "math_id": 4, "text": "M_{xy}(t_{\\rm SL}) = M_{xy}(0) e^{-t_{\\rm SL}/T_{1\\rho}} \\," } ]
https://en.wikipedia.org/wiki?curid=11384459
11384913
K-space in magnetic resonance imaging
In magnetic resonance imaging (MRI), the "k"-space or "reciprocal space" (a mathematical space of spatial frequencies) is obtained as the 2D or 3D Fourier transform of the image measured. It was introduced in 1979 by Likes and in 1983 by Ljunggren and Twieg. In MRI physics, complex values are sampled in "k"-space during an MR measurement in a premeditated scheme controlled by a "pulse sequence", i.e. an accurately timed sequence of radiofrequency and gradient pulses. In practice, "k"-space often refers to the "temporary image space", usually a matrix, in which data from digitized MR signals are stored during data acquisition. When "k"-space is full (at the end of the scan) the data are mathematically processed to produce a final image. Thus "k"-space holds "raw" data before "reconstruction". It can be formulated by defining "wave vectors" formula_0 and formula_1 for "frequency encoding" (FE) and "phase encoding" (PE): formula_2 formula_3 where formula_4 is the sampling time (the reciprocal of sampling frequency), formula_5 is the duration of "G"PE, formula_6 ("gamma bar") is the gyromagnetic ratio, "m" is the sample number in the FE direction and "n" is the sample number in the PE direction (also known as "partition number"). Then, the 2D-Fourier Transform of this encoded signal results in a representation of the spin density distribution in two dimensions. Thus position ("x","y") and spatial frequency (formula_0, formula_1) constitute a Fourier transform pair. Typically, "k"-space has the same number of rows and columns as the final image and is filled with raw data during the scan, usually one line per TR (Repetition Time). An MR image is a complex-valued map of the spatial distribution of the transverse magnetization "M"xy in the sample at a specific time point after an excitation. Conventional qualitative interpretation of Fourier Analysis asserts that low spatial frequencies (near the center of "k"-space) contain the signal to noise and contrast information of the image, whereas high spatial frequencies (outer peripheral regions of "k"-space) contain the information determining the image resolution. This is the basis for advanced scanning techniques, such as the "keyhole" acquisition, in which a first complete "k"-space is acquired, and subsequent scans are performed for acquiring just the central part of the "k"-space; in this way, different contrast images can be acquired without the need of running full scans. A nice symmetry property exists in "k"-space if the image magnetization "M"xy is prepared to be proportional simply to a contrast-weighted proton density and thus is a real quantity. In such a case, the signal at two opposite locations in "k"-space is: formula_7 where the star (formula_8) denotes complex conjugation. Thus "k"-space information is somewhat redundant then, and an image can be reconstructed using only one half of the "k"-space, either in the PE (Phase Encode) direction saving scan time (such a technique is known as "half Fourier" or "half scan") or in the FE (Frequency Encode) direction, allowing for lower sampling frequencies and/or shorter echo times (such a technique is known as "half echo"). However, these techniques are approximate due to phase errors in the MRI data which can rarely be completely controlled (due to imperfect static field shim, effects of spatially selective excitation, signal detection coil properties, motion etc.) 
or nonzero phase arising for purely physical reasons (such as the different chemical shift of fat and water in gradient echo techniques). MRI "k-space" is related to the NMR "time-domain" in all aspects, both being used for raw data storage. The only difference between the MRI "k-space" and the NMR "time domain" is that a gradient "G" is present in MRI data acquisition, but is absent in NMR data acquisition. As a result of this difference, the NMR "FID" signal and the MRI "spin-echo" signal take different mathematical forms: formula_9 cos formula_10 exp formula_11 and formula_12 sin formula_13, where formula_14. Due to the presence of the gradient "G", the spatial information r (not the spatial frequency information k) is encoded onto the frequency formula_15, and at the same time the "time-domain" is renamed as "k-space".
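A minimal numerical sketch of the image–k-space Fourier-transform pair and of the qualitative role of the central k-space region, assuming a synthetic 128×128 square phantom:

```python
import numpy as np

# k-space as the 2D Fourier transform of the image, and the effect of
# retaining only the low spatial frequencies (centre of k-space).
n = 128
image = np.zeros((n, n))
image[40:88, 40:88] = 1.0                      # simple bright square "phantom"

k_space = np.fft.fftshift(np.fft.fft2(image))  # centred k-space

# Keep only the central 16 x 16 block of k-space (low spatial frequencies).
mask = np.zeros_like(k_space)
c = n // 2
mask[c - 8:c + 8, c - 8:c + 8] = 1.0
low_res = np.abs(np.fft.ifft2(np.fft.ifftshift(k_space * mask)))

print(image.sum(), round(low_res.sum(), 1))    # overall signal is similar
print(round(np.abs(image - low_res).max(), 2)) # but sharp edges are degraded
```

The discarded peripheral region of k-space is what would restore the sharp edges, in line with the qualitative interpretation given above.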
[ { "math_id": 0, "text": "k_\\mathrm{FE}" }, { "math_id": 1, "text": "k_\\mathrm{PE}" }, { "math_id": 2, "text": "k_\\mathrm{FE}=\\bar{\\gamma} G_\\mathrm{FE}m\\Delta t" }, { "math_id": 3, "text": "k_\\mathrm{PE}=\\bar{\\gamma} n\\Delta G_\\mathrm{PE} \\tau" }, { "math_id": 4, "text": "\\Delta t" }, { "math_id": 5, "text": "\\tau" }, { "math_id": 6, "text": "\\bar{\\gamma}" }, { "math_id": 7, "text": "S(-k_\\mathrm{FE},-k_\\mathrm{PE}) = S^*(k_\\mathrm{FE},k_\\mathrm{PE}) \\," }, { "math_id": 8, "text": "^*" }, { "math_id": 9, "text": " \\text{FID}=M_\\mathrm{0}" }, { "math_id": 10, "text": "(\\omega_\\mathrm{0}t)" }, { "math_id": 11, "text": "(-t/T_\\mathrm{2})" }, { "math_id": 12, "text": " \\text{Spin-Echo}= M_\\mathrm{0}" }, { "math_id": 13, "text": "(\\omega_\\mathrm{r}t)/(\\omega_\\mathrm{r}t)" }, { "math_id": 14, "text": " \\omega_\\mathrm{r}=\\omega_\\mathrm{0} + \\bar{\\gamma} rG" }, { "math_id": 15, "text": "\\omega" } ]
https://en.wikipedia.org/wiki?curid=11384913
1138590
Electricity meter
Device used to measure electricity use An electricity meter, electric meter, electrical meter, energy meter, or kilowatt-hour meter is a device that measures the amount of electric energy consumed by a residence, a business, or an electrically powered device over a time interval. Electric utilities use electric meters installed at customers' premises for billing and monitoring purposes. They are typically calibrated in billing units, the most common one being the kilowatt hour ("kWh"). They are usually read once each billing period. When energy savings during certain periods are desired, some meters may measure demand, the maximum use of power in some interval. "Time of day" metering allows electric rates to be changed during a day, to record usage during peak high-cost periods and off-peak, lower-cost, periods. Also, in some areas meters have relays for demand response load shedding during peak load periods. History. The earliest commercial uses of electric energy, in the 1880s, had easily predictable usage; billing was based on the number of lamps or motors installed in a building. However, as usage spread, and especially with the invention of pluggable appliances, it also became more variable, and the electric utilities sought a means to bill customers based on actual rather than estimated usage. Direct current. Many experimental types of meter were developed. Thomas Edison at first worked on a direct current (DC) electromechanical meter with a direct reading register, but instead developed an electrochemical metering system, which used an electrolytic cell to totalise current consumption. At periodic intervals the plates were removed and weighed, and the customer billed. The electrochemical meter was labor-intensive to read and not well received by customers. DC meters often measured charge in ampere hours. Since the voltage of the supply should remain substantially constant, the reading of the meter was proportional to actual energy consumed. For example, if a meter recorded that 100 ampere hours had been consumed on a 200-volt supply, then 20 kilowatt-hours of energy had been supplied. An early type of electrochemical meter used in the United Kingdom was the 'Reason' meter. This consisted of a vertically mounted glass structure with a mercury reservoir at the top of the meter. As current was drawn from the supply, electrochemical action transferred the mercury to the bottom of the column. Like all other DC meters, it recorded ampere hours. Once the mercury pool was exhausted, the meter became an open circuit. It was therefore necessary for the consumer to pay for a further supply of electricity, whereupon, the supplier's agent would unlock the meter from its mounting and invert it restoring the mercury to the reservoir and the supply. In practice the consumer would get the supply company's agent in before the supply ran out and pay only for the charge consumed as read from the scale. The agent would then reset the meter to zero by inverting it. In 1885 Ferranti offered a mercury motor meter with a register similar to gas meters; this had the advantage that the consumer could easily read the meter and verify consumption. The first accurate, recording electricity consumption meter was a DC meter by Hermann Aron, who patented it in 1883. Hugo Hirst of the British General Electric Company introduced it commercially into Great Britain from 1888. Aron's meter recorded the total charge used over time, and showed it on a series of clock dials. Alternating current. 
The first specimen of the AC kilowatt-hour meter produced on the basis of Hungarian Ottó Bláthy's patent and named after him was presented by the Ganz Works at the Frankfurt Fair in the autumn of 1889, and the first induction kilowatt-hour meter was already marketed by the factory at the end of the same year. These were the first alternating-current watt-hour meters, known by the name of Bláthy-meters. The AC kilowatt hour meters used at present operate on the same principle as Bláthy's original invention. Also around 1889, Elihu Thomson of the American General Electric company developed a recording watt meter (watt-hour meter) based on an ironless commutator motor. This meter overcame the disadvantages of the electrochemical type and could operate on either alternating or direct current. In 1894 Oliver Shallenberger of the Westinghouse Electric Corporation applied the induction principle previously used only in AC ampere hour meters to produce a watt-hour meter of the modern electromechanical form, using an induction disk whose rotational speed was made proportional to the power in the circuit. The Bláthy meter was similar to the Shallenberger and Thomson meters in that all three are two-phase motor meters. Although the induction meter would only work on alternating current, it eliminated the delicate and troublesome commutator of the Thomson design. Shallenberger fell ill and was unable to refine his initial large and heavy design, although he did also develop a polyphase version. Units. The most common unit of measurement on the electricity meter is the kilowatt hour ["kWh"], which is equal to the amount of energy used by a load of one kilowatt over a period of one hour, or 3,600,000 joules. Some electricity companies use the SI megajoule instead. Demand is normally measured in watts, but averaged over a period, most often a quarter- or half-hour. Reactive power is measured in "thousands of volt-ampere reactive-hours" (kvarh). By convention, a "lagging" or inductive load, such as a motor, will have positive reactive power. A "leading", or capacitive, load will have negative reactive power. Volt-amperes measure all power passed through a distribution network, including reactive and actual. This is equal to the product of root-mean-square volts and amperes. Distortion of the electric current by loads is measured in several ways. Power factor is the ratio of resistive (or real) power to volt-amperes. A capacitive load has a leading power factor, and an inductive load has a lagging power factor. A purely resistive load (such as a filament lamp, heater or kettle) exhibits a power factor of 1. Current harmonics are a measure of distortion of the wave form. For example, electronic loads such as computer power supplies draw their current at the voltage peak to fill their internal storage elements. This can lead to a significant voltage drop near the supply voltage peak which shows as a flattening of the voltage waveform. This flattening causes odd harmonics which are not permissible if they exceed specific limits, as they are not only wasteful, but may interfere with the operation of other equipment. Harmonic emissions are mandated by law in the EU and other countries to fall within specified limits. In addition to metering based on the amount of energy used, other types of metering are available. Meters which measured the amount of charge (coulombs) used, known as ampere hour meters, were used in the early days of electrification. 
These were dependent upon the supply voltage remaining constant for accurate measurement of energy usage, which was not a likely circumstance with most supplies. The most common application was in relation to special-purpose meters to monitor charge / discharge status of large batteries. Some meters measured only the length of time for which charge flowed, with no measurement of the magnitude of voltage or current being made. These are only suited for constant-load applications and are rarely used today. Operation. Electricity meters operate by continuously measuring the instantaneous voltage (volts) and current (amperes) to give energy used (in joules, kilowatt-hours etc.). Meters for smaller services (such as small residential customers) can be connected directly in-line between source and customer. For larger loads, more than about 200 ampere of load, current transformers are used, so that the meter can be located somewhere other than in line with the service conductors. The meters fall into two basic categories, electromechanical and electronic. Electromechanical. The most common type of electricity meter is the electromechanical watt-hour meter. On a single-phase AC supply, the electromechanical induction meter operates through electromagnetic induction by counting the revolutions of a non-magnetic, but electrically conductive, metal disc which is made to rotate at a speed proportional to the power passing through the meter. The number of revolutions is thus proportional to the energy usage. The voltage coil consumes a small and relatively constant amount of power, typically around 2 watts which is not registered on the meter. The current coil similarly consumes a small amount of power in proportion to the square of the current flowing through it, typically up to a couple of watts at full load, which is registered on the meter. The disc is acted upon by two sets of induction coils, which form, in effect, a two phase linear induction motor. One coil is connected in such a way that it produces a magnetic flux in proportion to the voltage and the other produces a magnetic flux in proportion to the current. The field of the voltage coil is delayed by 90 degrees, due to the coil's inductive nature, and calibrated using a lag coil. This produces eddy currents in the disc and the effect is such that a force is exerted on the disc in proportion to the product of the instantaneous current and instantaneous voltage. A permanent magnet acts as an eddy current brake, exerting an opposing force proportional to the speed of rotation of the disc. The equilibrium between these two opposing forces results in the disc rotating at a speed proportional to the power or rate of energy usage. The disc drives a register mechanism which counts revolutions, much like the odometer in a car, in order to render a measurement of the total energy used. Different phase configurations use additional voltage and current coils. The disc is supported by a spindle which has a worm gear which drives the register. The register is a series of dials which record the amount of energy used. The dials may be of the "cyclometer" type, an odometer-like display that is easy to read where for each dial a single digit is shown through a window in the face of the meter, or of the pointer type where a pointer indicates each digit. With the dial pointer type, adjacent pointers generally rotate in opposite directions due to the gearing mechanism. 
The amount of energy represented by one revolution of the disc is denoted by the symbol Kh which is given in units of watt-hours per revolution. The value 7.2 is commonly seen. Using the value of Kh one can determine their power consumption at any given time by timing the disc with a stopwatch. formula_0. Where: "t" = time in seconds taken by the disc to complete one revolution, "P" = power in watts. For example, if "Kh" = 7.2 as above, and one revolution took place in 14.4 seconds, the power is 1800 watts. This method can be used to determine the power consumption of household devices by switching them on one by one. Most domestic electricity meters must be read manually, whether by a representative of the power company or by the customer. Where the customer reads the meter, the reading may be supplied to the power company by telephone, post or over the internet. The electricity company will normally require a visit by a company representative at least annually in order to verify customer-supplied readings and to make a basic safety check of the meter. In an induction type meter, creep is a phenomenon that can adversely affect accuracy, that occurs when the meter disc rotates continuously with potential applied and the load terminals open circuited. A test for error due to creep is called a creep test. Two standards govern meter accuracy, ANSI C12.20 for North America and IEC 62053. Electronic. Electronic meters display the energy used on an LCD or LED display, and some can also transmit readings to remote places. In addition to measuring energy used, electronic meters can also record other parameters of the load and supply such as instantaneous and maximum rate of usage demands, voltages, power factor and reactive power used etc. They can also support time-of-day billing, for example, recording the amount of energy used during on-peak and off-peak hours. The meter has a power supply, a metering engine, a processing and communication engine (i.e. a microcontroller), and other add-on modules such as a real time clock (RTC), a liquid crystal display, infra red communication ports/modules and so on. The metering engine is given the voltage and current inputs and has a voltage reference, samplers and quantisers followed by an analog to digital conversion section to yield the digitised equivalents of all the inputs. These inputs are then processed using a digital signal processor to calculate the various metering parameters. The largest source of long-term errors in the meter is drift in the preamp, followed by the precision of the voltage reference. Both of these vary with temperature as well, and vary wildly when meters are outdoors. Characterising and compensating for these is a major part of meter design. The processing and communication section has the responsibility of calculating the various derived quantities from the digital values generated by the metering engine. This also has the responsibility of communication using various protocols and interface with other addon modules connected as slaves to it. RTC and other add-on modules are attached as slaves to the processing and communication section for various input/output functions. On a modern meter most if not all of this will be implemented inside the microprocessor, such as the RTC, LCD controller, temperature sensor, memory and analog to digital converters. Communication methods. Remote meter reading is a practical example of telemetry. 
It saves the cost of a human meter reader and the resulting mistakes, but it also allows more measurements, and remote provisioning. Many smart meters now include a switch to interrupt or restore service. Historically, rotating meters could report their metered information remotely, using a pair of electrical contacts attached to a "KYZ" line. A KYZ interface is a Form C contact supplied from the meter. In a KYZ interface, the Y and Z wires are switch contacts, shorted to K for a measured amount of energy. When one contact closes the other opens to provide count accuracy security. Each contact change of state is considered one pulse. The frequency of pulses indicates the power demand. The number of pulses indicates energy metered. When incorporated into an electromechanical meter, the relay changes state with each full or half rotation of the meter disc. Each state change is called a "pulse." KYZ outputs were historically attached to "totaliser relays" feeding a "totaliser" so that many meters could be read all at once in one place. KYZ outputs are also the classic way of attaching electricity meters to programmable logic controllers, HVACs or other control systems. Some modern meters also supply a contact closure that warns when the meter detects a demand near a higher electricity tariff, to improve demand side management. EN 62053-31 (formerly DIN 43864) defines the S0 interface, which is a galvanically isolated open collector output. Voltage and current are limited to 27 V and 27 mA, respectively. Each metered amount of electrical energy produces one impulse with a length of 32-100 ms. The meter constant (pulses per kWh) is programmable on some meters, but often fixed to 1000-10000 pulses per kWh. Other meters implement a similar pulse interface, but with an infrared LED instead of an electrical connection. The interface is also used on other kinds of meters, like water meters. Many meters designed for semi-automated reading have a serial port that communicates by infrared LED through the faceplate of the meter. In some multi-unit buildings, a similar protocol is used, but in a wired bus using a serial current loop to connect all the meters to a single plug. The plug is often near a more easily accessible point. In the European Union, the most common infrared and protocol is "FLAG", a simplified subset of mode C of IEC 61107. In the United States and Canada, the favored infrared protocol is ANSI C12.18. Some industrial meters use protocols for programmable logic controllers, like Modbus or DNP3. One protocol proposed for this purpose is DLMS/COSEM which can operate over any medium, including serial ports. The data can be transmitted by Zigbee, Wi-Fi, telephone lines or over the power lines themselves. Some meters can be read over the internet. Other more modern protocols are also becoming widely used, like OSGP (Open Smart Grid Protocol). Electronic meters now also use low-power radio, GSM, GPRS, Bluetooth, IrDA, as well as RS-485 wired link. The meters can store the entire usage profiles with timestamps and relay them at the click of a button. The demand readings stored with the profiles accurately indicate the load requirements of the customer. This load profile data is processed at the utilities for billing and planning purposes. "AMR" (Automatic Meter Reading) and "RMR" (Remote Meter Reading) describe various systems that allow meters to be checked remotely, without the need to send a meter reader. 
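The arithmetic behind pulse-output interfaces such as KYZ and S0 is straightforward: the pulse count gives energy through the meter constant, and the pulse rate gives the average demand. The Python sketch below is only an illustration; the meter constant and the pulse timestamps are invented example data rather than values taken from any standard.

```python
# Illustrative sketch: recovering energy and average power from a pulse-output
# (KYZ- or S0-style) interface. The meter constant and timestamps are examples.

PULSES_PER_KWH = 1000            # assumed meter constant (impulses per kWh)

# Timestamps (in seconds) at which pulses were observed, e.g. from a data logger.
pulse_times_s = [0.0, 3.6, 7.2, 10.8, 14.4, 18.0]

pulses = len(pulse_times_s) - 1          # pulse intervals within the window
energy_kwh = pulses / PULSES_PER_KWH     # energy metered over the window

window_s = pulse_times_s[-1] - pulse_times_s[0]
window_h = window_s / 3600.0
average_power_kw = energy_kwh / window_h  # demand implied by the pulse rate

print(f"Energy: {energy_kwh * 1000:.1f} Wh over {window_s:.0f} s")
print(f"Average demand: {average_power_kw * 1000:.0f} W")
```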
An electronic meter can transmit its readings by telephone line or radio to a central billing office. Monitoring and billing methods. Commercial uses. Large commercial and industrial premises may use electronic meters which record power usage in blocks of half an hour or less. This is because most electricity grids have demand surges throughout the day, and the power company may wish to give price incentives to large customers to reduce demand at these times. These demand surges often correspond to meal times or, famously, to advertisements interrupting popular television programmes. Home energy monitoring. A potentially powerful means to reduce household energy consumption is to provide convenient real-time feedback to users so they can change their energy using behaviour. Recently, low-cost energy feedback displays have become available, that may be able to measure energy (Watt-hours), momentary power (wattage), and may additionally be able to measure the MAINS voltage, current, uptime, apparent power, capturing peak wattage and peak current, and have a manually set clock. The display may indicate the power consumption over the week graphically. A study using a consumer-readable meter in 500 Ontario homes by "Hydro One" showed an average 6.5% drop in total electricity use when compared with a similarly sized control group. "Hydro One" subsequently offered free power monitors to 30,000 customers based on the success of the pilot. Projects such as Google PowerMeter, take information from a smart meter and make it more readily available to users to help encourage conservation. Plug-in electricity meters (or plug load meters) measure energy used by individual appliances. There are a variety of models available on the market today but they all work on the same basic principle. The meter is plugged into an outlet, and the appliance to be measured is plugged into the meter. Such meters can help in energy conservation by identifying major energy users, or devices that consume excessive standby power. Web resources can also be used, if an estimate of the power consumption is enough for the research purposes. A power meter can often be borrowed from the local power authorities or a local public library. Multiple tariff. Electricity retailers may wish to charge customers different tariffs at different times of the day to better reflect the costs of generation and transmission. Since it is typically not cost effective to store significant amounts of electricity during a period of low demand for use during a period of high demand, costs will vary significantly depending on the time of day. Low cost generation capacity (baseload) such as nuclear can take many hours to start, meaning a surplus in times of low demand, whereas high cost but flexible generating capacity (such as gas turbines) must be kept available to respond at a moment's notice (spinning reserve) to peak demand, perhaps being used for a few minutes per day, which is very expensive. Some multiple tariff meters use different tariffs for different amounts of demand. These are usually industrial meters. Domestic variable-rate meters generally permit two to three tariffs ("peak", "off-peak" and "shoulder") and in such installations a simple electromechanical time switch may be used. Historically, these have often been used in conjunction with electrical storage heaters or hot water storage systems. 
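As a rough illustration of how a two-register, two-rate tariff changes a bill, the following sketch compares a single-rate tariff with a day/night split. All prices and consumption figures are assumptions chosen for the example, not published tariff rates.

```python
# Illustrative comparison of a single-rate tariff with a two-rate (day/night)
# tariff, as recorded by a two-register meter. All figures are assumptions.

day_kwh, night_kwh = 200.0, 150.0     # monthly consumption split by register
single_rate = 0.17                    # currency units per kWh (assumed)
day_rate, night_rate = 0.21, 0.08     # assumed peak and off-peak rates
standing_charge = 0.19 * 30           # assumed daily standing charge x 30 days

single_bill = (day_kwh + night_kwh) * single_rate + standing_charge
two_rate_bill = day_kwh * day_rate + night_kwh * night_rate + standing_charge

print(f"Single-rate bill: {single_bill:.2f}")
print(f"Two-rate bill:    {two_rate_bill:.2f}")
print("Two-rate is cheaper" if two_rate_bill < single_bill else "Single-rate is cheaper")
```

Whether the split tariff pays off depends entirely on how much of the consumption falls into the off-peak register, which is the point made in the following paragraphs.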
Multiple tariffs are made easier by time of use (TOU) meters which incorporate or are connected to a time switch and which have multiple registers. Switching between the tariffs may happen via ripple control, or via a radio-activated switch. In principle, a sealed time switch can also be used, but is considered more vulnerable to tampering to obtain cheaper electricity. Radio-activated switching is common in the UK, with a nightly data signal sent within the longwave carrier of BBC Radio 4, 198 kHz. The time of off-peak charging is usually seven hours between midnight and 7:00am GMT/BST, and this is designed to power storage heaters and immersion heaters. In the UK, such tariffs are typically branded "Economy 7", "White Meter" or "Dual-Rate". The popularity of such tariffs has declined in recent years, at least in the domestic market, because of the (perceived or real) deficiencies of storage heaters and the comparatively much lower cost of natural gas per kWh (typically a factor of 3-5 times lower). Nevertheless, a sizeable number of properties do not have the option of gas, with many in rural areas being outside the gas supply network, and others being expensive upfront to upgrade to a radiator system. An Economy 10 meter is also available, which gives 10 hours of cheap off-peak electricity spread out over three timeslots throughout a 24-hour period. This allows multiple top-up boosts to storage heaters, or a good spread of times to run a wet electric heating system on a cheaper electricity rate. Most meters using "Economy 7" switch the entire electricity supply to the cheaper rate during the 7 hour night time period, not just the storage heater circuit. The downside of this is that the daytime rate per kWh is significantly higher, and that standing charges are sometimes higher. For example, as of July 2017, normal ("single rate") electricity costs 17.14p per kWh in the London region on the standard default tariff for EDF Energy (the post-privatisation incumbent electricity supplier in London), with a standing charge of 18.90p per day. The equivalent Economy 7 costs are 21.34p per kWh during the peak usage period with 7.83p per kWh during the off-peak usage period, and a standing charge of 18.90p per day. Timer switches installed on washing machines, tumble dryers, dishwashers and immersion heaters may be set so that they only switch on during the off-peak usage period. Smart meters. Smart meters go a step further than simple AMR (automatic meter reading). They offer additional functionality including a real-time or near real-time reads, power outage notification, and power quality monitoring. They allow price setting agencies to introduce different prices for consumption based on the time of day and the season. Another type of smart meter uses nonintrusive load monitoring to automatically determine the number and type of appliances in a residence, how much energy each uses and when. This meter is used by electric utilities to do surveys of energy use. It eliminates the need to put timers on all of the appliances in a house to determine how much energy each uses. Prepayment meters. The standard business model of electricity retailing involves the electricity company billing the customer for the amount of energy used in the previous month or quarter. In some countries, if the retailer believes that the customer may not pay the bill, a prepayment meter may be installed. This requires the customer to make advance payment before electricity can be used. 
If the available credit is exhausted then the supply of electricity is cut off by a relay. In the UK, mechanical prepayment coin meters used to be common, both in private rented accommodation and residential customers of the electricity boards, the nationalised electricity sector. Disadvantages of these included the need for regular visits to remove the cash, and risk of theft of the cash in the meters by both customers and burglars. The first automated pre-payment meters were introduced by London Electricity, in conjunction with the Schlumberger Metering based in Felixstowe, UK. They were initially called Key Meters and later renamed Budget Meters. They avoided the 60,000 disconnections for non-payment per annum and the many disadvantages of cash prepayment. They were also popular with customers who wanted a convenient payment method, especially in short term tenancies. Well over 1 million such meters were installed across the UK in the first few years after introduction. Modern solid-state electricity meters, in conjunction with smart cards, have removed these disadvantages and such meters are commonly used for customers considered to be a poor credit risk. In the UK, customers can use organisations such as the Post Office Limited or PayPoint network, where rechargeable tokens (Quantum cards for natural gas, or plastic "keys" for electricity) can be loaded with whatever money the customer has available. In South Africa, Sudan and Northern Ireland prepaid meters are recharged by entering a unique, encoded twenty digit number using a keypad. This makes the tokens, which may be electronically delivered or printed on a slip of paper at point of purchase, very cheap to produce. Around the world, experiments are going on, especially in developing countries, to test pre-payment systems. In some cases, prepayment meters have not been accepted by customers. There are various groups, such as the Standard Transfer Specification (STS) association, which promote common standards for prepayment metering systems across manufacturers. Prepaid meters using the STS standard are used in many countries. Time of day metering. Time of Day metering (TOD), also known as Time of Usage (TOU) or Seasonal Time of Day (SToD), metering involves dividing the day, month and year into tariff slots and with higher rates at peak load periods and low tariff rates at off-peak load periods. While this can be used to automatically control usage on the part of the customer (resulting in automatic load control), it is often simply the customer's responsibility to control his own usage or pay accordingly (voluntary load control). This also allows the utilities to plan their transmission infrastructure appropriately. See also Demand-side Management (DSM). TOD metering normally splits rates into an arrangement of multiple segments including on-peak, off-peak, mid-peak or shoulder, and critical peak. A typical arrangement is a peak occurring during the day (non-holiday days only), such as from 1 pm to 9 pm Monday through Friday during the summer and from 6:30 am to 12 noon and 5 pm to 9 pm during the winter. More complex arrangements include the use of critical peaks that occur during high demand periods. The times of peak demand/cost will vary in different markets around the world. Large commercial users can purchase power by the hour using either forecast pricing or real-time pricing. Some utilities allow residential customers to pay hourly rates, such as in Illinois, which uses day ahead pricing. Power export metering. 
Many electricity customers are installing their own electricity generating equipment, whether for reasons of economy, redundancy or environmental reasons. When a customer is generating more electricity than required for his own use, the surplus may be exported back to the power grid. Customers that generate back into the "grid" usually must have special equipment and safety devices to protect the grid components (as well as the customer's own) in case of faults (electrical short circuits) or maintenance of the grid (say voltage on a downed line coming from an exporting customers facility). This exported energy may be accounted for in the simplest case by the meter running backwards during periods of net export, thus reducing the customer's recorded energy usage by the amount exported. This in effect results in the customer being paid for his/her exports at the full retail price of electricity. Unless equipped with a ratchet or equivalent, a standard meter will accurately record power flow in each direction by simply running backwards when power is exported. Where allowed by law, utilities maintain a profitable margin between the price of energy delivered to the consumer and the rate credited for consumer-generated energy that flows back to the grid. Lately, upload sources typically originate from renewable sources (e.g., wind turbines, photovoltaic cells), or gas or steam turbines, which are often found in cogeneration systems. Another potential upload source that has been proposed is plug-in hybrid car batteries (vehicle-to-grid power systems). This requires a "smart grid," which includes meters that measure electricity via communication networks that require remote control and give customers timing and pricing options. Vehicle-to-grid systems could be installed at workplace parking lots and garages and at park and rides and could help drivers charge their batteries at home at night when off-peak power prices are cheaper, and receive bill crediting for selling excess electricity back to the grid during high-demand hours. Location. The location of an electricity meter varies with each installation. Possible locations include on a utility pole serving the property, in a street-side cabinet (meter box) or inside the premises adjacent to the consumer unit / distribution board. Electricity companies may prefer external locations as the meter can be read without gaining access to the premises but external meters may be more prone to vandalism. Current transformers permit the meter to be located remotely from the current-carrying conductors. This is common in large installations. For example, a substation serving a single large customer may have metering equipment installed in a cabinet, without bringing heavy cables into the cabinet. Customer drop and metering equation. Since electrical standards vary in different regions, "customer drops" from the grid to the customer also vary depending on the standards and the type of installation. There are several common types of connections between a grid and a customer. Each type has a different "metering equation." Blondel's theorem states that for any system with N current-carrying conductors, that N-1 measuring elements are sufficient to measure electrical energy. This indicates that different metering is needed, for example, for a three-phase three-wire system than for a three-phase four-wire (with neutral) system. In Europe, Asia, Africa and most other locations, single phase is common for residential and small commercial customers. 
Single phase distribution is less-expensive, because one set of transformers in a substation normally serve a large area with relatively high voltages (usually 230 V) and no local transformers. These have a simple metering equation: Watts = volts x amps, with volts measured from the neutral to the phase wire. In the United States, Canada, and parts of Central and South America similar customers are normally served by three-wire single phase. Three-wire single-phase requires local transformers, as few as one per ten residences, but provides lower, safer voltages at the socket (usually 120 V), and provides two voltages to customers: neutral to phase (usually 120 V), and phase to phase (usually 240 V). Additionally, three-wire customers normally have neutral wired to the zero side of the generator's windings, which gives earthing that can be easily measured to be safe. These meters have a metering equation of Watts = 0.5 x volts x (amps of phase A − amps of phase B), with volts measured between the phase wires. Industrial power is normally supplied as three phase power. There are two forms: three wire, or four wire with a system neutral. In "three wire" or "three wire delta", there is no neutral but an earth ground is the safety ground. The three phases have voltage only relative to each other. This distribution method has one fewer wire, is less expensive, and is common in Asia, Africa, and many parts of Europe. In regions that mix residences and light industry, it is common for this to be the only distribution method. A meter for this type normally measures two of the windings relative to the third winding, and adds the watts. One disadvantage of this system is that if the safety earth fails, it is difficult to discover this by direct measurement, because no phase has a voltage relative to earth. In the four-wire three-phase system, sometimes called "four-wire wye", the safety ground is connected to a neutral wire that is physically connected to the zero-voltage side of the three windings of the generator or transformer. Since all power phases are relative to the neutral in this system, if the neutral is disconnected, it can be directly measured. In the United States, the National Electrical Code requires neutrals to be of this type. In this system, power meters measure and sum all three phases relative to the neutral. In North America, it is common for electricity meters to plug into a standardised socket outdoors, on the side of a building. This allows the meter to be replaced without disturbing the wires to the socket, or the occupant of the building. Some sockets may have a bypass while the meter is removed for service. The amount of electricity used without being recorded during this small time is considered insignificant when compared to the inconvenience which might be caused to the customer by cutting off the electricity supply. Most electronic meters in North America use a serial protocol, ANSI C12.18. In many other countries the supply and load terminals are in the meter housing itself. Cables are connected directly to the meter. In some areas the meter is outside, often on a utility pole. In others, it is inside the building in a niche. If inside, it may share a data connection with other meters. If it exists, the shared connection is often a small plug near the post box. The connection is often EIA-485 or infrared with a serial protocol such as IEC 62056. In 2014, networking to meters is rapidly changing. 
The most common schemes seem to combine an existing national standard for data (e.g. ANSI C12.19 or IEC 62056) operating via the Internet Protocol with a small circuit board for powerline communication, or a digital radio for a mobile phone network, or an ISM band. Accuracy. Electricity meters are required to register the energy consumed within an acceptable degree of accuracy. Any significant error in the registered energy can represent a loss to the electricity supplier, or the consumer being over billed. The accuracy is generally laid down in statute for the location in which the meter is installed. Statutory provisions may also specify a procedure to be followed should the accuracy be disputed. For the United Kingdom, any installed electricity meter is required to accurately record the consumed energy, but it is permitted to under-read by 3.5%, or over-read by 2.5%. Disputed meters are initially verified with a check meter operating alongside the disputed meter. The final resort is for the disputed meter to be fully tested both in the installed location and at a specialist calibration laboratory. Approximately 93% of disputed meters are found to be operating satisfactorily. A refund of electricity paid for, but not consumed (but not vice versa) will only be made if the laboratory is able to estimate how long the meter has been misregistering. This contrasts with gas meters where if a meter is found to be under reading, it is assumed that it has under read for as long as the consumer has had a gas supply through it. Any refund due is limited to the previous six years. Tampering and security. Meters can be manipulated to make them under-register, effectively allowing power use without paying for it. This theft or fraud can be dangerous as well as dishonest. Power companies often install remote-reporting meters specifically to enable remote detection of tampering, and specifically to discover energy theft. The change to smart power meters is useful to stop energy theft. When tampering is detected, the normal tactic, legal in most areas of the United States, is to switch the subscriber to a "tampering" tariff charged at the meter's maximum designed current. At US$0.095/kWh, a standard residential 50 A meter causes a legally collectible charge of about US$5,000.00 per month. Meter readers are trained to spot signs of tampering, and with crude mechanical meters, the maximum rate may be charged each billing period until the tamper is removed, or the service is disconnected. A common method of tampering on mechanical disk meters is to attach magnets to the outside of the meter. Strong magnets saturate the magnetic fields in the meter so that the motor portion of a mechanical meter does not operate. Lower power magnets can add to the drag resistance of the internal disk resistance magnets. Magnets can also saturate current transformers or power-supply transformers in electronic meters, though countermeasures are common. Some combinations of capacitive and inductive load can interact with the coils and mass of a rotor and cause reduced or reverse motion. All of these effects can be detected by the electric company, and many modern meters can detect or compensate for them. The owner of the meter normally secures the meter against tampering. Revenue meters' mechanisms and connections are sealed. Meters may also measure VAR-hours (the reflected load), neutral and DC currents (elevated by most electrical tampering), ambient magnetic fields, etc. 
Even simple mechanical meters can have mechanical flags that are dropped by magnetic tampering or large DC currents. Newer computerised meters usually have counter-measures against tampering. AMR (Automated Meter Reading) meters often have sensors that can report opening of the meter cover, magnetic anomalies, extra clock setting, glued buttons, inverted installation, reversed or switched phases etc. Some tampers bypass the meter, wholly or in part. Safe tampers of this type normally increase the neutral current at the meter. Most split-phase residential meters in the United States are unable to detect neutral currents. However, modern tamper-resistant meters can detect and bill it at standard rates. Disconnecting a meter's neutral connector is unsafe because shorts can then pass through people or equipment rather than a metallic ground to the generator or earth. A phantom loop connection via an earth ground is often much higher resistance than the metallic neutral connector. Even if an earth ground is safe, metering at the substation can alert the operator to tampering. Substations, inter-ties, and transformers normally have a high-accuracy meter for the area served. Power companies normally investigate discrepancies between the total billed and the total generated, in order to find and fix power distribution problems. These investigations are an effective method to discover tampering. Power thefts in the United States are often connected with indoor marijuana grow operations. Narcotics detectives associate abnormally high power usage with the lighting such operations require. Indoor marijuana growers aware of this are particularly motivated to steal electricity simply to conceal their usage of it. Regulation and legislation. Following the deregulation of electricity supply markets in many countries, the company responsible for an electricity meter may not be obvious. Depending on the arrangements in place, the meter may be the property of the meter Operator, electricity distributor, the retailer or for some large users of electricity the meter may belong to the customer. The company responsible for reading the meter may not always be the company which owns it. Meter reading is now sometimes subcontracted and in some areas the same person may read gas, water and electricity meters at the same time. The introduction of advanced meters in residential areas has produced additional privacy issues that may affect ordinary customers. These meters are often capable of recording energy usage every 15, 30 or 60 minutes. Some meters have one or two IR LEDs on the front: one used for testing and which acts as the equivalent of the timing mark on the older mechanical meters and the other as part of a two-way IR communications port for reading / programming the meter. These IR LEDs are visible with some night vision viewers and certain video cameras that are capable of sensing IR transmissions. These can be used for surveillance, revealing information about peoples' possessions and behaviour. For instance, it can show when the customer is away for extended periods. Nonintrusive load monitoring gives even more detail about what appliances people have and their living and use patterns. A more detailed and recent analysis of this issue was performed by the Illinois Security Lab. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "P = {{3600 \\cdot Kh } \\over t}" } ]
https://en.wikipedia.org/wiki?curid=1138590
11386287
Levy–Mises equations
The Levy–Mises equations (also called flow rules) describe the relationship between stress and strain for an ideal plastic solid where the elastic strains are negligible. The generalized Levy–Mises equation can be written as: formula_0 Here the primed quantities are the deviatoric principal stress components, dε1, dε2, dε3 are the corresponding plastic strain increments, and dλ is a positive scalar of proportionality that may vary throughout the loading history.
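A minimal numerical sketch of the flow rule follows, assuming a given principal stress state and an externally supplied increment dλ (both invented for illustration). It shows that the plastic strain increments are proportional to the deviatoric stresses and, because the deviatoric stresses sum to zero, volume-preserving.

```python
# Minimal sketch of the Levy-Mises flow rule d(eps_i) = sigma'_i * d(lambda).
# The principal stresses and the increment d(lambda) are assumed example values.

sigma = (120.0, 40.0, -10.0)                 # principal stresses, MPa (assumed)
mean = sum(sigma) / 3.0
sigma_dev = tuple(s - mean for s in sigma)   # deviatoric components sigma'_i

d_lambda = 1e-4                              # proportionality increment (assumed)

d_eps = tuple(s * d_lambda for s in sigma_dev)   # plastic strain increments

print("deviatoric stresses:", sigma_dev)
print("strain increments:  ", d_eps)
print("volume change (should be ~0):", sum(d_eps))
```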
[ { "math_id": 0, "text": "\\frac{\\mathbf{d}\\varepsilon_1}{\\sigma'_1}=\\frac{\\mathbf{d}\\varepsilon_2}\n{\\sigma'_2}=\\frac{\\mathbf{d}\\varepsilon_3}{\\sigma'_3}=\\mathbf{d}\\lambda" } ]
https://en.wikipedia.org/wiki?curid=11386287
11387202
2-EXPTIME
In computational complexity theory, the complexity class 2-EXPTIME (sometimes called 2-EXP) is the set of all decision problems solvable by a deterministic Turing machine in O(2^(2^"p"("n"))) time, where "p"("n") is a polynomial function of "n". In terms of DTIME, formula_0 We know P ⊆ NP ⊆ PSPACE ⊆ EXPTIME ⊆ NEXPTIME ⊆ EXPSPACE ⊆ 2-EXPTIME ⊆ ELEMENTARY. 2-EXPTIME can also be reformulated as the space class AEXPSPACE, the problems that can be solved by an alternating Turing machine in exponential space. This is one way to see that EXPSPACE ⊆ 2-EXPTIME, since an alternating Turing machine is at least as powerful as a deterministic Turing machine. 2-EXPTIME is one class in a hierarchy of complexity classes with increasingly higher time bounds. The class 3-EXPTIME is defined similarly to 2-EXPTIME but with a triply exponential time bound formula_1. This can be generalized to higher and higher time bounds. Examples. Examples of algorithms that require at least 2-EXPTIME include: 2-EXPTIME-complete problems. Generalizations of many fully observable games are EXPTIME-complete. These games can be viewed as particular instances of a class of transition systems defined in terms of a set of state variables and actions/events that change the values of the state variables, together with the question of whether a winning strategy exists. A generalization of this class of fully observable problems to partially observable problems lifts the complexity from EXPTIME-complete to 2-EXPTIME-complete.
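To get a feel for how quickly a doubly exponential bound grows, the toy sketch below tabulates 2^(2^"n") against 2^"n" and enumerates the 2^(2^"n") Boolean functions of "n" variables, a standard example of a doubly exponential quantity; the comparison is purely illustrative and not tied to any particular 2-EXPTIME algorithm.

```python
# Toy illustration of doubly exponential growth: there are 2**(2**n) Boolean
# functions of n variables, so any algorithm that enumerates them all needs
# at least that many steps. Compared here with a singly exponential bound 2**n.

for n in range(1, 6):
    single = 2 ** n
    double = 2 ** (2 ** n)
    print(f"n={n}:  2^n = {single:>4}   2^(2^n) = {double}")

# Enumerating every Boolean function of n variables as a truth table:
def boolean_functions(n):
    rows = 2 ** n                      # number of input assignments
    for code in range(2 ** rows):      # doubly exponential loop
        yield [(code >> r) & 1 for r in range(rows)]

print(sum(1 for _ in boolean_functions(3)), "functions of 3 variables")  # 256
```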
[ { "math_id": 0, "text": " \\mathsf{2\\mbox{-}EXPTIME} = \\bigcup_{k \\in \\mathbb{N} } \\mathsf{ DTIME } \\left( 2^{ 2^{n^k} } \\right) . " }, { "math_id": 1, "text": "2^{2^{2^{n^k}}}" } ]
https://en.wikipedia.org/wiki?curid=11387202
1138853
Abelian integral
In mathematics, an abelian integral, named after the Norwegian mathematician Niels Henrik Abel, is an integral in the complex plane of the form formula_0 where formula_1 is an arbitrary rational function of the two variables formula_2 and formula_3, which are related by the equation formula_4 where formula_5 is an irreducible polynomial in formula_3, formula_6 whose coefficients formula_7, formula_8 are rational functions of formula_2. The value of an abelian integral depends not only on the integration limits, but also on the path along which the integral is taken; it is thus a multivalued function of formula_9. Abelian integrals are natural generalizations of elliptic integrals, which arise when formula_10 where formula_11 is a polynomial of degree 3 or 4. Another special case of an abelian integral is a hyperelliptic integral, where formula_12, in the formula above, is a polynomial of degree greater than 4. History. The theory of abelian integrals originated with a paper by Abel published in 1841. This paper was written during his stay in Paris in 1826 and presented to Augustin-Louis Cauchy in October of the same year. This theory, later fully developed by others, was one of the crowning achievements of nineteenth century mathematics and has had a major impact on the development of modern mathematics. In more abstract and geometric language, it is contained in the concept of abelian variety, or more precisely in the way an algebraic curve can be mapped into abelian varieties. Abelian integrals were later connected to the prominent mathematician David Hilbert's 16th Problem, and they continue to be considered one of the foremost challenges in contemporary mathematics. Modern view. In the theory of Riemann surfaces, an abelian integral is a function related to the indefinite integral of a differential of the first kind. Suppose we are given a Riemann surface formula_13 and on it a differential 1-form formula_14 that is everywhere holomorphic on formula_13, and fix a point formula_15 on formula_13, from which to integrate. We can regard formula_16 as a multi-valued function formula_17, or (better) an honest function of the chosen path formula_18 drawn on formula_13 from formula_15 to formula_19. Since formula_13 will in general be multiply connected, one should specify formula_18, but the value will in fact only depend on the homology class of formula_18. In the case of formula_13 a compact Riemann surface of genus 1, i.e. an elliptic curve, such functions are the elliptic integrals. Logically speaking, therefore, an abelian integral should be a function such as formula_20. Such functions were first introduced to study hyperelliptic integrals, i.e., for the case where formula_13 is a hyperelliptic curve. This is a natural step in the theory of integration to the case of integrals involving algebraic functions formula_21, where formula_22 is a polynomial of degree formula_23. The first major insights of the theory were given by Abel; it was later formulated in terms of the Jacobian variety formula_24. Choice of formula_15 gives rise to a standard holomorphic function formula_25 of complex manifolds. It has the defining property that the holomorphic 1-forms on formula_25, of which there are "g" independent ones if "g" is the genus of "S", pull back to a basis for the differentials of the first kind on "S".
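As a small numerical illustration of the simplest (elliptic) special case, the sketch below evaluates an abelian integral with R("x","w") = 1/"w" and F("x","w") = "w"^2 − ("x"^3 + 1) along a real segment; the particular cubic and the integration path are arbitrary choices made for the example.

```python
# Numerical sketch of the simplest abelian integral: the elliptic case
# F(x, w) = w**2 - P(x) with P of degree 3, and R(x, w) = 1/w.
# The cubic P and the (real) integration path are arbitrary example choices.
import numpy as np
from scipy.integrate import quad

def P(x):
    return x**3 + 1.0          # assumed cubic with no roots in [0, 1]

def integrand(x):
    w = np.sqrt(P(x))          # one branch of w, satisfying w**2 = P(x)
    return 1.0 / w             # R(x, w) = 1/w

value, abs_err = quad(integrand, 0.0, 1.0)
print(f"integral of dx/sqrt(x^3 + 1) over [0, 1] = {value:.6f} (err ~ {abs_err:.1e})")
```

Following the integration path around a branch point of "w" instead of along this real segment would change the value, which is the multivaluedness described above.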
[ { "math_id": 0, "text": "\\int_{z_0}^z R(x,w) \\, dx," }, { "math_id": 1, "text": "R(x,w)" }, { "math_id": 2, "text": "x" }, { "math_id": 3, "text": "w" }, { "math_id": 4, "text": "F(x,w)=0," }, { "math_id": 5, "text": "F(x,w)" }, { "math_id": 6, "text": "F(x,w)\\equiv\\varphi_n(x)w^n+\\cdots+\\varphi_1(x)w +\\varphi_0\\left(x\\right)," }, { "math_id": 7, "text": "\\varphi_j(x)" }, { "math_id": 8, "text": "j=0,1,\\ldots,n" }, { "math_id": 9, "text": "z" }, { "math_id": 10, "text": "F(x,w)=w^2-P(x), \\, " }, { "math_id": 11, "text": "P\\left(x\\right)" }, { "math_id": 12, "text": "P(x)" }, { "math_id": 13, "text": "S" }, { "math_id": 14, "text": "\\omega" }, { "math_id": 15, "text": "P_0" }, { "math_id": 16, "text": "\\int_{P_0}^P \\omega" }, { "math_id": 17, "text": "f\\left(P\\right)" }, { "math_id": 18, "text": "C" }, { "math_id": 19, "text": "P" }, { "math_id": 20, "text": "f" }, { "math_id": 21, "text": "\\sqrt{A}" }, { "math_id": 22, "text": "A" }, { "math_id": 23, "text": ">4" }, { "math_id": 24, "text": "J\\left(S\\right)" }, { "math_id": 25, "text": "S\\to J(S)" } ]
https://en.wikipedia.org/wiki?curid=1138853
1138912
Differential of the first kind
In mathematics, differential of the first kind is a traditional term used in the theories of Riemann surfaces (more generally, complex manifolds) and algebraic curves (more generally, algebraic varieties), for everywhere-regular differential 1-forms. Given a complex manifold "M", a differential of the first kind ω is therefore the same thing as a 1-form that is everywhere holomorphic; on an algebraic variety "V" that is non-singular it would be a global section of the coherent sheaf Ω^1 of Kähler differentials. In either case the definition has its origins in the theory of abelian integrals. The dimension of the space of differentials of the first kind, by means of this identification, is the Hodge number "h"^(1,0). The differentials of the first kind, when integrated along paths, give rise to integrals that generalise the elliptic integrals to all curves over the complex numbers. They include for example the hyperelliptic integrals of type formula_0 where "Q" is a square-free polynomial of any given degree > 4. The allowable power "k" has to be determined by analysis of the possible pole at the point at infinity on the corresponding hyperelliptic curve. When this is done, one finds that the condition is "k" ≤ "g" − 1, or in other words, "k" at most 1 for degree of "Q" 5 or 6, at most 2 for degree 7 or 8, and so on (as "g" = [(deg "Q" − 1)/2]). Quite generally, as this example illustrates, for a compact Riemann surface or algebraic curve, the Hodge number is the genus "g". For the case of algebraic surfaces, this is the quantity known classically as the irregularity "q". It is also, in general, the dimension of the Albanese variety, which takes the place of the Jacobian variety. Differentials of the second and third kind. The traditional terminology also included differentials of the second kind and of the third kind. The idea behind this has been supported by modern theories of algebraic differential forms, both from the side of more Hodge theory, and through the use of morphisms to commutative algebraic groups. The Weierstrass zeta function was called an "integral of the second kind" in elliptic function theory; it is a logarithmic derivative of a theta function, and therefore has simple poles, with integer residues. The decomposition of a (meromorphic) elliptic function into pieces of 'three kinds' parallels the representation as (i) a constant, plus (ii) a linear combination of translates of the Weierstrass zeta function, plus (iii) a function with arbitrary poles but no residues at them. The same type of decomposition exists in general, "mutatis mutandis", though the terminology is not completely consistent. In the algebraic group (generalized Jacobian) theory the three kinds are abelian varieties, algebraic tori, and affine spaces, and the decomposition is in terms of a composition series. On the other hand, a meromorphic abelian differential of the "second kind" has traditionally been one with residues at all poles being zero. One of the third kind is one where all poles are simple. There is a higher-dimensional analogue available, using the Poincaré residue.
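A short sketch, using the condition "k" ≤ "g" − 1 with "g" = [(deg "Q" − 1)/2] stated above, lists the allowed exponents "k", and hence a basis "x"^"k" d"x"/√"Q" of the differentials of the first kind, for a few example degrees of "Q"; the degrees themselves are arbitrary illustrative values.

```python
# Sketch: for a hyperelliptic curve w**2 = Q(x) with Q square-free of degree d > 4,
# the differentials of the first kind are x**k dx / w for 0 <= k <= g - 1,
# where g = floor((d - 1) / 2) is the genus. The degrees below are example values.

def genus(deg_q: int) -> int:
    return (deg_q - 1) // 2

for d in range(5, 11):
    g = genus(d)
    ks = list(range(g))          # allowed powers k = 0, ..., g - 1
    basis = [f"x^{k} dx / sqrt(Q)" for k in ks]
    print(f"deg Q = {d}:  genus g = {g},  basis: {basis}")
```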
[ { "math_id": 0, "text": " \\int\\frac{x^k \\, dx}{\\sqrt{Q(x)}} " } ]
https://en.wikipedia.org/wiki?curid=1138912
11391180
Conditional variance
Variance of a random variable given value of other variables In probability theory and statistics, a conditional variance is the variance of a random variable given the value(s) of one or more other variables. Particularly in econometrics, the conditional variance is also known as the scedastic function or skedastic function. Conditional variances are important parts of autoregressive conditional heteroskedasticity (ARCH) models. Definition. The conditional variance of a random variable "Y" given another random variable "X" is formula_0 The conditional variance tells us how much variance is left if we use formula_1 to "predict" "Y". Here, as usual, formula_1 stands for the conditional expectation of "Y" given "X", which we may recall, is a random variable itself (a function of "X", determined up to probability one). As a result, formula_2 itself is a random variable (and is a function of "X"). Explanation, relation to least-squares. Recall that variance is the expected squared deviation between a random variable (say, "Y") and its expected value. The expected value can be thought of as a reasonable prediction of the outcomes of the random experiment (in particular, the expected value is the best constant prediction when predictions are assessed by expected squared prediction error). Thus, one interpretation of variance is that it gives the smallest possible expected squared prediction error. If we have the knowledge of another random variable ("X") that we can use to predict "Y", we can potentially use this knowledge to reduce the expected squared error. As it turns out, the best prediction of "Y" given "X" is the conditional expectation. In particular, for any formula_3 measurable, formula_4 By selecting formula_5, the second, nonnegative term becomes zero, showing the claim. Here, the second equality used the law of total expectation. We also see that the expected conditional variance of "Y" given "X" shows up as the irreducible error of predicting "Y" given only the knowledge of "X". Special cases, variations. Conditioning on discrete random variables. When "X" takes on countable many values formula_6 with positive probability, i.e., it is a discrete random variable, we can introduce formula_7, the conditional variance of "Y" given that "X=x" for any "x" from "S" as follows: formula_8 where recall that formula_9 is the conditional expectation of "Z" given that "X=x", which is well-defined for formula_10. An alternative notation for formula_7 is formula_11 Note that here formula_7 defines a constant for possible values of "x", and in particular, formula_7, is "not" a random variable. The connection of this definition to formula_12 is as follows: Let "S" be as above and define the function formula_13 as formula_14. Then, formula_15 almost surely. Definition using conditional distributions. The "conditional expectation of "Y" given "X=x"" can also be defined more generally using the conditional distribution of "Y" given "X" (this exists in this case, as both here "X" and "Y" are real-valued). In particular, letting formula_16 be the (regular) conditional distribution formula_16 of "Y" given "X", i.e., formula_17 (the intention is that formula_18 almost surely over the support of "X"), we can define formula_19 This can, of course, be specialized to when "Y" is discrete itself (replacing the integrals with sums), and also when the conditional density of "Y" given "X=x" with respect to some underlying distribution exists. Components of variance. 
The law of total variance says formula_20 In words: the variance of "Y" is the sum of the expected conditional variance of "Y" given "X" and the variance of the conditional expectation of "Y" given "X". The first term captures the variation left after "using "X" to predict "Y"", while the second term captures the variation due to the mean of the prediction of "Y" due to the randomness of "X". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
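The decomposition can be verified numerically. In the hedged sketch below, "X" is discrete and "Y" given "X" = "x" is normal with mean m("x") and variance v("x"); the particular distributions and the sample size are arbitrary choices made for the demonstration.

```python
# Numerical check of Var(Y) = E[Var(Y|X)] + Var(E[Y|X]) for an example where
# X is discrete and Y | X = x is normal with mean m(x) and variance v(x).
# The distributions below are arbitrary choices for the demonstration.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

xs = np.array([0, 1, 2])
px = np.array([0.5, 0.3, 0.2])          # P(X = x)
m = np.array([0.0, 2.0, 5.0])           # E[Y | X = x]
v = np.array([1.0, 4.0, 0.25])          # Var(Y | X = x)

X = rng.choice(xs, size=n, p=px)
Y = rng.normal(loc=m[X], scale=np.sqrt(v[X]))

lhs = Y.var()                            # Var(Y), estimated by simulation
expected_cond_var = (px * v).sum()       # E[Var(Y | X)]
var_cond_mean = (px * m**2).sum() - (px * m).sum() ** 2   # Var(E[Y | X])
rhs = expected_cond_var + var_cond_mean

print(f"simulated Var(Y)          = {lhs:.4f}")
print(f"E[Var(Y|X)] + Var(E[Y|X]) = {rhs:.4f}")
```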
[ { "math_id": 0, "text": "\\operatorname{Var}(Y\\mid X) = \\operatorname{E}\\Big(\\big(Y - \\operatorname{E}(Y\\mid X)\\big)^{2}\\;\\Big|\\; X\\Big)." }, { "math_id": 1, "text": "\\operatorname{E}(Y\\mid X)" }, { "math_id": 2, "text": "\\operatorname{Var}(Y\\mid X)" }, { "math_id": 3, "text": "f: \\mathbb{R} \\to \\mathbb{R}" }, { "math_id": 4, "text": "\n\\begin{align}\n\\operatorname{E}[ (Y-f(X))^2 ]\n&= \\operatorname{E}[ (Y-\\operatorname{E}(Y|X)\\,\\,+\\,\\, \\operatorname{E}(Y|X)-f(X) )^2 ] \\\\\n&= \\operatorname{E}[ \\operatorname{E}\\{ (Y-\\operatorname{E}(Y|X)\\,\\,+\\,\\, \\operatorname{E}(Y|X)-f(X) )^2|X\\} ] \\\\\n&= \\operatorname{E}[\\operatorname{Var}( Y| X )] + \\operatorname{E}[(\\operatorname{E}(Y|X)-f(X))^2]\\,.\n\\end{align}\n" }, { "math_id": 5, "text": "f(X)=\\operatorname{E}(Y|X)" }, { "math_id": 6, "text": "S = \\{x_1,x_2,\\dots\\}" }, { "math_id": 7, "text": "\\operatorname{Var}(Y|X=x)" }, { "math_id": 8, "text": "\\operatorname{Var}(Y|X=x) = \\operatorname{E}((Y - \\operatorname{E}(Y\\mid X=x))^{2}\\mid X=x)=\\operatorname{E}(Y^2|X=x)-\\operatorname{E}(Y|X=x)^2," }, { "math_id": 9, "text": "\\operatorname{E}(Z\\mid X=x)" }, { "math_id": 10, "text": "x\\in S" }, { "math_id": 11, "text": "\\operatorname{Var}_{Y\\mid X}(Y|x)." }, { "math_id": 12, "text": "\\operatorname{Var}(Y|X)" }, { "math_id": 13, "text": "v: S \\to \\mathbb{R}" }, { "math_id": 14, "text": "v(x) = \\operatorname{Var}(Y|X=x)" }, { "math_id": 15, "text": "v(X) = \\operatorname{Var}(Y|X)" }, { "math_id": 16, "text": "P_{Y|X}" }, { "math_id": 17, "text": "P_{Y|X}:\\mathcal{B} \\times \\mathbb{R}\\to [0,1]" }, { "math_id": 18, "text": "P_{Y|X}(U,x) = P(Y\\in U|X=x)" }, { "math_id": 19, "text": " \\operatorname{Var}(Y|X=x) = \\int \\left(y- \\int y' P_{Y|X}(dy'|x)\\right)^2 P_{Y|X}(dy|x). " }, { "math_id": 20, "text": "\\operatorname{Var}(Y) = \\operatorname{E}(\\operatorname{Var}(Y\\mid X))+\\operatorname{Var}(\\operatorname{E}(Y\\mid X))." } ]
https://en.wikipedia.org/wiki?curid=11391180
11391242
Confirmatory factor analysis
Form of statistical factor analysis In statistics, confirmatory factor analysis (CFA) is a special form of factor analysis, most commonly used in social science research. It is used to test whether measures of a construct are consistent with a researcher's understanding of the nature of that construct (or factor). As such, the objective of confirmatory factor analysis is to test whether the data fit a hypothesized measurement model. This hypothesized model is based on theory and/or previous analytic research. CFA was first developed by Jöreskog (1969) and has built upon and replaced older methods of analyzing construct validity such as the MTMM Matrix as described in Campbell &amp; Fiske (1959). In confirmatory factor analysis, the researcher first develops a hypothesis about what factors they believe are underlying the measures used (e.g., "Depression" being the factor underlying the Beck Depression Inventory and the Hamilton Rating Scale for Depression) and may impose constraints on the model based on these a priori hypotheses. By imposing these constraints, the researcher is forcing the model to be consistent with their theory. For example, if it is posited that there are two factors accounting for the covariance in the measures, and that these factors are unrelated to each other, the researcher can create a model where the correlation between factor A and factor B is constrained to zero. Model fit measures could then be obtained to assess how well the proposed model captured the covariance between all the items or measures in the model. If the constraints the researcher has imposed on the model are inconsistent with the sample data, then the results of statistical tests of model fit will indicate a poor fit, and the model will be rejected. If the fit is poor, it may be due to some items measuring multiple factors. It might also be that some items within a factor are more related to each other than others. For some applications, the requirement of "zero loadings" (for indicators not supposed to load on a certain factor) has been regarded as too strict. A newly developed analysis method, "exploratory structural equation modeling", specifies hypotheses about the relation between observed indicators and their supposed primary latent factors while allowing for estimation of loadings with other latent factors as well. Statistical model. In confirmatory factor analysis, researchers are typically interested in studying the degree to which responses on a "p" x 1 vector of observable random variables can be used to assign a value to one or more unobserved variable(s) "formula_0". The investigation is largely accomplished by estimating and evaluating the loading of each item used to tap aspects of the unobserved latent variable. That is, y[i] is the vector of observed responses predicted by the unobserved latent variable "formula_0," which is defined as: formula_1, where formula_2 is the "p" x 1 vector of observed random variables, formula_0 are the unobserved latent variables and formula_3 is a "p" x "k" matrix with "k" equal to the number of latent variables. Since, formula_2 are imperfect measures of formula_0, the model also consists of error, formula_4. Estimates in the maximum likelihood (ML) case generated by iteratively minimizing the fit function, formula_5 where formula_6 is the variance-covariance matrix implied by the proposed factor analysis model and formula_7 is the observed variance-covariance matrix. 
That is, values are found for free model parameters that minimize the difference between the model-implied variance-covariance matrix and observed variance-covariance matrix. Alternative estimation strategies. Although numerous algorithms have been used to estimate CFA models, maximum likelihood (ML) remains the primary estimation procedure. That being said, CFA models are often applied to data conditions that deviate from the normal theory requirements for valid ML estimation. For example, social scientists often estimate CFA models with non-normal data and indicators scaled using discrete ordered categories. Accordingly, alternative algorithms have been developed that attend to the diverse data conditions applied researchers encounter. The alternative estimators have been characterized into two general type: (1) robust and (2) limited information estimator. When ML is implemented with data that deviates away from the assumptions of normal theory, CFA models may produce biased parameter estimates and misleading conclusions. Robust estimation typically attempts to correct the problem by adjusting the normal theory model χ2 and standard errors. For example, Satorra and Bentler (1994) recommended using ML estimation in the usual way and subsequently dividing the model χ2 by a measure of the degree of multivariate kurtosis. An added advantage of robust ML estimators is their availability in common SEM software (e.g., LAVAAN). Unfortunately, robust ML estimators can become untenable under common data conditions. In particular, when indicators are scaled using few response categories (e.g., "disagree", "neutral", "agree") robust ML estimators tend to perform poorly. Limited information estimators, such as weighted least squares (WLS), are likely a better choice when manifest indicators take on an ordinal form. Broadly, limited information estimators attend to the ordinal indicators by using polychoric correlations to fit CFA models. Polychoric correlations capture the covariance between two latent variables when only their categorized form is observed, which is achieved largely through the estimation of threshold parameters. Exploratory factor analysis. Both exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) are employed to understand shared variance of measured variables that is believed to be attributable to a factor or latent construct. Despite this similarity, however, EFA and CFA are conceptually and statistically distinct analyses. The goal of EFA is to identify factors based on data and to maximize the amount of variance explained. The researcher is not required to have any specific hypotheses about how many factors will emerge, and what items or variables these factors will comprise. If these hypotheses exist, they are not incorporated into and do not affect the results of the statistical analyses. By contrast, CFA evaluates "a priori" hypotheses and is largely driven by theory. CFA analyses require the researcher to hypothesize, in advance, the number of factors, whether or not these factors are correlated, and which items/measures load onto and reflect which factors. As such, in contrast to exploratory factor analysis, where all loadings are free to vary, CFA allows for the explicit constraint of certain loadings to be zero. EFA is often considered to be more appropriate than CFA in the early stages of scale development because CFA does not show how well your items load on the non-hypothesized factors. 
Another strong argument for the initial use of EFA, is that the misspecification of the number of factors at an early stage of scale development will typically not be detected by confirmatory factor analysis. At later stages of scale development, confirmatory techniques may provide more information by the explicit contrast of competing factor structures. EFA is sometimes reported in research when CFA would be a better statistical approach. It has been argued that CFA can be restrictive and inappropriate when used in an exploratory fashion. However, the idea that CFA is solely a “confirmatory” analysis may sometimes be misleading, as modification indices used in CFA are somewhat exploratory in nature. Modification indices show the improvement in model fit if a particular coefficient were to become unconstrained. Likewise, EFA and CFA do not have to be mutually exclusive analyses; EFA has been argued to be a reasonable follow up to a poor-fitting CFA model. Structural equation modeling. Structural equation modeling software is typically used for performing confirmatory factor analysis. LISREL, EQS, AMOS, Mplus and LAVAAN package in R are popular software programs. There is also the Python package semopy 2. CFA is also frequently used as a first step to assess the proposed measurement model in a structural equation model. Many of the rules of interpretation regarding assessment of model fit and model modification in structural equation modeling apply equally to CFA. CFA is distinguished from structural equation modeling by the fact that in CFA, there are no directed arrows between latent factors. In other words, while in CFA factors are not presumed to directly cause one another, SEM often does specify particular factors and variables to be causal in nature. In the context of SEM, the CFA is often called 'the measurement model', while the relations between the latent variables (with directed arrows) are called 'the structural model'. Evaluating model fit. In CFA, several statistical tests are used to determine how well the model fits to the data. Note that a good fit between the model and the data does not mean that the model is “correct”, or even that it explains a large proportion of the covariance. A “good model fit” only indicates that the model is plausible. When reporting the results of a confirmatory factor analysis, one is urged to report: a) the proposed models, b) any modifications made, c) which measures identify each latent variable, d) correlations between latent variables, e) any other pertinent information, such as whether constraints are used. With regard to selecting model fit statistics to report, one should not simply report the statistics that estimate the best fit, though this may be tempting. Though several varying opinions exist, Kline (2010) recommends reporting the chi-squared test, the root mean square error of approximation (RMSEA), the comparative fit index (CFI), and the standardised root mean square residual (SRMR). Absolute fit indices. Absolute fit indices determine how well the a priori model fits, or reproduces the data. Absolute fit indices include, but are not limited to, the Chi-Squared test, RMSEA, GFI, AGFI, RMR, and SRMR. Chi-squared test. The chi-squared test indicates the difference between observed and expected covariance matrices. Values closer to zero indicate a better fit; smaller difference between expected and observed covariance matrices. Chi-squared statistics can also be used to directly compare the fit of nested models to the data. 
One difficulty with the chi-squared test of model fit, however, is that researchers may fail to reject an inappropriate model in small sample sizes and reject an appropriate model in large sample sizes. As a result, other measures of fit have been developed. Root mean square error of approximation. The root mean square error of approximation (RMSEA) avoids issues of sample size by analyzing the discrepancy between the hypothesized model, with optimally chosen parameter estimates, and the population covariance matrix. The RMSEA ranges from 0 to 1, with smaller values indicating better model fit. A value of .06 or less is indicative of acceptable model fit. Root mean square residual and standardized root mean square residual. The root mean square residual (RMR) and standardized root mean square residual (SRMR) are the square root of the discrepancy between the sample covariance matrix and the model covariance matrix. The RMR may be somewhat difficult to interpret, however, as its range is based on the scales of the indicators in the model (this becomes tricky when you have multiple indicators with varying scales; e.g., two questionnaires, one on a 0–10 scale, the other on a 1–3 scale). The standardized root mean square residual removes this difficulty in interpretation, and ranges from 0 to 1, with a value of .08 or less being indicative of an acceptable model. Goodness of fit index and adjusted goodness of fit index. The goodness of fit index (GFI) is a measure of fit between the hypothesized model and the observed covariance matrix. The adjusted goodness of fit index (AGFI) corrects the GFI, which is affected by the number of indicators of each latent variable. The GFI and AGFI range between 0 and 1, with a value of over .9 generally indicating acceptable model fit. Relative fit indices. Relative fit indices (also called “incremental fit indices” and “comparative fit indices”) compare the chi-square for the hypothesized model to one from a “null”, or “baseline” model. This null model almost always contains a model in which all of the variables are uncorrelated, and as a result, has a very large chi-square (indicating poor fit). Relative fit indices include the normed fit index and comparative fit index. Normed fit index and non-normed fit index. The normed fit index (NFI) analyzes the discrepancy between the chi-squared value of the hypothesized model and the chi-squared value of the null model. However, NFI tends to be negatively biased. The non-normed fit index (NNFI; also known as the Tucker–Lewis index, as it was built on an index formed by Tucker and Lewis, in 1973) resolves some of the issues of negative bias, though NNFI values may sometimes fall beyond the 0 to 1 range. Values for both the NFI and NNFI should range between 0 and 1, with a cutoff of .95 or greater indicating a good model fit. Comparative fit index. The comparative fit index (CFI) analyzes the model fit by examining the discrepancy between the data and the hypothesized model, while adjusting for the issues of sample size inherent in the chi-squared test of model fit, and the normed fit index. CFI values range from 0 to 1, with larger values indicating better fit. Previously, a CFI value of .90 or larger was considered to indicate acceptable model fit. However, recent studies have indicated that a value greater than .90 is needed to ensure that misspecified models are not deemed acceptable. Thus, a CFI value of .95 or higher is presently accepted as an indicator of good fit. Identification and underidentification. 
To estimate the parameters of a model, the model must be properly identified. That is, the number of estimated (unknown) parameters ("q") must be less than or equal to the number of unique variances and covariances among the measured variables; "p"("p" + 1)/2. This equation is known as the "t rule". If there is too little information available on which to base the parameter estimates, then the model is said to be underidentified, and model parameters cannot be estimated appropriately. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\xi" }, { "math_id": 1, "text": "Y=\\Lambda\\xi+\\epsilon" }, { "math_id": 2, "text": "Y" }, { "math_id": 3, "text": "\\Lambda" }, { "math_id": 4, "text": "\\epsilon" }, { "math_id": 5, "text": "F_{\\mathrm{ML}}=\\ln|\\Lambda\\Omega\\Lambda{'}+I-\\operatorname{diag}(\\Lambda\\Omega\\Lambda{'})|+\\operatorname{tr}(R(\\Lambda\\Omega\\Lambda{'}+I-\\operatorname{diag}(\\Lambda\\Omega\\Lambda{'}))^{-1})-\\ln(R)-p" }, { "math_id": 6, "text": "\\Lambda\\Omega\\Lambda{'}+I-\\operatorname{diag}(\\Lambda\\Omega\\Lambda{'})" }, { "math_id": 7, "text": "R" } ]
https://en.wikipedia.org/wiki?curid=11391242
1139142
Bruhat decomposition
In mathematics, the Bruhat decomposition (introduced by François Bruhat for classical groups and by Claude Chevalley in general) "G" = "BWB" of certain algebraic groups "G" into cells can be regarded as a general expression of the principle of Gauss–Jordan elimination, which generically writes a matrix as a product of an upper triangular matrix and a lower triangular matrix—but with exceptional cases. It is related to the Schubert cell decomposition of flag varieties: see Weyl group for this. More generally, any group with a ("B", "N") pair has a Bruhat decomposition. Definitions. Here "G" is a connected, reductive algebraic group over an algebraically closed field, "B" is a Borel subgroup of "G", and "W" is the Weyl group of "G" corresponding to a maximal torus of "B". The Bruhat decomposition of "G" is the decomposition formula_0 of "G" as a disjoint union of double cosets of "B" parameterized by the elements of the Weyl group "W". (Note that although "W" is not in general a subgroup of "G", the coset "wB" is still well defined because the maximal torus is contained in "B".) Examples. Let "G" be the general linear group GL"n" of invertible formula_1 matrices with entries in some algebraically closed field, which is a reductive group. Then the Weyl group "W" is isomorphic to the symmetric group "S""n" on "n" letters, with permutation matrices as representatives. In this case, we can take "B" to be the subgroup of upper triangular invertible matrices, so Bruhat decomposition says that one can write any invertible matrix "A" as a product "U"1"PU"2 where "U"1 and "U"2 are upper triangular, and "P" is a permutation matrix. Writing this as "P" = "U"1−1"AU"2−1, this says that any invertible matrix can be transformed into a permutation matrix via a series of row and column operations, where we are only allowed to add row "i" (resp. column "i") to row "j" (resp. column "j") if "i" > "j" (resp. "i" < "j"). The row operations correspond to "U"1−1, and the column operations correspond to "U"2−1. The special linear group SL"n" of invertible formula_1 matrices with determinant 1 is a semisimple group, and hence reductive. In this case, "W" is still isomorphic to the symmetric group "S""n". However, the determinant of a permutation matrix is the sign of the permutation, so to represent an odd permutation in SL"n", we can take one of the nonzero elements to be −1 instead of 1. Here "B" is the subgroup of upper triangular matrices with determinant 1, so the interpretation of Bruhat decomposition in this case is similar to the case of GL"n". Geometry. The cells in the Bruhat decomposition correspond to the Schubert cell decomposition of flag varieties. The dimension of the cells corresponds to the length of the word "w" in the Weyl group. Poincaré duality constrains the topology of the cell decomposition, and thus the algebra of the Weyl group; for instance, the top-dimensional cell is unique (it represents the fundamental class), and corresponds to the longest element of a Coxeter group. Computations. The number of cells in a given dimension of the Bruhat decomposition is given by the coefficients of the "q"-polynomial of the associated Dynkin diagram. Double Bruhat cells. With two opposite Borel subgroups, one may intersect the Bruhat cells for each of them, giving a further decomposition formula_2
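The row and column operations in the GL"n" example above can be organized into a short numerical procedure. The following sketch (an illustration, not taken from the article; it assumes a real invertible input and uses a floating-point tolerance) computes "A" = "U"1"PU"2 with numpy by choosing, in each column, the lowest not-yet-used row as the pivot, clearing above it with row operations and to its right with column operations, and then verifies the result.

```python
import numpy as np

def bruhat(A, tol=1e-12):
    """Return upper triangular U1, permutation P and upper triangular U2
    with A = U1 @ P @ U2, using only 'add row i to row j for i > j' and
    'add column i to column j for i < j' operations."""
    n = A.shape[0]
    M = np.array(A, dtype=float)
    U1 = np.eye(n)            # accumulates the inverses of the row operations
    U2 = np.eye(n)            # accumulates the inverses of the column operations
    pivot_row = [0] * n
    used = set()
    for j in range(n):
        # pivot: the lowest not-yet-used row with a nonzero entry in column j
        i = max(r for r in range(n) if r not in used and abs(M[r, j]) > tol)
        used.add(i)
        pivot_row[j] = i
        for r in range(i):                 # clear column j above the pivot
            if abs(M[r, j]) > tol:
                c = M[r, j] / M[i, j]
                M[r, :] -= c * M[i, :]     # add a multiple of row i (i > r) to row r
                U1[:, i] += c * U1[:, r]   # fold the inverse operation into U1
        for k in range(j + 1, n):          # clear row i to the right of the pivot
            if abs(M[i, k]) > tol:
                c = M[i, k] / M[i, j]
                M[:, k] -= c * M[:, j]     # add a multiple of column j (j < k) to column k
                U2[j, :] += c * U2[k, :]   # fold the inverse operation into U2
    # M is now monomial: one nonzero entry per row and column.  Write M = P @ D
    # with D diagonal, and absorb D (which is upper triangular) into U2.
    P = np.zeros((n, n))
    D = np.zeros((n, n))
    for j in range(n):
        P[pivot_row[j], j] = 1.0
        D[j, j] = M[pivot_row[j], j]
    return U1, P, D @ U2

A = np.array([[0., 2., 1.], [1., 3., 0.], [2., 1., 4.]])
U1, P, U2 = bruhat(A)
assert np.allclose(U1 @ P @ U2, A)
assert np.allclose(U1, np.triu(U1)) and np.allclose(U2, np.triu(U2))
```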
[ { "math_id": 0, "text": "G=BWB =\\bigsqcup_{w\\in W}BwB" }, { "math_id": 1, "text": "n \\times n" }, { "math_id": 2, "text": "G=\\bigsqcup_{w_1 , w_2\\in W} ( Bw_1 B \\cap B_- w_2 B_- )." } ]
https://en.wikipedia.org/wiki?curid=1139142
11391440
Kobon triangle problem
&lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in mathematics: How many non-overlapping triangles can be formed in an arrangement of formula_0 lines? The Kobon triangle problem is an unsolved problem in combinatorial geometry first stated by Kobon Fujimura (1903-1983). The problem asks for the largest number "N"("k") of nonoverlapping triangles whose sides lie on an arrangement of "k" lines. Variations of the problem consider the projective plane rather than the Euclidean plane, and require that the triangles not be crossed by any other lines of the arrangement. Known upper bounds. Saburo Tamura proved that the number of nonoverlapping triangles realizable by formula_0 lines is at most formula_1. G. Clément and J. Bader proved more strongly that this bound cannot be achieved when formula_0 is congruent to 0 or 2 (mod 6). The maximum number of triangles is therefore at most one less in these cases. The same bounds can be equivalently stated, without use of the floor function, as: formula_2 Solutions yielding this number of triangles are known when formula_0 is 3, 4, 5, 6, 7, 8, 9, 13, 15 or 17. For "k" = 10, 11 and 12, the best solutions known reach a number of triangles one less than the upper bound. Known constructions. The following bounds are known: In the projective plane. The version of the problem in the projective plane allows more triangles. In this version, it is convenient to include the line at infinity as one of the given lines, after which the triangles appear in three forms: For instance, an arrangement of five finite lines forming a pentagram, together with a sixth line at infinity, has ten triangles: five in the pentagram, and five more bounded by pairs of rays. D. Forge and J. L. Ramirez Alfonsin provided a method for going from an arrangement in the projective plane with formula_3 lines and formula_4 triangles (the maximum possible for formula_3), with certain additional properties, to another solution with formula_5 lines and formula_6 triangles (again maximum), with the same additional properties. As they observe, it is possible to start this method with the projective arrangement of six lines and ten triangles described above, producing optimal projective arrangements whose numbers of lines are &lt;templatestyles src="Block indent/styles.css"/&gt;6, 11, 21, 41, 81, ... . Thus, in the projective case, there are infinitely many different numbers of lines for which an optimal solution is known. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
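The stated upper bound is straightforward to evaluate. The following sketch (illustrative only; the function name is arbitrary) computes Tamura's bound together with the Clément–Bader correction for "k" congruent to 0 or 2 (mod 6).

```python
def kobon_upper_bound(k: int) -> int:
    """Upper bound on the number of triangles for k lines, per the bounds above:
    floor(k(k-2)/3), reduced by one when k is congruent to 0 or 2 (mod 6)."""
    bound = k * (k - 2) // 3
    if k % 6 in (0, 2):
        bound -= 1
    return bound

# For k = 3..10 this yields 1, 2, 5, 7, 11, 15, 21, 26, matching the piecewise
# formulas quoted above; for k = 10 the best known arrangement (25 triangles)
# is one less than this bound, as noted in the text.
print([kobon_upper_bound(k) for k in range(3, 11)])
```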
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "\\lfloor k(k-2)/3\\rfloor" }, { "math_id": 2, "text": "\n\\begin{cases}\n\\frac 13 k (k-2) & \\text{when } k \\equiv 3,5 \\pmod{6}; \\\\\n\\frac 13 (k+1)(k-3) & \\text{when } k \\equiv 0,2 \\pmod{6}; \\\\\n\\frac 13 (k^2-2k-2) & \\text{when } k \\equiv 1,4 \\pmod{6}.\n\\end{cases}\n" }, { "math_id": 3, "text": "k>3" }, { "math_id": 4, "text": "\\tfrac13k(k-1)" }, { "math_id": 5, "text": "K=2k-1" }, { "math_id": 6, "text": "\\tfrac13K(K-1)" }, { "math_id": 7, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=11391440
1139145
Cartan decomposition
Generalized matrix decomposition for Lie groups and Lie algebras In mathematics, the Cartan decomposition is a decomposition of a semisimple Lie group or Lie algebra, which plays an important role in their structure theory and representation theory. It generalizes the polar decomposition or singular value decomposition of matrices. Its history can be traced to the 1880s work of Élie Cartan and Wilhelm Killing. Cartan involutions on Lie algebras. Let formula_0 be a real semisimple Lie algebra and let formula_1 be its Killing form. An involution on formula_0 is a Lie algebra automorphism formula_2 of formula_0 whose square is equal to the identity. Such an involution is called a "Cartan involution" on formula_0 if formula_3 is a positive definite bilinear form. Two involutions formula_4 and formula_5 are considered equivalent if they differ only by an inner automorphism. Any real semisimple Lie algebra has a Cartan involution, and any two Cartan involutions are equivalent. Cartan pairs. Let formula_2 be an involution on a Lie algebra formula_0. Since formula_20, the linear map formula_2 has the two eigenvalues formula_21. If formula_22 and formula_23 denote the eigenspaces corresponding to +1 and -1, respectively, then formula_24. Since formula_2 is a Lie algebra automorphism, the Lie bracket of two of its eigenspaces is contained in the eigenspace corresponding to the product of their eigenvalues. It follows that formula_25, formula_26, and formula_27. Thus formula_22 is a Lie subalgebra, while any subalgebra of formula_23 is commutative. Conversely, a decomposition formula_24 with these extra properties determines an involution formula_2 on formula_0 that is formula_28 on formula_22 and formula_29 on formula_23. Such a pair formula_30 is also called a "Cartan pair" of formula_0, and formula_31 is called a "symmetric pair". This notion of a Cartan pair here is not to be confused with the distinct notion involving the relative Lie algebra cohomology formula_32. The decomposition formula_24 associated to a Cartan involution is called a "Cartan decomposition" of formula_0. The special feature of a Cartan decomposition is that the Killing form is negative definite on formula_22 and positive definite on formula_23. Furthermore, formula_22 and formula_23 are orthogonal complements of each other with respect to the Killing form on formula_0. Cartan decomposition on the Lie group level. Let formula_33 be a non-compact semisimple Lie group and formula_0 its Lie algebra. Let formula_2 be a Cartan involution on formula_0 and let formula_34 be the resulting Cartan pair. Let formula_35 be the analytic subgroup of formula_33 with Lie algebra formula_22. Then: The automorphism formula_36 is also called the "global Cartan involution", and the diffeomorphism formula_38 is called the "global Cartan decomposition". If we write formula_40 this says that the product map formula_41 is a diffeomorphism so formula_42. For the general linear group, formula_43 is a Cartan involution. A refinement of the Cartan decomposition for symmetric spaces of compact or noncompact type states that the maximal Abelian subalgebras formula_44 in formula_23 are unique up to conjugation by formula_35. Moreover, formula_45 where formula_46. In the compact and noncompact case the global Cartan decomposition thus implies formula_47 Geometrically the image of the subgroup formula_48 in formula_49 is a totally geodesic submanifold. Relation to polar decomposition. Consider formula_50 with the Cartan involution formula_7. 
Then formula_51 is the real Lie algebra of skew-symmetric matrices, so that formula_52, while formula_23 is the subspace of symmetric matrices. Thus the exponential map is a diffeomorphism from formula_23 onto the space of positive definite matrices. Up to this exponential map, the global Cartan decomposition is the polar decomposition of a matrix. The polar decomposition of an invertible matrix is unique. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
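The relation to the polar decomposition can be checked numerically. The sketch below (an illustration, not from the article) uses the singular value decomposition to produce an orthogonal factor "k" and a symmetric "X" with "g" = "k" exp("X") for an arbitrary invertible example matrix, mirroring the global Cartan decomposition described above.

```python
import numpy as np

# an arbitrary invertible 3x3 example
g = np.array([[2.0, 1.0, 0.0],
              [0.5, 3.0, 1.0],
              [0.0, 1.0, 1.5]])

U, s, Vt = np.linalg.svd(g)            # g = U diag(s) V^T
k = U @ Vt                             # orthogonal factor (the K part)
X = Vt.T @ np.diag(np.log(s)) @ Vt     # symmetric factor, so exp(X) = V diag(s) V^T

expX = Vt.T @ np.diag(s) @ Vt          # exp(X), computed spectrally
assert np.allclose(k @ k.T, np.eye(3))     # k lies in O(3)
assert np.allclose(X, X.T)                 # X is symmetric (the p part)
assert np.allclose(k @ expX, g)            # g = k · exp(X), the polar decomposition
```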
[ { "math_id": 0, "text": "\\mathfrak{g}" }, { "math_id": 1, "text": "B(\\cdot,\\cdot)" }, { "math_id": 2, "text": "\\theta" }, { "math_id": 3, "text": "B_\\theta(X,Y) := -B(X,\\theta Y)" }, { "math_id": 4, "text": "\\theta_1" }, { "math_id": 5, "text": "\\theta_2" }, { "math_id": 6, "text": "\\mathfrak{sl}_n(\\mathbb{R})" }, { "math_id": 7, "text": "\\theta(X)=-X^T" }, { "math_id": 8, "text": "X^T" }, { "math_id": 9, "text": "X" }, { "math_id": 10, "text": "\\mathfrak{g}_0" }, { "math_id": 11, "text": "\\mathfrak{su}(n)" }, { "math_id": 12, "text": "\\theta_1(X) = X" }, { "math_id": 13, "text": "\\theta_2 (X) = - X^T" }, { "math_id": 14, "text": "\\mathfrak{su}(2)" }, { "math_id": 15, "text": "n = p+q" }, { "math_id": 16, "text": "\\theta_3 (X) = \\begin{pmatrix} I_p & 0 \\\\ 0 & -I_q \\end{pmatrix} X \\begin{pmatrix} I_p & 0 \\\\ 0 & -I_q \\end{pmatrix}" }, { "math_id": 17, "text": "\\begin{pmatrix} I_p & 0 \\\\ 0 & -I_q \\end{pmatrix} \\notin \\mathfrak {{su}}(n)" }, { "math_id": 18, "text": "n = 2m" }, { "math_id": 19, "text": "\\theta_4 (X) = \\begin{pmatrix} 0 & I_m \\\\ -I_m & 0 \\end{pmatrix} X^T \\begin{pmatrix} 0 & I_m \\\\ -I_m & 0 \\end{pmatrix}" }, { "math_id": 20, "text": "\\theta^2=1" }, { "math_id": 21, "text": "\\pm1" }, { "math_id": 22, "text": "\\mathfrak{k}" }, { "math_id": 23, "text": "\\mathfrak{p}" }, { "math_id": 24, "text": "\\mathfrak{g} = \\mathfrak{k}\\oplus\\mathfrak{p}" }, { "math_id": 25, "text": "[\\mathfrak{k}, \\mathfrak{k}] \\subseteq \\mathfrak{k}" }, { "math_id": 26, "text": "[\\mathfrak{k}, \\mathfrak{p}] \\subseteq \\mathfrak{p}" }, { "math_id": 27, "text": "[\\mathfrak{p}, \\mathfrak{p}] \\subseteq \\mathfrak{k}" }, { "math_id": 28, "text": "+1" }, { "math_id": 29, "text": "-1" }, { "math_id": 30, "text": "(\\mathfrak{k}, \\mathfrak{p})" }, { "math_id": 31, "text": "(\\mathfrak{g},\\mathfrak{k})" }, { "math_id": 32, "text": "H^*(\\mathfrak{g},\\mathfrak{k})" }, { "math_id": 33, "text": "G" }, { "math_id": 34, "text": "(\\mathfrak{k},\\mathfrak{p})" }, { "math_id": 35, "text": "K" }, { "math_id": 36, "text": "\\Theta" }, { "math_id": 37, "text": "\\Theta^2=1" }, { "math_id": 38, "text": "K\\times\\mathfrak{p} \\rightarrow G" }, { "math_id": 39, "text": "(k,X) \\mapsto k\\cdot \\mathrm{exp}(X)" }, { "math_id": 40, "text": "P=\\mathrm{exp}(\\mathfrak{p})\\subset G" }, { "math_id": 41, "text": "K\\times P \\rightarrow G" }, { "math_id": 42, "text": "G=KP" }, { "math_id": 43, "text": " X \\mapsto (X^{-1})^T " }, { "math_id": 44, "text": "\\mathfrak{a}" }, { "math_id": 45, "text": "\\displaystyle{\\mathfrak{p}= \\bigcup_{k\\in K} \\mathrm{Ad}\\, k \\cdot \\mathfrak{a}.}\n\\qquad\\text{and}\\qquad\n\\displaystyle{P= \\bigcup_{k\\in K} \\mathrm{Ad}\\, k \\cdot A.}\n" }, { "math_id": 46, "text": "A = e^\\mathfrak{a}" }, { "math_id": 47, "text": "G = KP = KAK," }, { "math_id": 48, "text": "A" }, { "math_id": 49, "text": "G/K" }, { "math_id": 50, "text": "\\mathfrak{gl}_n(\\mathbb{R})" }, { "math_id": 51, "text": "\\mathfrak{k}=\\mathfrak{so}_n(\\mathbb{R})" }, { "math_id": 52, "text": "K=\\mathrm{SO}(n)" } ]
https://en.wikipedia.org/wiki?curid=1139145
1139148
Iwasawa decomposition
In mathematics, the Iwasawa decomposition (aka KAN from its expression) of a semisimple Lie group generalises the way a square real matrix can be written as a product of an orthogonal matrix and an upper triangular matrix (QR decomposition, a consequence of Gram–Schmidt orthogonalization). It is named after Kenkichi Iwasawa, the Japanese mathematician who developed this method. Definition. Let "G" be a connected semisimple real Lie group, formula_0 the Lie algebra of "G", formula_1 the complexification of formula_0, formula_2 a Cartan decomposition of formula_0, formula_3 a maximal abelian subalgebra of formula_4, Σ the set of restricted roots of formula_3, Σ+ a choice of positive restricted roots, formula_5 the nilpotent Lie algebra given as the sum of the root spaces of Σ+, and "K", "A", "N" the Lie subgroups of "G" with Lie algebras formula_6 and formula_5. Then the Iwasawa decomposition of formula_0 is formula_7 and the Iwasawa decomposition of "G" is formula_8 meaning there is an analytic diffeomorphism (but not a group homomorphism) from the manifold formula_9 to the Lie group formula_10, sending formula_11. The dimension of "A" (or equivalently of formula_3) is equal to the real rank of "G". Iwasawa decompositions also hold for some disconnected semisimple groups "G", where "K" becomes a (disconnected) maximal compact subgroup provided the center of "G" is finite. The restricted root space decomposition is formula_12 where formula_13 is the centralizer of formula_14 in formula_15 and formula_16 is the root space. The number formula_17 is called the multiplicity of formula_18. Examples. If "G" = SL"n"(R), then we can take "K" to be the orthogonal matrices, "A" to be the positive diagonal matrices with determinant 1, and "N" to be the unipotent group consisting of upper triangular matrices with 1s on the diagonal. For the case "n" = 2, the Iwasawa decomposition of "G" = SL(2,R) is in terms of formula_19 formula_20 formula_21 For the symplectic group "G" = Sp(2"n", R), a possible Iwasawa decomposition is in terms of formula_22 formula_23 formula_24 Non-Archimedean Iwasawa decomposition. There is an analog of the above Iwasawa decomposition for a non-Archimedean field formula_25: in this case, the group formula_26 can be written as a product of the subgroup of upper-triangular matrices and the (maximal compact) subgroup formula_27, where formula_28 is the ring of integers of formula_25. References. <templatestyles src="Reflist/styles.css" />
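For "G" = SL(2, R) the decomposition can be computed from a QR factorization, as the lead above suggests. The following sketch (an illustration, not from the article) normalizes the signs so that the triangular factor has a positive diagonal, then splits it into the "A" and "N" factors.

```python
import numpy as np

g = np.array([[2.0, 1.0],
              [3.0, 2.0]])            # determinant 1, so g lies in SL(2, R)

q, r = np.linalg.qr(g)                # g = q r with q orthogonal, r upper triangular
signs = np.sign(np.diag(r))
k = q * signs                         # flip column signs of q ...
r = (r.T * signs).T                   # ... and row signs of r, so diag(r) > 0 and g = k r
a = np.diag(np.diag(r))               # positive diagonal part: the A factor
n = np.diag(1.0 / np.diag(r)) @ r     # unit upper triangular part: the N factor

assert np.allclose(k @ a @ n, g)                                   # g = k a n
assert np.allclose(k @ k.T, np.eye(2))                             # k is orthogonal
assert np.isclose(np.linalg.det(k), 1.0) and np.isclose(np.linalg.det(a), 1.0)
```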
[ { "math_id": 0, "text": " \\mathfrak{g}_0 " }, { "math_id": 1, "text": " \\mathfrak{g} " }, { "math_id": 2, "text": " \\mathfrak{g}_0 = \\mathfrak{k}_0 \\oplus \\mathfrak{p}_0 " }, { "math_id": 3, "text": " \\mathfrak{a}_0 " }, { "math_id": 4, "text": " \\mathfrak{p}_0 " }, { "math_id": 5, "text": " \\mathfrak{n}_0 " }, { "math_id": 6, "text": " \\mathfrak{k}_0, \\mathfrak{a}_0 " }, { "math_id": 7, "text": "\\mathfrak{g}_0 = \\mathfrak{k}_0 \\oplus \\mathfrak{a}_0 \\oplus \\mathfrak{n}_0" }, { "math_id": 8, "text": "G=KAN" }, { "math_id": 9, "text": " K \\times A \\times N " }, { "math_id": 10, "text": " G " }, { "math_id": 11, "text": " (k,a,n) \\mapsto kan " }, { "math_id": 12, "text": " \\mathfrak{g}_0 = \\mathfrak{m}_0\\oplus\\mathfrak{a}_0\\oplus_{\\lambda\\in\\Sigma}\\mathfrak{g}_{\\lambda} " }, { "math_id": 13, "text": "\\mathfrak{m}_0" }, { "math_id": 14, "text": "\\mathfrak{a}_0" }, { "math_id": 15, "text": "\\mathfrak{k}_0" }, { "math_id": 16, "text": "\\mathfrak{g}_{\\lambda} = \\{X\\in\\mathfrak{g}_0: [H,X]=\\lambda(H)X\\;\\;\\forall H\\in\\mathfrak{a}_0 \\}" }, { "math_id": 17, "text": "m_{\\lambda}= \\text{dim}\\,\\mathfrak{g}_{\\lambda}" }, { "math_id": 18, "text": "\\lambda" }, { "math_id": 19, "text": " \\mathbf{K} = \\left\\{\n \\begin{pmatrix}\n \\cos \\theta & -\\sin \\theta \\\\\n \\sin \\theta & \\cos \\theta \n \\end{pmatrix} \\in SL(2,\\mathbb{R}) \\ | \\ \\theta\\in\\mathbf{R} \\right\\} \\cong SO(2) ,\n" }, { "math_id": 20, "text": "\n\\mathbf{A} = \\left\\{\n \\begin{pmatrix}\n r & 0 \\\\\n 0 & r^{-1} \n \\end{pmatrix} \\in SL(2,\\mathbb{R}) \\ | \\ r > 0 \\right\\},\n" }, { "math_id": 21, "text": "\n\\mathbf{N} = \\left\\{\n \\begin{pmatrix}\n 1 & x \\\\\n 0 & 1 \n \\end{pmatrix} \\in SL(2,\\mathbb{R}) \\ | \\ x\\in\\mathbf{R} \\right\\}.\n" }, { "math_id": 22, "text": " \\mathbf{K} = Sp(2n,\\mathbb{R})\\cap SO(2n) \n = \\left\\{\n \\begin{pmatrix}\n A & B \\\\\n -B & A \n \\end{pmatrix} \\in Sp(2n,\\mathbb{R}) \\ | \\ A+iB \\in U(n) \\right\\} \\cong U(n) ,\n" }, { "math_id": 23, "text": "\n\\mathbf{A} = \\left\\{\n \\begin{pmatrix}\n D & 0 \\\\\n 0 & D^{-1} \n \\end{pmatrix} \\in Sp(2n,\\mathbb{R}) \\ | \\ D \\text{ positive, diagonal} \\right\\},\n" }, { "math_id": 24, "text": "\n\\mathbf{N} = \\left\\{\n \\begin{pmatrix}\n N & M \\\\\n 0 & N^{-T} \n \\end{pmatrix} \\in Sp(2n,\\mathbb{R}) \\ | \\ N \\text{ upper triangular with diagonal elements = 1},\\ NM^T = MN^T \\right\\}.\n" }, { "math_id": 25, "text": "F" }, { "math_id": 26, "text": "GL_n(F)" }, { "math_id": 27, "text": "GL_n(O_F)" }, { "math_id": 28, "text": "O_F" } ]
https://en.wikipedia.org/wiki?curid=1139148
1139167
Wealth tax
Tax on an entity's holdings of assets A wealth tax (also called a capital tax or equity tax) is a tax on an entity's holdings of assets or an entity's net worth. This includes the total value of personal assets, including cash, bank deposits, real estate, assets in insurance and pension plans, ownership of unincorporated businesses, financial securities, and personal trusts (a one-off levy on wealth is a capital levy). Wealth taxation typically involves the exclusion of an individual's liabilities, such as mortgages and other debts, from their total assets. Accordingly, this type of taxation is frequently denoted as a net wealth tax. As of 2017, five of the 36 OECD countries had a personal wealth tax (down from 12 in 1990). Proponents often argue that wealth taxes can reduce income inequality by making it harder for individuals to accumulate large amounts of wealth. Many critics of wealth taxes claim that wealth taxes can cause wealthy people and businesses to move their wealth to lower-tax jurisdictions (such as tax havens). OECD countries with a wealth tax. The Global Revenue Statistics Database presents a roster of countries that have documented instances of revenue collected from wealth taxes (the data is limited to 1965-2021). A total of eight countries (Austria, Denmark, Finland, Germany, Netherlands, Norway, Sweden and Switzerland) were known to have collected revenue through a wealth tax in 1965. In the ensuing decades, the number of countries reporting wealth tax revenue increased gradually and reached its peak in 1995, with 12 countries (Austria, Denmark, Finland, France, Germany, Iceland, Italy, Netherlands, Norway, Spain, Sweden and Switzerland) reporting revenue generated from this form of taxation. However, as of 2021, only five of the 36 OECD countries continue to implement a wealth tax on individuals. The five countries are Colombia, France, Norway, Spain and Switzerland. In practice. There are jurisdictions of sovereign nation states that require declaration of the taxpayer's balance sheet (assets and liabilities), and from that ask for a tax on net worth (assets minus liabilities), as a percentage of the net worth, or a percentage of the net worth exceeding a certain level. Wealth taxes can be limited to natural persons or they can be extended to also cover legal persons such as corporations. In 1990, about a dozen European countries had a wealth tax, but by 2019, all but three had eliminated the tax because of the difficulties and costs associated with both design and enforcement. Belgium, Norway, Spain, and Switzerland are the countries that raised revenue from net wealth taxes on individuals in 2019, with net wealth taxes accounting for 1.1% of overall tax revenues in Norway, 0.55% in Spain, and 3.6% in Switzerland for 2017. According to an OECD study on wealth taxes, it is "difficult to firmly argue that wealth taxes would have negative effects on entrepreneurship. The magnitude of the effects of wealth taxes on entrepreneurship is also unclear". A 2022 study found that wealth taxes are most likely to be implemented in the aftermath of major economic recessions. Example countries. Argentina. The official term used to denote the wealth tax in Argentina is "Impuesto sobre los Bienes Personales". On 31 December 2021, Argentina’s tax authorities published General Resolution 912/2021, which introduces new modifications to the country’s wealth tax. 
The modifications made to the wealth tax in Argentina entail an augmentation of the non-taxable minimum to ARS 6,000,000. Moreover, residential real estate assets, wherein the owner's daily domicile is situated, shall not be subject to taxation if their worth equals or falls under ARS 30,000,000 (approx. US$ at April 2023 official exchange rate) . Additionally, the taxation rate structure has been revised. Assets surpassing ARS 100,000,000 (approx. US$ at April 2023 official exchange rate) will now be taxed at a rate of 1.50%, while those exceeding ARS 300,000,000 will be taxed at a rate of 1.75%. Before FY2021, for assets held within Argentina, the tax is progressive from 0.50% on assets above ARS 3,000,000 (approx. US$ at April 2021 official exchange rate) to 1.25% on assets above ARS 18,000,000 (approx. US$ at April 2021 official exchange rate). For assets held outside of Argentina, the tax is progressive from 0.70% on assets above ARS 3,000,000 to 2.25% on assets above ARS 18,000,000. Belgium. The Act of 7 February 2018, which is effectively a "wealth tax", announced an annual tax on securities accounts that imposes a 0.15% annual tax on financial instruments kept in securities accounts that are worth more than €500,000 per account holder. The first taxable period started on 10 March 2018 and ended (at the latest on) 30 September 2018, for which the tax had to be paid by 30 August 2019. The second taxable period runs from 1 October 2018 to 30 September 2019. In October 2019, the Belgian Constitutional Court issued a decision annulling this tax on securities accounts, with effect as of 1 October 2019. However, Belgium now re-introduced the annual tax on securities accounts law with some modifications in February 2021. The Belgian Parliament adopted the adjusted tax on securities accounts law applicable from 26 February 2021, with the first reference period ending on 30 September 2021. A solidarity tax of 0.15% is now applicable on securities accounts that reach or exceed €1,000,000 without regard to the number of accountholders, and the tax amount is limited to 10% of the difference between the taxable base and the threshold of €1,000,000. Bolivia. In December 2020, the Bolivian socialist government of President Luis Arce approved a wealth tax on resident and non-resident individuals with a net fortune of over 30 million Bolivian Boliviano. The tax is progressive with tax rates in the range of 1.4% to 2.4% and includes both domestic and foreign assets. The tax went into effect from 2020 onwards Colombia. On 1 January 2019, the Senate passed a tax reform bill that includes a lower corporate tax rate, a new tax rate for financial corporations, and a new wealth tax. For the years 2019, 2020, and 2021, the new wealth (equity) tax has been set at 1% for Colombian-resident individuals' worldwide net worth, and 1% for non-resident individuals on Colombian properties only, such as real estate, yachts, artwork, vessels, ships, and other assets with a net equity of at least COP5 billion (US$). Shares in Colombian firms, accounts receivable from Colombian debtors, some portfolio assets, and financial lease agreements are all exempt from the tax. Following the COVID-19 pandemic, the richest Colombians will face higher taxes on wages, dividends, and properties, as well as a one-time "solidarity levy" on high incomes. All of which is part of a new bill that was sent to congress in April 2021. 
The bill aims to collect about 25 trillion pesos (US$) a year through new taxes and budget restraints, equating to 2.2 percent of GDP. On 13 December 2022, the Colombian President Gustavo Petro enacted Law 2277 of 2022, which contains the tax reform proposals previously approved by congress. A new wealth tax will be introduced as a permanent tax on individuals with net worth as of 1 January of the relevant tax year exceeding 72,000 This amount will be calculated as the aggregate value of assets owned (real estate, investments, vehicles, financial products, accounts with financial institutions, etc.,), less the liabilities and debts. The tax will apply to the worldwide assets of resident individuals; nonresident individuals will be subject to wealth tax only on their Colombian assets. The tax rate is between 0 and 1,5% until 2026 and will be between 0 and 1% FY 2027 onwards. France. Since 2018, France has had a wealth tax based on real estate ("impôt sur la fortune immobilière", IFI). It is payable by individuals who own real estate assets with a combined value of more than €1,300,000. French residents with global assets and non-residents who own French real estate may be liable for IFI. For French residents, the figure is calculated on all global real estate assets, and for non-residents, the figure is calculated based on the total value of French property and real estate assets only. From 1989 to 2017, France had the solidarity tax on wealth (, ISF), an annual progressive wealth tax on any net assets above €800,000 for those with total net worth of €1,300,000 or more. Marginal rates ranged from 0.5% to 1.5%. In 2007, it collected €4.07 billion, accounting for 1.4% of total revenue. Italy. Two types of wealth taxes are imposed in Italy. Netherlands. There is a tax called "vermogensrendementheffing". Although its name ("wealth yield tax") suggests that it is a tax on the "yield" of wealth, it qualifies as a wealth tax, since the actual yield (whether positive or negative) is not taken into account in its calculation. Up to and including 2016, the rate was fixed at 1.2% (30% taxation over an assumed yield of 4%). From the fiscal year of 2017 onwards, the tax rate progresses with wealth. See Income tax in the Netherlands. In addition to the "vermogensrendementheffing", owners of real estate pay a tax called "onroerendezaakbelasting", which is based on the estimated value of the real estate they own. This is a local tax, levied by the city council where the property is located. Norway. 0.7% (municipal) and 0.15% (national) a total of 0.85% levied on net assets exceeding 1,500,000 kr (approx. US$) as of 2019. For tax purposes, the value of the primary residence is valued to 25% of the market value, secondary residences to 90% of the market value, while working capital such as commercial real estate, stocks, and stock funds are valued at various percentages. The Conservative Party, Progress Party and the Liberal Party have stated that they aim to reduce and eventually eliminate the wealth tax. Spain. There is a tax called "Patrimonio". The tax rate is progressive, from 0.2 to 3.75% of net assets above the threshold of €700,000 after €300,000 primary residence allowance. The exact amount varies between regions. Switzerland. A progressive wealth tax that varies by residence location. Most cantons have no wealth tax for individual net worth less than SFr 100000 (approx. 
US$) and progressively raise the tax rate on net assets with a top rate ranging from 0.13% to 0.94% depending on canton and municipality of residence. Wealth tax is levied against worldwide assets of Swiss residents, but it is not levied against assets in Switzerland held by non-residents. Swiss wealth tax is regulated on a cantonal basis. All cantons levy a net wealth tax based on the balance of the worldwide gross assets minus debts, and tax rates can vary depending on the taxpayer's residency, with maximum rates varying from around 0.13% to 1.1%. Historical examples. Ancient Athens had a wealth tax called eisphora (see symmoria), and a wealth registry consisting of self-assessments (τίμημα), limited to the wealthiest. The registry was not very accurate. The religion of Islam has a concept sometimes described as a wealth tax called Zakat. Iceland had a wealth tax until 2006 and a temporary wealth tax reintroduced in 2010 for four years. The tax was levied at a rate of 1.5% on net assets exceeding 75,000,000 kr for individuals and 100,000,000 kr for married couples. Similar to Iceland, Denmark taxed household income above a certain exemption threshold, which was about the 98th percentile of the wealth distribution, until 1997. A dozen OECD countries imposed similar taxes until the 1990s, but the Danish wealth tax was the highest of its kind. Until the late 1980s, the marginal tax rate on wealth was 2.2 percent, leading to a very high rate on the return on wealth. After minimizing the tax for some years, the Danish government eventually abolished the tax altogether in 1997. Some other European countries have discontinued this kind of tax in recent years: Germany (1997), Finland (2006), Luxembourg (2006) and Sweden (2007). In the United Kingdom and other countries, property (real estate) is often a person's main asset, and has been taxed – for example, the window tax of 1696, the rates, to some extent the Council Tax. Proposed examples. Germany. In order to bridge the wealth gap between rich and poor in Germany, the Social Democratic Party of Germany called for a nationwide wealth tax to be reintroduced in 2019. According to the proposed tax reform, wealthy households would be required to pay an extra tax between 1% and 1.5%. A single household would need to pay 1% of their net worth on every euro surpassing €2 Million and married couple would have to pay for every euro surpassing €4 Million. A married household with a combined net worth of €4.2 Million would have to pay an annual wealth tax of €2,000. The proposition was eventually vetoed by the CDU/CSU and therefore never again considered. Concentration of wealth. In 2014, French economist Thomas Piketty published a widely discussed book entitled "Capital in the Twenty-First Century" that starts with the observation that economic inequality is increasing and proposes wealth taxes as a countermeasure. Piketty proposes a global system of progressive wealth taxes to help reduce inequality and avoid the trend towards a vast majority of wealth coming under the control of a tiny minority. This analysis was hailed as a major and important work by some economists. Other economists have challenged Piketty's proposals and interpretations. France. In 2017, when introducing the fiscal reform of the solidarity wealth tax, the government of the French president Emmanuel Macron used the first argument of capital flight. 
The other argument, put forward by the committee evaluating the wealth tax reforms, was that the previous wealth tax was not progressive enough for the wealthiest 0.1%. The IFI, like the ISF, is a wealth tax and thus concerns high earners. A large share of the people paying this tax are in the ninth decile of the income distribution, and the IFI concerns one in two households in the wealthiest 0.01%. Therefore, within the general tax system, the IFI, like the ISF before it, contributes to making the tax system more progressive. But this progressivity has limits: "the IFI represents on average 0.1% of income around the ninth decile and 1.2% of the income of the 0.1% of very well-off households in 2018. While the income tax rate under the ISF was stable overall within the top 0.1% of income, the income tax rate under the IFI declines for the wealthiest and falls to 0.6% for the top 0.01%." Broadly, the reform largely benefited the wealthiest 0.1% and did not make the wealth tax more progressive, as it was supposed to. It did reduce the number of people liable for the wealth tax leaving the country, but in terms of investment the gains from the reform translated into higher dividends on capital earnings (37.4 billion had been paid out by non-financial corporations) rather than into direct investment in companies (see "Capital flight"). On average, and across different studies, these fiscal reforms benefited the wealthiest households the most. For Ben Jelloul et al. (2019), the reforms benefited the wealthiest 1% of households, with a gain of 5.5 points of disposable income. For Madec et al. (2019) the gains were concentrated in the wealthiest 2% of households, and for Pasquier and Sicsic (2019) the top 5% of the distribution received 57% of the gains from the fiscal reform. Revenue. Revenue from a wealth tax scheme depends largely on the presence of net wealth and wealth inequality within the target country. Revenue depends on the plan that is in place, but it generally can be modeled as formula_0, where t represents the tax rate and w is the amount of wealth affected by that tax rate. Many plans include tax brackets, where a certain portion of the individual's wealth will be taxed at a given rate and any wealth beyond that amount will be taxed at a different rate (a bracketed calculation is sketched below). A small number of countries have been using wealth tax regimes for some time. Revenues earned from wealth tax schemes vary by country, from 0.98% of GDP in Switzerland to 0.22% in France, for example. 2020 United States presidential candidate Elizabeth Warren claimed a wealth tax plan could generate 1.4% of GDP in revenue for the United States. According to data from the Organisation for Economic Co-operation and Development (OECD), the revenues generated from wealth taxes account for about 0.46% of all tax revenue on average in 2018 for countries which have wealth tax schemes in place. However, this varies from country to country: the highest was Luxembourg, where it accounted for 7.18% of total tax revenue in 2018, and the lowest was Germany, where it accounted for 0.03% of total tax revenue in 2018. Estimates for a wealth tax's potential revenue in the United States vary. Several Democratic presidential candidates in the 2020 election have proposed wealth tax plans. Elizabeth Warren, for example, has proposed a wealth tax of 2% on net wealth above $ and 6% above $. The conservative-leaning nonprofit Tax Foundation estimates revenue generated by Senator Warren's proposal would total around $ over the next 10 years. 
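As an illustration of the bracketed model just described, the following sketch computes the tax owed under a simple marginal-rate schedule. The thresholds and rates used are hypothetical placeholders, not the parameters of any actual proposal.

```python
def wealth_tax(net_worth: float, brackets) -> float:
    """brackets: list of (lower_threshold, marginal_rate) pairs in ascending
    order; each rate applies only to the wealth between its threshold and the
    next one (the R = t x w model applied slice by slice)."""
    tax = 0.0
    for i, (lower, rate) in enumerate(brackets):
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        if net_worth > lower:
            tax += rate * (min(net_worth, upper) - lower)
    return tax

# hypothetical schedule: 1% above 10 million and 3% above 100 million
schedule = [(10e6, 0.01), (100e6, 0.03)]
print(wealth_tax(50e6, schedule))    # 0.01 * 40e6 = 400,000
print(wealth_tax(500e6, schedule))   # 0.01 * 90e6 + 0.03 * 400e6 = 12,900,000
```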
Separate estimates from campaign advisors and economists Emmanuel Saez and Gabriel Zucman put the revenue at about 1% of GDP per year, in alignment with USD revenue estimates. These estimates put Senator Warren's tax plan revenues at about $ in 2020. The sum of United States tax revenues in 2018 were $ in 2018, meaning the tax collected by this plan would be equal to 4% of current tax revenues. Additionally, the Tax Foundation estimates 2020 presidential candidate Senator Bernie Sanders' wealth tax plan would collect $ between 2020 and 2029. Previous proposals for a wealth tax in the United States had already existed. Eileen Myles proposed a net assets tax in her presidential campaign in 1992, as did Donald Trump during his presidential campaign in 2000. A net wealth tax may also be designed to be revenue-neutral if it is used to broaden the tax base, stabilize the economy, and reduce individual income and other taxes. Effect on investment. A wealth tax serves as a negative reinforcer ("use it or lose it"), which incentivizes the productive use of assets (rather than letting assets accumulate without being used). According to University of Pennsylvania Law School professors David Shakow and Reed Shuldiner, "a wealth tax also taxes capital that is not productively employed. Thus, a wealth tax can be viewed as a tax on potential income from capital." Net wealth taxes can complement rather than replace gift taxes, capital gains taxes, and inheritance taxes to increase administrability and the effectiveness of enforcement efforts. In their article, "Investment Effects of Wealth Taxes Under Uncertainty and Irreversibility," Rainer Niemann and Caren Sureth-Sloane found that the effects of wealth taxation on investment mainly depends upon the tax method employed and the broadness of the wealth threshold for taxation. Niemann and Sureth-Sloane found that, "Broadening the wealth tax base tends to accelerate investment during high interest rate periods." Caren Sureth and Ralf Maiterth concluded that wealth tax revenues from entrepreneurs may decrease in the long term and the revenue from a wealth tax may be negative if the wealth taxation thresholds are too low. Saez and Zucman are two economists that worked on the "Ultra-Millionaire Tax" proposed by Senator Elizabeth Warren. In their paper, "Progressive Wealth Taxation," they assert that a potential wealth tax in the United States needs necessary parameters to limit detrimental effects on investment. One parameter is a high wealth threshold to limit direct taxation on small business and entrepreneurship. The academic literature on the effects of wealth taxation on investment incentives are inconclusive in the United States; Saez and Zucman assert there are three reasons wealth taxes in European countries are weak comparisons to the United States when analyzing potential effects on investment. First, they claim tax competition between European countries allows for individuals to avoid taxation by allocating assets to a different country. Reallocating assets to avoid taxation is more difficult in the United States because tax filings apply equally to United States citizens no matter the country of current residence. Second, low exemption thresholds caused liquidity problems for some individuals who were on the lower end of wealth taxation thresholds. Third, they contend European wealth taxes need modernization and improved methods for systematic information gathering. 
Saez and Zucman have also argued that a wealth tax with a high threshold, would have the benefit of mainly targeting individuals who have a high degree of liquid assets, circumventing the liquidity problem of small and medium sized businesses and less wealthy individuals. Moreover, they argue that such a tax would not necessarily reduce innovation since innovation is mostly done by young people who haven't acquired a big fortune yet. This must be seen in the context that most of the wealthiest people in the US are older than average. Additionally, they argue that large established businesses use some of their wealth to retain market power, reducing innovation and competition. Therefore, a wealth tax with a high tax payable threshold could potentially increase innovation. Further proponents for a wealth tax claim it could have positive effects on investment in the United States. Some extremely wealthy people use their assets in unproductive ways. For example, an entrepreneur could generate much higher returns (though could conversely lose much more capital operating on leverage) than a wealthy individual with a conservative investment such as United States Treasury Bonds. A wealth tax could lead to negative effects on investment, saving, and economic growth. In the article, "Economic effects of wealth taxation," Kyle Pomerleau states, "A wealth tax, even levied at an apparently low annual rate, places a significant burden on saving." The degree of this impact on savings and investments is reliant on the openness of the United States economy. A wealth tax would shrink national saving and increase foreign ownership of assets. The potential decrease in national savings leads to a decrease in capital stock. An estimate from the Penn Wharton Budget Model indicates that if the revenue from the wealth tax proposed by Elizabeth Warren were used to finance non-productive government spending, GDP would decrease by 2.1 percent by 2050, capital stock would decrease by 6.5 percent, and wages would decrease by 2.3 percent. Some opponents also point out that redistribution through a wealth tax is an inherently counterintuitive way to foster economic growth. Richard Epstein, a senior fellow at the Hoover Institution, contents, "The classical liberal approach wants to simplify taxation and reduce regulation to spur growth. Plain old growth is a much better social tonic that the toxic Warren Wealth Tax." Criticisms. There are many arguments against the implementation of a wealth tax, including claims that a wealth tax would be unconstitutional (in the United States), that property would be too hard to value, and that wealth taxes would reduce the rate of innovation. Capital flight. A 2006 article in "The Washington Post" titled "Old Money, New Money Flee France and Its Wealth Tax" pointed out some of the harm caused by France's wealth tax. The article gave examples of how the tax caused capital flight, brain drain, loss of jobs, and, ultimately, a net loss in tax revenue. Among other things, the article stated, "Éric Pichet, author of a French tax guide, estimates the wealth tax earns the government about $ a year but has cost the country more than $ in capital flight since 1998." In fact the wealth tax named "Impôt sur les Grandes Fortunes" (IGF) ["tax on great wealth"] had been created in 1980, then suppressed in 1986 before finally being reintroduced in 1988 under the name “Impôt de Solidarité sur la Fortune” (ISF) "solidarity tax on wealth". 
In 1999 a new, higher tax bracket was added, which increased the money collected from 0.09% of GDP in 1990 to 0.16% in 2004. In 2003, for example, 370 people liable for the ISF left France, and this number continued to grow year by year, except between 2010 and 2011, when the tax threshold was raised and some taxpayers were no longer liable. This capital flight only decreased after 2015 and in 2017, when the French government announced that it would abolish the tax. After the reform's implementation, there were only 163 departures of people liable for the wealth tax in 2018. Capital flight was one of the arguments for reforming the wealth tax. After 2017, in the 2018 finance law, the new wealth tax was introduced alongside other tax reforms. The fiscal reform thus included a single flat-rate tax on savings income, combined with the replacement of the ISF by the "Impôt sur la Fortune Immobilière" (IFI), which restricted the wealth tax to real-estate property only, and finally a cut in corporate tax. The capital-flight argument is rooted in an economic theory, trickle-down theory. By decreasing the wealth tax, wealthy households are supposed to return to the country to invest, thereby raising GDP growth, which is expected to benefit the whole population by reducing unemployment and boosting the economy. In France, the fiscal reform did not have the expected trickle-down effects. In fact, the capital flight due to households leaving because of the wealth tax represented only between 0.3% and 0.5% of the total amount collected by the solidarity tax on wealth between 2004 and 2015. On the other hand, the reduction of the wealth tax represented a revenue loss of 2.9 billion for the state. In terms of investment, there was less investment in real estate by people liable for the wealth tax. However, this movement can be explained more by the increase in household income, the low level of interest rates on mortgage loans and the general dynamics of the real estate market than by wealthy households selling property subject to the IFI in favour of investments in transferable securities; the results in terms of corporate investment are therefore not significant. Moreover, at the macroeconomic level the wealth tax reform had an insignificant effect on corporate funding. For example, in 2020, for non-financial corporations, the share of listed and unlisted equity was lower than the average of the previous period, 2001-2019. It is also hard to measure the effect on corporate investment because of the COVID-19 crisis, which caused a shutdown of the economy in 2020. Valuation issues. In 2012, the "Wall Street Journal" wrote that: "the wealth tax has a fatal flaw: valuation. It has been estimated that 62% of the wealth of the top 1% is "non-financial" – i.e., vehicles, real estate, and (most importantly) private business. Private businesses account for nearly 40% of their wealth and are the largest single category." A particular issue for small business owners is that they cannot accurately value their private business until it is sold. Furthermore, business owners could easily make their businesses look much less valuable than they really are, through accounting, valuations and assumptions about the future. "Even the rich don't know exactly what they're worth in any given moment." Examples of such fraud and malfeasance were revealed in 2013, when French budget minister Jérôme Cahuzac was discovered shifting financial assets into Swiss bank accounts in order to avoid the wealth tax. 
After further investigation, a French finance ministry official said, "A number of government officials minimised their wealth, by negligence or with intent, but without exceeding 5–10 per cent of their real worth ... however, there are some who have deliberately tried to deceive the authorities." Yet again, in October 2014, the chairman of the French National Assembly's Finance Committee, Gilles Carrez, was found to have avoided paying the French wealth tax (ISF) for three years by applying a 30 percent tax allowance on one of his homes. However, he had previously converted the home into an SCI, a private, limited company to be used for rental purposes. The 30 percent allowance does not apply to SCI holdings. Once this was revealed, Carrez declared, "if the tax authorities think that I should pay the wealth tax, I won't argue." Carrez is one of more than 60 French parliamentarians battling with the tax offices over 'dodgy' asset declarations. Moreover, this problem of wealth undervaluation is compounded by the administration itself. In 1999, for example, the French government introduced the notion of "the measured application of the tax law". But this application of the law is mostly reserved for self-declared taxes, like the wealth tax. It means that if there is fraud in a declaration, there will be no sanction if the household concerned corrects its mistake, even if the mistake may have been made on purpose. This flexibility granted to self-declared taxes is unequal: the other taxes that concern most households, such as income taxes, cannot be self-declared, and this flexibility over fraud therefore benefits only the richest households. More broadly, self-declaration has fostered what the sociologist Alexis Spire called "tax law domestication", which enables the richest part of the population to employ tax specialists to optimize their declarations and minimize the amount of wealth tax owed. Once again this creates an opportunity for optimization, as the flexibility in sanctions is unequally distributed across the tax spectrum and thus across different parts of the population. Social effects. Opponents of wealth taxes have argued that there is "an undercurrent of envy in the campaign against extremes of wealth." Two Yale University/London School of Economics studies (2006, 2008) on relative income yielded results asserting that 50 percent of the public would prefer to earn less money, as long as they earned as much or more than their neighbor. Many analysts and scholars assert that since wealth taxes are a form of direct asset collection, as well as double taxation, they are antithetical to personal freedom and individual liberty. They further contend that free nations should have no business helping themselves arbitrarily to the personal belongings of any group of their citizens. "The Daily Telegraph" editor Allister Heath critically described wealth taxes as Marxian in concept and ethically destructive to the values of democracies, "Taxing already acquired property drastically alters the relationship between citizen and state: we become leaseholders, rather than freeholders, with accumulated taxes over long periods of time eventually "returning" our wealth to the state. 
It breaches a key principle that has made this country great: the gradual expansion of property ownership and the democratisation of wealth." Past repeals. In 2004, a study by the Institut de l'enterprise investigated why several European countries were eliminating wealth taxes and made the following observations: 1. Wealth taxes contributed to capital drain, promoting the flight of capital as well as discouraging investors from coming in. 2. Wealth taxes had high management cost and relatively low returns. 3. Wealth taxes distorted resource allocation, particularly involving certain exemptions and unequal valuation of assets. In its summary, the institute found that the "wealth taxes were not as equitable as they appeared". In a 2011 study, the London School of Economics examined wealth taxes that were being considered by the Labour party in the United Kingdom between 1974 and 1976 but were ultimately abandoned. The findings of the study revealed that the British evaluated similar programs in other countries and determined that the Spanish wealth tax may have contributed to a banking crisis and the French wealth tax had been undergoing review by its government for being unpopular and overly complex. As efforts progressed, concerns were developing over the practicality and implementation of wealth taxes as well as worry that they would undermine confidence in the British economy. Eventually, plans were dropped. Former British Chancellor Denis Healey concluded that attempting to implement wealth taxes was a mistake, "We had committed ourselves to a Wealth Tax: but in five years I found it impossible to draft one which would yield enough revenue to be worth the administrative cost and political hassle." The conclusion of the study stated that there were lingering questions, such as the impacts on personal saving and small business investment, consequences of capital flight, complexity of implementation, and ability to raise predicted revenues that must be adequately addressed before further consideration of wealth taxes. "See also" Pollock v. Farmers' Loan &amp; Trust Co.; "Sixteenth Amendment to the United States Constitution" Legal impediments. United States. In part because a wealth tax has never been implemented in the United States, there is no legal consensus about its constitutionality. As evidenced below, much scholarly debate on the topic hinges on whether or not such a tax is understood to be a "direct tax," per Article 1, Section 9 of the Constitution, which requires that the burden of "direct taxes" be apportioned across the states by their population. Barry L. Isaacs interprets current case law in the United States to hold that a wealth tax is a direct tax under Article 1, Section 9. Given the extreme difficulty of apportioning a wealth tax by state population, the implementation of a wealth tax in the United States would require either a constitutional amendment or the overturning of current case law. Unlike federal wealth taxes, states and localities are not bound by Article 1, Section 9, which is why they are able to levy taxes on real estate. Other legal scholars have argued that a wealth tax does not represent a direct tax and that such a tax could be implemented in the United States without a constitutional amendment. In a lengthy essay from 2018, authors in the "Indiana Journal of Law" argued that "... the belief that the U.S. Constitution effectively makes a national wealth tax impossible ... is wrong." 
The authors noted that in the 1796 Supreme Court decision for "Hylton v. United States", Supreme Court justices who had personally taken part in the creation of the U.S. Constitution "unanimously rejected a challenge to the constitutionality of an annual tax on carriages, a tax akin to a national wealth tax in that it taxed a luxury property." However, Alexander Hamilton, who supported the carriage tax, told the Supreme Court that it was constitutional because it was an "excise tax", not a direct tax. Hamilton's brief defines direct taxes as "Capitation or poll taxes, taxes on lands and buildings, general assessments, whether on the whole property of individuals or on their whole real or personal estate" which would include the wealth tax. Tax scholars have repeatedly noted that the critical difference between income taxes and wealth taxes, the realization requirement, is a matter of administrative convenience, not a constitutional requirement. To prevent capital flight, proponents of wealth taxes have argued for the implementation of a one-time exit tax on high net worth individuals who renounce their citizenship and leave the country. An additional constitutional objection to such a tax could be raised on the grounds that it violates the takings clause of the Fifth Amendment, which prohibits the federal government from taking private property for public use without just compensation. In 2023, Texas voters approved a constitutional amendment prohibiting state lawmakers from imposing a wealth tax. Germany. The Federal Constitutional Court of Germany in Karlsruhe found that wealth taxes "would need to be confiscatory in order to bring about any real redistribution". In addition, the court held that the sum of wealth tax and income tax should not be greater than half of a taxpayer's income. "The tax thus gives rise to a dilemma: either it is ineffective in fighting inequalities, or it is confiscatory – and it is for that reason that the Germans chose to eliminate it." Thus, finding such wealth taxes unconstitutional in 1995. In 2006, the Constitutional Court revised this decision on the so-called "Halbteilungsgrundsatz", stating that "From the property guarantee of the Basic Law, no generally binding absolute upper limit of taxation in the vicinity of a half division can be derived." See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "R = t\\times w" } ]
https://en.wikipedia.org/wiki?curid=1139167
11391827
Normal polytope
Type of polytope in mathematics In mathematics, specifically in combinatorial commutative algebra, a convex lattice polytope "P" is called normal if it has the following property: given any positive integer "n", every lattice point of the dilation "nP", obtained from "P" by scaling its vertices by the factor "n" and taking the convex hull of the resulting points, can be written as the sum of exactly "n" lattice points in "P". This property plays an important role in the theory of toric varieties, where it corresponds to projective normality of the toric variety determined by "P". Normal polytopes have popularity in algebraic combinatorics. These polytopes also represent the homogeneous case of the Hilbert bases of finite positive rational cones and the connection to algebraic geometry is that they define projectively normal embeddings of toric varieties. Definition. Let formula_0 be a lattice polytope. Let formula_1 denote the lattice (possibly in an affine subspace of formula_2) generated by the integer points in formula_3. Letting formula_4 be an arbitrary lattice point in formula_3, this can be defined as formula_5 P is integrally closed if the following condition is satisfied: formula_6 such that formula_7. "P" is normal if the following condition is satisfied: formula_8 such that formula_9. The normality property is invariant under affine-lattice isomorphisms of lattice polytopes and the integrally closed property is invariant under an affine change of coordinates. Note sometimes in combinatorial literature the difference between normal and integrally closed is blurred. Examples. The simplex in R"k" with the vertices at the origin and along the unit coordinate vectors is normal. unimodular simplices are the smallest polytope in the world of normal polytopes. After unimodular simplices, lattice parallelepipeds are the simplest normal polytopes. For any lattice polytope P and formula_10, c ≥ dimP-1 cP is normal. All polygons or two-dimensional polytopes are normal. If "A" is a totally unimodular matrix, then the convex hull of the column vectors in "A" is a normal polytope. The Birkhoff polytope is normal. This can easily be proved using Hall's marriage theorem. In fact, the Birkhoff polytope is compressed, which is a much stronger statement. All order polytopes are known to be compressed. This implies that these polytopes are normal. Properties. P ⊂ formula_12"d" a lattice polytope. Let C(P)=formula_12+(P,1) ⊂ formula_12"d"+1 the following are equivalent: Conversely, for a full dimensional rational pointed cone C⊂formula_12d if the Hilbert basis of C∩formula_11d is in a hyperplane H ⊂ formula_12"d" (dim H = "d" − 1). Then C ∩ H is a normal polytope of dimension "d" − 1. Relation to normal monoids. Any cancellative commutative monoid "M" can be embedded into an abelian group. More precisely, the canonical map from "M" into its Grothendieck group "K"("M") is an embedding. Define the normalization of "M" to be the set formula_13 where "nx" here means "x" added to itself "n" times. If "M" is equal to its normalization, then we say that "M" is a normal monoid. For example, the monoid N"n" consisting of "n"-tuples of natural numbers is a normal monoid, with the Grothendieck group Z"n". For a polytope "P"  ⊆ Rk, lift "P" into R"k"+1 so that it lies in the hyperplane "x"k+1 = 1, and let "C"("P") be the set of all linear combinations with nonnegative coefficients of points in ("P",1). 
Then "C"("P") is a convex cone, formula_14 If "P" is a convex lattice polytope, then it follows from Gordan's lemma that the intersection of "C"("P") with the lattice Z"k"+1 is a finitely generated (commutative, cancellative) monoid. One can prove that "P" is a normal polytope if and only if this monoid is normal. Open problem. Oda's question: "Are all smooth polytopes integrally closed?" A lattice polytope is smooth if the primitive edge vectors at every vertex of the polytope define a part of a basis of formula_11"d". So far, every smooth polytope that has been found has a regular unimodular triangulation. It is known that up to trivial equivalences, there are only a finite number of smooth formula_15-dimensional polytopes with formula_16 lattice points, for each natural number formula_16 and formula_15. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "P\\subset\\mathbb{R}^d" }, { "math_id": 1, "text": "L\\subseteq \\mathbb{Z}^d" }, { "math_id": 2, "text": "\\mathbb{R}^d" }, { "math_id": 3, "text": "P" }, { "math_id": 4, "text": "v" }, { "math_id": 5, "text": " L=v+\\sum_{x,y \\in P \\cap \\mathbb{Z}^d} \\mathbb{Z}(x-y)." }, { "math_id": 6, "text": " c\\in\\mathbb{N}, z\\in cP\\cap\\mathbb{Z}^d\\implies \\exists x_1,\\ldots,x_c\\in P\\cap\\mathbb{Z}^d" }, { "math_id": 7, "text": " x_1+\\cdots+x_c=z" }, { "math_id": 8, "text": " c\\in \\mathbb{N}, z\\in cP\\cap L\\implies \\exists x_1,\\ldots,x_c\\in P\\cap L" }, { "math_id": 9, "text": "x_1+\\cdots+x_c=z" }, { "math_id": 10, "text": "c \\isin \\mathbb{N}" }, { "math_id": 11, "text": "\\mathbb{Z}" }, { "math_id": 12, "text": "\\mathbb{R}" }, { "math_id": 13, "text": "\\{ x \\in K(M) \\mid nx \\in M,\\ n\\in\\mathbb{N} \\}," }, { "math_id": 14, "text": "C(P)=\\{ \\lambda_1(\\textbf{x}_1, 1) + \\cdots + \\lambda_n(\\textbf{x}_n, 1) \\mid \\textbf{x}_i \\in P,\\ \\lambda_i \\in \\mathbb{R}, \\lambda_i\\geq 0\\}." }, { "math_id": 15, "text": "d" }, { "math_id": 16, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=11391827
11391832
Intraclass correlation
Descriptive statistic In statistics, the intraclass correlation, or the intraclass correlation coefficient (ICC), is a descriptive statistic that can be used when quantitative measurements are made on units that are organized into groups. It describes how strongly units in the same group resemble each other. While it is viewed as a type of correlation, unlike most other correlation measures, it operates on data structured as groups rather than data structured as paired observations. The "intraclass correlation" is commonly used to quantify the degree to which individuals with a fixed degree of relatedness (e.g. full siblings) resemble each other in terms of a quantitative trait (see heritability). Another prominent application is the assessment of consistency or reproducibility of quantitative measurements made by different observers measuring the same quantity. Early ICC definition: unbiased but complex formula. The earliest work on intraclass correlations focused on the case of paired measurements, and the first intraclass correlation (ICC) statistics to be proposed were modifications of the interclass correlation (Pearson correlation). Consider a data set consisting of "N" paired data values ("x""n",1, "x""n",2), for "n" = 1, ..., "N". The intraclass correlation "r" originally proposed by Ronald Fisher is formula_0 where formula_1 formula_2 Later versions of this statistic used the degrees of freedom 2"N" −1 in the denominator for calculating "s"2 and "N" −1 in the denominator for calculating "r", so that "s"2 becomes unbiased, and "r" becomes unbiased if "s" is known. The key difference between this ICC and the interclass (Pearson) correlation is that the data are pooled to estimate the mean and variance. The reason for this is that in the setting where an intraclass correlation is desired, the pairs are considered to be unordered. For example, if we are studying the resemblance of twins, there is usually no meaningful way to order the values for the two individuals within a twin pair. Like the interclass correlation, the intraclass correlation for paired data will be confined to the interval [−1, +1]. The intraclass correlation is also defined for data sets with groups having more than 2 values. For groups consisting of three values, it is defined as formula_3 where formula_4 formula_5 As the number of items per group grows, so does the number of cross-product terms in this expression. The following equivalent form is simpler to calculate: formula_6 where "K" is the number of data values per group, and formula_7 is the sample mean of the "n"th group. This form is usually attributed to Harris. The left term is non-negative; consequently the intraclass correlation must satisfy formula_8 For large "K", this ICC is nearly equal to formula_9 which can be interpreted as the fraction of the total variance that is due to variation between groups. Ronald Fisher devotes an entire chapter to intraclass correlation in his classic book "Statistical Methods for Research Workers". For data from a population that is pure noise, Fisher's formula produces ICC values that are distributed about 0, i.e. they are sometimes negative. This is because Fisher designed the formula to be unbiased, and therefore its estimates are sometimes overestimates and sometimes underestimates. For small or 0 underlying values in the population, the ICC calculated from a sample may be negative. Modern ICC definitions: simpler formula but positive bias. 
Beginning with Ronald Fisher, the intraclass correlation has been regarded within the framework of analysis of variance (ANOVA), and more recently in the framework of random effects models. A number of ICC estimators have been proposed. Most of the estimators can be defined in terms of the random effects model formula_10 where "Y""ij" is the "i"th observation in the "j"th group, "μ" is an unobserved overall mean, "αj" is an unobserved random effect shared by all values in group "j", and "εij" is an unobserved noise term. For the model to be identified, the "αj" and "εij" are assumed to have expected value zero and to be uncorrelated with each other. Also, the "αj" are assumed to be identically distributed, and the "εij" are assumed to be identically distributed. The variance of "αj" is denoted "σ"α2 and the variance of "ε""ij" is denoted "σ"ε2. The population ICC in this framework is formula_11 With this framework, the ICC is the correlation of two observations from the same group. &lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;[Proof] For a one-way random effects model: formula_12 formula_13, formula_14, formula_15s and formula_16s independent and formula_15s are independent from formula_16s. The variance of any observation is: formula_17 The covariance of two observations from the same group formula_18 (for formula_19) is: formula_20 In this, we've used properties of the covariance. Put together we get: formula_21 An advantage of this ANOVA framework is that different groups can have different numbers of data values, which is difficult to handle using the earlier ICC statistics. This ICC is always non-negative, allowing it to be interpreted as the proportion of total variance that is "between groups." This ICC can be generalized to allow for covariate effects, in which case the ICC is interpreted as capturing the within-class similarity of the covariate-adjusted data values. This expression can never be negative (unlike Fisher's original formula) and therefore, in samples from a population which has an ICC of 0, the ICCs in the samples will be higher than the ICC of the population. A number of different ICC statistics have been proposed, not all of which estimate the same population parameter. There has been considerable debate about which ICC statistics are appropriate for a given use, since they may produce markedly different results for the same data. Relationship to Pearson's correlation coefficient. In terms of its algebraic form, Fisher's original ICC is the ICC that most resembles the Pearson correlation coefficient. One key difference between the two statistics is that in the ICC, the data are centered and scaled using a pooled mean and standard deviation, whereas in the Pearson correlation, each variable is centered and scaled by its own mean and standard deviation. This pooled scaling for the ICC makes sense because all measurements are of the same quantity (albeit on units in different groups). For example, in a paired data set where each "pair" is a single measurement made for each of two units (e.g., weighing each twin in a pair of identical twins) rather than two different measurements for a single unit (e.g., measuring height and weight for each individual), the ICC is a more natural measure of association than Pearson's correlation. An important property of the Pearson correlation is that it is invariant to application of separate linear transformations to the two variables being compared. 
Thus, if we are correlating "X" and "Y", where, say, "Y" = 2"X" + 1, the Pearson correlation between "X" and "Y" is 1 — a perfect correlation. This property does not make sense for the ICC, since there is no basis for deciding which transformation is applied to each value in a group. However, if all the data in all groups are subjected to the same linear transformation, the ICC does not change. Use in assessing conformity among observers. The ICC is used to assess the consistency, or conformity, of measurements made by multiple observers measuring the same quantity. For example, if several physicians are asked to score the results of a CT scan for signs of cancer progression, we can ask how consistent the scores are to each other. If the truth is known (for example, if the CT scans were on patients who subsequently underwent exploratory surgery), then the focus would generally be on how well the physicians' scores matched the truth. If the truth is not known, we can only consider the similarity among the scores. An important aspect of this problem is that there is both inter-observer and intra-observer variability. Inter-observer variability refers to systematic differences among the observers — for example, one physician may consistently score patients at a higher risk level than other physicians. Intra-observer variability refers to deviations of a particular observer's score on a particular patient that are not part of a systematic difference. The ICC is constructed to be applied to exchangeable measurements — that is, grouped data in which there is no meaningful way to order the measurements within a group. In assessing conformity among observers, if the same observers rate each element being studied, then systematic differences among observers are likely to exist, which conflicts with the notion of exchangeability. If the ICC is used in a situation where systematic differences exist, the result is a composite measure of intra-observer and inter-observer variability. One situation where exchangeability might reasonably be presumed to hold would be where a specimen to be scored, say a blood specimen, is divided into multiple aliquots, and the aliquots are measured separately on the same instrument. In this case, exchangeability would hold as long as no effect due to the sequence of running the samples was present. Since the "intraclass correlation coefficient" gives a composite of intra-observer and inter-observer variability, its results are sometimes considered difficult to interpret when the observers are not exchangeable. Alternative measures such as Cohen's kappa statistic, the Fleiss kappa, and the concordance correlation coefficient have been proposed as more suitable measures of agreement among non-exchangeable observers. Calculation in software packages. ICC is supported in the open source software package R (using the function "icc" with the packages "psy" or "irr", or via the function "ICC" in the package "psych".) The rptR package provides methods for the estimation of ICC and repeatabilities for Gaussian, binomial and Poisson distributed data in a mixed-model framework. Notably, the package allows estimation of adjusted ICC (i.e. controlling for other variables) and computes confidence intervals based on parametric bootstrapping and significances based on the permutation of residuals. 
Commercial software also supports ICC, for instance Stata or SPSS. The three models are: Number of measurements: Consistency or absolute agreement: The consistency ICC cannot be estimated in the one-way random effects model, as there is no way to separate the inter-rater and residual variances. An overview and re-analysis of the three models for the single measures ICC, with an alternative recipe for their use, has also been presented by Liljequist et al. (2019). Interpretation. Cicchetti (1994) gives the following often-quoted guidelines for the interpretation of kappa or ICC inter-rater agreement measures: A different guideline is given by Koo and Li (2016): References. &lt;templatestyles src="Reflist/styles.css" /&gt;
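To complement the package references above, here is a rough sketch of how the one-way random-effects ICC described earlier can be estimated from balanced grouped data, via the between-group and within-group mean squares of a one-way ANOVA. It assumes equal group sizes and invented example scores, and is not a replacement for the R, Stata or SPSS routines just mentioned.

```python
import numpy as np

def icc_oneway(groups):
    """One-way random-effects ICC estimate for balanced data: rows are
    groups (e.g. subjects), columns are the measurements in each group."""
    data = np.asarray(groups, dtype=float)        # shape (n_groups, k)
    n, k = data.shape
    grand_mean = data.mean()
    group_means = data.mean(axis=1)

    # one-way ANOVA mean squares
    ms_between = k * np.sum((group_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((data - group_means[:, None]) ** 2) / (n * (k - 1))

    # estimate of sigma_alpha^2 / (sigma_alpha^2 + sigma_epsilon^2)
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# five subjects (groups), each measured by three raters (made-up numbers)
scores = [[9, 10, 8], [6, 7, 6], [8, 8, 9], [4, 5, 5], [7, 6, 7]]
print(round(icc_oneway(scores), 3))
```

Note that, like Fisher's estimator, this sample estimate can come out negative even though the population quantity it targets is non-negative.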
[ { "math_id": 0, "text": "r = \\frac{1}{Ns^2} \\sum_{n=1}^N (x_{n,1} - \\bar{x}) ( x_{n,2} - \\bar{x}), " }, { "math_id": 1, "text": "\\bar{x} = \\frac{1}{2N} \\sum_{n=1}^N (x_{n,1} + x_{n,2}), " }, { "math_id": 2, "text": "s^2 = \\frac{1}{2N} \\left\\{ \\sum_{n=1}^N ( x_{n,1} - \\bar{x})^2 + \\sum_{n=1}^N ( x_{n,2} - \\bar{x})^2 \\right\\}. " }, { "math_id": 3, "text": "r = \\frac{1}{3Ns^2} \\sum_{n=1}^N \\left\\{ ( x_{n,1} - \\bar{x})( x_{n,2} - \\bar{x}) + (x_{n,1} - \\bar{x})( x_{n,3} - \\bar{x})+( x_{n,2} - \\bar{x})( x_{n,3} - \\bar{x}) \\right\\}, " }, { "math_id": 4, "text": "\\bar{x} = \\frac{1}{3 N} \\sum_{n=1}^N (x_{n,1} + x_{n,2} + x_{n,3}), " }, { "math_id": 5, "text": "s^2 = \\frac{1}{3N} \\left\\{ \\sum_{n=1}^N ( x_{n,1} - \\bar{x})^2 + \\sum_{n=1}^N ( x_{n,2} - \\bar{x})^2 + \\sum_{n=1}^N ( x_{n,3} - \\bar{x})^2\\right\\}. " }, { "math_id": 6, "text": "r = \\frac{K}{K-1}\\cdot\\frac{N^{-1}\\sum_{n=1}^N(\\bar{x}_n-\\bar{x})^2}{s^2} - \\frac{1}{K-1}," }, { "math_id": 7, "text": "\\bar{x}_n" }, { "math_id": 8, "text": "r \\geq \\frac {-1} {K-1}." }, { "math_id": 9, "text": "\n\\frac{N^{-1}\\sum_{n=1}^N(\\bar{x}_n-\\bar{x})^2}{s^2},\n" }, { "math_id": 10, "text": "\nY_{ij} = \\mu + \\alpha_j + \\varepsilon_{ij},\n" }, { "math_id": 11, "text": "\n\\frac{\\sigma_\\alpha^2}{\\sigma_\\alpha^2+\\sigma_\\varepsilon^2}.\n" }, { "math_id": 12, "text": "Y_{ij}=\\mu+\\alpha_i+\\epsilon_{ij}" }, { "math_id": 13, "text": "\\alpha_i \\sim N(0,\\sigma_\\alpha^2)" }, { "math_id": 14, "text": "\\epsilon_{ij} \\sim N(0,\\sigma_\\varepsilon^2)" }, { "math_id": 15, "text": "\\alpha_i" }, { "math_id": 16, "text": "\\epsilon_{ij}" }, { "math_id": 17, "text": "Var(Y_{ij})=\\sigma_\\varepsilon^2 + \\sigma_\\alpha^2" }, { "math_id": 18, "text": "i" }, { "math_id": 19, "text": "j \\neq k" }, { "math_id": 20, "text": "\n\\begin{align}\n\\text{Cov}(Y_{ij}, Y_{ik}) &= \\text{Cov}(\\mu + \\alpha_i + \\epsilon_{ij}, \\mu + \\alpha_i + \\epsilon_{ik}) \\\\\n &= \\text{Cov}(\\alpha_i + \\epsilon_{ij}, \\alpha_i + \\epsilon_{ik}) \\\\\n &= \\text{Cov}(\\alpha_i, \\alpha_i) + 2\\text{Cov}(\\alpha_i, \\epsilon_{ik}) + \\text{Cov}(\\epsilon_{ij}, \\epsilon_{ik}) \\\\\n &= \\text{Cov}(\\alpha_i, \\alpha_i) \\\\\n &= \\text{Var}(\\alpha_i) \\\\\n &= \\sigma^2_\\alpha .\\\\\n\\end{align}\n" }, { "math_id": 21, "text": "\n\\text{Cor}(Y_{ij}, Y_{ik}) = \\frac{\\text{Cov}(Y_{ij}, Y_{ik})}{\\sqrt{Var(Y_{ij})Var(Y_{ik})}} = \\frac{\\sigma^2_\\alpha }{\\sigma_\\varepsilon^2 + \\sigma_\\alpha^2}\n" } ]
https://en.wikipedia.org/wiki?curid=11391832
11391853
Dom Bédos de Celles
French monk and organ builder (1709–1779) François-Lamathe Dom Bédos de Celles de Salelles (24 January 1709 – 25 November 1779) was a Benedictine monk best known for being a master pipe organ builder. Life and work. He was born in Caux, Hérault, near Béziers, France. He was elected to the French Academy of Sciences at Bordeaux and correspondent of the academy at Paris in 1758. As a recognized organ-builder, he was called upon to carry out repairs and appraise and advise other organ-builders in many locations across France. In 1760 he published "La Gnomonique pratique ou l’Art de tracer les cadrans solaires" under the patronage of Jean-Paul Grandjean de Fouchy, Secretary of the Academy of Sciences and an authority on gnomonics and sundials. In 1766–78 he published his treatise "L'art du facteur d'orgues" (The Art of the Organ-Builder), a part of the series Descriptions des Arts et Métiers. Dom Bédos's work, in four folio volumes, contains great historical detail about eighteenth-century organ building, and is still referred to by modern organ-builders. He is buried in the former Abbey (now Basilica) of Saint-Denis. Organ building in the mid-18th century. The 26 images below are taken from this work, kept in the St. Bernard's Abbey library in Bornem. Horizontal Sundial layout. The Dom Bédos de Celles method (1790), otherwise known as the Waugh method (1973), enables a dial to be constructed on a narrower piece of paper or vellum than Dürer's (1525) method, though it is essentially the same for the hour lines from 9 to 3. It relies on a theorem proved in 1682 by P. de la Hire. The method became well known when Albert Waugh adopted it as the construction method for horizontal dials in his 1973 book "Sundials: Their Theory and Construction". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
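For reference, the graphical construction described here lays out the same hour lines given by the standard horizontal-dial relation tan "θ" = sin "φ" · tan "h" (where "φ" is the latitude and "h" the hour angle), which is consistent with the sin "φ" and tan "h" sin "φ" expressions listed below. The following sketch simply evaluates that relation numerically; it is an illustration under those assumptions, not a transcription of Dom Bédos's own procedure, and the example latitude is arbitrary.

```python
import math

def hour_line_angle(hours_from_noon, latitude_deg):
    """Angle (in degrees, measured from the noon line) of an hour line
    on a horizontal sundial, using tan(theta) = sin(latitude) * tan(h)."""
    h = math.radians(15.0 * hours_from_noon)   # the Sun moves 15 degrees per hour
    phi = math.radians(latitude_deg)
    return math.degrees(math.atan(math.sin(phi) * math.tan(h)))

# hour lines for 1 pm to 5 pm at roughly the latitude of Bordeaux (about 44.8 N)
for t in range(1, 6):
    print(f"{t} pm: {hour_line_angle(t, 44.8):5.1f} degrees")
```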
[ { "math_id": 0, "text": "\\sin\\phi " }, { "math_id": 1, "text": "\\tan h \\sin\\phi " } ]
https://en.wikipedia.org/wiki?curid=11391853
1139338
Computable function
Mathematical function that can be computed by a program Computable functions are the basic objects of study in computability theory. Computable functions are the formalized analogue of the intuitive notion of algorithms, in the sense that a function is computable if there exists an algorithm that can do the job of the function, i.e. given an input of the function domain it can return the corresponding output. Computable functions are used to discuss computability without referring to any concrete model of computation such as Turing machines or register machines. Any definition, however, must make reference to some specific model of computation but all valid definitions yield the same class of functions. Particular models of computability that give rise to the set of computable functions are the Turing-computable functions and the general recursive functions. According to the Church–Turing thesis, computable functions are exactly the functions that can be calculated using a mechanical (that is, automatic) calculation device given unlimited amounts of time and storage space. More precisely, every model of computation that has ever been imagined can compute only computable functions, and all computable functions can be computed by any of several models of computation that are apparently very different, such as Turing machines, register machines, lambda calculus and general recursive functions. Before the precise definition of computable function, mathematicians often used the informal term "effectively calculable". This term has since come to be identified with the computable functions. The effective computability of these functions does not imply that they can be "efficiently" computed (i.e. computed within a reasonable amount of time). In fact, for some effectively calculable functions it can be shown that any algorithm that computes them will be very inefficient in the sense that the running time of the algorithm increases exponentially (or even superexponentially) with the length of the input. The fields of feasible computability and computational complexity study functions that can be computed efficiently. The Blum axioms can be used to define an abstract computational complexity theory on the set of computable functions. In computational complexity theory, the problem of determining the complexity of a computable function is known as a function problem. Definition. Computability of a function is an informal notion. One way to describe it is to say that a function is computable if its value can be obtained by an effective procedure. With more rigor, a function formula_0 is computable if and only if there is an effective procedure that, given any k-tuple formula_1 of natural numbers, will produce the value formula_2. In agreement with this definition, the remainder of this article presumes that computable functions take finitely many natural numbers as arguments and produce a value which is a single natural number. As counterparts to this informal description, there exist multiple formal, mathematical definitions. The class of computable functions can be defined in many equivalent models of computation, including Although these models use different representations for the functions, their inputs, and their outputs, translations exist between any two models, and so every model describes essentially the same class of functions, giving rise to the opinion that formal computability is both natural and not too narrow. 
These functions are sometimes referred to as "recursive", to contrast with the informal term "computable", a distinction stemming from a 1934 discussion between Kleene and Gödel. For example, one can formalize computable functions as μ-recursive functions, which are partial functions that take finite tuples of natural numbers and return a single natural number (just as above). They are the smallest class of partial functions that includes the constant, successor, and projection functions, and is closed under composition, primitive recursion, and the μ operator. Equivalently, computable functions can be formalized as functions which can be calculated by an idealized computing agent such as a Turing machine or a register machine. Formally speaking, a partial function formula_0 can be calculated if and only if there exists a computer program with the following properties: Characteristics of computable functions. The basic characteristic of a computable function is that there must be a finite procedure (an algorithm) telling how to compute the function. The models of computation listed above give different interpretations of what a procedure is and how it is used, but these interpretations share many properties. The fact that these models give equivalent classes of computable functions stems from the fact that each model is capable of reading and mimicking a procedure for any of the other models, much as a compiler is able to read instructions in one computer language and emit instructions in another language. Enderton [1977] gives the following characteristics of a procedure for computing a computable function; similar characterizations have been given by Turing [1936], Rogers [1967], and others. Enderton goes on to list several clarifications of these 3 requirements of the procedure for a computable function: To summarise, based on this view a function is computable if: The field of computational complexity studies functions with prescribed bounds on the time and/or space allowed in a successful computation. Computable sets and relations. A set A of natural numbers is called computable (synonyms: recursive, decidable) if there is a computable, total function "f" such that for any natural number n, "f"(n) = 1 if n is in A and "f"(n) = 0 if n is not in A. A set of natural numbers is called computably enumerable (synonyms: recursively enumerable, semidecidable) if there is a computable function "f" such that for each number n, "f"(n) is defined if and only if n is in the set. Thus a set is computably enumerable if and only if it is the domain of some computable function. The word "enumerable" is used because the following are equivalent for a nonempty subset B of the natural numbers: If a set B is the range of a function "f" then the function can be viewed as an enumeration of B, because the list "f"(0), "f"(1), ... will include every element of B. Because each finitary relation on the natural numbers can be identified with a corresponding set of finite sequences of natural numbers, the notions of computable relation and computably enumerable relation can be defined from their analogues for sets. Formal languages. In computability theory in computer science, it is common to consider formal languages. An alphabet is an arbitrary set. A word on an alphabet is a finite sequence of symbols from the alphabet; the same symbol may be used more than once. For example, binary strings are exactly the words on the alphabet {0, 1}. A language is a subset of the collection of all words on a fixed alphabet. 
For example, the collection of all binary strings that contain exactly 3 ones is a language over the binary alphabet. A key property of a formal language is the level of difficulty required to decide whether a given word is in the language. Some coding system must be developed to allow a computable function to take an arbitrary word in the language as input; this is usually considered routine. A language is called computable (synonyms: recursive, decidable) if there is a computable function "f" such that for each word w over the alphabet, "f"(w) = 1 if the word is in the language and "f"(w) = 0 if the word is not in the language. Thus a language is computable just in case there is a procedure that is able to correctly tell whether arbitrary words are in the language. A language is computably enumerable (synonyms: recursively enumerable, semidecidable) if there is a computable function "f" such that "f"(w) is defined if and only if the word w is in the language. The term "enumerable" has the same etymology as in computably enumerable sets of natural numbers. Examples. The following functions are computable: If "f" and "g" are computable, then so are: "f" + "g", "f" * "g", formula_3 if "f" is unary, max("f","g"), min("f","g"), arg max{"y" ≤ "f"("x")} and many more combinations. The following examples illustrate that a function may be computable though it is not known which algorithm computes it. Church–Turing thesis. The Church–Turing thesis states that any function computable from a procedure possessing the three properties listed above is a computable function. Because these three properties are not formally stated, the Church–Turing thesis cannot be proved. The following facts are often taken as evidence for the thesis: The Church–Turing thesis is sometimes used in proofs to justify that a particular function is computable by giving a concrete description of a procedure for the computation. This is permitted because it is believed that all such uses of the thesis can be removed by the tedious process of writing a formal procedure for the function in some model of computation. Provability. Given a function (or, similarly, a set), one may be interested not only in whether it is computable, but also in whether this can be "proven" in a particular proof system (usually first order Peano arithmetic). A function that can be proven to be computable is called provably total. The set of provably total functions is recursively enumerable: one can enumerate all the provably total functions by enumerating all their corresponding proofs that prove their computability. This can be done by enumerating all the proofs of the proof system and ignoring irrelevant ones. Relation to recursively defined functions. In a function defined by a recursive definition, each value is defined by a fixed first-order formula of other, previously defined values of the same function or other functions, which might be simply constants. A subset of these is the primitive recursive functions. Another example is the Ackermann function, which is recursively defined but not primitive recursive. For definitions of this type to avoid circularity or infinite regress, it is necessary that recursive calls to the same function within a definition be to arguments that are smaller in some well-partial-order on the function's domain. For instance, for the Ackermann function formula_4, whenever the definition of formula_5 refers to formula_6, then formula_7 w.r.t. the lexicographic order on pairs of natural numbers. 
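As a concrete illustration of the recursion just described, the usual two-argument Ackermann definition can be transcribed directly; every recursive call uses a lexicographically smaller argument pair, which is what guarantees termination. The sketch below is a straightforward (and deliberately naive) rendering of that definition.

```python
def ackermann(x, y):
    """Two-argument Ackermann function; each recursive call uses a
    lexicographically smaller pair (x, y), so the recursion terminates."""
    if x == 0:
        return y + 1
    if y == 0:
        return ackermann(x - 1, 1)
    return ackermann(x - 1, ackermann(x, y - 1))

# small arguments only -- the values (and the call depth) grow extremely fast
print([[ackermann(x, y) for y in range(4)] for x in range(4)])
```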
In this case, and in the case of the primitive recursive functions, well-ordering is obvious, but some "refers-to" relations are nontrivial to prove as being well-orderings. Any function defined recursively in a well-ordered way is computable: each value can be computed by expanding a tree of recursive calls to the function, and this expansion must terminate after a finite number of calls, because otherwise Kőnig's lemma would lead to an infinite descending sequence of calls, violating the assumption of well-ordering. Total functions that are not provably total. In a sound proof system, every provably total function is indeed total, but the converse is not true: in every first-order proof system that is strong enough and sound (including Peano arithmetic), one can prove (in another proof system) the existence of total functions that cannot be proven total in the proof system. If the total computable functions are enumerated via the Turing machines that produce them, then the above statement can be shown, if the proof system is sound, by a similar diagonalization argument to that used above, using the enumeration of provably total functions given earlier. One uses a Turing machine that enumerates the relevant proofs, and for every input "n" calls "f""n"("n") (where "f""n" is the "n"-th function in this enumeration) by invoking the Turing machine that computes it according to the n-th proof. Such a Turing machine is guaranteed to halt if the proof system is sound. Uncomputable functions and unsolvable problems. Every computable function has a finite procedure giving explicit, unambiguous instructions on how to compute it. Furthermore, this procedure has to be encoded in the finite alphabet used by the computational model, so there are only countably many computable functions. For example, functions may be encoded using a string of bits (the alphabet Σ = {0, 1}). The real numbers are uncountable so most real numbers are not computable. See computable number. The set of finitary functions on the natural numbers is uncountable so most are not computable. Concrete examples of such functions are Busy beaver, Kolmogorov complexity, or any function that outputs the digits of a noncomputable number, such as Chaitin's constant. Similarly, most subsets of the natural numbers are not computable. The halting problem was the first such set to be constructed. The Entscheidungsproblem, proposed by David Hilbert, asked whether there is an effective procedure to determine which mathematical statements (coded as natural numbers) are true. Turing and Church independently showed in the 1930s that this set of natural numbers is not computable. According to the Church–Turing thesis, there is no effective procedure (with an algorithm) which can perform these computations. Extensions of computability. Relative computability. The notion of computability of a function can be relativized to an arbitrary set of natural numbers "A". A function "f" is defined to be computable in "A" (equivalently "A"-computable or computable relative to "A") when it satisfies the definition of a computable function with modifications allowing access to "A" as an oracle. As with the concept of a computable function, relative computability can be given equivalent definitions in many different models of computation. This is commonly accomplished by supplementing the model of computation with an additional primitive operation which asks whether a given integer is a member of "A". 
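One minimal way to picture such an oracle-augmented model in code is to pass the membership query as a callable; the toy sketch below is illustrative only (here the oracle set happens to be decidable, whereas the interesting cases use oracles that are not computable).

```python
def count_members_below(n, oracle):
    """Toy 'A-computable' function: it may query the oracle about
    membership in A as a primitive operation, once per candidate."""
    return sum(1 for i in range(n) if oracle(i))

# using the (decidable) set of even numbers as the oracle A
print(count_members_below(10, lambda i: i % 2 == 0))   # 5
```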
We can also talk about "f" being computable in "g by identifying "g" with its graph. Higher recursion theory. Hyperarithmetical theory studies those sets that can be computed from a computable ordinal number of iterates of the Turing jump of the empty set. This is equivalent to sets defined by both a universal and existential formula in the language of second order arithmetic and to some models of Hypercomputation. Even more general recursion theories have been studied, such as E-recursion theory in which any set can be used as an argument to an E-recursive function. Hyper-computation. Although the Church–Turing thesis states that the computable functions include all functions with algorithms, it is possible to consider broader classes of functions that relax the requirements that algorithms must possess. The field of Hypercomputation studies models of computation that go beyond normal Turing computation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "f:\\mathbb N^k\\rightarrow\\mathbb N" }, { "math_id": 1, "text": "\\mathbf x" }, { "math_id": 2, "text": "f(\\mathbf x)" }, { "math_id": 3, "text": "\\color{Blue} f \\circ g" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "A(x,y)" }, { "math_id": 6, "text": "A(p,q)" }, { "math_id": 7, "text": "(p,q) < (x,y)" } ]
https://en.wikipedia.org/wiki?curid=1139338
11393506
Tribimaximal mixing
Tribimaximal mixing is a specific postulated form for the Pontecorvo–Maki–Nakagawa–Sakata (PMNS) lepton mixing matrix "U". Tribimaximal mixing is defined by a particular choice of the matrix of moduli-squared of the elements of the PMNS matrix as follows: formula_0 This mixing is historically interesting as it is quite close to reality when compared to other simple hypotheses where the squares of matrix elements take exact ratios, and also compared to the naive supposition that the matrix would be approximately diagonal like the CKM matrix. However, the precision of modern experiments means that such a simple form is excluded by experiment at a level of over 5σ, mainly due to the fact that the tribimaximal scheme has a zero in the formula_1 element, but also (to a much lesser extent) because it predicts no violation of CP symmetry. The tribimaximal mixing form was compatible with pre-2011 neutrino oscillation experiments and may be used as a zeroth-order approximation to more general forms for the PMNS matrix, including some that are consistent with the data. In the PDG convention for the PMNS matrix, tribimaximal mixing may be specified in terms of lepton mixing angles as follows: formula_2 The above prediction has been falsified experimentally, because "θ"13 was found to be nontrivial, "θ"13 ≈ 8.5°. A non-negligible value of "θ"13 had been foreseen in certain theoretical schemes that were put forward before tribimaximal mixing and that supported a large solar mixing before it was confirmed experimentally (these theoretical schemes do not have a special name, but for the reasons explained above, they could be called pre-tribimaximal or non-tribimaximal). This situation is not new: in the 1990s, most theorists likewise supposed the solar mixing angle to be small, until KamLAND proved the contrary. Explanation of name. The name "tribimaximal" reflects the commonality of the tribimaximal mixing matrix with two previously proposed specific forms for the PMNS matrix, the trimaximal and bimaximal mixing schemes, both now ruled out by data. In tribimaximal mixing, the formula_3 neutrino mass eigenstate is said to be "trimaximally mixed" in that it consists of a uniform admixture of formula_4, formula_5 and formula_6 flavour eigenstates, i.e. maximal mixing among all three flavour states. The formula_7 neutrino mass eigenstate, on the other hand, is "bimaximally mixed" in that it comprises a uniform admixture of only two flavour components, i.e. formula_5 and formula_6 maximal mixing, with effective decoupling of the formula_4 from the formula_7, just as in the original bimaximal scheme. Phenomenology. By virtue of the zero (formula_8) in the tribimaximal mixing matrix, exact tribimaximal mixing would predict zero for all CP-violating asymmetries in the case of Dirac neutrinos (in the case of Majorana neutrinos, Majorana phases are still permitted, and could still lead to CP-violating effects). For solar neutrinos the large angle MSW effect in tribimaximal mixing accounts for the experimental data, predicting average suppressions formula_9 in the Sudbury Neutrino Observatory (SNO) and formula_10 in lower energy solar neutrino experiments (and in long baseline reactor neutrino experiments). The bimaximally mixed formula_11 in tribimaximal mixing accounts for the factor of two suppression formula_12 observed for atmospheric muon-neutrinos (and confirmed in long-baseline accelerator experiments). 
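The mixing angles quoted above can be checked numerically: constructing the PMNS matrix in the standard PDG-style parametrization with "θ"12 = arcsin(1/√3), "θ"23 = 45° and "θ"13 = 0 reproduces the tribimaximal pattern of moduli squared. The sketch below is only such a consistency check, not a fit to data.

```python
import numpy as np

def pmns(theta12, theta23, theta13, delta=0.0):
    """PMNS matrix in the standard PDG-style parametrization."""
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    em, ep = np.exp(-1j * delta), np.exp(1j * delta)
    return np.array([
        [c12 * c13, s12 * c13, s13 * em],
        [-s12 * c23 - c12 * s23 * s13 * ep,
         c12 * c23 - s12 * s23 * s13 * ep, s23 * c13],
        [s12 * s23 - c12 * c23 * s13 * ep,
         -c12 * s23 - s12 * c23 * s13 * ep, c23 * c13],
    ])

# tribimaximal angles: theta12 = arcsin(1/sqrt(3)), theta23 = 45 deg, theta13 = 0
U = pmns(np.arcsin(1 / np.sqrt(3)), np.pi / 4, 0.0)
print(np.round(np.abs(U) ** 2, 3))
# rows should read [2/3, 1/3, 0], [1/6, 1/3, 1/2], [1/6, 1/3, 1/2]
```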
Near-zero formula_13 appearance in a formula_14 beam is predicted in exact tribimaximal mixing (formula_8), and this has been strongly ruled out by modern reactor neutrino experiments. Further characteristic predictions of tribimaximal mixing – e.g. for very long baseline formula_14 and formula_15 vacuum survival probabilities formula_16 – will be extremely hard to test experimentally. The L/E flatness of the electron-like event ratio at Super-Kamiokande severely restricts the neutrino mixing matrices to the form given by Stancu &amp; Ahluwalia (1999): formula_17 Additional experimental data fixes formula_18 The extension of this result to the CP-violating case is found in Ahluwalia, Liu, &amp; Stancu (2002). History. The name "tribimaximal" first appeared in the literature in 2002, although this specific scheme had been previously published in 1999 as a viable alternative to the trimaximal scheme. Tribimaximal mixing is sometimes confused with other mixing schemes, e.g. schemes that differ from tribimaximal mixing by row- and/or column-wise permutations of the mixing-matrix elements. Such permuted forms are experimentally distinct, however, and are now ruled out by data. That the L/E flatness of the electron-like event ratio at Super-Kamiokande severely restricts the neutrino mixing matrices was first presented by D. V. Ahluwalia in a Nuclear and Particle Physics Seminar of the Los Alamos National Laboratory on June 5, 1998. It was just a few hours after the Super-Kamiokande press conference that announced the results on atmospheric neutrinos. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\begin{bmatrix}\n|U_{e 1}|^2 & |U_{e 2}|^2 & |U_{e 3}|^2 \\\\\n|U_{\\mu 1}|^2 & |U_{\\mu 2}|^2 & |U_{\\mu 3}|^2 \\\\ \n|U_{\\tau 1}|^2 & |U_{\\tau 2}|^2 & |U_{\\tau 3}|^2 \n\\end{bmatrix}\n= \n\\begin{bmatrix}\n\\frac{2}{3} & \\frac{1}{3} & 0 \\\\\n\\frac{1}{6} & \\frac{1}{3} & \\frac{1}{2} \\\\ \n\\frac{1}{6} & \\frac{1}{3} & \\frac{1}{2} \n\\end{bmatrix}.\n" }, { "math_id": 1, "text": " U_{e3}" }, { "math_id": 2, "text": "\n\\begin{matrix}\n\\theta_{12}=\\sin^{-1} \\left({\\frac{1}{\\sqrt{3}}}\\right)\\simeq 35.3^{\\circ} & \\theta_{23}=\\sin^{-1} \\left({\\frac{1}{\\sqrt{2}}}\\right)=45^{\\circ} & \\theta_{13}=0; & (\\delta \\text{ is undefined}).\n\\end{matrix}\n" }, { "math_id": 3, "text": "\\nu_2" }, { "math_id": 4, "text": "\\nu_e" }, { "math_id": 5, "text": "\\nu_{\\mu}" }, { "math_id": 6, "text": "\\nu_{\\tau}" }, { "math_id": 7, "text": "\\nu_3" }, { "math_id": 8, "text": "\\ |U_{e3}|^2 = 0\\ " }, { "math_id": 9, "text": "\\ \\langle P_{ee} \\rangle \\simeq \\tfrac{1}{3}\\ " }, { "math_id": 10, "text": "\\ \\langle P_{ee}\\rangle \\simeq \\tfrac{5}{9}\\ " }, { "math_id": 11, "text": "\\ \\nu_3\\ " }, { "math_id": 12, "text": "\\ \\langle P_{\\mu \\mu}\\rangle \\simeq \\tfrac{1}{2}\\ " }, { "math_id": 13, "text": "\\ \\nu_e\\ " }, { "math_id": 14, "text": "\\ \\nu_\\mu\\ " }, { "math_id": 15, "text": "\\ \\nu_\\tau\\ " }, { "math_id": 16, "text": "\\ P_{\\mu \\mu} = P_{\\tau \\tau} \\simeq \\tfrac{7}{18}\\ " }, { "math_id": 17, "text": "\nU=\n\\begin{bmatrix}\n \\qquad ~ \\cos\\theta & \\qquad \\qquad \\sin\\theta & \\qquad 0 ~~ \\\\\n -\\frac{1}{\\sqrt{2\\ }} \\sin\\theta & \\qquad ~~ \\frac{1}{\\sqrt{2\\ }} \\cos\\theta & \\qquad \\frac{1}{\\sqrt{2\\ }} ~~ \\\\ \n~~ \\frac{1}{\\sqrt{2\\ }} \\sin\\theta & \\qquad -\\frac{1}{\\sqrt{2\\ }} \\cos\\theta & \\qquad \\frac{1}{\\sqrt{2\\ }} ~~\n\\end{bmatrix} ~.\n" }, { "math_id": 18, "text": "\\ \\sin \\theta = \\tfrac{1}{\\sqrt{3\\ }} ~." } ]
https://en.wikipedia.org/wiki?curid=11393506
11394693
Scatter matrix
Concept in probability theory "For the notion in quantum mechanics, see scattering matrix." In multivariate statistics and probability theory, the scatter matrix is a statistic that is used to make estimates of the covariance matrix, for instance of the multivariate normal distribution. Definition. Given "n" samples of "m"-dimensional data, represented as the "m"-by-"n" matrix formula_0, the sample mean is formula_1 where formula_2 is the "j"-th column of formula_3. The scatter matrix is the "m"-by-"m" positive semi-definite matrix formula_4 where formula_5 denotes the matrix transpose, and the multiplication is with respect to the outer product. The scatter matrix may be expressed more succinctly as formula_6 where formula_7 is the "n"-by-"n" centering matrix. Application. The maximum likelihood estimate, given "n" samples, for the covariance matrix of a multivariate normal distribution can be expressed as the normalized scatter matrix formula_8 If the columns of formula_3 are independently sampled from a multivariate normal distribution, then formula_9 has a Wishart distribution. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
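As a quick numerical illustration (a sketch using NumPy with arbitrary made-up data), the scatter matrix can be computed either as the sum of outer products of the centred samples or via the centering-matrix form given above, and dividing by "n" recovers the maximum likelihood covariance estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 8                          # data dimension and number of samples
X = rng.normal(size=(m, n))          # columns of X are the samples

# scatter matrix as a sum of outer products of the centred samples
xbar = X.mean(axis=1, keepdims=True)
S_outer = (X - xbar) @ (X - xbar).T

# the same matrix via the centering matrix  C_n = I_n - (1/n) J_n
C = np.eye(n) - np.ones((n, n)) / n
S_centering = X @ C @ X.T

print(np.allclose(S_outer, S_centering))                 # True
print(np.allclose(S_outer / n, np.cov(X, bias=True)))    # ML covariance estimate
```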
[ { "math_id": 0, "text": "X=[\\mathbf{x}_1,\\mathbf{x}_2,\\ldots,\\mathbf{x}_n]" }, { "math_id": 1, "text": "\\overline{\\mathbf{x}} = \\frac{1}{n}\\sum_{j=1}^n \\mathbf{x}_j" }, { "math_id": 2, "text": "\\mathbf{x}_j" }, { "math_id": 3, "text": "X" }, { "math_id": 4, "text": "S = \\sum_{j=1}^n (\\mathbf{x}_j-\\overline{\\mathbf{x}})(\\mathbf{x}_j-\\overline{\\mathbf{x}})^T = \\sum_{j=1}^n (\\mathbf{x}_j-\\overline{\\mathbf{x}})\\otimes(\\mathbf{x}_j-\\overline{\\mathbf{x}}) = \\left( \\sum_{j=1}^n \\mathbf{x}_j \\mathbf{x}_j^T \\right) - n \\overline{\\mathbf{x}} \\overline{\\mathbf{x}}^T " }, { "math_id": 5, "text": "(\\cdot)^T" }, { "math_id": 6, "text": "S = X\\,C_n\\,X^T" }, { "math_id": 7, "text": "\\,C_n" }, { "math_id": 8, "text": "C_{ML}=\\frac{1}{n}S." }, { "math_id": 9, "text": "S" }, { "math_id": 10, "text": "XX^\\top" } ]
https://en.wikipedia.org/wiki?curid=11394693
11395056
Centering matrix
Kind of matrix In mathematics and multivariate statistics, the centering matrix is a symmetric and idempotent matrix, which when multiplied with a vector has the same effect as subtracting the mean of the components of the vector from every component of that vector. Definition. The centering matrix of size "n" is defined as the "n"-by-"n" matrix formula_0 where formula_1 is the identity matrix of size "n" and formula_2 is an "n"-by-"n" matrix of all 1's. For example formula_3, formula_4, formula_5 Properties. Given a column vector formula_6 of size "n", the centering property of formula_7 can be expressed as formula_8 where formula_9 is a column vector of ones and formula_10 is the mean of the components of formula_6. formula_7 is symmetric positive semi-definite. formula_7 is idempotent, so that formula_11, for formula_12. Once the mean has been removed, it is zero and removing it again has no effect. formula_7 is singular. The effects of applying the transformation formula_13 cannot be reversed. formula_7 has the eigenvalue 1 of multiplicity "n" − 1 and eigenvalue 0 of multiplicity 1. formula_7 has a nullspace of dimension 1, along the vector formula_9. formula_7 is an orthogonal projection matrix. That is, formula_14 is a projection of formula_6 onto the ("n" − 1)-dimensional subspace that is orthogonal to the nullspace formula_9. (This is the subspace of all "n"-vectors whose components sum to zero.) The trace of formula_15 is formula_16. Application. Although multiplication by the centering matrix is not a computationally efficient way of removing the mean from a vector, it is a convenient analytical tool. It can be used not only to remove the mean of a single vector, but also the means of multiple vectors stored in the rows or columns of an "m"-by-"n" matrix formula_17. The left multiplication by formula_18 subtracts a corresponding mean value from each of the "n" columns, so that each column of the product formula_19 has a zero mean. Similarly, the multiplication by formula_15 on the right subtracts a corresponding mean value from each of the "m" rows, and each row of the product formula_20 has a zero mean. The multiplication on both sides creates a doubly centred matrix formula_21, whose row and column means are equal to zero. The centering matrix provides in particular a succinct way to express the scatter matrix, formula_22 of a data sample formula_23, where formula_24 is the sample mean. The centering matrix allows us to express the scatter matrix more compactly as formula_25 formula_15 is the covariance matrix of the multinomial distribution, in the special case where the parameters of that distribution are formula_26, and formula_27. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
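The properties listed above are easy to verify numerically; the short NumPy sketch below (illustrative only, with an arbitrary example vector) checks the mean-removal effect, idempotence, the trace, and the eigenvalue pattern for a small "n".

```python
import numpy as np

n = 5
C = np.eye(n) - np.ones((n, n)) / n        # the centering matrix C_n

v = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
print(C @ v)                                # v with its mean subtracted
print(np.isclose((C @ v).sum(), 0.0))       # centred components sum to zero
print(np.allclose(C @ C, C))                # idempotent: C^2 = C
print(np.isclose(np.trace(C), n - 1))       # trace is n - 1
print(np.allclose(np.linalg.eigvalsh(C),    # eigenvalue 0 once, eigenvalue 1 (n-1) times
                  [0.0] + [1.0] * (n - 1)))
```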
[ { "math_id": 0, "text": "C_n = I_n - \\tfrac{1}{n}J_n " }, { "math_id": 1, "text": "I_n\\," }, { "math_id": 2, "text": "J_n" }, { "math_id": 3, "text": "C_1 = \\begin{bmatrix}\n0 \\end{bmatrix}\n" }, { "math_id": 4, "text": "C_2= \\left[ \\begin{array}{rrr} \n1 & 0 \\\\\n0 & 1 \n\\end{array} \\right] - \\frac{1}{2}\\left[ \\begin{array}{rrr} \n1 & 1 \\\\\n1 & 1\n\\end{array} \\right] = \\left[ \\begin{array}{rrr} \n\\frac{1}{2} & -\\frac{1}{2} \\\\\n-\\frac{1}{2} & \\frac{1}{2} \n\\end{array} \\right]\n" }, { "math_id": 5, "text": "C_3 = \\left[ \\begin{array}{rrr}\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & 1 \n\\end{array} \\right] - \\frac{1}{3}\\left[ \\begin{array}{rrr}\n1 & 1 & 1 \\\\\n1 & 1 & 1 \\\\\n1 & 1 & 1 \n\\end{array} \\right]\n = \\left[ \\begin{array}{rrr}\n\\frac{2}{3} & -\\frac{1}{3} & -\\frac{1}{3} \\\\\n-\\frac{1}{3} & \\frac{2}{3} & -\\frac{1}{3} \\\\\n-\\frac{1}{3} & -\\frac{1}{3} & \\frac{2}{3} \n\\end{array} \\right] \n" }, { "math_id": 6, "text": "\\mathbf{v}\\," }, { "math_id": 7, "text": "C_n\\," }, { "math_id": 8, "text": "C_n\\,\\mathbf{v} = \\mathbf{v} - (\\tfrac{1}{n}J_{n,1}^\\textrm{T}\\mathbf{v})J_{n,1}" }, { "math_id": 9, "text": "J_{n,1}" }, { "math_id": 10, "text": "\\tfrac{1}{n}J_{n,1}^\\textrm{T}\\mathbf{v}" }, { "math_id": 11, "text": "C_n^k=C_n" }, { "math_id": 12, "text": "k=1,2,\\ldots" }, { "math_id": 13, "text": "C_n\\,\\mathbf{v}" }, { "math_id": 14, "text": "C_n\\mathbf{v}" }, { "math_id": 15, "text": "C_n" }, { "math_id": 16, "text": "n(n-1)/n = n-1" }, { "math_id": 17, "text": "X" }, { "math_id": 18, "text": "C_m" }, { "math_id": 19, "text": "C_m\\,X" }, { "math_id": 20, "text": "X\\,C_n" }, { "math_id": 21, "text": "C_m\\,X\\,C_n" }, { "math_id": 22, "text": "S=(X-\\mu J_{n,1}^{\\mathrm{T}})(X-\\mu J_{n,1}^{\\mathrm{T}})^{\\mathrm{T}}" }, { "math_id": 23, "text": "X\\," }, { "math_id": 24, "text": "\\mu=\\tfrac{1}{n}X J_{n,1}" }, { "math_id": 25, "text": "S=X\\,C_n(X\\,C_n)^{\\mathrm{T}}=X\\,C_n\\,C_n\\,X\\,^{\\mathrm{T}}=X\\,C_n\\,X\\,^{\\mathrm{T}}." }, { "math_id": 26, "text": "k=n" }, { "math_id": 27, "text": "p_1=p_2=\\cdots=p_n=\\frac{1}{n}" } ]
https://en.wikipedia.org/wiki?curid=11395056
11397
Freeman Dyson
British theoretical physicist and mathematician (1923–2020) Freeman John Dyson (15 December 1923 – 28 February 2020) was a British-American theoretical physicist and mathematician known for his works in quantum field theory, astrophysics, random matrices, mathematical formulation of quantum mechanics, condensed matter physics, nuclear physics, and engineering. He was professor emeritus in the Institute for Advanced Study in Princeton and a member of the board of sponsors of the "Bulletin of the Atomic Scientists". Dyson originated several concepts that bear his name, such as Dyson's transform, a fundamental technique in additive number theory, which he developed as part of his proof of Mann's theorem; the Dyson tree, a hypothetical genetically engineered plant capable of growing in a comet; the Dyson series, a perturbative series where each term is represented by Feynman diagrams; the Dyson sphere, a thought experiment that attempts to explain how a space-faring civilization would meet its energy requirements with a hypothetical megastructure that completely encompasses a star and captures a large percentage of its power output; and Dyson's eternal intelligence, a means by which an immortal society of intelligent beings in an open universe could escape the prospect of the heat death of the universe by extending subjective time to infinity while expending only a finite amount of energy. Dyson disagreed with the scientific consensus on climate change. He believed that some of the effects of increased CO2 levels are favourable and not taken into account by climate scientists, such as increased agricultural yield, and further that the positive benefits of CO2 likely outweigh the negative effects. He was sceptical about the simulation models used to predict climate change, arguing that political efforts to reduce causes of climate change distract from other global problems that should take priority. Biography. Early life. Dyson was born on 15 December 1923, in Crowthorne in Berkshire, England. He was the son of Mildred (née Atkey) and the composer George Dyson, who was later knighted. His mother had a law degree, and after Dyson was born she worked as a social worker. Dyson had one sibling, his older sister, Alice, who remembered him as a boy surrounded by encyclopedias and always calculating on sheets of paper. At the age of four he tried to calculate the number of atoms in the Sun. As a child, he showed an interest in large numbers and in the solar system, and was strongly influenced by the book "Men of Mathematics" by Eric Temple Bell. Politically, Dyson said he was "brought up as a socialist". From 1936 to 1941 Dyson was a scholar at Winchester College, where his father was Director of Music. At the age of 17 he studied pure mathematics with Abram Besicovitch as his tutor at Trinity College, Cambridge, where he won a scholarship at age 15. During this stay, Dyson also practised night climbing on the university buildings, and once walked from Cambridge to London in a day with his friend Oscar Hahn, nephew of Kurt Hahn, who was a wheelchair user due to polio. At the age of 19 he was assigned to war work in the Operational Research Section (ORS) of RAF Bomber Command, where he developed analytical methods for calculating the ideal density for bomber formations to help the Royal Air Force bomb German targets during the Second World War. After the war, Dyson was readmitted to Trinity College, where he obtained a BA degree in mathematics. 
From 1946 to 1949 he was a fellow of his college, occupying rooms just below those of the philosopher Ludwig Wittgenstein, who resigned his professorship in 1947. In 1947 Dyson published two papers in number theory. Friends and colleagues described him as shy and self-effacing, with a contrarian streak that his friends found refreshing but intellectual opponents found exasperating. "I have the sense that when consensus is forming like ice hardening on a lake, Dyson will do his best to chip at the ice", Steven Weinberg said of him. His friend the neurologist and author Oliver Sacks said: "A favourite word of Freeman's about doing science and being creative is the word 'subversive'. He feels it's rather important not only to be not orthodox, but to be subversive, and he's done that all his life." Career in the United States. On G. I. Taylor's advice and recommendation, Dyson moved to the United States in 1947 as a Commonwealth Fellow for postgraduate study with Hans Bethe at Cornell University (1947–1948). There he made the acquaintance of Richard Feynman. Dyson recognized the brilliance of Feynman and worked with him. He then moved to the Institute for Advanced Study (1948–1949), before returning to England (1949–51), where he was a research fellow at the University of Birmingham. In 1949, Dyson demonstrated the equivalence of two formulations of quantum electrodynamics (QED): Richard Feynman's diagrams and the operator method developed by Julian Schwinger and Shin'ichirō Tomonaga. He was the first person after their creator to appreciate the power of Feynman diagrams and his paper written in 1948 and published in 1949 was the first to make use of them. He said in that paper that Feynman diagrams were not just a computational tool but a physical theory and developed rules for the diagrams that completely solved the renormalization problem. Dyson's paper and his lectures presented Feynman's theories of QED in a form that other physicists could understand, facilitating the physics community's acceptance of Feynman's work. J. Robert Oppenheimer, in particular, was persuaded by Dyson that Feynman's new theory was as valid as Schwinger's and Tomonaga's. Also in 1949, in related work, Dyson invented the Dyson series. It was this paper that inspired John Ward to derive his celebrated Ward–Takahashi identity. Dyson joined the faculty at Cornell as a physics professor in 1951, though he still had no doctorate. In December 1952, Oppenheimer, the director of the Institute for Advanced Study in Princeton, New Jersey, offered Dyson a lifetime appointment at the institute, "for proving me wrong", in Oppenheimer's words. Dyson remained at the Institute until the end of his career. In 1957 he became a US citizen. From 1957 to 1961 Dyson worked on Project Orion, which proposed the possibility of space-flight using nuclear pulse propulsion. A prototype was demonstrated using conventional explosives, but the 1963 Partial Test Ban Treaty, in which Dyson was involved and which he supported, permitted only underground nuclear weapons testing, and the project was abandoned in 1965. In 1958 Dyson was a member of the design team under Edward Teller for TRIGA, a small, inherently safe nuclear reactor used throughout the world in hospitals and universities for the production of medical isotopes. In 1966, independently of Elliott H. Lieb and Walter Thirring, Dyson and Andrew Lenard published a paper proving that the Pauli exclusion principle plays the main role in the stability of matter. 
Hence it is not the electromagnetic repulsion between outer-shell orbital electrons that prevents two stacked wood blocks from coalescing into a single piece, but the exclusion principle applied to electrons and protons that generates the classical macroscopic normal force. In condensed matter physics, Dyson also analysed the phase transition of the Ising model in one dimension and spin waves. Dyson also did work in a variety of topics in mathematics, such as topology, analysis, number theory and random matrices. In 1973 the number theorist Hugh Lowell Montgomery was visiting the Institute for Advanced Study and had just made his pair correlation conjecture concerning the distribution of the zeros of the Riemann zeta function. He showed his formula to the mathematician Atle Selberg, who said that it looked like something in mathematical physics and that Montgomery should show it to Dyson, which he did. Dyson recognized the formula as the pair correlation function of the Gaussian unitary ensemble, which physicists have studied extensively. This suggested that there might be an unexpected connection between the distribution of primes (2, 3, 5, 7, 11, ...) and the energy levels in the nuclei of heavy elements such as uranium. Around 1979 Dyson worked with the Institute for Energy Analysis on climate studies. This group, under Alvin Weinberg's direction, pioneered multidisciplinary climate studies, including a strong biology group. Also during the 1970s, Dyson worked on climate studies conducted by the JASON defense advisory group. Dyson retired from the Institute for Advanced Study in 1994. In 1998 he joined the board of the Solar Electric Light Fund. In 2003 he was president of the Space Studies Institute, the space research organization founded by Gerard K. O'Neill; in 2013 he was on its board of trustees. Dyson was a longtime member of the JASON group. Dyson won numerous scientific awards, but never a Nobel Prize. Nobel physics laureate Steven Weinberg said that the Nobel committee "fleeced" Dyson, but Dyson remarked in 2009, "I think it's almost true without exception if you want to win a Nobel Prize, you should have a long attention span, get hold of some deep and important problem and stay with it for ten years. That wasn't my style." Dyson was a regular contributor to "The New York Review of Books", and published a memoir, "Maker of Patterns: An Autobiography Through Letters" in 2018. In 2012 Dyson published (with William H. Press) a fundamental new result about the prisoner's dilemma in the Proceedings of the National Academy of Sciences of the United States of America. He wrote a foreword to a treatise on psychic phenomena in which he concluded that "ESP is real... but cannot be tested with the clumsy tools of science". Personal life and death. Dyson married his first wife, the Swiss mathematician Verena Huber, on 11 August 1950. They had two children, Esther and George, before divorcing in 1958. In November 1958 he married Imme Jung (born 1936) and they had four more children: Dorothy, Mia, Rebecca, and Emily Dyson. Dyson's eldest daughter, Esther, is a digital technology consultant and investor; she has been called "the most influential woman in all the computer world". His son George is a historian of science, one of whose books is "Project Orion: The Atomic Spaceship 1957–1965". Dyson died on 28 February 2020 at a hospital near Princeton, New Jersey, from complications following a fall. He was 96. Concepts. Biotechnology and genetic engineering. 
Dyson admitted his record as a prophet was mixed, but thought it is better to be wrong than vague, and that in meeting the world's material needs, technology must be beautiful and cheap. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;My book "The Sun, the Genome, and the Internet" (1999) describes a vision of green technology enriching villages all over the world and halting the migration from villages to megacities. The three components of the vision are all essential: the sun to provide energy where it is needed, the genome to provide plants that can convert sunlight into chemical fuels cheaply and efficiently, the Internet to end the intellectual and economic isolation of rural populations. With all three components in place, every village in Africa could enjoy its fair share of the blessings of civilization. Dyson coined the term "green technologies", based on biology instead of physics or chemistry, to describe new species of microorganisms and plants designed to meet human needs. He argued that such technologies would be based on solar power rather than the fossil fuels whose use he saw as part of what he calls "gray technologies" of industry. He believed that genetically engineered crops, which he described as green, can help end rural poverty, with a movement based in ethics to end the inequitable distribution of wealth on the planet. "The Origin of Life". Dyson favoured the dual origin theory: that life first formed as cells, then enzymes, and finally, much later, genes. This was first propounded by the Russian biochemist, Alexander Oparin. J. B. S. Haldane developed the same theory independently. In Dyson's version of the theory, life evolved in two stages, widely separated in time. Because of the biochemistry, he regards it as too unlikely that genes could have developed fully blown in one process. Current cells contain adenosine triphosphate or ATP and adenosine 5'-monophosphate or AMP, which greatly resemble each other but have completely different functions. ATP transports energy around the cell, and AMP is part of RNA and the genetic apparatus. Dyson proposed that in a primitive early cell containing ATP and AMP, RNA and replication came into existence only because of the similarity between AMP and RNA. He suggested that AMP was produced when ATP molecules lost two of their phosphate radicals, and then one cell somewhere performed Eigen's experiment and produced RNA. There is no direct evidence for the dual origin theory, because once genes developed, they took over, obliterating all traces of the earlier forms of life. In the first origin, the cells were probably just drops of water held together by surface tension, teeming with enzymes and chemical reactions, and having a primitive kind of growth or replication. When the liquid drop became too big, it split into two drops. Many complex molecules formed in these "little city economies" and the probability that genes would eventually develop in them was much greater than in the prebiotic environment. Dyson sphere. In 1960 Dyson wrote a short paper for the journal "Science" titled "Search for Artificial Stellar Sources of Infrared Radiation". In it he speculated that a technologically advanced extraterrestrial civilization might surround its native star with artificial structures to maximize the capture of the star's energy. 
Eventually, the civilization would enclose the star, intercepting electromagnetic radiation with wavelengths from visible light downward and radiating waste heat outward as infrared radiation. One method of searching for extraterrestrial civilizations would be to look for large objects radiating in the infrared range of the electromagnetic spectrum. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;One should expect that, within a few thousand years of its entering the stage of industrial development, any intelligent species should be found occupying an artificial biosphere which surrounds its parent star. Dyson conceived that such structures would be clouds of asteroid-sized space habitats, though science fiction writers have preferred a solid structure: either way, such an artifact is often called a Dyson sphere, although Dyson used the term "shell". Dyson said that he used the term "artificial biosphere" in the article to mean a habitat, not a shape. The general concept of such an energy-transferring shell had been created decades earlier by science fiction writer Olaf Stapledon in his 1937 novel "Star Maker", a source which Dyson credited publicly. Dyson tree. Dyson also proposed the creation of a "Dyson tree", a genetically engineered plant capable of growing inside a comet. He suggested that comets could be engineered to contain hollow spaces filled with a breathable atmosphere, thus providing self-sustaining habitats for humanity in the outer Solar System. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Plants could grow greenhouses… just as turtles grow shells and polar bears grow fur and polyps build coral reefs in tropical seas. These plants could keep warm by the light from a distant Sun and conserve the oxygen that they produce by photosynthesis. The greenhouse would consist of a thick skin providing thermal insulation, with small transparent windows to admit sunlight. Outside the skin would be an array of simple lenses, focusing sunlight through the windows into the interior… Groups of greenhouses could grow together to form extended habitats for other species of plants and animals. Space colonies. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;I've done some historical research on the costs of the Mayflower's voyage, and on the Mormons' emigration to Utah, and I think it's possible to go into space on a much smaller scale. A cost on the order of $40,000 per person [1978 dollars, $181,600 in 2022 dollars] would be the target to shoot for; in terms of real wages, that would make it comparable to the colonization of America. Unless it's brought down to that level it's not really interesting to me, because otherwise, it would be a luxury that only governments could afford. Dyson was interested in space travel since he was a child, reading such science fiction classics as Olaf Stapledon's "Star Maker". As a young man, he worked for General Atomics on the nuclear-powered Orion spacecraft. He hoped Project Orion would put men on Mars by 1965, and Saturn by 1970. For a quarter-century, Dyson was unhappy about how the government conducted space travel: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;The problem is, of course, that they can't afford to fail. The rules of the game are that you don't take a chance, because if you fail, then probably your whole program gets wiped out. Dyson still hoped for cheap space travel, but was resigned to waiting for private entrepreneurs to develop something new and inexpensive. 
&lt;templatestyles src="Template:Blockquote/styles.css" /&gt;No law of physics or biology forbids cheap travel and settlement all over the solar system and beyond. But it is impossible to predict how long this will take. Predictions of the dates of future achievements are notoriously fallible. My guess is that the era of cheap unmanned missions will be the next fifty years, and the era of cheap manned missions will start sometime late in the twenty-first century. Any affordable program of manned exploration must be centred in biology, and its time frame tied to the time frame of biotechnology; a hundred years, roughly the time it will take us to learn to grow warm-blooded plants, is probably reasonable. Space exploration. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;A direct search for life in Europa's ocean would today be prohibitively expensive. Impacts on Europa give us an easier way to look for evidence of life there. Every time a major impact occurs on Europa, a vast quantity of water is splashed from the ocean into the space around Jupiter. Some of the water evaporates, and some condenses into snow. Creatures living in the water far enough from the impact have a chance of being splashed intact into space and quickly freeze-dried. Therefore, an easy way to look for evidence of life in Europa's ocean is to look for freeze-dried fish in the ring of space debris orbiting Jupiter. Freeze-dried fish orbiting Jupiter is a fanciful notion, but nature in the biological realm has a tendency to be fanciful. Nature is usually more imaginative than we are. …To have the best chance of success, we should keep our eyes open for all possibilities. Dyson's eternal intelligence. Dyson proposed that an immortal group of intelligent beings could escape the prospect of heat death by extending time to infinity while expending only a finite amount of energy. This is also known as the Dyson scenario. Dyson's transform. His concept "Dyson's transform" led to one of the most important lemmas of Olivier Ramaré's theorem: that every even integer can be written as a sum of no more than six primes. Dyson series. The Dyson series, the formal solution of an explicitly time-dependent Schrödinger equation by iteration, and the corresponding Dyson time-ordering operator formula_0 an entity of basic importance in the mathematical formulation of quantum mechanics, are also named after Dyson. Quantum physics and prime numbers. Dyson and Hugh Montgomery discovered an intriguing connection between quantum physics and Montgomery's pair correlation conjecture about the zeros of the zeta function. The primes 2, 3, 5, 7, 11, 13, 17, 19... are described by the Riemann zeta function, and Dyson had previously developed a description of quantum physics based on m by m arrays of totally random numbers. Montgomery and Dyson discovered that the "eigenvalues" of these matrices are spaced apart in exactly the same manner as Montgomery conjectured for the nontrivial zeros of the zeta function. Andrew Odlyzko has verified the conjecture on a computer, using his Odlyzko–Schönhage algorithm to calculate many zeros. There are in nature one, two, and three-dimensional quasicrystals. Mathematicians define a quasicrystal as a set of discrete points whose Fourier transform is also a set of discrete points. Odlyzko has done extensive computations of the Fourier transform of the nontrivial zeros of the zeta function, and they seem to form a one-dimensional quasicrystal. This would in fact follow from the Riemann hypothesis. 
Rank of a partition. In number theory and combinatorics, the rank of an integer partition is a certain integer associated with the partition. Dyson introduced the concept in a paper published in the journal "Eureka". It was presented in the context of a study of certain congruence properties of the partition function discovered by the mathematician Srinivasa Ramanujan. Crank of a partition. In number theory, the crank of a partition is a certain integer associated with the partition. Dyson first introduced the term without a definition in a 1944 paper in a journal published by the Mathematics Society of Cambridge University. He then gave a list of properties this yet-to-be-defined quantity should have. In 1988, George E. Andrews and Frank Garvan discovered a definition for the crank satisfying the properties Dyson had hypothesized. Astrochicken. Astrochicken is the name given to a thought experiment Dyson expounded in his book "Disturbing the Universe" (1979). He contemplated how humanity could build a small, self-replicating automaton that could explore space more efficiently than a crewed craft could. He attributed the general idea to John von Neumann, based on a lecture von Neumann gave in 1948 titled "The General and Logical Theory of Automata". Dyson expanded on von Neumann's automata theories and added a biological component. Lumpers and splitters. Dyson suggested that philosophers can be broadly if simplistically, divided into lumpers and splitters. These roughly correspond to Platonists, who regard the world as made up of ideas, and materialists, who imagine it divided into atoms. Views. Climate change. Dyson agreed that technically humans and additional CO2 emissions contribute to warming. However, he felt that the benefits of additional CO2 outweighed any associated negative effects. He said that in many ways increased atmospheric carbon dioxide is beneficial, and that it is increasing biological growth, agricultural yields and forests. He believed that existing simulation models of climate change fail to account for some important factors, and that the results thus contain too great a margin of error to reliably predict trends. He argued that political efforts to reduce the causes of climate change distract from other global problems that should take priority, and viewed the acceptance of climate change as comparable to religion. In 2009, Dyson criticised James Hansen's climate-change activism. "The person who is really responsible for this overestimate of global warming is Jim Hansen. He consistently exaggerates all the dangers... Hansen has turned his science into ideology." Hansen responded that Dyson "doesn't know what he's talking about... If he's going to wander into something with major consequences for humanity and other life on the planet, then he should first do his homework- which he obviously has not done on global warming". Dyson replied that "[m]y objections to the global warming propaganda are not so much over the technical facts, about which I do not know much, but it's rather against the way those people behave and the kind of intolerance to criticism that a lot of them have." Dyson stated in an interview that the argument with Hansen was exaggerated by "The New York Times", stating that he and Hansen are "friends, but we don't agree on everything." Since originally taking an interest in climate studies in the 1970s, Dyson suggested that carbon dioxide levels in the atmosphere could be controlled by planting fast-growing trees. 
He calculated that it would take a trillion trees to remove all carbon from the atmosphere. In a 2014 interview he said, "What I'm convinced of is that we don't understand climate… It will take a lot of very hard work before that question is settled." Dyson was a member of the academic advisory council of the Global Warming Policy Foundation. Warfare and weapons. At RAF Bomber Command, Dyson and colleagues proposed removing two gun turrets from Avro Lancaster bombers, to cut the catastrophic losses due to German fighters in the Battle of Berlin. A Lancaster without turrets could fly faster and be much more manoeuvrable. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;All our advice to the commander in chief [went] through the chief of our section, who was a career civil servant. His guiding principle was to tell the commander in chief things that the commander in chief liked to hear… To push the idea of ripping out gun turrets, against the official mythology of the gallant gunner defending his crew mates… was not the kind of suggestion the commander in chief liked to hear. On hearing the news of the bombing of Hiroshima: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; I agreed emphatically with Henry Stimson. Once we had got ourselves into the business of bombing cities, we might as well do the job competently and get it over with. I felt better that morning than I had felt for years… Those fellows who had built the atomic bombs obviously knew their stuff… Later, much later, I would remember [the downside]. &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;I am convinced that to avoid nuclear war it is not sufficient to be afraid of it. It is necessary to be afraid, but it is equally necessary to understand. And the first step in understanding is to recognize that the problem of nuclear war is basically not technical but human and historical. If we are to avoid destruction we must first of all understand the human and historical context out of which destruction arises. In 1967, in his capacity as a military adviser, Dyson wrote an influential paper on the issue of possible US use of tactical nuclear weapons in the Vietnam War. When a general said in a meeting, "I think it might be a good idea to throw in a nuke now and then, just to keep the other side guessing…" Dyson became alarmed and obtained permission to write a report on the pros and cons of using such weapons from a purely military point of view. (This report, "Tactical Nuclear Weapons in Southeast Asia", published by the Institute for Defense Analyses, was obtained, with some redactions, by the Nautilus Institute for Security and Sustainability under the Freedom of Information act in 2002.) It was sufficiently objective that both sides of the debate based their arguments on it. Dyson says that the report showed that, even from a narrow military point of view, the US was better off not using nuclear weapons. Dyson opposed the Vietnam War, the Gulf War and the invasion of Iraq. He supported Barack Obama in the 2008 US presidential election and "The New York Times" described him as a political liberal. He was one of 29 leading US scientists who wrote Obama a strongly supportive letter about his administration's 2015 nuclear deal with Iran. Science and religion. Dyson was raised in what he described as a "watered-down Church of England Christianity". He was a nondenominational Christian and attended various churches, from Presbyterian to Roman Catholic. 
Regarding doctrinal or Christological issues, he said, "I am neither a saint nor a theologian. To me, good works are more important than theology." &lt;templatestyles src="Template:Blockquote/styles.css" /&gt; Dyson partially disagreed with the remark by his fellow physicist Steven Weinberg that "With or without religion, good people can behave well and bad people can do evil; but for good people to do evil – that takes religion." &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Weinberg's statement is true as far as it goes, but it is not the whole truth. To make it the whole truth, we must add an additional clause: "And for bad people to do good things – that [also] takes religion." The main point of Christianity is that it is a religion for sinners. Jesus made that very clear. When the Pharisees asked his disciples, "Why eateth your Master with publicans and sinners?" he said, "I come to call not the righteous but sinners to repentance." Only a small fraction of sinners repent and do good things but only a small fraction of good people are led by their religion to do bad things. Dyson identified himself as agnostic about some of the specifics of his faith. For example, in reviewing "The God of Hope and the End of the World" by John Polkinghorne, Dyson wrote: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;I am myself a Christian, a member of a community that preserves an ancient heritage of great literature and great music, provides help and counsel to young and old when they are in trouble, educates children in moral responsibility, and worships God in its own fashion. But I find Polkinghorne's theology altogether too narrow for my taste. I have no use for a theology that claims to know the answers to deep questions but bases its arguments on the beliefs of a single tribe. I am a practicing Christian but not a believing Christian. To me, to worship God means to recognize that mind and intelligence are woven into the fabric of our universe in a way that altogether surpasses our comprehension. In "The God Delusion" (2006), evolutionary biologist and atheist activist Richard Dawkins singled out Dyson for accepting the Templeton Prize in 2000: "It would be taken as an endorsement of religion by one of the world's most distinguished physicists." In 2000, Dyson declared that he was a (non-denominational) Christian, and he disagreed with Dawkins on several subjects, such as that group selection is less important than individual selection on the subject of evolution. Publications. &lt;templatestyles src="Refbegin/styles.css" /&gt; References. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathcal T\\,," } ]
https://en.wikipedia.org/wiki?curid=11397
11397248
Tajima's D
Population genetic test statistic Tajima's D is a population genetic test statistic created by and named after the Japanese researcher Fumio Tajima. Tajima's D is computed as the difference between two measures of genetic diversity: the mean number of pairwise differences and the number of segregating sites, each scaled so that they are expected to be the same in a neutrally evolving population of constant size. The purpose of Tajima's D test is to distinguish between a DNA sequence evolving randomly ("neutrally") and one evolving under a non-random process, including directional selection or balancing selection, demographic expansion or contraction, genetic hitchhiking, or introgression. A randomly evolving DNA sequence contains mutations with no effect on the fitness and survival of an organism. The randomly evolving mutations are called "neutral", while mutations under selection are "non-neutral". For example, a mutation that causes prenatal death or severe disease would be expected to be under selection. In the population as a whole, the frequency of a neutral mutation fluctuates randomly (i.e. the percentage of individuals in the population with the mutation changes from one generation to the next, and this percentage is equally likely to go up or down) through genetic drift. The strength of genetic drift depends on population size. If a population is at a constant size with constant mutation rate, the population will reach an equilibrium of gene frequencies. This equilibrium has important properties, including the number of segregating sites formula_0, and the number of nucleotide differences between pairs sampled (these are called pairwise differences). To standardize the pairwise differences, the mean or 'average' number of pairwise differences is used. This is simply the sum of the pairwise differences divided by the number of pairs, and is often symbolized by formula_1. The purpose of Tajima's test is to identify sequences which do not fit the neutral theory model at equilibrium between mutation and genetic drift. In order to perform the test on a DNA sequence or gene, you need to sequence homologous DNA for at least 3 individuals. Tajima's statistic computes a standardized measure of the total number of segregating sites (these are DNA sites that are polymorphic) in the sampled DNA and the average number of mutations between pairs in the sample. The two quantities whose values are compared are both method of moments estimates of the population genetic parameter theta, and so are expected to equal the same value. If these two numbers only differ by as much as one could reasonably expect by chance, then the null hypothesis of neutrality cannot be rejected. Otherwise, the null hypothesis of neutrality is rejected. Scientific explanation. Under the neutral theory model, for a population at constant size at equilibrium: formula_2 for diploid DNA, and formula_3 for haploid. In the above formulas, "S" is the number of segregating sites, "n" is the number of samples, "N" is the effective population size, formula_4 is the mutation rate at the examined genomic locus, and "i" is the index of summation. But selection, demographic fluctuations and other violations of the neutral model (including rate heterogeneity and introgression) will change the expected values of formula_0 and formula_1, so that they are no longer expected to be equal. The difference in the expectations for these two variables (which can be positive or negative) is the crux of Tajima's "D" test statistic. 
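The two quantities being compared can be made concrete with a short sketch. The code below is an illustration added here, not part of Tajima's presentation: it computes the average number of pairwise differences and the scaled number of segregating sites S/a1 for a small set of equal-length sequences. The 0/1 encoding, the toy alignment and the function names are hypothetical choices made for the example.

```python
# Illustrative sketch (assumed input: equal-length strings; a column is
# segregating if it shows more than one state).
from itertools import combinations

def mean_pairwise_differences(seqs):
    """Average number of differing sites over all pairs of sequences (pi)."""
    pairs = list(combinations(seqs, 2))
    return sum(sum(a != b for a, b in zip(s, t)) for s, t in pairs) / len(pairs)

def segregating_sites(seqs):
    """Number of alignment columns with more than one state (S)."""
    return sum(len(set(col)) > 1 for col in zip(*seqs))

def watterson_theta(seqs):
    """S divided by a1 = sum of 1/i for i = 1 .. n-1."""
    a1 = sum(1.0 / i for i in range(1, len(seqs)))
    return segregating_sites(seqs) / a1

# Hypothetical 0/1-encoded alignment of five individuals.
alignment = ["00000000", "01000100", "01000000", "00010100", "00010001"]
print(mean_pairwise_differences(alignment))  # first estimate of theta
print(watterson_theta(alignment))            # second estimate of theta
```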
formula_5 is calculated by taking the difference between the two estimates of the population genetics parameter formula_6. This difference is called formula_7, and D is calculated by dividing formula_7 by the square root of its variance formula_8 (its standard deviation, by definition). formula_9 Fumio Tajima demonstrated by computer simulation that the formula_5 statistic described above could be modeled using a beta distribution. If the formula_5 value for a sample of sequences is outside the confidence interval then one can reject the null hypothesis of neutral mutation for the sequence in question. However, in real-world use, one must be careful as past population changes (for instance, a population bottleneck) can bias the value of the formula_5 statistic. Mathematical details. formula_10 where formula_11 and formula_12 are two estimates of the expected number of single nucleotide polymorphisms (SNPs) between two DNA sequences under the neutral mutation model in a sample size formula_13 from an effective population size formula_14. The first estimate is the average number of SNPs found in formula_15 pairwise comparisons of sequences formula_16 in the sample, formula_17 The second estimate is derived from the expected value of formula_0, the total number of polymorphisms in the sample formula_18 Tajima defines formula_19, whereas Hartl &amp; Clark use a different symbol to define the same parameter formula_20. Example. Suppose you are a geneticist studying an unknown gene. As part of your research you get DNA samples from four random people (plus yourself). For simplicity, you label your sequence as a string of zeroes, and for the other four people you put a zero when their DNA is the same as yours and a one when it is different. (For this example, the specific type of difference is not important.) Notice the four polymorphic sites (positions where someone differs from you), which in this example are positions 3, 7, 13 and 19. Now compare each pair of sequences and get the average number of polymorphisms between two sequences. There are "five choose two" (ten) comparisons that need to be done. Person Y is you! You vs A: 3 polymorphisms; You vs B: 2 polymorphisms; You vs C: 2 polymorphisms; You vs D: 3 polymorphisms; A vs B: 1 polymorphism; A vs C: 3 polymorphisms; A vs D: 2 polymorphisms; B vs C: 2 polymorphisms; B vs D: 1 polymorphism; C vs D: 1 polymorphism. The average number of polymorphisms is formula_21. The second estimate of the equilibrium is M = S/a1. Since there were n = 5 individuals and S = 4 segregating sites, a1 = 1/1 + 1/2 + 1/3 + 1/4 ≈ 2.08, so M = 4/2.08 ≈ 1.92. The lower-case "d" described above is the difference between these two numbers: the average number of polymorphisms found in pairwise comparison (2) and M. Thus formula_22. Since this is a statistical test, you need to assess the significance of this value. A discussion of how to do this is provided below. Interpreting Tajima's D. A negative Tajima's D signifies an excess of low frequency polymorphisms relative to expectation, indicating population size expansion (e.g., after a bottleneck or a selective sweep). A positive Tajima's D signifies low levels of both low and high frequency polymorphisms, indicating a decrease in population size and/or balancing selection. However, calculating a conventional "p-value" associated with any Tajima's D value that is obtained from a sample is impossible. Briefly, this is because there is no way to describe the distribution of the statistic that is independent of the true, and unknown, theta parameter (no pivot quantity exists).
To circumvent this issue, several options have been proposed. In broad terms, a D close to zero is consistent with the equilibrium between mutation and genetic drift described above, a negative D reflects the excess of low frequency polymorphisms (population expansion or a recent selective sweep), and a positive D reflects the shortage of low and high frequency polymorphisms (population contraction and/or balancing selection). However, this interpretation should be made only if the D-value is deemed statistically significant. Determining significance. When performing a statistical test such as Tajima's D, the critical question is whether the value calculated for the statistic is unexpected under a null process. For Tajima's "D", the magnitude of the statistic is expected to increase the more the data deviates from a pattern expected under a population evolving according to the standard coalescent model. Tajima (1989) found an empirical similarity between the distribution of the test statistic and a beta distribution with mean zero and variance one. He estimated theta by taking Watterson's estimator and dividing it by the number of samples. Simulations have shown this distribution to be conservative, and now that computing power is more readily available this approximation is not frequently used. A more nuanced approach was presented in a paper by Simonsen et al. These authors advocated constructing a confidence interval for the true theta value, and then performing a grid search over this interval to obtain the critical values at which the statistic is significant below a particular alpha value. An alternative approach is for the investigator to perform the grid search over the values of theta which they believe to be plausible based on their knowledge of the organism under study. Bayesian approaches are a natural extension of this method. A very rough rule of thumb for significance is that values greater than +2 or less than -2 are likely to be significant. This rule is based on an appeal to asymptotic properties of some statistics, and thus +/- 2 does not actually represent a critical value for a significance test. Finally, genome-wide scans of Tajima's D in sliding windows along a chromosomal segment are often performed. With this approach, those regions that have a value of D that greatly deviates from the bulk of the empirical distribution of all such windows are reported as significant. This method does not assess significance in the traditional statistical sense, but is quite powerful given a large genomic region, and is unlikely to falsely identify interesting regions of a chromosome if only the greatest outliers are reported. References. &lt;templatestyles src="Reflist/styles.css" /&gt; External links. Computational tools: * DNAsp (Windows) * Variscan (Mac OS X, Linux, Windows) * Arlequin (Windows) * Online view of Tajima's D values in human genome * Python3 package for computation of Tajima's D * MEGA4 or MEGA5 * Bio::PopGen::Statistics in BioPerl
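As an illustration of the genome-wide sliding-window scan described under Determining significance, the sketch below splits an alignment into fixed-width windows, computes D in each with a user-supplied routine, and reports the windows falling in the extreme tails of the empirical distribution. The window width, step, tail fraction and the tajimas_d callback are hypothetical choices made for the example; in practice one of the tools listed above would normally be used.

```python
# Illustrative sketch: sliding-window outlier scan for Tajima's D.
# `tajimas_d` stands in for any routine that returns D for an alignment slice;
# it is assumed to be supplied by the caller and is not defined here.

def sliding_windows(seqs, width, step):
    """Yield (start, window) pairs, where a window is a list of column slices."""
    length = len(seqs[0])
    for start in range(0, length - width + 1, step):
        yield start, [s[start:start + width] for s in seqs]

def scan_outliers(seqs, tajimas_d, width=10_000, step=5_000, tail=0.01):
    """Return (start, D) for windows whose D lies in the extreme tails."""
    results = [(start, tajimas_d(win))
               for start, win in sliding_windows(seqs, width, step)]
    ordered = sorted(d for _, d in results)
    low_cut = ordered[int(tail * len(ordered))]
    high_cut = ordered[int((1.0 - tail) * len(ordered)) - 1]
    return [(start, d) for start, d in results if d <= low_cut or d >= high_cut]
```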
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "\\pi" }, { "math_id": 2, "text": "E[\\pi]=\\theta=E\\left[\\frac{S}{\\sum_{i=1}^{n-1} \\frac{1}{i}}\\right]=4N\\mu" }, { "math_id": 3, "text": "E[\\pi]=\\theta=E\\left[\\frac{S}{\\sum_{i=1}^{n-1} \\frac{1}{i}}\\right]=2N\\mu" }, { "math_id": 4, "text": "\\mu" }, { "math_id": 5, "text": "D\\," }, { "math_id": 6, "text": "\\theta\\," }, { "math_id": 7, "text": "d\\," }, { "math_id": 8, "text": "\\sqrt{\\hat{V}(d)}" }, { "math_id": 9, "text": "\nD=\\frac\n{d}\n{\\sqrt\n{\\hat{V}(d)}\n} \n" }, { "math_id": 10, "text": "\nD=\\frac\n{d}\n{\\sqrt\n{\\hat{V}(d)}\n} = \n\\frac\n{\\hat{k} -\n\\frac{S}{a_1}\n}\n{\\sqrt\n{[e_1S+e_2S(S-1)]}\n}\n" }, { "math_id": 11, "text": "\\hat{k}\\," }, { "math_id": 12, "text": "\\frac{S}{a_1}" }, { "math_id": 13, "text": "n\\," }, { "math_id": 14, "text": "N" }, { "math_id": 15, "text": "n \\choose 2" }, { "math_id": 16, "text": "(i,j)" }, { "math_id": 17, "text": "\n\\hat{k}=\n\\frac\n{\n\\sum\\sum_{i<j} k_{ij}\n}\n{\n\\binom{n}{2}\n}.\n" }, { "math_id": 18, "text": "\nE(S)=a_1M.\n" }, { "math_id": 19, "text": "M=4N\\mu" }, { "math_id": 20, "text": "\\theta=4N\\mu" }, { "math_id": 21, "text": "{3 + 2 + 2 + 3 + 1 + 3 + 2 + 2 + 1 + 1\\over 10} = 2" }, { "math_id": 22, "text": "d = 2 - 1.92= .08" } ]
https://en.wikipedia.org/wiki?curid=11397248
11397469
Population model
Mathematical model A population model is a type of mathematical model that is applied to the study of population dynamics. Rationale. Models allow a better understanding of how complex interactions and processes work. Modeling of dynamic interactions in nature can provide a manageable way of understanding how numbers change over time or in relation to each other. Many patterns can be noticed by using population modeling as a tool. Ecological population modeling is concerned with the changes in parameters such as population size and age distribution within a population. This might be due to interactions with the environment, individuals of their own species, or other species. Population models are used to determine maximum harvest for agriculturists, to understand the dynamics of biological invasions, and for environmental conservation. Population models are also used to understand the spread of parasites, viruses, and disease. They are also useful when species become endangered: population models can track fragile species and guide work to curb their decline. History. Late 18th-century biologists began to develop techniques in population modeling in order to understand the dynamics by which populations of living organisms grow and shrink. While contemplating the fate of humankind, Thomas Malthus was one of the first to note that populations grew with a geometric pattern. One of the most basic and most important models of population growth was the logistic model of population growth formulated by Pierre François Verhulst in 1838. The logistic model takes the shape of a sigmoid curve and describes the growth of a population as exponential, followed by a decrease in growth, and bounded by a carrying capacity due to environmental pressures. Population modeling became of particular interest to biologists in the 20th century as pressure on limited means of sustenance due to increasing human populations in parts of Europe was noticed by biologists like Raymond Pearl. In 1921 Pearl invited physicist Alfred J. Lotka to assist him in his lab. Lotka developed paired differential equations that showed the effect of a parasite on its prey. The mathematician Vito Volterra derived the relationship between two species independently of Lotka. Together, Lotka and Volterra formed the Lotka–Volterra model for competition that applies the logistic equation to two species illustrating competition, predation, and parasitism interactions between species. In 1939 contributions to population modeling were given by Patrick Leslie as he began work in biomathematics. Leslie emphasized the importance of constructing a life table in order to understand the effect that key life history strategies played in the dynamics of whole populations. Matrix algebra was used by Leslie in conjunction with life tables to extend the work of Lotka. Matrix models of populations calculate the growth of a population with life history variables. Later, Robert MacArthur and E. O. Wilson characterized island biogeography. The equilibrium model of island biogeography describes the number of species on an island as an equilibrium of immigration and extinction. The logistic population model, the Lotka–Volterra model of community ecology, life table matrix modeling, the equilibrium model of island biogeography and variations thereof are the basis for ecological population modeling today. Equations.
Logistic growth equation: formula_0 Competitive Lotka–Volterra equations: formula_1 Island biogeography: formula_2 Species–area relationship: formula_3 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
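As a small illustration of the logistic growth equation listed above, the sketch below integrates dN/dt = rN(1 - N/K) with a simple Euler step; the growth rate, carrying capacity, initial population, step size and number of steps are arbitrary values chosen for the example.

```python
# Illustrative sketch: Euler integration of logistic growth, dN/dt = r*N*(1 - N/K).
def logistic_growth(n0, r, k, dt, steps):
    """Return the population trajectory [N(0), N(dt), ..., N(steps*dt)]."""
    trajectory = [n0]
    n = n0
    for _ in range(steps):
        n += r * n * (1 - n / k) * dt
        trajectory.append(n)
    return trajectory

# Arbitrary example parameters: 10 individuals, growth rate 0.5 per unit time,
# carrying capacity 1000, time step 0.1 over 200 steps.
population = logistic_growth(n0=10, r=0.5, k=1000, dt=0.1, steps=200)
print(round(population[-1]))  # approaches the carrying capacity K
```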
[ { "math_id": 0, "text": "\\frac{dN}{dt} = rN\\left(1-\\frac{N}{K}\\right)\\," }, { "math_id": 1, "text": "\\frac{dN_1}{dt} = r_1 N_1\\frac{K_1-N_1 - \\alpha N_2}{K_1}\\," }, { "math_id": 2, "text": "S = \\frac{IP}{I+E}" }, { "math_id": 3, "text": "\\log(S) = \\log(c)+z \\log(A)\\," } ]
https://en.wikipedia.org/wiki?curid=11397469
1139926
Hyperbolic sector
Region of the Cartesian plane bounded by a hyperbola and two radii A hyperbolic sector is a region of the Cartesian plane bounded by a hyperbola and two rays from the origin to it. For example, the two points ("a", 1/"a") and ("b", 1/"b") on the rectangular hyperbola "xy" = 1, or the corresponding region when this hyperbola is re-scaled and its orientation is altered by a rotation leaving the center at the origin, as with the unit hyperbola. A hyperbolic sector in standard position has "a" = 1 and "b" &gt; 1. Hyperbolic sectors are the basis for the hyperbolic functions. Area. The area of a hyperbolic sector in standard position is natural logarithm of "b" . Proof: Integrate under 1/"x" from 1 to "b", add triangle {(0, 0), (1, 0), (1, 1)}, and subtract triangle {(0, 0), ("b", 0), ("b", 1/"b")} (both triangles of which have the same area). When in standard position, a hyperbolic sector corresponds to a positive hyperbolic angle at the origin, with the measure of the latter being defined as the area of the former. Hyperbolic triangle. When in standard position, a hyperbolic sector determines a hyperbolic triangle, the right triangle with one vertex at the origin, base on the diagonal ray "y" = "x", and third vertex on the hyperbola formula_0 with the hypotenuse being the segment from the origin to the point ("x, y") on the hyperbola. The length of the base of this triangle is formula_1 and the altitude is formula_2 where "u" is the appropriate hyperbolic angle. The analogy between circular and hyperbolic functions was described by Augustus De Morgan in his "Trigonometry and Double Algebra" (1849). William Burnside used such triangles, projecting from a point on the hyperbola "xy" = 1 onto the main diagonal, in his article "Note on the addition theorem for hyperbolic functions". Hyperbolic logarithm. It is known that f("x") = "x""p" has an algebraic antiderivative except in the case "p" = –1 corresponding to the quadrature of the hyperbola. The other cases are given by Cavalieri's quadrature formula. Whereas quadrature of the parabola had been accomplished by Archimedes in the third century BC (in "The Quadrature of the Parabola"), the hyperbolic quadrature required the invention in 1647 of a new function: Gregoire de Saint-Vincent addressed the problem of computing the areas bounded by a hyperbola. His findings led to the natural logarithm function, once called the hyperbolic logarithm since it is obtained by integrating, or finding the area, under the hyperbola. Before 1748 and the publication of Introduction to the Analysis of the Infinite, the natural logarithm was known in terms of the area of a hyperbolic sector. Leonhard Euler changed that when he introduced transcendental functions such as 10x. Euler identified e as the value of "b" producing a unit of area (under the hyperbola or in a hyperbolic sector in standard position). Then the natural logarithm could be recognized as the inverse function to the transcendental function ex. To accommodate the case of negative logarithms and the corresponding negative hyperbolic angles, different hyperbolic sectors are constructed according to whether "x" is greater or less than one. A variable right triangle with area 1/2 is formula_3 The isosceles case is formula_4 The natural logarithm is known as the area under "y" = 1/"x" between one and "x". 
A positive hyperbolic angle is given by the area of formula_5 A negative hyperbolic angle is given by the "negative" of the area formula_6 This convention is in accord with a negative natural logarithm for "x" in (0,1). Hyperbolic geometry. When Felix Klein's book on non-Euclidean geometry was published in 1928, it provided a foundation for the subject by reference to projective geometry. To establish hyperbolic measure on a line, Klein noted that the area of a hyperbolic sector provided visual illustration of the concept. Hyperbolic sectors can also be drawn to the hyperbola formula_7. The area of such hyperbolic sectors has been used to define hyperbolic distance in a geometry textbook. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
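As a numerical check of the area result above, the sketch below follows the stated proof: it approximates the integral of 1/x from 1 to b, adds the triangle {(0, 0), (1, 0), (1, 1)} and subtracts the triangle {(0, 0), (b, 0), (b, 1/b)} (both have area 1/2, so they cancel), and compares the result with ln b. The quadrature step count is an arbitrary choice for the example.

```python
# Illustrative sketch: area of a hyperbolic sector in standard position vs ln(b).
import math

def sector_area(b, steps=100_000):
    """Midpoint-rule integral of 1/x on [1, b], plus and minus two triangles
    of area 1/2 each (they cancel), per the construction in the text."""
    dx = (b - 1.0) / steps
    integral = sum(1.0 / (1.0 + (i + 0.5) * dx) for i in range(steps)) * dx
    return integral + 0.5 - 0.5

for b in (2.0, math.e, 10.0):
    print(b, sector_area(b), math.log(b))  # the last two columns agree closely
```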
[ { "math_id": 0, "text": "xy=1,\\," }, { "math_id": 1, "text": "\\sqrt 2 \\cosh u,\\," }, { "math_id": 2, "text": "\\sqrt 2 \\sinh u,\\," }, { "math_id": 3, "text": "V = \\{(x, 1/x), \\ (x,0), \\ (0,0)\\} ." }, { "math_id": 4, "text": "T = \\{(1,1),\\ (1,0),\\ (0,0)\\}." }, { "math_id": 5, "text": "\\int_1^x \\frac{dt}{t} + T - V." }, { "math_id": 6, "text": "\\int_x^1 \\frac{dt}{t} + V - T." }, { "math_id": 7, "text": "y = \\sqrt{1 + x^2}" } ]
https://en.wikipedia.org/wiki?curid=1139926
1139981
Hyperbolic angle
Argument of the hyperbolic functions In geometry, hyperbolic angle is a real number determined by the area of the corresponding hyperbolic sector of "xy" = 1 in Quadrant I of the Cartesian plane. The hyperbolic angle parametrises the unit hyperbola, which has hyperbolic functions as coordinates. In mathematics, hyperbolic angle is an invariant measure as it is preserved under hyperbolic rotation. The hyperbola "xy" = 1 is rectangular with a semi-major axis of formula_0, analogous to the magnitude of a circular angle corresponding to the area of a circular sector in a circle with radius formula_0. Hyperbolic angle is used as the independent variable for the hyperbolic functions sinh, cosh, and tanh, because these functions may be premised on hyperbolic analogies to the corresponding circular trigonometric functions by regarding a hyperbolic angle as defining a hyperbolic triangle. The parameter thus becomes one of the most useful in the calculus of real variables. Definition. Consider the rectangular hyperbola formula_1, and (by convention) pay particular attention to the "branch" formula_2. First define: the hyperbolic angle in "standard position" is the angle at formula_3 between the ray to formula_4 and the ray to formula_5, where formula_2, and the magnitude of this angle is the area of the corresponding hyperbolic sector, which turns out to be formula_6. Note that, because of the role played by the natural logarithm, the magnitude formula_6 is unbounded, and the angle is taken to be negative when formula_7, since the point formula_5 then lies on the other side of the ray through formula_4. Finally, extend the definition of "hyperbolic angle" to that subtended by any interval on the hyperbola. Suppose formula_8 are positive real numbers such that formula_9 and formula_10, so that formula_11 and formula_12 are points on the hyperbola formula_13 and determine an interval on it. Then the squeeze mapping formula_14 maps the angle formula_15 to the "standard position" angle formula_16. By the result of Gregoire de Saint-Vincent, the hyperbolic sectors determined by these angles have the same area, which is taken to be the magnitude of the angle. This magnitude is formula_17. Comparison with circular angle. A unit circle formula_18 has a circular sector with an area half of the circular angle in radians. Analogously, a unit hyperbola formula_19 has a hyperbolic sector with an area half of the hyperbolic angle. There is also a projective resolution between circular and hyperbolic cases: both curves are conic sections, and hence are treated as projective ranges in projective geometry. Given an origin point on one of these ranges, other points correspond to angles. The idea of addition of angles, basic to science, corresponds to addition of points on one of these ranges as follows: Circular angles can be characterised geometrically by the property that if two chords "P"0"P"1 and "P"0"P"2 subtend angles "L"1 and "L"2 at the centre of a circle, their sum "L"1 + "L"2 is the angle subtended by a chord "P"0"Q", where "P"0"Q" is required to be parallel to "P"1"P"2. The same construction can also be applied to the hyperbola. If "P"0 is taken to be the point (1, 1), "P"1 the point ("x"1, 1/"x"1), and "P"2 the point ("x"2, 1/"x"2), then the parallel condition requires that "Q" be the point ("x"1"x"2, (1/"x"1)(1/"x"2)). It thus makes sense to define the hyperbolic angle from "P"0 to an arbitrary point on the curve as a logarithmic function of the point's value of "x". Whereas in Euclidean geometry moving steadily in an orthogonal direction to a ray from the origin traces out a circle, in a pseudo-Euclidean plane steadily moving orthogonally to a ray from the origin traces out a hyperbola. In Euclidean space, the multiple of a given angle traces equal distances around a circle while it traces exponential distances upon the hyperbolic line. Both circular and hyperbolic angle provide instances of an invariant measure.
Arcs with an angular magnitude on a circle generate a measure on certain measurable sets on the circle whose magnitude does not vary as the circle turns or rotates. For the hyperbola the turning is by squeeze mapping, and the hyperbolic angle magnitudes stay the same when the plane is squeezed by a mapping ("x", "y") ↦ ("rx", "y" / "r"), with "r" &gt; 0 . Relation To The Minkowski Line Element. There is also a curious relation to a hyperbolic angle and the metric defined on Minkowski space. Just as two dimensional Euclidean geometry defines its line element as formula_20 the line element on Minkowski space is formula_21 Consider a curve imbedded in two dimensional Euclidean space, formula_22 Where the parameter formula_23 is a real number that runs between formula_24 and formula_25 (formula_26). The arclength of this curve in Euclidean space is computed as: formula_27 If formula_18 defines a unit circle, a single parameterized solution set to this equation is formula_28 and formula_29. Letting formula_30, computing the arclength formula_31 gives formula_32. Now doing the same procedure, except replacing the Euclidean element with the Minkowski line element, formula_33 and defining a "unit" hyperbola as formula_34 with its corresponding parameterized solution set formula_35 and formula_36, and by letting formula_37 (the hyperbolic angle), we arrive at the result of formula_38. In other words, this means just as how the circular angle can be defined as the arclength of an arc on the unit circle subtended by the same angle using the Euclidean defined metric, the hyperbolic angle is the arclength of the arc on the "unit" hyperbola subtended by the hyperbolic angle using the Minkowski defined metric. History. The quadrature of the hyperbola is the evaluation of the area of a hyperbolic sector. It can be shown to be equal to the corresponding area against an asymptote. The quadrature was first accomplished by Gregoire de Saint-Vincent in 1647 in "Opus geometricum quadrature circuli et sectionum coni". As expressed by a historian, [He made the] quadrature of a hyperbola to its asymptotes, and showed that as the area increased in arithmetic series the abscissas increased in geometric series. A. A. de Sarasa interpreted the quadrature as a logarithm and thus the geometrically defined natural logarithm (or "hyperbolic logarithm") is understood as the area under "y" = 1/"x" to the right of "x" = 1. As an example of a transcendental function, the logarithm is more familiar than its motivator, the hyperbolic angle. Nevertheless, the hyperbolic angle plays a role when the theorem of Saint-Vincent is advanced with squeeze mapping. Circular trigonometry was extended to the hyperbola by Augustus De Morgan in his textbook "Trigonometry and Double Algebra". In 1878 W.K. Clifford used the hyperbolic angle to parametrize a unit hyperbola, describing it as "quasi-harmonic motion". In 1894 Alexander Macfarlane circulated his essay "The Imaginary of Algebra", which used hyperbolic angles to generate hyperbolic versors, in his book "Papers on Space Analysis". The following year Bulletin of the American Mathematical Society published Mellen W. Haskell's outline of the hyperbolic functions. When Ludwik Silberstein penned his popular 1914 textbook on the new theory of relativity, he used the rapidity concept based on hyperbolic angle "a", where tanh "a" = "v"/"c", the ratio of velocity "v" to the speed of light. 
He wrote: It seems worth mentioning that to "unit" rapidity corresponds a huge velocity, amounting to 3/4 of the velocity of light; more accurately we have "v" = (.7616)"c" for "a" = 1. [...] the rapidity "a" = 1, [...] consequently will represent the velocity .76 "c" which is a little above the velocity of light in water. Silberstein also uses Lobachevsky's concept of angle of parallelism Π("a") to obtain cos Π("a") = "v"/"c". Imaginary circular angle. The hyperbolic angle is often presented as if it were an imaginary number, formula_39 and formula_40 so that the hyperbolic functions cosh and sinh can be presented through the circular functions. But in the Euclidean plane we might alternately consider circular angle measures to be imaginary and hyperbolic angle measures to be real scalars, formula_41 and formula_42 These relationships can be understood in terms of the exponential function, which for a complex argument formula_43 can be broken into even and odd parts formula_44 and formula_45 respectively. Then formula_46 or if the argument is separated into real and imaginary parts formula_47 the exponential can be split into the product of scaling formula_48 and rotation formula_49 formula_50 As infinite series, formula_51 The infinite series for cosine is derived from cosh by turning it into an alternating series, and the series for sine comes from making sinh into an alternating series. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
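As a numerical illustration of the Minkowski line element relation described earlier, the sketch below integrates the arclength along x = sinh t, y = cosh t from 0 to eta using the Minkowski element ds^2 = dx^2 - dy^2 and compares it with the Euclidean arclength of the same curve; the Minkowski value reproduces the hyperbolic angle eta itself. The value of eta and the step count are arbitrary choices for the example.

```python
# Illustrative sketch: arclength of x = sinh(t), y = cosh(t) for 0 <= t < eta,
# under the Minkowski line element ds^2 = dx^2 - dy^2 and the Euclidean one.
import math

def arclength(eta, minkowski=True, steps=100_000):
    dt = eta / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt
        dx_dt = math.cosh(t)  # derivative of sinh(t)
        dy_dt = math.sinh(t)  # derivative of cosh(t)
        squared = dx_dt ** 2 - dy_dt ** 2 if minkowski else dx_dt ** 2 + dy_dt ** 2
        total += math.sqrt(squared) * dt
    return total

eta = 1.5
print(arclength(eta, minkowski=True))   # ~1.5: equals the hyperbolic angle eta
print(arclength(eta, minkowski=False))  # larger: the Euclidean arclength
```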
[ { "math_id": 0, "text": "\\sqrt 2" }, { "math_id": 1, "text": "\\textstyle\\{(x,\\frac 1 x): x>0\\}" }, { "math_id": 2, "text": "x > 1" }, { "math_id": 3, "text": "(0, 0)" }, { "math_id": 4, "text": "(1, 1)" }, { "math_id": 5, "text": "\\textstyle(x, \\frac 1 x)" }, { "math_id": 6, "text": "\\operatorname{ln}x" }, { "math_id": 7, "text": "0 < x < 1" }, { "math_id": 8, "text": "a, b, c, d" }, { "math_id": 9, "text": "ab = cd = 1" }, { "math_id": 10, "text": "c > a > 1" }, { "math_id": 11, "text": "(a, b)" }, { "math_id": 12, "text": "(c, d)" }, { "math_id": 13, "text": "xy=1" }, { "math_id": 14, "text": "\\textstyle f:(x, y)\\to(bx, ay)" }, { "math_id": 15, "text": "\\angle\\!\\left ((a, b), (0,0), (c, d)\\right)" }, { "math_id": 16, "text": "\\angle\\!\\left ((1, 1), (0,0), (bc, ad)\\right)" }, { "math_id": 17, "text": "\\operatorname{ln}{(bc)}=\\operatorname{ln}(c/a) =\\operatorname{ln}c-\\operatorname{ln}a" }, { "math_id": 18, "text": " x^2 + y^2 = 1 " }, { "math_id": 19, "text": " x^2 - y^2 = 1 " }, { "math_id": 20, "text": "ds_{e}^2 = dx^2 + dy^2," }, { "math_id": 21, "text": "ds_{m}^2 = dx^2 - dy^2." }, { "math_id": 22, "text": "x = f(t), y=g(t)." }, { "math_id": 23, "text": "t" }, { "math_id": 24, "text": " a " }, { "math_id": 25, "text": " b " }, { "math_id": 26, "text": " a\\leqslant t<b " }, { "math_id": 27, "text": "S = \\int_{a}^{b}ds_{e} = \\int_{a}^{b} \\sqrt{\\left (\\frac{dx}{dt}\\right )^2 + \\left (\\frac{dy}{dt}\\right )^2 }dt." }, { "math_id": 28, "text": " x = \\cos t " }, { "math_id": 29, "text": " y = \\sin t " }, { "math_id": 30, "text": " 0\\leqslant t < \\theta " }, { "math_id": 31, "text": " S " }, { "math_id": 32, "text": " S = \\theta " }, { "math_id": 33, "text": "S = \\int_{a}^{b}ds_{m} = \\int_{a}^{b} \\sqrt{\\left (\\frac{dx}{dt}\\right )^2 - \\left (\\frac{dy}{dt}\\right )^2 }dt," }, { "math_id": 34, "text": " y^2 - x^2 = 1 " }, { "math_id": 35, "text": " y = \\cosh t " }, { "math_id": 36, "text": " x = \\sinh t " }, { "math_id": 37, "text": " 0\\leqslant t < \\eta " }, { "math_id": 38, "text": " S = \\eta " }, { "math_id": 39, "text": " \\cos ix = \\cosh x" }, { "math_id": 40, "text": "\\sin ix = i \\sinh x," }, { "math_id": 41, "text": " \\cosh ix = \\cos x" }, { "math_id": 42, "text": "\\sinh ix = i \\sin x." }, { "math_id": 43, "text": "z" }, { "math_id": 44, "text": "\\cosh z = \\tfrac12(e^z + e^{-z})" }, { "math_id": 45, "text": "\\sinh z = \\tfrac12(e^z - e^{-z})," }, { "math_id": 46, "text": "e^z = \\cosh z + \\sinh z = \\cos(iz) - i \\sin(iz), " }, { "math_id": 47, "text": "z = x + iy," }, { "math_id": 48, "text": "e^{x}" }, { "math_id": 49, "text": "e^{iy}," }, { "math_id": 50, "text": "e^{x + iy} = e^{x}e^{iy} = (\\cosh x + \\sinh x)(\\cos y + i \\sin y)." }, { "math_id": 51, "text": "\\begin{alignat}{3}\ne^z &= \\,\\,\\sum_{k=0}^\\infty \\frac{z^k}{k!} &&\n = 1 + z + \\tfrac{1}{2}z^2 + \\tfrac16z^3 + \\tfrac1{24}z^4 + \\dots \\\\\n\\cosh z &= \\sum_{k \\text{ even} } \\frac{z^k}{k!} &&\n = 1 + \\tfrac{1}{2}z^2 + \\tfrac1{24}z^4 + \\dots \\\\\n\\sinh z &= \\,\\sum_{k \\text{ odd} } \\frac{z^k}{k!} &&\n = z + \\tfrac{1}{6}z^3 + \\tfrac1{120}z^5 + \\dots \\\\\n\\cos z &= \\sum_{k \\text{ even} } \\frac{(iz)^k}{k!} &&\n = 1 - \\tfrac{1}{2}z^2 + \\tfrac1{24}z^4 - \\dots \\\\\ni \\sin z &= \\,\\sum_{k \\text{ odd} } \\frac{(iz)^k}{k!} &&\n = i\\left(z - \\tfrac{1}{6}z^3 + \\tfrac1{120}z^5 - \\dots\\right) \\\\\n\\end{alignat}" } ]
https://en.wikipedia.org/wiki?curid=1139981
11399877
V-optimal histograms
Histograms are most commonly used as visual representations of data. However, database systems use histograms to summarize data internally and provide size estimates for queries. These histograms are not presented to users or displayed visually, so a wider range of options is available for their construction. Simple or exotic histograms are defined by four parameters: Sort Value, Source Value, Partition Class and Partition Rule. The most basic histogram is the equi-width histogram, where each bucket represents the same range of values. That histogram would be defined as having a Sort Value of Value and a Source Value of Frequency, being in the Serial Partition Class, and having a Partition Rule stating that all buckets have the same range. V-optimal histograms are an example of a more "exotic" histogram. V-optimality is a Partition Rule which states that the bucket boundaries are to be placed so as to minimize the cumulative weighted variance of the buckets. Implementation of this rule is a complex problem and construction of these histograms is also a complex process. Definition. A v-optimal histogram is based on the concept of minimizing a quantity which is called the "weighted variance" in this context. This is defined as formula_0 where the histogram consists of "J" bins or buckets, "nj" is the number of items contained in the "j"th bin and where "Vj" is the variance between the values associated with the items in the "j"th bin. Examples. The following example will construct a V-optimal histogram having a Sort Value of Value, a Source Value of Frequency, and a Partition Class of Serial. In practice, almost all histograms used in research or commercial products are of the Serial class, meaning that sequential sort values are placed in either the same bucket, or sequential buckets. For example, values 1, 2, 3 and 4 will be in buckets 1 and 2, or buckets 1, 2 and 3, but never in buckets 1 and 3. That will be taken as an assumption in any further discussion. Take a simple set of data, for example, a list of integers: 1, 3, 4, 7, 2, 8, 3, 6, 3, 6, 8, 2, 1, 6, 3, 5, 3, 4, 7, 2, 6, 7, 2. Compute the value and frequency pairs: value 1 occurs 2 times, value 2 occurs 4 times, value 3 occurs 5 times, value 4 occurs 2 times, value 5 occurs 1 time, value 6 occurs 4 times, value 7 occurs 3 times, and value 8 occurs 2 times. Our V-optimal histogram will have two buckets. Since one bucket must end at the data point for 8, we must decide where to put the other bucket boundary. The V-optimality rule states that the cumulative weighted variance of the buckets must be minimized. We will look at two options and compute the cumulative variance of those options. Option 1: Bucket 1 contains values 1 through 4, and Bucket 2 contains values 5 through 8. Bucket 1: average frequency 3.25, weighted variance 2.28. Bucket 2: average frequency 2.5, weighted variance 2.19. Sum of weighted variance: 4.47. Option 2: Bucket 1 contains values 1 through 2, and Bucket 2 contains values 3 through 8. Bucket 1: average frequency 3, weighted variance 1.41. Bucket 2: average frequency 2.83, weighted variance 3.29. Sum of weighted variance: 4.70. The first choice is better, so the histogram that would wind up being stored is: Bucket 1: range (1–4), average frequency 3.25; Bucket 2: range (5–8), average frequency 2.5. Advantages of V-optimality vs. equi-width or equi-depth. V-optimal histograms do a better job of estimating the bucket contents. A histogram is an estimation of the base data, and any histogram will have errors. The partition rule used in V-optimal histograms attempts to have the smallest variance possible among the buckets, which provides for a smaller error.
Research done by Poosala and Ioannidis has demonstrated that the most accurate estimation of data is done with a V-optimal histogram using value as a sort parameter and frequency as a source parameter. Disadvantages of V-optimality vs. equi-width or equi-depth. While the V-optimal histogram is more accurate, it does have drawbacks. It is a difficult structure to update. Any changes to the source parameter could potentially result in having to re-build the histogram entirely, rather than updating the existing histogram. An equi-width histogram does not have this problem. Equi-depth histograms will experience this issue to some degree, but because the equi-depth construction is simpler, there is a lower cost to maintain it. The difficulty in updating V-optimal histograms is an outgrowth of the difficulty involved in constructing these histograms. The V-optimal histogram is also computationally expensive to compute compared to other types of histograms. Construction issues. The above example is a simple one. There are only 7 choices of bucket boundaries. One could compute the cumulative variance for all 7 options easily and choose the absolute best placement. However, as the range of values gets larger and the number of buckets gets larger, the set of possible histograms grows exponentially and it becomes a dauntingly complex problem to find the set of boundaries that provide the absolute minimum variance using the naïve approach. Using dynamic programming, it is possible to compute the V-optimal histogram in formula_1 where N is the number of data points and B is the number of buckets. Since finding the optimal histogram is quadratic, it is common to instead approximate the V-optimal histogram. By creating random solutions, using those as a starting point and improving upon them, one can find a solution that is a fair approximation of the "best" solution. One construction method used to get around this problem is the Iterative Improvement algorithm. Another is Simulated Annealing. The two may be combined in Two Phase Optimization, or 2PO. These algorithms are put forth in "Randomized Algorithms..." (cited below) as a method to optimize queries, but the general idea may be applied to construction of V-optimal histograms. Iterative improvement. Iterative Improvement (II) is a fairly naïve greedy algorithm. Starting from a random state, iterative steps in many directions are considered. The step that offers the best improvement of cost (in this case Total Variance) is taken. The process is repeated until one settles at the local minimum, where no further improvement is possible. Applied to the construction of V-optimal histograms, the initial random state would be a set of values representing the bucket boundary placements. The iterative improvement steps would involve moving each boundary until it was at its local minimum, then moving to the next boundary and adjusting it accordingly. Simulated annealing. A basic explanation of Simulated Annealing is that it is a lot like II, only instead of taking the greedy step each time, it will sometimes accept a step that results in an increase in cost. In theory, SA will be less likely to stop at a very local minimum, and more likely to find a more global one. A useful piece of imagery is an M-shaped graph, representing overall cost on the Y axis. If the initial state is on the V-shaped part of the "M", II will settle into the high valley, the local minimum.
Because SA will accept uphill moves, it is more likely to climb up the slope of the "V" and wind up at the foot of the "M", the global minimum. Two phase optimization. Two phase optimization, or 2PO, combines the II and SA methods. II is run until a local minimum is reached, then SA is run on that solution in an attempt to find less obvious improvements. Variation. The idea behind V-optimal histograms is to minimize the variance inside each bucket. In considering this, a thought occurs that the variance of any set with one member is 0. This is the idea behind "End-Biased" V-optimal Histograms. The value with the highest frequency is always placed in its own bucket. This ensures that the estimate for that value (which is likely to be the most frequently requested estimate, since it is the most frequent value) will always be accurate and also removes the value most likely to cause a high variance from the data set. Another thought that might occur is that variance would be reduced if one were to sort by frequency, instead of value. This would naturally tend to place like values next to each other. Such a histogram can be constructed by using a Sort Value of Frequency and a Source Value of Frequency. At this point, however, the buckets must carry additional information indicating what data values are present in the bucket. These histograms have been shown to be less accurate, due to the additional layer of estimation required. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "W=\\sum_{j=1}^Jn_jV_j \\, ," }, { "math_id": 1, "text": "O(N^2 B)" } ]
https://en.wikipedia.org/wiki?curid=11399877
1140
Amplitude modulation
Radio modulation via wave amplitude Amplitude modulation (AM) is a modulation technique used in electronic communication, most commonly for transmitting messages with a radio wave. In amplitude modulation, the amplitude (signal strength) of the wave is varied in proportion to that of the message signal, such as an audio signal. This technique contrasts with angle modulation, in which either the frequency of the carrier wave is varied, as in frequency modulation, or its phase, as in phase modulation. AM was the earliest modulation method used for transmitting audio in radio broadcasting. It was developed during the first quarter of the 20th century beginning with Roberto Landell de Moura and Reginald Fessenden's radiotelephone experiments in 1900. This original form of AM is sometimes called double-sideband amplitude modulation (DSBAM), because the standard method produces sidebands on either side of the carrier frequency. Single-sideband modulation uses bandpass filters to eliminate one of the sidebands and possibly the carrier signal, which improves the ratio of message power to total transmission power, reduces power handling requirements of line repeaters, and permits better bandwidth utilization of the transmission medium. AM remains in use in many forms of communication in addition to AM broadcasting: shortwave radio, amateur radio, two-way radios, VHF aircraft radio, citizens band radio, and in computer modems in the form of QAM. Foundation. In electronics, telecommunications and mechanics, modulation means varying some aspect of a continuous wave carrier signal with an information-bearing modulation waveform, such as an audio signal which represents sound, or a video signal which represents images. In this sense, the carrier wave, which has a much higher frequency than the message signal, "carries" the information. At the receiving station, the message signal is extracted from the modulated carrier by demodulation. In general form, a modulation process of a sinusoidal carrier wave may be described by the following equation: formula_0. "A(t)" represents the time-varying amplitude of the sinusoidal carrier wave and the cosine-term is the carrier at its angular frequency formula_1, and the instantaneous phase deviation formula_2. This description directly provides the two major groups of modulation, amplitude modulation and angle modulation. In angle modulation, the term "A"("t") is constant and the second term of the equation has a functional relationship to the modulating message signal. Angle modulation provides two methods of modulation, frequency modulation and phase modulation. In amplitude modulation, the angle term is held constant and the first term, "A"("t"), of the equation has a functional relationship to the modulating message signal. The modulating message signal may be analog in nature, or it may be a digital signal, in which case the technique is generally called amplitude-shift keying. For example, in AM radio communication, a continuous wave radio-frequency signal has its amplitude modulated by an audio waveform before transmission. The message signal determines the "envelope" of the transmitted waveform. In the frequency domain, amplitude modulation produces a signal with power concentrated at the carrier frequency and two adjacent sidebands. Each sideband is equal in bandwidth to that of the modulating signal, and is a mirror image of the other. Standard AM is thus sometimes called "double-sideband amplitude modulation" (DSBAM). 
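As an illustrative sketch (not from the article), the following NumPy fragment builds a double-sideband AM signal from a single-tone message and inspects its spectrum; the particular frequencies, amplitudes and sampling rate are arbitrary choices. The three dominant spectral peaks land at the carrier frequency and at the two sideband frequencies on either side of it.

```python
import numpy as np

fc, fm = 10_000.0, 500.0       # carrier and message frequencies in Hz (arbitrary)
A, m = 1.0, 0.5                # carrier amplitude and modulation index
fs = 100_000.0                 # sampling rate in Hz
t = np.arange(0, 0.1, 1 / fs)  # 100 ms of signal

carrier = A * np.sin(2 * np.pi * fc * t)
message = m * np.cos(2 * np.pi * fm * t)
am = (1 + message) * carrier   # standard double-sideband, full-carrier AM

# One-sided amplitude spectrum: the energy sits at fc and at the sidebands fc - fm and fc + fm
spectrum = np.abs(np.fft.rfft(am)) / len(am)
freqs = np.fft.rfftfreq(len(am), 1 / fs)
top3 = np.sort(freqs[np.argsort(spectrum)[-3:]])
print(top3)                    # [ 9500. 10000. 10500.]
```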
A disadvantage of all amplitude modulation techniques, not only standard AM, is that the receiver amplifies and detects noise and electromagnetic interference in equal proportion to the signal. Increasing the received signal-to-noise ratio, say, by a factor of 10 (a 10 decibel improvement), thus would require increasing the transmitter power by a factor of 10. This is in contrast to frequency modulation (FM) and digital radio where the effect of such noise following demodulation is strongly reduced so long as the received signal is well above the threshold for reception. For this reason AM broadcast is not favored for music and high fidelity broadcasting, but rather for voice communications and broadcasts (sports, news, talk radio etc.). AM is also inefficient in power usage; at least two-thirds of the power is concentrated in the carrier signal. The carrier signal contains none of the original information being transmitted (voice, video, data, etc.). However its presence provides a simple means of demodulation using envelope detection, providing a frequency and phase reference to extract the modulation from the sidebands. In some modulation systems based on AM, a lower transmitter power is required through partial or total elimination of the carrier component, however receivers for these signals are more complex because they must provide a precise carrier frequency reference signal (usually as shifted to the intermediate frequency) from a greatly reduced "pilot" carrier (in reduced-carrier transmission or DSB-RC) to use in the demodulation process. Even with the carrier totally eliminated in double-sideband suppressed-carrier transmission, carrier regeneration is possible using a Costas phase-locked loop. This does not work for single-sideband suppressed-carrier transmission (SSB-SC), leading to the characteristic "Donald Duck" sound from such receivers when slightly detuned. Single-sideband AM is nevertheless used widely in amateur radio and other voice communications because it has power and bandwidth efficiency (cutting the RF bandwidth in half compared to standard AM). On the other hand, in medium wave and short wave broadcasting, standard AM with the full carrier allows for reception using inexpensive receivers. The broadcaster absorbs the extra power cost to greatly increase potential audience. Shift keying. A simple form of digital amplitude modulation which can be used for transmitting binary data is on–off keying, the simplest form of amplitude-shift keying, in which ones and zeros are represented by the presence or absence of a carrier. On–off keying is likewise used by radio amateurs to transmit Morse code where it is known as continuous wave (CW) operation, even though the transmission is not strictly "continuous". A more complex form of AM, quadrature amplitude modulation is now more commonly used with digital data, while making more efficient use of the available bandwidth. Analog telephony. A simple form of amplitude modulation is the transmission of speech signals from a traditional analog telephone set using a common battery local loop. The direct current provided by the central office battery is a carrier with a frequency of 0 Hz. It is modulated by a microphone ("transmitter") in the telephone set according to the acoustic signal from the speaker. The result is a varying amplitude direct current, whose AC-component is the speech signal extracted at the central office for transmission to another subscriber. Amplitude reference. 
An additional function provided by the carrier in standard AM, but which is lost in either single or double-sideband suppressed-carrier transmission, is that it provides an amplitude reference. In the receiver, the automatic gain control (AGC) responds to the carrier so that the reproduced audio level stays in a fixed proportion to the original modulation. On the other hand, with suppressed-carrier transmissions there is "no" transmitted power during pauses in the modulation, so the AGC must respond to peaks of the transmitted power during peaks in the modulation. This typically involves a so-called "fast attack, slow decay" circuit which holds the AGC level for a second or more following such peaks, in between syllables or short pauses in the program. This is very acceptable for communications radios, where compression of the audio aids intelligibility. However it is absolutely undesired for music or normal broadcast programming, where a faithful reproduction of the original program, including its varying modulation levels, is expected. ITU type designations. In 1982, the International Telecommunication Union (ITU) designated the types of amplitude modulation: History. Amplitude modulation was used in experiments of multiplex telegraph and telephone transmission in the late 1800s. However, the practical development of this technology is identified with the period between 1900 and 1920 of radiotelephone transmission, that is, the effort to send audio signals by radio waves. The first radio transmitters, called spark gap transmitters, transmitted information by wireless telegraphy, using pulses of the carrier wave to spell out text messages in Morse code. They could not transmit audio because the carrier consisted of strings of damped waves, pulses of radio waves that declined to zero, and sounded like a buzz in receivers. In effect they were already amplitude modulated. Continuous waves. The first AM transmission was made by Canadian-born American researcher Reginald Fessenden on 23 December 1900 using a spark gap transmitter with a specially designed high frequency 10 kHz interrupter, over a distance of one mile (1.6 km) at Cobb Island, Maryland, US. His first transmitted words were, "Hello. One, two, three, four. Is it snowing where you are, Mr. Thiessen?". The words were barely intelligible above the background buzz of the spark. Fessenden was a significant figure in the development of AM radio. He was one of the first researchers to realize, from experiments like the above, that the existing technology for producing radio waves, the spark transmitter, was not usable for amplitude modulation, and that a new kind of transmitter, one that produced sinusoidal "continuous waves", was needed. This was a radical idea at the time, because experts believed the impulsive spark was necessary to produce radio frequency waves, and Fessenden was ridiculed. He invented and helped develop one of the first continuous wave transmitters – the Alexanderson alternator, with which he made what is considered the first AM public entertainment broadcast on Christmas Eve, 1906. He also discovered the principle on which AM is based, heterodyning, and invented one of the first detectors able to rectify and receive AM, the electrolytic detector or "liquid baretter", in 1902. 
Other radio detectors invented for wireless telegraphy, such as the Fleming valve (1904) and the crystal detector (1906) also proved able to rectify AM signals, so the technological hurdle was generating AM waves; receiving them was not a problem. Early technologies. Early experiments in AM radio transmission, conducted by Fessenden, Valdemar Poulsen, Ernst Ruhmer, Quirino Majorana, Charles Herrold, and Lee de Forest, were hampered by the lack of a technology for amplification. The first practical continuous wave AM transmitters were based on either the huge, expensive Alexanderson alternator, developed 1906–1910, or versions of the Poulsen arc transmitter (arc converter), invented in 1903. The modifications necessary to transmit AM were clumsy and resulted in very low quality audio. Modulation was usually accomplished by a carbon microphone inserted directly in the antenna or ground wire; its varying resistance varied the current to the antenna. The limited power handling ability of the microphone severely limited the power of the first radiotelephones; many of the microphones were water-cooled. Vacuum tubes. The 1912 discovery of the amplifying ability of the Audion tube, invented in 1906 by Lee de Forest, solved these problems. The vacuum tube feedback oscillator, invented in 1912 by Edwin Armstrong and Alexander Meissner, was a cheap source of continuous waves and could be easily modulated to make an AM transmitter. Modulation did not have to be done at the output but could be applied to the signal before the final amplifier tube, so the microphone or other audio source didn't have to modulate a high-power radio signal. Wartime research greatly advanced the art of AM modulation, and after the war the availability of cheap tubes sparked a great increase in the number of radio stations experimenting with AM transmission of news or music. The vacuum tube was responsible for the rise of AM broadcasting around 1920, the first electronic mass communication medium. Amplitude modulation was virtually the only type used for radio broadcasting until FM broadcasting began after World War II. At the same time as AM radio began, telephone companies such as AT&amp;T were developing the other large application for AM: sending multiple telephone calls through a single wire by modulating them on separate carrier frequencies, called "frequency division multiplexing". Single-sideband. In 1915, John Renshaw Carson formulated the first mathematical description of amplitude modulation, showing that a signal and carrier frequency combined in a nonlinear device creates a sideband on both sides of the carrier frequency. Passing the modulated signal through another nonlinear device can extract the original baseband signal. His analysis also showed that only one sideband was necessary to transmit the audio signal, and Carson patented single-sideband modulation (SSB) on 1 December 1915. This advanced variant of amplitude modulation was adopted by AT&amp;T for longwave transatlantic telephone service beginning 7 January 1927. After WW-II, it was developed for military aircraft communication. Analysis. The carrier wave (sine wave) of frequency "fc" and amplitude "A" is expressed by formula_3. The message signal, such as an audio signal that is used for modulating the carrier, is "m"("t"), and has a frequency "fm", much lower than "fc": formula_4, where "m" is the amplitude sensitivity, "M" is the amplitude of modulation. If "m" &lt; 1, "(1 + m(t)/A)" is always positive for undermodulation. 
If "m" > 1 then overmodulation occurs, and reconstruction of the message signal from the transmitted signal would lead to loss of the original signal. Amplitude modulation results when the carrier "c(t)" is multiplied by the positive quantity "(1 + m(t)/A)": formula_5 In this simple case "m" is identical to the modulation index, discussed below. With "m" = 0.5 the amplitude modulated signal "y"("t") thus corresponds to the top graph (labelled "50% Modulation") in figure 4. Using prosthaphaeresis identities, "y"("t") can be shown to be the sum of three sine waves: formula_6 Therefore, the modulated signal has three components: the carrier wave "c(t)", which is unchanged in frequency, and two sidebands with frequencies slightly above and below the carrier frequency "fc". Spectrum. A useful modulation signal "m(t)" is usually more complex than a single sine wave, as treated above. However, by the principle of Fourier decomposition, "m(t)" can be expressed as the sum of a set of sine waves of various frequencies, amplitudes, and phases. Carrying out the multiplication of "1 + m(t)" with "c(t)" as above, the result consists of a sum of sine waves. Again, the carrier "c(t)" is present unchanged, but each frequency component of "m" at "fi" has two sidebands at frequencies "fc + fi" and "fc – fi". The collection of the former frequencies, above the carrier frequency, is known as the upper sideband, and those below constitute the lower sideband. The modulation "m(t)" may be considered to consist of an equal mix of positive and negative frequency components, as shown in the top of figure 2. One can view the sidebands as the modulation "m(t)" having simply been shifted in frequency by "fc", as depicted at the bottom right of figure 2. In the short-term spectrum of modulation, changing as it would for a human voice for instance, the frequency content (horizontal axis) may be plotted as a function of time (vertical axis), as in figure 3. It can again be seen that as the modulation frequency content varies, an upper sideband is generated according to those frequencies shifted "above" the carrier frequency, and the same content mirror-imaged in the lower sideband below the carrier frequency. At all times, the carrier itself remains constant, and of greater power than the total sideband power. Power and spectrum efficiency. The RF bandwidth of an AM transmission (refer to figure 2, but only considering positive frequencies) is twice the bandwidth of the modulating (or "baseband") signal, since the upper and lower sidebands around the carrier frequency each have a bandwidth as wide as the highest modulating frequency. Although the bandwidth of an AM signal is narrower than that of one using frequency modulation (FM), it is twice as wide as that of single-sideband techniques; it thus may be viewed as spectrally inefficient. Within a frequency band, only half as many transmissions (or "channels") can thus be accommodated. For this reason analog television employs a variant of single-sideband (known as vestigial sideband, somewhat of a compromise in terms of bandwidth) in order to reduce the required channel spacing. Another improvement over standard AM is obtained through reduction or suppression of the carrier component of the modulated spectrum. In figure 2 this is the spike in between the sidebands; even with full (100%) sine wave modulation, the power in the carrier component is twice that in the sidebands, yet it carries no unique information.
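The two-to-one power split quoted above follows directly from the three-component decomposition: the carrier keeps amplitude "A" while each sideband has amplitude "Am"/2. A quick arithmetic check (a sketch, with "A" normalised to 1 and 100% modulation assumed):

```python
A, m = 1.0, 1.0                            # carrier amplitude, 100% sine-wave modulation
carrier_power = A**2 / 2                   # mean power of the carrier term A*sin(2*pi*fc*t)
sideband_power = 2 * (A * m / 2)**2 / 2    # two sidebands, each of amplitude A*m/2
print(carrier_power / sideband_power)      # 2.0 - the carrier holds two-thirds of the total power
```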
Thus there is a great advantage in efficiency in reducing or totally suppressing the carrier, either in conjunction with elimination of one sideband (single-sideband suppressed-carrier transmission) or with both sidebands remaining (double sideband suppressed carrier). While these suppressed carrier transmissions are efficient in terms of transmitter power, they require more sophisticated receivers employing synchronous detection and regeneration of the carrier frequency. For that reason, standard AM continues to be widely used, especially in broadcast transmission, to allow for the use of inexpensive receivers using envelope detection. Even (analog) television, with a (largely) suppressed lower sideband, includes sufficient carrier power for use of envelope detection. But for communications systems where both transmitters and receivers can be optimized, suppression of both one sideband and the carrier represent a net advantage and are frequently employed. A technique used widely in broadcast AM transmitters is an application of the Hapburg carrier, first proposed in the 1930s but impractical with the technology then available. During periods of low modulation the carrier power would be reduced and would return to full power during periods of high modulation levels. This has the effect of reducing the overall power demand of the transmitter and is most effective on speech type programmes. Various trade names are used for its implementation by the transmitter manufacturers from the late 80's onwards. Modulation index. The AM modulation index is a measure based on the ratio of the modulation excursions of the RF signal to the level of the unmodulated carrier. It is thus defined as: formula_7 where formula_8 and formula_9 are the modulation amplitude and carrier amplitude, respectively; the modulation amplitude is the peak (positive or negative) change in the RF amplitude from its unmodulated value. Modulation index is normally expressed as a percentage, and may be displayed on a meter connected to an AM transmitter. So if formula_10, carrier amplitude varies by 50% above (and below) its unmodulated level, as is shown in the first waveform, below. For formula_11, it varies by 100% as shown in the illustration below it. With 100% modulation the wave amplitude sometimes reaches zero, and this represents full modulation using standard AM and is often a target (in order to obtain the highest possible signal-to-noise ratio) but mustn't be exceeded. Increasing the modulating signal beyond that point, known as overmodulation, causes a standard AM modulator (see below) to fail, as the negative excursions of the wave envelope cannot become less than zero, resulting in distortion ("clipping") of the received modulation. Transmitters typically incorporate a limiter circuit to avoid overmodulation, and/or a compressor circuit (especially for voice communications) in order to still approach 100% modulation for maximum intelligibility above the noise. Such circuits are sometimes referred to as a vogad. However it is possible to talk about a modulation index exceeding 100%, without introducing distortion, in the case of double-sideband reduced-carrier transmission. In that case, negative excursions beyond zero entail a reversal of the carrier phase, as shown in the third waveform below. This cannot be produced using the efficient high-level (output stage) modulation techniques (see below) which are widely used especially in high power broadcast transmitters. 
Rather, a special modulator produces such a waveform at a low level followed by a linear amplifier. What's more, a standard AM receiver using an envelope detector is incapable of properly demodulating such a signal. Rather, synchronous detection is required. Thus double-sideband transmission is generally "not" referred to as "AM" even though it generates an identical RF waveform as standard AM as long as the modulation index is below 100%. Such systems more often attempt a radical reduction of the carrier level compared to the sidebands (where the useful information is present) to the point of double-sideband suppressed-carrier transmission where the carrier is (ideally) reduced to zero. In all such cases the term "modulation index" loses its value as it refers to the ratio of the modulation amplitude to a rather small (or zero) remaining carrier amplitude. Modulation methods. Modulation circuit designs may be classified as low- or high-level (depending on whether they modulate in a low-power domain—followed by amplification for transmission—or in the high-power domain of the transmitted signal). Low-level generation. In modern radio systems, modulated signals are generated via digital signal processing (DSP). With DSP many types of AM are possible with software control (including DSB with carrier, SSB suppressed-carrier and independent sideband, or ISB). Calculated digital samples are converted to voltages with a digital-to-analog converter, typically at a frequency less than the desired RF-output frequency. The analog signal must then be shifted in frequency and linearly amplified to the desired frequency and power level (linear amplification must be used to prevent modulation distortion). This low-level method for AM is used in many Amateur Radio transceivers. AM may also be generated at a low level, using analog methods described in the next section. High-level generation. High-power AM transmitters (such as those used for AM broadcasting) are based on high-efficiency class-D and class-E power amplifier stages, modulated by varying the supply voltage. Older designs (for broadcast and amateur radio) also generate AM by controlling the gain of the transmitter's final amplifier (generally class-C, for efficiency). The following types are for vacuum tube transmitters (but similar options are available with transistors): Demodulation methods. The simplest form of AM demodulator consists of a diode which is configured to act as envelope detector. Another type of demodulator, the product detector, can provide better-quality demodulation with additional circuit complexity. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "m(t) = A(t) \\cdot \\cos(\\omega t + \\phi(t))\\," }, { "math_id": 1, "text": "\\omega" }, { "math_id": 2, "text": "\\phi(t)" }, { "math_id": 3, "text": "c(t) = A \\sin(2 \\pi f_c t)\\," }, { "math_id": 4, "text": "m(t) = M \\cos\\left(2\\pi f_m t + \\phi\\right)= Am \\cos\\left(2\\pi f_m t + \\phi\\right)\\," }, { "math_id": 5, "text": "\\begin{align}\n y(t) &= \\left[1 + \\frac{m(t)}{A}\\right] c(t) \\\\\n &= \\left[1 + m \\cos\\left(2\\pi f_m t + \\phi\\right)\\right] A \\sin\\left(2\\pi f_c t\\right)\n\\end{align}" }, { "math_id": 6, "text": "y(t) = A \\sin(2\\pi f_c t) + \\frac{1}{2}Am\\left[\\sin\\left(2\\pi \\left[f_c + f_m\\right] t + \\phi\\right) + \\sin\\left(2\\pi \\left[f_c - f_m\\right] t - \\phi\\right)\\right].\\," }, { "math_id": 7, "text": "m = \\frac{\\mathrm{peak\\ value\\ of\\ } m(t)}{A} = \\frac{M}{A} " }, { "math_id": 8, "text": "M\\," }, { "math_id": 9, "text": "A\\," }, { "math_id": 10, "text": "m=0.5" }, { "math_id": 11, "text": "m=1.0" } ]
https://en.wikipedia.org/wiki?curid=1140
1140043
Squeeze mapping
Linear mapping permuting rectangles of the same area In linear algebra, a squeeze mapping, also called a squeeze transformation, is a type of linear map that preserves Euclidean area of regions in the Cartesian plane, but is "not" a rotation or shear mapping. For a fixed positive real number "a", the mapping formula_0 is the "squeeze mapping" with parameter "a". Since formula_1 is a hyperbola, if "u" "ax" and "v" "y"/"a", then "uv" "xy" and the points of the image of the squeeze mapping are on the same hyperbola as ("x","y") is. For this reason it is natural to think of the squeeze mapping as a hyperbolic rotation, as did Émile Borel in 1914, by analogy with "circular rotations", which preserve circles. Logarithm and hyperbolic angle. The squeeze mapping sets the stage for development of the concept of logarithms. The problem of finding the area bounded by a hyperbola (such as "xy" 1) is one of quadrature. The solution, found by Grégoire de Saint-Vincent and Alphonse Antonio de Sarasa in 1647, required the natural logarithm function, a new concept. Some insight into logarithms comes through hyperbolic sectors that are permuted by squeeze mappings while preserving their area. The area of a hyperbolic sector is taken as a measure of a hyperbolic angle associated with the sector. The hyperbolic angle concept is quite independent of the ordinary circular angle, but shares a property of invariance with it: whereas circular angle is invariant under rotation, hyperbolic angle is invariant under squeeze mapping. Both circular and hyperbolic angle generate invariant measures but with respect to different transformation groups. The hyperbolic functions, which take hyperbolic angle as argument, perform the role that circular functions play with the circular angle argument. Group theory. In 1688, long before abstract group theory, the squeeze mapping was described by Euclid Speidell in the terms of the day: "From a Square and an infinite company of Oblongs on a Superficies, each Equal to that square, how a curve is begotten which shall have the same properties or affections of any Hyperbola inscribed within a Right Angled Cone." If "r" and "s" are positive real numbers, the composition of their squeeze mappings is the squeeze mapping of their product. Therefore, the collection of squeeze mappings forms a one-parameter group isomorphic to the multiplicative group of positive real numbers. An additive view of this group arises from consideration of hyperbolic sectors and their hyperbolic angles. From the point of view of the classical groups, the group of squeeze mappings is SO+(1,1), the identity component of the indefinite orthogonal group of 2×2 real matrices preserving the quadratic form "u"2 − "v"2. This is equivalent to preserving the form "xy" via the change of basis formula_2 and corresponds geometrically to preserving hyperbolae. The perspective of the group of squeeze mappings as hyperbolic rotation is analogous to interpreting the group SO(2) (the connected component of the definite orthogonal group) preserving quadratic form "x"2 + "y"2 as being "circular rotations". 
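A small numerical illustration (not from the article) of these statements: the sketch below applies a squeeze mapping to a few random points and checks that the product "xy" — and hence the quadratic form "u"2 − "v"2 in the rotated basis — is unchanged, and that the mapping has unit determinant, so it preserves area. The parameter value and the sample points are arbitrary.

```python
import numpy as np

a = 2.5                                     # squeeze parameter (arbitrary)
rng = np.random.default_rng(0)
x, y = rng.uniform(0.1, 3.0, size=(2, 5))   # a handful of sample points (x, y)

xs, ys = a * x, y / a                       # the squeeze mapping (x, y) -> (a x, y / a)

# Each point stays on its hyperbola xy = constant
print(np.allclose(x * y, xs * ys))          # True

# In the basis u = (x + y)/2, v = (x - y)/2 the form u**2 - v**2 equals xy and is preserved,
# which is the sense in which the squeeze group is identified with SO+(1,1)
u, v = (x + y) / 2, (x - y) / 2
us, vs = (xs + ys) / 2, (xs - ys) / 2
print(np.allclose(u**2 - v**2, us**2 - vs**2))   # True

# The map is linear with matrix diag(a, 1/a); its determinant is 1, so areas are preserved
print(np.linalg.det(np.diag([a, 1.0 / a])))      # approximately 1
```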
Note that the "SO+" notation corresponds to the fact that the reflections formula_3 are not allowed, though they preserve the form (in terms of "x" and "y" these are "x" ↦ "y", "y" ↦ "x" and "x" ↦ −"x", "y" ↦ −"y"); the additional "+" in the hyperbolic case (as compared with the circular case) is necessary to specify the identity component because the group O(1,1) has 4 connected components, while the group O(2) has 2 components: SO(1,1) has 2 components, while SO(2) only has 1. The fact that the squeeze transforms preserve area and orientation corresponds to the inclusion of subgroups SO ⊂ SL – in this case SO(1,1) ⊂ SL(2) – of the subgroup of hyperbolic rotations in the special linear group of transforms preserving area and orientation (a volume form). In the language of Möbius transformations, the squeeze transformations are the hyperbolic elements in the classification of elements. A geometric transformation is called conformal when it preserves angles. Hyperbolic angle is defined using area under "y" = 1/"x". Since squeeze mappings preserve areas of transformed regions such as hyperbolic sectors, the angle measure of sectors is preserved. Thus squeeze mappings are "conformal" in the sense of preserving hyperbolic angle. Applications. Here some applications are summarized with historic references. Relativistic spacetime. Spacetime geometry is conventionally developed as follows: Select (0,0) for a "here and now" in a spacetime. Light radiant left and right through this central event tracks two lines in the spacetime, lines that can be used to give coordinates to events away from (0,0). Trajectories of lesser velocity track closer to the original timeline (0,"t"). Any such velocity can be viewed as a zero velocity under a squeeze mapping called a Lorentz boost. This insight follows from a study of split-complex number multiplications and the diagonal basis which corresponds to the pair of light lines. Formally, a squeeze preserves the hyperbolic metric expressed in the form "xy"; in a different coordinate system. This application in the theory of relativity was noted in 1912 by Wilson and Lewis, by Werner Greub, and by Louis Kauffman. Furthermore, the squeeze mapping form of Lorentz transformations was used by Gustav Herglotz (1909/10) while discussing Born rigidity, and was popularized by Wolfgang Rindler in his textbook on relativity, who used it in his demonstration of their characteristic property. The term "squeeze transformation" was used in this context in an article connecting the Lorentz group with Jones calculus in optics. Corner flow. In fluid dynamics one of the fundamental motions of an incompressible flow involves bifurcation of a flow running up against an immovable wall. Representing the wall by the axis "y" = 0 and taking the parameter "r" = exp("t") where "t" is time, then the squeeze mapping with parameter "r" applied to an initial fluid state produces a flow with bifurcation left and right of the axis "x" = 0. The same model gives fluid convergence when time is run backward. Indeed, the area of any hyperbolic sector is invariant under squeezing. For another approach to a flow with hyperbolic streamlines, see . In 1989 Ottino described the "linear isochoric two-dimensional flow" as formula_4 where K lies in the interval [−1, 1]. The streamlines follow the curves formula_5 so negative "K" corresponds to an ellipse and positive "K" to a hyperbola, with the rectangular case of the squeeze mapping corresponding to "K" = 1. 
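As a rough numerical sketch (not from the article) of Ottino's flow, the fragment below integrates the velocity field "v"1 = "Gx"2, "v"2 = "KGx"1 with a simple Euler step for the hyperbolic case "K" = 1; the step size, duration and starting point are arbitrary. The streamline quantity from formula_5 stays essentially constant along the computed trajectory, up to a small drift from the crude integrator.

```python
# Euler integration of Ottino's linear isochoric flow: v1 = G*x2, v2 = K*G*x1
G, K = 1.0, 1.0           # K = 1 is the hyperbolic (squeeze-mapping) case
dt, steps = 1e-4, 20_000  # crude fixed-step integration over 2 time units
x1, x2 = 1.0, 2.0         # arbitrary starting point

invariant_start = x2**2 - K * x1**2
for _ in range(steps):
    v1, v2 = G * x2, K * G * x1
    x1, x2 = x1 + dt * v1, x2 + dt * v2

invariant_end = x2**2 - K * x1**2
print(invariant_start, invariant_end)   # 3.0 vs roughly 2.999: conserved up to integration error
```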
Stocker and Hosoi described their approach to corner flow as follows: we suggest an alternative formulation to account for the corner-like geometry, based on the use of hyperbolic coordinates, which allows substantial analytical progress towards determination of the flow in a Plateau border and attached liquid threads. We consider a region of flow forming an angle of "π"/2 and delimited on the left and bottom by symmetry planes. Stocker and Hosoi then recall Moffatt's consideration of "flow in a corner between rigid boundaries, induced by an arbitrary disturbance at a large distance." According to Stocker and Hosoi, For a free fluid in a square corner, Moffatt's (antisymmetric) stream function ... [indicates] that hyperbolic coordinates are indeed the natural choice to describe these flows. Bridge to transcendentals. The area-preserving property of squeeze mapping has an application in setting the foundation of the transcendental functions natural logarithm and its inverse the exponential function: Definition: Sector("a,b") is the hyperbolic sector obtained with central rays to ("a", 1/"a") and ("b", 1/"b"). Lemma: If "bc" = "ad", then there is a squeeze mapping that moves the sector("a,b") to sector("c,d"). Proof: Take parameter "r" = "c"/"a" so that ("u,v") = ("rx", "y"/"r") takes ("a", 1/"a") to ("c", 1/"c") and ("b", 1/"b") to ("d", 1/"d"). Theorem (Gregoire de Saint-Vincent 1647) If "bc" = "ad", then the quadrature of the hyperbola "xy" = 1 against the asymptote has equal areas between "a" and "b" compared to between "c" and "d". Proof: An argument adding and subtracting triangles of area &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2, one triangle being {(0,0), (0,1), (1,1)}, shows the hyperbolic sector area is equal to the area along the asymptote. The theorem then follows from the lemma. Theorem (Alphonse Antonio de Sarasa 1649) As area measured against the asymptote increases in arithmetic progression, the projections upon the asymptote increase in geometric sequence. Thus the areas form "logarithms" of the asymptote index. For instance, for a standard position angle which runs from (1, 1) to ("x", 1/"x"), one may ask "When is the hyperbolic angle equal to one?" The answer is the transcendental number x = e. A squeeze with "r" = e moves the unit angle to one between ("e", 1/"e") and ("ee", 1/"ee") which subtends a sector also of area one. The geometric progression "e", "e"2, "e"3, ..., "e""n", ... corresponds to the asymptotic index achieved with each sum of areas 1,2,3, ..., "n"... which is a proto-typical arithmetic progression "A" + "nd" where "A" = 0 and "d" = 1 . Lie transform. Following Pierre Ossian Bonnet's (1867) investigations on surfaces of constant curvatures, Sophus Lie (1879) found a way to derive new pseudospherical surfaces from a known one. Such surfaces satisfy the Sine-Gordon equation: formula_6 where formula_7 are asymptotic coordinates of two principal tangent curves and formula_8 their respective angle. Lie showed that if formula_9 is a solution to the Sine-Gordon equation, then the following squeeze mapping (now known as Lie transform) indicates other solutions of that equation: formula_10 Lie (1883) noticed its relation to two other transformations of pseudospherical surfaces: The Bäcklund transform (introduced by Albert Victor Bäcklund in 1883) can be seen as the combination of a Lie transform with a Bianchi transform (introduced by Luigi Bianchi in 1879.) 
Such transformations of pseudospherical surfaces were discussed in detail in the lectures on differential geometry by Gaston Darboux (1894), Luigi Bianchi (1894), or Luther Pfahler Eisenhart (1909). It is known that the Lie transforms (or squeeze mappings) correspond to Lorentz boosts in terms of light-cone coordinates, as pointed out by Terng and Uhlenbeck (2000): "Sophus Lie observed that the SGE [Sinus-Gordon equation] is invariant under Lorentz transformations. In asymptotic coordinates, which correspond to light cone coordinates, a Lorentz transformation is formula_11." This can be represented as follows: formula_12 where "k" corresponds to the Doppler factor in Bondi "k"-calculus, "η" is the rapidity. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(x, y) \\mapsto (ax, y/a)" }, { "math_id": 1, "text": "\\{ (u,v) \\, : \\, u v = \\mathrm{constant}\\}" }, { "math_id": 2, "text": "x=u+v,\\quad y=u-v\\,," }, { "math_id": 3, "text": "u \\mapsto -u,\\quad v \\mapsto -v" }, { "math_id": 4, "text": "v_1 = G x_2 \\quad v_2 = K G x_1" }, { "math_id": 5, "text": "x_2^2 - K x_1^2 = \\mathrm{constant}" }, { "math_id": 6, "text": "\\frac{d^{2}\\Theta}{ds\\ d\\sigma}=K\\sin\\Theta ," }, { "math_id": 7, "text": "(s,\\sigma)" }, { "math_id": 8, "text": "\\Theta" }, { "math_id": 9, "text": "\\Theta=f(s,\\sigma)" }, { "math_id": 10, "text": "\\Theta=f\\left(ms,\\ \\frac{\\sigma}{m}\\right) ." }, { "math_id": 11, "text": "(x,t)\\mapsto\\left(\\tfrac{1}{\\lambda}x,\\lambda t\\right)" }, { "math_id": 12, "text": "\\begin{matrix}-c^{2}t^{2}+x^{2}=-c^{2}t^{\\prime2}+x^{\\prime2}\\\\\n\\hline \\begin{align}ct' & =ct\\gamma-x\\beta\\gamma & & =ct\\cosh\\eta-x\\sinh\\eta\\\\\nx' & =-ct\\beta\\gamma+x\\gamma & & =-ct\\sinh\\eta+x\\cosh\\eta\n\\end{align}\n\\\\\n\\hline u=ct+x,\\ v=ct-x,\\ k=\\sqrt{\\tfrac{1+\\beta}{1-\\beta}}=e^{\\eta}\\\\\nu'=\\frac{u}{k},\\ v'=kv\\\\\n\\hline u'v'=uv\n\\end{matrix}" } ]
https://en.wikipedia.org/wiki?curid=1140043
1140113
1955 Formula One season
9th season of FIA's Formula One motor racing &lt;templatestyles src="Motorsport season/styles.css" /&gt; 1955 Drivers' Champion: Juan Manuel Fangio Previous 1954 Next 1956 The 1955 Formula One season was the ninth season of FIA Formula One motor racing. It featured the sixth World Championship of Drivers, which was contested over seven races between 16 January and 11 September 1955. The season also included several non-championship races for Formula One cars. Juan Manuel Fangio won his second consecutive World Championship title. It was his third in total, a record that would not be beaten until Alain Prost in 1993. This was the last championship for a Mercedes driver until 2014. The season was coloured by tragedy. Two drivers were killed during the 1955 Indianapolis 500: Manny Ayulo and Bill Vukovich, winner of the two previous editions. Italian Mario Alborghetti died at the non-championship Pau Grand Prix. Alberto Ascari, World Champion of and , was killed while testing a Ferrari 750 Monza at Monza. And ex-Formula One driver Pierre Levegh was killed in the 1955 Le Mans disaster, along with 83 spectators. This would lead to the cancellation of four F1 Grands Prix. Teams and drivers. The following teams and drivers competed in the 1955 FIA World Championship. The list does not include those who only contested the Indianapolis 500. Calendar. Calendar changes. Cancelled rounds. In the aftermath of the 1955 Le Mans disaster, it was decided to reschedule the French Grand Prix from 3 July to 25 September. It was later cancelled, along with the German, Swiss and Spanish rounds. The circuits at Pedralbes and Bremgarten were never used again for racing. Motor racing was banned altogether in Switzerland until the 2018 Zürich ePrix. Championship report. Rounds 1 to 3. For the third year in a row, the championship opened with the Argentine Grand Prix. José Froilan González started on pole position. The Argentine had been a full-time Ferrari driver in , but it would be his only race this year. Next to him on the front row started two double World Champions: Alberto Ascari in the Lancia and Juan Manuel Fangio in the Mercedes. Fangio took the lead at the start, but lost it to Ascari on lap 3. Teammate Stirling Moss went from eighth to third, while behind them, drivers and cars were beginning to succumb to the heat of . On lap 21, Ascari crashed out by himself, leaving González in the lead. However, he was still recovering from his accident in the 1954 RAC Tourist Trophy and got exhausted. Fearing he could not hold Fangio behind, he pitted to hand the car to teammate and 1950 World Champion Nino Farina. Fangio pitted as well, for new tyres and to cool off, while Moss retired due to a vapor lock in the fuel pump. This left another local driver, Roberto Mieres in the Maserati, in the lead after starting sixteenth. Sadly, his fuel pump faltered as well and he spent 10 minutes in the pits, coming home in fifth. Besides Mieres, Fangio would be the only classified driver not to have switched cars during the race, and went on to win. Two Ferraris completed the podium, but each had seen three different drivers behind the wheel, so Fangio had an immediate lead in the championship. The Monaco Grand Prix returned to the calendar after three years and was given the honorary title of "European Grand Prix". A new rule to qualifying had been added: only the times recorded in the first practice session on Thursday afternoon would count for the front row of the grid and, thus, for pole position. 
The rest of the starting places would be decided by the remaining sessions on Friday and Saturday morning. This was done to entice spectators to come and watch every session, but it was an unpopular idea with the drivers. Fangio set the fastest time, ahead of Ascari and Moss, so they could relax and use the remaining sessions to try out car set-ups for the race. At the start of the race, Fangio held on to the lead, but Ascari fell back. Moss took second place after a few laps and was slowly closing up to his teammate in front. After the two drivers behind Moss pitted, Ascari was left in a lonely third place until, at half-distance, Fangio stopped on track with a broken transmission and, on lap 81, Moss's engine blew up. Ascari took the unexpected lead of the race, but mere seconds later, crashed coming out of the harbour chicane and plunged into the water. He was lucky to escape with just a cut on the nose. Maurice Trintignant took the win for Ferrari, the first of his career, ahead of Eugenio Castellotti for Lancia and Cesare Perdisa in Jean Behra's Maserati. The Indianapolis 500 was included in the Formula One championship, but no F1 drivers attended. Bob Sweikert won the race. In the Drivers' Championship, Maurice Trintignant (Ferrari) was leading with 11formula_0 points, ahead of Juan Manuel Fangio (Mercedes) with 10 and Bob Sweikert (Kurtis Kraft) with 8. Sweikert would not compete in any other rounds. Rounds 4 to 6. Four days after the Monaco Grand Prix, double World Champion Alberto Ascari was tragically killed in a test session at Monza. Further burdened by financial troubles, the Lancia team was left with two cars and just one driver. Soon, all assets would be merged into the Ferrari team, but this did not stop Eugenio Castellotti from scoring his first career pole position in the Belgian Grand Prix. The Mercedes cars of Juan Manuel Fangio and Stirling Moss started beside him on the front row. Championship leader Maurice Trintignant started down in eleventh out of thirteen. At the start, Fangio and Moss quickly took the lead and never looked back. Castellotti retired on lap 16, allowing 1950 World Champion Nino Farina to finish third for Ferrari. On 11 June, the 24 Hours of Le Mans took place and many F1 drivers participated. During the race, Pierre Levegh crashed into the spectator area, killing 83 people and injuring at least 120 others. This led the FIA to postpone the French Grand Prix. However, the Dutch Grand Prix was next on the F1 championship and went on undisturbed. Mercedes managed to occupy the front row with Fangio, Moss and Karl Kling. At the start, Luigi Musso put his Maserati into second position, but was outbraved by Moss. Kling tried his best to keep up with the leading trio but, on lap 21, spun off and retired. Fangio and Moss scored another one-two finish, a minute ahead of Musso. This was the first race since the 1950 French Grand Prix that none of the cars on the podium were powered by a Ferrari engine. For the British Grand Prix, Stirling Moss scored his first career pole position in front of his home crowd. Fangio started second, Jean Behra third for Maserati. The second row was filled by two more Mercedes: Karl Kling and Piero Taruffi. Fangio had the best start, but Moss regained the lead on lap 3, his car set up with a lower top speed but better acceleration out of the corners. Behra retired on lap 10, handing the top four positions to Mercedes, with Fangio once again in front. 
A couple of laps later, Moss retook the lead, extended his advantage to ten seconds and set a new lap record. Unused to having the team leader behind him, Moss looked back on the last lap and hesitated. But Fangio hung back, two tenths behind, leaving the home hero to take the win. In the Drivers' Championship, Juan Manuel Fangio (Mercedes) led with 33 points, ahead of Stirling Moss (Mercedes) with 22 and Maurice Trintignant (Ferrari) with 11formula_0 points. After the British Grand Prix, the German, Swiss, French and Spanish Grands Prix were cancelled in the aftermath of the 1955 Le Mans disaster. This left just one race in the championship and effectively handed the title to Fangio. Round 7. The Italian Grand Prix was run on the Monza layout including a new steep banking. Nino Farina crashed in practice when his rear tyre came apart under the load of the banked turn and the heat of the sun. He escaped unhurt, but his Ferrari-run Lancia was written off, and although Sunday was substantially cooler, the other Lancia was withdrawn as a precaution. As in Zandvoort, Mercedes occupied the front row in the order of Fangio, Moss, Kling. Moss took the lead at the start, but gave way to his Argentinian team leader before the first lap was run. The fourth Mercedes of Taruffi went from ninth to fourth, the team repeating their processional run from the last race. However, Moss pitted on lap 19 for a new windscreen and subsequently retired on lap 28 when his engine cut out. Kling's gearbox broke and he retired as well, leaving the German team worried, but Fangio and Taruffi finished the race untroubled, scoring another Mercedes 1-2, ahead of Eugenio Castellotti for Ferrari. Juan Manuel Fangio (Mercedes) had collected 40 points and won his third Drivers' Championship, his second in a row. Teammate Stirling Moss was second with 23 points and Eugenio Castellotti third with 12. Mercedes withdrew from F1 after this season, making this their final race until the team's revival in 2010, their final win until the 2012 Chinese Grand Prix and their final championship title until 2014. Results and standings. World Championship of Drivers standings. Points were awarded to the top five classified finishers, with an additional point awarded for setting the fastest lap, regardless of finishing position or even classification. Only the best five results counted towards the championship. Shared drives result in shared points for each driver if they finished in a points-scoring position. If more than one driver set the same fastest lap time, the fastest lap point would be divided equally between the drivers. Numbers without parentheses are championship points; numbers in parentheses are total points scored. Points were awarded in the following system: 8 points for first place, 6 for second, 4 for third, 3 for fourth and 2 for fifth, plus 1 point for the fastest lap. Non-championship races. Other Formula One races were also held in 1955, which did not count towards the World Championship. Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\tfrac{1}{3}" } ]
https://en.wikipedia.org/wiki?curid=1140113
1140114
1954 Formula One season
8th season of FIA's Formula One motor racing &lt;templatestyles src="Motorsport season/styles.css" /&gt; 1954 Drivers' Champion: Juan Manuel Fangio Previous 1953 Next 1955 The 1954 Formula One season was the eighth season of FIA Formula One motor racing. It featured the fifth World Championship of Drivers, which was contested over nine races between 17 January and 24 October 1954. The season also included several non-championship races for Formula One cars. Juan Manuel Fangio won his second Drivers' Championship, after previously winning it in . After the first couple of races, he switched teams, going from Maserati to Mercedes-Benz, making him the only F1 driver in history to win a championship driving for more than one team in the same season. After the championship had been run under Formula Two regulations for two seasons, the maximum engine displacement was increased to 2.5 litres for 1954. This increased average power outputs by 150% and attracted several new constructors. At the same time, some F2 constructors withdrew, while others intended to compete but could not get an F1 chassis ready in time. Argentinian Onofre Marimón suffered a fatal accident during practice for the German Grand Prix. Coming over one of the steep hills, he went straight through the corner at the bottom. His Maserati hit a ditch, somersaulted and landed on top of him. It was the first fatality during an F1 championship weekend. In 1955, the movie The Racers came out, the story of which was based on the life of Rudolf Caracciola. Real-life racing footage from the 1954 F1 season was used, including in-race shots from a camera car that started behind the drivers in the Belgian. This approach would be recreated in the 1966 film Grand Prix. Teams and drivers. The following teams and drivers competed in the 1954 FIA World Championship of Drivers. The list does not include those who only contested the Indianapolis 500. Regulation changes. The maximum allowed engine displacement was increased from 2.0 to 2.5 litres for naturally-aspirated engines. Average power outputs increased by around 150%. The limit for compressed engines was set at 750 cc, as it had been since , but no constructor would build one before they were outright banned in . Championship report. Rounds 1 to 3. The championship started off with the Argentine Grand Prix. Multiple constructors intended to compete, but none of their cars were ready yet. The grid consisted of Ferraris, Maseratis and Gordinis, all of them adapting their chassis for the new regulations. 's champion Nino Farina qualified on pole position - he is the oldest F1 driver in history to start on pole - ahead of teammate José Froilán González and local hero Juan Manuel Fangio in the Maserati. At the start, González fell back to fourth, but after a remarkable recovery drive, he took the lead on lap 15. A third of the way in, a rainstorm arrived and the leader spun off. Farina pitted for a new helmet visor and third Ferrari driver Mike Hawthorn spun off as well. This left Fangio in a comfortable lead, until the track died and he fell back to third behind González and Farina. A second period of rain caused the order to switch back around, putting Fangio ahead of the two Ferraris, but when the Maserati driver pitted for new tyres, he was back in third. Ferrari's team manager Nello Ugolini protested his rivals' pit stop, claiming they had too many mechanics working on the car. 
Confident that the protest would be granted, he signalled the leading pair to bring the cars home and not fight the charging Fangio. So they did, and they finished second and third behind the home hero. But then the FIA rejected Ferrari's protest and upheld the results, granting Fangio his first home win. The Indianapolis 500 was included in the Formula One championship, but no F1 drivers attended. Bill Vukovich won the race for the second year in a row. In qualifying for the Belgian Grand Prix, Fangio broke his lap record and started on pole position, ahead of González and Farina. The Argentine was contracted by Mercedes, but since their car was not ready yet, he was loaned to his former team. González was allowed into the lead when Fangio messed up the start, but when his engine cut out on the opening lap, Farina was in front. Roberto Mieres's car burst into flames, as his fuel filler cap had been left open and fuel had leaked onto the exhaust. The Maserati driver jumped out, escaping with burns on his back, and the drivers avoided his car. Fangio got up to second place by lap 2 and took the lead on lap 3. When his helmet visor broke on lap 10, he pitted to put on his goggles, but then recovered to pass Farina for the second time, just before the Ferrari engine cut out, sending the Italian out of the race. Hawthorn's exhaust pipe split, sending fumes into the cockpit and making him feel dizzy. He pitted and collapsed over the wheel, so the team dragged him out and González took over his car. The team only found out why the Brit was unwell when González pointed it out a lap later. Fangio took a comfortable win, ahead of Maurice Trintignant (Ferrari) and Stirling Moss (Maserati). In the Drivers' Championship, Juan Manuel Fangio (Maserati/Mercedes) was in the lead with 17 points, ahead of Maurice Trintignant (Ferrari) and Bill Vukovich (Kurtis Kraft) with 8. Vukovich would not compete in any other rounds. Rounds 4 to 7. The long-awaited Mercedes team arrived for the French Grand Prix and their drivers were quickest of all from the get-go. Championship leader Juan Manuel Fangio could finally say goodbye to Maserati and was joined by Germans Karl Kling and Hans Herrmann. Fangio's seat was taken up by and champion Alberto Ascari, whose new employer Lancia did not have their cars ready yet. Teammate and mentor Luigi Villoresi was loaned to Maserati likewise. Fangio and Kling set the fastest times in qualifying, putting their silver-coloured streamlined W196s at the front of the grid. In the opening laps, González was the only one to stay with the leading pair, but his Ferraris overheated, so his focus shifted to keeping the third Mercedes of Herrmann behind. On lap 13, the Ferrari engine gave up. Teammate Mike Hawthorn retired with similar issues, before Herrmann broke the lap record but then stopped in a cloud of smoke. Fangio and Kling did their laps at a comfortable pace, most straights running side-by-side, only upping their pace for the final sprint. Coming out of the last corner, Fangio managed to take the win by just a couple of yards. Robert Manzon in a private Ferrari finished third out of just six finishers. Fangio was again at pole position for the British Grand Prix, but the Mercedes' streamlined bodywork gave them less of an advantage at the Silverstone Circuit, compared to Reims two weeks ago. The Ferraris of González and Hawthorn, and the private Maserati of Stirling Moss completed the four-wide front row. 
González took the lead at the start and created a gap of some five seconds, while Moss and Hawthorn were in a fierce fight. Rain fell and there were several accidents. Fangio went off and damaged the nose of his car, but kept putting pressure on his countryman in front, until his pace was hindered by technical trouble and he fell back to fourth. González scored a win to be proud of, ahead of Ferrari teammate Hawthorn and Onofre Marimón for Maserati, as with 10 laps to go, Moss's back axle had failed. Fangio finished fourth on a lap down. Seven drivers set the fastest lap, as it was not measured any more precise than in whole seconds, so they all received an extra formula_0 championship point. The German Grand Prix was given the honorary title of "Grand Prix of Europe". Four Mercedes cars arrived, with three of them carrying open-wheeled bodywork, the team seemingly having learned from their defeat in Britain. Practice was overshadowed by the fatal accident of Marimón, one of the more popular and younger drivers on the grid, and the Maserati works team withdrew from the race. Fangio scored his third pole position in a row, ahead of Hawthorn and Moss, but it was González who took the lead at the start. Hawthorn fell back behind the fast-starting Mercedes of Lang and Herrmann. Fangio passed his countryman going into lap 2 and Moss retired with dramatic technical failing. Hawthorn retired as well, giving way to the fourth Mercedes of Kling, who had started last. Herrmann retired with a fuel leak, but when González dropped off the pace, the other Mercedes were sitting in a dominant 1-2-3. Lang, however, spun off and Kling was putting unnecessary pressure on Fangio. Hawthorn took over González's car, before Kling pitted a broken rear suspension. Fangio upheld Mercedes's honour with a win, ahead of the two Ferraris of Hawthorn/González and Maurice Trintignant, with Kling in fourth. Fangio had the opportunity to clinch the championship in the Swiss Grand Prix. All he had to do was prevent González from winning and his lead in points would be large enough. González started on pole but immediately lost the lead to Fangio. Moss, who had been promoted to the Maserati works team, started third and was eager to put the Ferrari another place down. Hawthorn had started down in sixth but was lapping two seconds faster than the leader, and managed to overtake both González and Moss. In quick succession, Moss, Hawthorn, Trintignant and Kling retired, removing all excitement from the race. Fangio led González home by almost a minute, while Herrmann finished a lap down. In the Drivers' Championship, Juan Manuel Fangio (Maserati/Mercedes) stood on 42 points and he had done enough to secure his second title. José Froilán González (Ferrari) was currently in second with 23formula_1 points and Maurice Trintignant (Ferrari) third with 15. Rounds 8 and 9. Even with the championship in the bag, Juan Manuel Fangio showed no signs of slowing down going into the Italian Grand Prix. He scored another pole position for Mercedes, ahead of Alberto Ascari, now with Ferrari, since Lancia were still not ready, and Stirling Moss for Maserati. At the start, Fangio lost the lead to fourth-starting teammate Karl Kling and the "Silver Arrows" with their streamlined bodywork looked set to repeat their feat in Reims. However, Kling made a slight mistake on lap 5, bringing him down to fifth, and José Froilán González, second in the championship, managed to get alongside Fangio, before Ascari went passed all of them. 
González retired, so the old rivals Fangio and Ascari were free to fight. And so they did for more than twenty laps, until Maserati drivers Moss and Luigi Villoresi joined the scrap. The latter had overworked his clutch and soon dropped back, but Moss took the lead. Ascari suddenly retired with engine failure, which gave Moss the opportunity to stretch his lead, until on lap 68, his oil pressure dropped and he needed to pit. The oil was topped off, but on the next lap, it was streaming from the bottom of the car and he needed to retire. His teammate Sergio Mantovani had been fighting for second place with Mike Hawthorn, but that Maserati ran into trouble as well. Fangio won the race, just like last year, ahead of Hawthorn and Umberto Maglioli, who had taken over the car from González. The season closed with the Spanish Grand Prix and Lancia joined the grid with their D50s. This meant that Ascari could finally try the car and he did so with success, scoring his first pole position of the year. The front row was completed by Fangio (Mercedes), Hawthorn (Ferrari) and Harry Schell (private Maserati). The latter took the lead at the start, ahead of Hawthorn and Ascari, while Fangio fell back to sixth. Ascari was in front on lap 3 and was drawing away, until on lap 9, his clutch gave out. Teammate Villoresi had already stopped on the first lap, so both Lancias had been quick but brittle. Maurice Trintignant joined the pack and took the lead. Moss joined as well, but before long retired with a failing oil pump. Schell spun off while leading on lap 29, and then retired with a broken gearbox, before Trintignant retired from the lead with similar issues. Hawthorn could relax and he brought his Ferrari home to win, ahead of Maserati's Luigi Musso, who had overtaken Fangio's Mercedes six laps from the end, to make it three different constructors on the podium. In the Drivers' Championship, Juan Manuel Fangio (Maserati/Mercedes) finished with 42 points and won his second title, ahead of José Froilán González (Ferrari) with 25formula_0 points and Mike Hawthorn (Ferrari) with 24formula_1. Results and standings. World Championship of Drivers standings. Points were awarded to the top five classified finishers, with an additional point awarded for setting the fastest lap, regardless of finishing position or even classification. Only the best five results counted towards the championship. Shared drives resulted in half points for each driver if they finished in a points-scoring position. If more than one driver set the same fastest lap time, the fastest lap point would be divided equally between the drivers. Numbers without parentheses are championship points; numbers in parentheses are total points scored. Points were awarded in the following system: 8 points for first place, 6 for second, 4 for third, 3 for fourth and 2 for fifth, plus 1 point for the fastest lap. Non-championship races. The following is a summary of the races for Formula One cars staged during the 1954 season that did not count towards the 1954 World Championship of Drivers. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\tfrac{1}{7}" }, { "math_id": 1, "text": "\\tfrac{9}{14}" } ]
https://en.wikipedia.org/wiki?curid=1140114
11403316
Compressed sensing
Signal processing technique Compressed sensing (also known as compressive sensing, compressive sampling, or sparse sampling) is a signal processing technique for efficiently acquiring and reconstructing a signal, by finding solutions to underdetermined linear systems. This is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Nyquist–Shannon sampling theorem. There are two conditions under which recovery is possible. The first one is sparsity, which requires the signal to be sparse in some domain. The second one is incoherence, which is applied through the isometric property, which is sufficient for sparse signals. Compressed sensing has applications in, for example, MRI where the incoherence condition is typically satisfied. Overview. A common goal of the engineering field of signal processing is to reconstruct a signal from a series of sampling measurements. In general, this task is impossible because there is no way to reconstruct a signal during the times that the signal is not measured. Nevertheless, with prior knowledge or assumptions about the signal, it turns out to be possible to perfectly reconstruct a signal from a series of measurements (acquiring this series of measurements is called sampling). Over time, engineers have improved their understanding of which assumptions are practical and how they can be generalized. An early breakthrough in signal processing was the Nyquist–Shannon sampling theorem. It states that if a real signal's highest frequency is less than half of the sampling rate, then the signal can be reconstructed perfectly by means of sinc interpolation. The main idea is that with prior knowledge about constraints on the signal's frequencies, fewer samples are needed to reconstruct the signal. Around 2004, Emmanuel Candès, Justin Romberg, Terence Tao, and David Donoho proved that given knowledge about a signal's sparsity, the signal may be reconstructed with even fewer samples than the sampling theorem requires. This idea is the basis of compressed sensing. History. Compressed sensing relies on formula_0 techniques, which several other scientific fields have used historically. In statistics, the least squares method was complemented by the formula_0-norm, which was introduced by Laplace. Following the introduction of linear programming and Dantzig's simplex algorithm, the formula_0-norm was used in computational statistics. In statistical theory, the formula_0-norm was used by George W. Brown and later writers on median-unbiased estimators. It was used by Peter J. Huber and others working on robust statistics. The formula_0-norm was also used in signal processing, for example, in the 1970s, when seismologists constructed images of reflective layers within the earth based on data that did not seem to satisfy the Nyquist–Shannon criterion. It was used in matching pursuit in 1993, the LASSO estimator by Robert Tibshirani in 1996 and basis pursuit in 1998. At first glance, compressed sensing might seem to violate the sampling theorem, because compressed sensing depends on the sparsity of the signal in question and not its highest frequency. This is a misconception, because the sampling theorem guarantees perfect reconstruction given sufficient, not necessary, conditions. A sampling method fundamentally different from classical fixed-rate sampling cannot "violate" the sampling theorem. 
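To make the ℓ1 recovery idea concrete, here is a small numerical sketch in plain NumPy. It is an illustration rather than anything taken from the sources above: a sparse vector is measured with a made-up random Gaussian matrix using far fewer measurements than unknowns, and the minimum-energy (pseudoinverse) solution is compared with an ℓ1-regularized estimate computed by iterative soft-thresholding (ISTA), a basic solver for the LASSO/basis-pursuit-denoising problems mentioned above. The sizes, step size and regularization weight are arbitrary demonstration values.
```python
import numpy as np

rng = np.random.default_rng(0)

n, m, k = 256, 64, 8          # signal length, number of measurements, sparsity
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x_true                              # m << n compressive measurements

# Minimum-energy (least-squares) solution: generally dense and far from x_true.
x_l2 = np.linalg.pinv(A) @ y

def ista(A, y, lam=1e-3, n_iter=2000):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L       # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

x_l1 = ista(A, y)

print("l2-solution error:", np.linalg.norm(x_l2 - x_true))
print("l1-solution error:", np.linalg.norm(x_l1 - x_true))
```
With these settings the ℓ1 estimate typically recovers the support and amplitudes of the sparse vector closely, while the least-squares solution spreads energy over all coordinates.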
Sparse signals with high frequency components can be highly under-sampled using compressed sensing compared to classical fixed-rate sampling. Method. Underdetermined linear system. An underdetermined system of linear equations has more unknowns than equations and generally has an infinite number of solutions. The figure below shows such an equation system formula_1 where we want to find a solution for formula_2. In order to choose a solution to such a system, one must impose extra constraints or conditions (such as smoothness) as appropriate. In compressed sensing, one adds the constraint of sparsity, allowing only solutions which have a small number of nonzero coefficients. Not all underdetermined systems of linear equations have a sparse solution. However, if there is a unique sparse solution to the underdetermined system, then the compressed sensing framework allows the recovery of that solution. Solution / reconstruction method. Compressed sensing takes advantage of the redundancy in many interesting signals—they are not pure noise. In particular, many signals are sparse, that is, they contain many coefficients close to or equal to zero, when represented in some domain. This is the same insight used in many forms of lossy compression. Compressed sensing typically starts with taking a weighted linear combination of samples also called compressive measurements in a basis different from the basis in which the signal is known to be sparse. The results found by Emmanuel Candès, Justin Romberg, Terence Tao, and David Donoho showed that the number of these compressive measurements can be small and still contain nearly all the useful information. Therefore, the task of converting the image back into the intended domain involves solving an underdetermined matrix equation since the number of compressive measurements taken is smaller than the number of pixels in the full image. However, adding the constraint that the initial signal is sparse enables one to solve this underdetermined system of linear equations. The least-squares solution to such problems is to minimize the formula_3 norm—that is, minimize the amount of energy in the system. This is usually simple mathematically (involving only a matrix multiplication by the pseudo-inverse of the basis sampled in). However, this leads to poor results for many practical applications, for which the unknown coefficients have nonzero energy. To enforce the sparsity constraint when solving for the underdetermined system of linear equations, one can minimize the number of nonzero components of the solution. The function counting the number of non-zero components of a vector was called the formula_4 "norm" by David Donoho. Candès et al. proved that for many problems it is probable that the formula_0 norm is equivalent to the formula_4 norm, in a technical sense: This equivalence result allows one to solve the formula_0 problem, which is easier than the formula_4 problem. Finding the candidate with the smallest formula_0 norm can be expressed relatively easily as a linear program, for which efficient solution methods already exist. When measurements may contain a finite amount of noise, basis pursuit denoising is preferred over linear programming, since it preserves sparsity in the face of noise and can be solved faster than an exact linear program. Total variation-based CS reconstruction. Motivation and applications. Role of TV regularization. 
Total variation can be seen as a non-negative real-valued functional defined on the space of real-valued functions (for the case of functions of one variable) or on the space of integrable functions (for the case of functions of several variables). For signals, especially, total variation refers to the integral of the absolute gradient of the signal. In signal and image reconstruction, it is applied as total variation regularization where the underlying principle is that signals with excessive details have high total variation and that removing these details, while retaining important information such as edges, would reduce the total variation of the signal and make the signal subject closer to the original signal in the problem. For the purpose of signal and image reconstruction, formula_5 minimization models are used. Other approaches also include the least-squares as has been discussed before in this article. These methods are extremely slow and return a not-so-perfect reconstruction of the signal. The current CS Regularization models attempt to address this problem by incorporating sparsity priors of the original image, one of which is the total variation (TV). Conventional TV approaches are designed to give piece-wise constant solutions. Some of these include (as discussed ahead) – constrained formula_6-minimization which uses an iterative scheme. This method, though fast, subsequently leads to over-smoothing of edges resulting in blurred image edges. TV methods with iterative re-weighting have been implemented to reduce the influence of large gradient value magnitudes in the images. This has been used in computed tomography (CT) reconstruction as a method known as edge-preserving total variation. However, as gradient magnitudes are used for estimation of relative penalty weights between the data fidelity and regularization terms, this method is not robust to noise and artifacts and accurate enough for CS image/signal reconstruction and, therefore, fails to preserve smaller structures. Recent progress on this problem involves using an iteratively directional TV refinement for CS reconstruction. This method would have 2 stages: the first stage would estimate and refine the initial orientation field – which is defined as a noisy point-wise initial estimate, through edge-detection, of the given image. In the second stage, the CS reconstruction model is presented by utilizing directional TV regularizer. More details about these TV-based approaches – iteratively reweighted l1 minimization, edge-preserving TV and iterative model using directional orientation field and TV- are provided below. Existing approaches. Iteratively reweighted "ℓ"1 minimization. In the CS reconstruction models using constrained formula_5 minimization, larger coefficients are penalized heavily in the formula_5 norm. It was proposed to have a weighted formulation of formula_5 minimization designed to more democratically penalize nonzero coefficients. An iterative algorithm is used for constructing the appropriate weights. Each iteration requires solving one formula_5 minimization problem by finding the local minimum of a concave penalty function that more closely resembles the formula_7 norm. An additional parameter, usually to avoid any sharp transitions in the penalty function curve, is introduced into the iterative equation to ensure stability and so that a zero estimate in one iteration does not necessarily lead to a zero estimate in the next iteration. 
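As a schematic of this reweighting loop (shown here in the simpler sparse-synthesis setting rather than on image gradients, and with an illustrative weight rule and stabilization parameter rather than the exact scheme of any particular method), one outer iteration solves a weighted ℓ1 problem and then recomputes the weights from the current solution:
```python
import numpy as np

def weighted_ista(A, y, w, lam=1e-3, n_iter=1000):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||w * x||_1 (elementwise weights)."""
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
    return x

def reweighted_l1(A, y, eps=1e-2, outer_iters=5):
    w = np.ones(A.shape[1])            # first pass is the ordinary l1 problem
    for _ in range(outer_iters):
        x = weighted_ista(A, y, w)
        w = 1.0 / (np.abs(x) + eps)    # eps keeps a zero estimate from being locked in
    return x
```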
The method essentially involves using the current solution for computing the weights to be used in the next iteration. Advantages and disadvantages. Early iterations may find inaccurate sample estimates, however this method will down-sample these at a later stage to give more weight to the smaller non-zero signal estimates. One of the disadvantages is the need for defining a valid starting point as a global minimum might not be obtained every time due to the concavity of the function. Another disadvantage is that this method tends to uniformly penalize the image gradient irrespective of the underlying image structures. This causes over-smoothing of edges, especially those of low contrast regions, subsequently leading to loss of low contrast information. The advantages of this method include: reduction of the sampling rate for sparse signals; reconstruction of the image while being robust to the removal of noise and other artifacts; and use of very few iterations. This can also help in recovering images with sparse gradients. In the figure shown below, P1 refers to the first-step of the iterative reconstruction process, of the projection matrix P of the fan-beam geometry, which is constrained by the data fidelity term. This may contain noise and artifacts as no regularization is performed. The minimization of P1 is solved through the conjugate gradient least squares method. P2 refers to the second step of the iterative reconstruction process wherein it utilizes the edge-preserving total variation regularization term to remove noise and artifacts, and thus improve the quality of the reconstructed image/signal. The minimization of P2 is done through a simple gradient descent method. Convergence is determined by testing, after each iteration, for image positivity, by checking if formula_8 for the case when formula_9 (Note that formula_10 refers to the different x-ray linear attenuation coefficients at different voxels of the patient image). Edge-preserving total variation (TV)-based compressed sensing. This is an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low dose CT through low current levels (milliampere). In order to reduce the imaging dose, one of the approaches used is to reduce the number of x-ray projections acquired by the scanner detectors. However, this insufficient projection data which is used to reconstruct the CT image can cause streaking artifacts. Furthermore, using these insufficient projections in standard TV algorithms end up making the problem under-determined and thus leading to infinitely many possible solutions. In this method, an additional penalty weighted function is assigned to the original TV norm. This allows for easier detection of sharp discontinuities in intensity in the images and thereby adapt the weight to store the recovered edge information during the process of signal/image reconstruction. The parameter formula_11 controls the amount of smoothing applied to the pixels at the edges to differentiate them from the non-edge pixels. The value of formula_11 is changed adaptively based on the values of the histogram of the gradient magnitude so that a certain percentage of pixels have gradient values larger than formula_11. The edge-preserving total variation term, thus, becomes sparser and this speeds up the implementation. A two-step iteration process known as forward–backward splitting algorithm is used. 
The optimization problem is split into two sub-problems which are then solved with the conjugate gradient least squares method and the simple gradient descent method respectively. The method is stopped when the desired convergence has been achieved or if the maximum number of iterations is reached. Advantages and disadvantages. Some of the disadvantages of this method are the absence of smaller structures in the reconstructed image and degradation of image resolution. This edge preserving TV algorithm, however, requires fewer iterations than the conventional TV algorithm. Analyzing the horizontal and vertical intensity profiles of the reconstructed images, it can be seen that there are sharp jumps at edge points and negligible, minor fluctuation at non-edge points. Thus, this method leads to low relative error and higher correlation as compared to the TV method. It also effectively suppresses and removes any form of image noise and image artifacts such as streaking. Iterative model using a directional orientation field and directional total variation. To prevent over-smoothing of edges and texture details and to obtain a reconstructed CS image which is accurate and robust to noise and artifacts, this method is used. First, an initial estimate of the noisy point-wise orientation field of the image formula_12, formula_13, is obtained. This noisy orientation field is defined so that it can be refined at a later stage to reduce the noise influences in orientation field estimation. A coarse orientation field estimation is then introduced based on structure tensor, which is formulated as: formula_14. Here, formula_15 refers to the structure tensor related with the image pixel point (i,j) having standard deviation formula_16. formula_17 refers to the Gaussian kernel formula_18 with standard deviation formula_16. formula_11 refers to the manually defined parameter for the image formula_12 below which the edge detection is insensitive to noise. formula_19 refers to the gradient of the image formula_12 and formula_20 refers to the tensor product obtained by using this gradient. The structure tensor obtained is convolved with a Gaussian kernel formula_17 to improve the accuracy of the orientation estimate with formula_11 being set to high values to account for the unknown noise levels. For every pixel (i,j) in the image, the structure tensor J is a symmetric and positive semi-definite matrix. Convolving all the pixels in the image with formula_17, gives orthonormal eigen vectors ω and υ of the formula_21 matrix. ω points in the direction of the dominant orientation having the largest contrast and υ points in the direction of the structure orientation having the smallest contrast. The orientation field coarse initial estimation formula_13 is defined as formula_13 = υ. This estimate is accurate at strong edges. However, at weak edges or on regions with noise, its reliability decreases. To overcome this drawback, a refined orientation model is defined in which the data term reduces the effect of noise and improves accuracy while the second penalty term with the L2-norm is a fidelity term which ensures accuracy of initial coarse estimation. This orientation field is introduced into the directional total variation optimization model for CS reconstruction through the equation: formula_22. formula_23 is the objective signal which needs to be recovered. Y is the corresponding measurement vector, d is the iterative refined orientation field and formula_24 is the CS measurement matrix. 
This method undergoes a few iterations ultimately leading to convergence.formula_13 is the orientation field approximate estimation of the reconstructed image formula_25 from the previous iteration (in order to check for convergence and the subsequent optical performance, the previous iteration is used). For the two vector fields represented by formula_23 and formula_26, formula_27 refers to the multiplication of respective horizontal and vertical vector elements of formula_23 and formula_26 followed by their subsequent addition. These equations are reduced to a series of convex minimization problems which are then solved with a combination of variable splitting and augmented Lagrangian (FFT-based fast solver with a closed form solution) methods. It (Augmented Lagrangian) is considered equivalent to the split Bregman iteration which ensures convergence of this method. The orientation field, d is defined as being equal to formula_28, where formula_29 define the horizontal and vertical estimates of formula_26. The Augmented Lagrangian method for the orientation field, formula_30, involves initializing formula_31 and then finding the approximate minimizer of formula_32 with respect to these variables. The Lagrangian multipliers are then updated and the iterative process is stopped when convergence is achieved. For the iterative directional total variation refinement model, the augmented lagrangian method involves initializing formula_33. Here, formula_34 are newly introduced variables where formula_35 = formula_36, formula_37 = formula_38, formula_39 = formula_40, and formula_41 = formula_42. formula_43 are the Lagrangian multipliers for formula_34. For each iteration, the approximate minimizer of formula_44 with respect to variables (formula_45) is calculated. And as in the field refinement model, the lagrangian multipliers are updated and the iterative process is stopped when convergence is achieved. For the orientation field refinement model, the Lagrangian multipliers are updated in the iterative process as follows: formula_46 formula_47 For the iterative directional total variation refinement model, the Lagrangian multipliers are updated as follows: formula_48 formula_49 Here, formula_50 are positive constants. Advantages and disadvantages. Based on peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) metrics and known ground-truth images for testing performance, it is concluded that iterative directional total variation has a better reconstructed performance than the non-iterative methods in preserving edge and texture areas. The orientation field refinement model plays a major role in this improvement in performance as it increases the number of directionless pixels in the flat area while enhancing the orientation field consistency in the regions with edges. Applications. The field of compressive sensing is related to several topics in signal processing and computational mathematics, such as underdetermined linear systems, group testing, heavy hitters, sparse coding, multiplexing, sparse sampling, and finite rate of innovation. Its broad scope and generality has enabled several innovative CS-enhanced approaches in signal processing and compression, solution of inverse problems, design of radiating systems, radar and through-the-wall imaging, and antenna characterization. Imaging techniques having a strong affinity with compressive sensing include coded aperture and computational photography. 
Conventional CS reconstruction uses sparse signals (usually sampled at a rate less than the Nyquist sampling rate) for reconstruction through constrained formula_51 minimization. One of the earliest applications of such an approach was in reflection seismology which used sparse reflected signals from band-limited data for tracking changes between sub-surface layers. When the LASSO model came into prominence in the 1990s as a statistical method for selection of sparse models, this method was further used in computational harmonic analysis for sparse signal representation from over-complete dictionaries. Some of the other applications include incoherent sampling of radar pulses. The work by "Boyd et al." has applied the LASSO model- for selection of sparse models- towards analog to digital converters (the current ones use a sampling rate higher than the Nyquist rate along with the quantized Shannon representation). This would involve a parallel architecture in which the polarity of the analog signal changes at a high rate followed by digitizing the integral at the end of each time-interval to obtain the converted digital signal. Photography. Compressed sensing has been used in an experimental mobile phone camera sensor. The approach allows a reduction in image acquisition energy per image by as much as a factor of 15 at the cost of complex decompression algorithms; the computation may require an off-device implementation. Compressed sensing is used in single-pixel cameras from Rice University. Bell Labs employed the technique in a lensless single-pixel camera that takes stills using repeated snapshots of randomly chosen apertures from a grid. Image quality improves with the number of snapshots, and generally requires a small fraction of the data of conventional imaging, while eliminating lens/focus-related aberrations. Holography. Compressed sensing can be used to improve image reconstruction in holography by increasing the number of voxels one can infer from a single hologram. It is also used for image retrieval from undersampled measurements in optical and millimeter-wave holography. Facial recognition. Compressed sensing has been used in facial recognition applications. Magnetic resonance imaging. Compressed sensing has been used to shorten magnetic resonance imaging scanning sessions on conventional hardware. Reconstruction methods include Compressed sensing addresses the issue of high scan time by enabling faster acquisition by measuring fewer Fourier coefficients. This produces a high-quality image with relatively lower scan time. Another application (also discussed ahead) is for CT reconstruction with fewer X-ray projections. Compressed sensing, in this case, removes the high spatial gradient parts – mainly, image noise and artifacts. This holds tremendous potential as one can obtain high-resolution CT images at low radiation doses (through lower current-mA settings). Network tomography. Compressed sensing has showed outstanding results in the application of network tomography to network management. Network delay estimation and network congestion detection can both be modeled as underdetermined systems of linear equations where the coefficient matrix is the network routing matrix. Moreover, in the Internet, network routing matrices usually satisfy the criterion for using compressed sensing. Shortwave-infrared cameras. In 2013 one company announced shortwave-infrared cameras which utilize compressed sensing. 
These cameras have light sensitivity from 0.9 μm to 1.7 μm, wavelengths invisible to the human eye. Aperture synthesis astronomy. In radio astronomy and optical astronomical interferometry, full coverage of the Fourier plane is usually absent and phase information is not obtained in most hardware configurations. In order to obtain aperture synthesis images, various compressed sensing algorithms are employed. The Högbom CLEAN algorithm has been in use since 1974 for the reconstruction of images obtained from radio interferometers, which is similar to the matching pursuit algorithm mentioned above. Transmission electron microscopy. Compressed sensing combined with a moving aperture has been used to increase the acquisition rate of images in a transmission electron microscope. In scanning mode, compressive sensing combined with random scanning of the electron beam has enabled both faster acquisition and less electron dose, which allows for imaging of electron beam sensitive materials. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "L^1" }, { "math_id": 1, "text": " \\mathbf{y}=D\\mathbf{x} " }, { "math_id": 2, "text": " \\mathbf{x} " }, { "math_id": 3, "text": "L^2" }, { "math_id": 4, "text": "L^0" }, { "math_id": 5, "text": "\\ell_1" }, { "math_id": 6, "text": "\\ell_1" }, { "math_id": 7, "text": "\\ell_0" }, { "math_id": 8, "text": "f^{k-1} = 0" }, { "math_id": 9, "text": "f^{k-1} < 0" }, { "math_id": 10, "text": "f" }, { "math_id": 11, "text": "\\sigma" }, { "math_id": 12, "text": "I" }, { "math_id": 13, "text": "\\hat{d}" }, { "math_id": 14, "text": " J_\\rho(\\nabla I_\\sigma) = G_\\rho * (\\nabla I_\\sigma \\otimes \\nabla I_\\sigma) = \\begin{pmatrix}J_{11} & J_{12}\\\\J_{12} & J_{22}\\end{pmatrix}" }, { "math_id": 15, "text": " J_\\rho " }, { "math_id": 16, "text": "\\rho" }, { "math_id": 17, "text": "G" }, { "math_id": 18, "text": "(0, \\rho ^2)" }, { "math_id": 19, "text": "\\nabla I_\\sigma" }, { "math_id": 20, "text": "(\\nabla I_\\sigma \\otimes \\nabla I_\\sigma)" }, { "math_id": 21, "text": "J" }, { "math_id": 22, "text": "\\min_\\Chi\\lVert \\nabla \\Chi \\bullet d \\rVert _1 + \\frac{\\lambda}{2}\\ \\lVert Y - \\Phi\\Chi \\rVert ^2_2" }, { "math_id": 23, "text": "\\Chi" }, { "math_id": 24, "text": "\\Phi" }, { "math_id": 25, "text": "X^{k-1}" }, { "math_id": 26, "text": "d" }, { "math_id": 27, "text": "\\Chi \\bullet d" }, { "math_id": 28, "text": "(d_h, d_v)" }, { "math_id": 29, "text": "d_h, d_v" }, { "math_id": 30, "text": "\\min_\\Chi\\lVert \\nabla \\Chi \\bullet d \\rVert _1 + \\frac{\\lambda}{2}\\ \\lVert Y - \\Phi\\Chi \\rVert^2_2" }, { "math_id": 31, "text": "d_h, d_v, H, V" }, { "math_id": 32, "text": "L_1" }, { "math_id": 33, "text": "\\Chi, P, Q, \\lambda_P, \\lambda_Q" }, { "math_id": 34, "text": "H, V, P, Q" }, { "math_id": 35, "text": "H" }, { "math_id": 36, "text": "\\nabla d_{h}" }, { "math_id": 37, "text": "V" }, { "math_id": 38, "text": "\\nabla d_v" }, { "math_id": 39, "text": "P" }, { "math_id": 40, "text": "\\nabla \\Chi" }, { "math_id": 41, "text": "Q" }, { "math_id": 42, "text": "P \\bullet d" }, { "math_id": 43, "text": "\\lambda_H, \\lambda_V, \\lambda_P, \\lambda_Q" }, { "math_id": 44, "text": "L_2" }, { "math_id": 45, "text": "\\Chi, P, Q" }, { "math_id": 46, "text": "(\\lambda_H)^k = (\\lambda_H)^{k-1} + \\gamma_H(H^k - \\nabla (d_h)^k)" }, { "math_id": 47, "text": "(\\lambda_V)^k = (\\lambda_V)^{k-1} + \\gamma_V(V^k - \\nabla (d_v)^k)" }, { "math_id": 48, "text": "(\\lambda_P)^k = (\\lambda_P)^{k-1} + \\gamma_P P^k - \\nabla (\\Chi)^k)" }, { "math_id": 49, "text": "(\\lambda_Q)^k = (\\lambda_Q)^{k-1} + \\gamma_Q(Q^k - P^k \\bullet d)" }, { "math_id": 50, "text": "\\gamma_H, \\gamma_V, \\gamma_P, \\gamma_Q" }, { "math_id": 51, "text": "l_{1}" } ]
https://en.wikipedia.org/wiki?curid=11403316
11405691
Electrical conductivity meter
Measuring device An electrical conductivity meter (EC meter) measures the electrical conductivity in a solution. It has multiple applications in research and engineering, with common usage in hydroponics, aquaculture, aquaponics, and freshwater systems to monitor the amount of nutrients, salts or impurities in the water. Principle. Common laboratory conductivity meters employ a potentiometric method and four electrodes. Often, the electrodes are cylindrical and arranged concentrically. The electrodes are usually made of platinum metal. An alternating current is applied to the outer pair of the electrodes. The potential between the inner pair is measured. Conductivity could in principle be determined using the distance between the electrodes and their surface area using Ohm's law but generally, for accuracy, a calibration is employed using electrolytes of well-known conductivity. Industrial conductivity probes often employ an inductive method, which has the advantage that the fluid does not wet the electrical parts of the sensor. Here, two inductively-coupled coils are used. One is the driving coil producing a magnetic field and it is supplied with accurately-known voltage. The other forms a secondary coil of a transformer. The liquid passing through a channel in the sensor forms one turn in the secondary winding of the transformer. The induced current is the output of the sensor. Another way is to use four-electrode conductivity sensors that are made from corrosion-resistant materials. A benefit of four-electrode conductivity sensors compared to inductive sensors is scaling compensation and the ability to measure low (below 100 μS/cm) conductivities (a feature especially important when measuring near-100% hydrofluoric acid). Temperature dependence. The conductivity of a solution is highly temperature dependent, so it is important either to use a temperature compensated instrument, or to calibrate the instrument at the same temperature as the solution being measured. Unlike metals, the conductivity of common electrolytes typically increases with increasing temperature. Over a limited temperature range, the way temperature affects the conductivity "of a solution" can be modeled linearly using the following formula: formula_0 where "T" is the temperature of the sample, "Tcal" is the calibration temperature, "σT" is the electrical conductivity at the temperature "T", "σTcal" is the electrical conductivity at the calibration temperature "Tcal", "α" is the temperature compensation gradient of the solution. The temperature compensation gradient for most naturally occurring samples of water is about 2%/°C; however it can range between 1 and 3%/°C. The compensation gradients for some common water solutions are listed in the table below. Conductivity measurement applications. Conductivity measurement is a versatile tool in process control. The measurement is simple and fast, and most advanced sensors require only a little maintenance. The measured conductivity reading can be used to make various assumptions on what is happening in the process. In some cases it is possible to develop a model to calculate the concentration of the liquid. Concentration of pure liquids can be calculated when the conductivity and temperature is measured. The preset curves for various acids and bases are commercially available. For example, one can measure the concentration of high purity hydrofluoric acid using conductivity-based concentration measurement [Zhejiang Quhua Fluorchemical, China Valmet Concentration 3300]. 
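As a small illustration of the linear temperature-compensation formula given above, the following sketch refers a raw conductivity reading back to the calibration temperature. The 25 °C reference and the 2 %/°C gradient are just the typical values quoted earlier, and the example numbers are invented.
```python
def compensate_conductivity(sigma_t, t, t_cal=25.0, alpha=0.02):
    """Return the conductivity referred to the calibration temperature t_cal.

    Inverts sigma_T = sigma_Tcal * (1 + alpha * (T - T_cal)) for sigma_Tcal.
    sigma_t : measured conductivity at temperature t (e.g. in uS/cm)
    alpha   : temperature compensation gradient per degree C (about 2 %/C is typical)
    """
    return sigma_t / (1.0 + alpha * (t - t_cal))

# Example: a reading of 1500 uS/cm taken at 31 C, referred back to 25 C.
print(compensate_conductivity(1500.0, 31.0))   # about 1339 uS/cm
```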
A benefit of conductivity- and temperature-based concentration measurement is the superior speed of inline measurement compared to an on-line analyzer. Conductivity-based concentration measurement has limitations. The concentration-conductivity dependence of most acids and bases is not linear. Conductivity-based measurement cannot determine on which side of the peak the measurement is, and therefore the measurement is only possible on a linear section of the curve. Kraft pulp mills use conductivity-based concentration measurement to control alkali additions to various stages of the cook. Conductivity measurement will not determine the specific amount of alkali components, but it is a good indication on the amount of effective alkali (NaOH + &lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2 Na2S as NaOH or Na2O) or active alkali (NaOH + Na2S as NaOH or Na2O) in the cooking liquor. The composition of the liquor varies between different stages of the cook. Therefore, it is necessary to develop a specific curve for each measurement point or to use commercially available products. The high pressure and temperature of cooking process, combined with high concentration of alkali components, put a heavy strain on conductivity sensors that are installed in process. The scaling on the electrodes needs to be taken into account, otherwise the conductivity measurement drifts, requiring increased calibration and maintenance. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\sigma_T = {\\sigma_{T_{cal}}[1 + \\alpha (T - T_{cal})] }" } ]
https://en.wikipedia.org/wiki?curid=11405691
1140981
Racks and quandles
Sets with binary operations analogous to the Reidemeister moves used on knot diagrams In mathematics, racks and quandles are sets with binary operations satisfying axioms analogous to the Reidemeister moves used to manipulate knot diagrams. While mainly used to obtain invariants of knots, they can be viewed as algebraic constructions in their own right. In particular, the definition of a quandle axiomatizes the properties of conjugation in a group. History. In 1943, Mituhisa Takasaki (高崎光久) introduced an algebraic structure which he called a "Kei" (圭), which would later come to be known as an involutive quandle. His motivation was to find a nonassociative algebraic structure to capture the notion of a reflection in the context of finite geometry. The idea was rediscovered and generalized in an unpublished 1959 correspondence between John Conway and Gavin Wraith, who at the time were undergraduate students at the University of Cambridge. It is here that the modern definitions of quandles and of racks first appear. Wraith had become interested in these structures (which he initially dubbed sequentials) while at school. Conway renamed them wracks, partly as a pun on his colleague's name, and partly because they arise as the remnants (or 'wrack and ruin') of a group when one discards the multiplicative structure and considers only the conjugation structure. The spelling 'rack' has now become prevalent. These constructs surfaced again in the 1980s: in a 1982 paper by David Joyce (where the term quandle, an arbitrary nonsense word, was coined), in a 1982 paper by (under the name distributive groupoids) and in a 1986 conference paper by Egbert Brieskorn (where they were called automorphic sets). A detailed overview of racks and their applications in knot theory may be found in the paper by Colin Rourke and Roger Fenn. Racks. A rack may be defined as a set formula_0 with a binary operation formula_1 such that for every formula_2 the self-distributive law holds: formula_3 and for every formula_4 there exists a unique formula_5 such that formula_6 This definition, while terse and commonly used, is suboptimal for certain purposes because it contains an existential quantifier which is not really necessary. To avoid this, we may write the unique formula_5 such that formula_7 as formula_8 We then have formula_9 and thus formula_10 and formula_11 Using this idea, a rack may be equivalently defined as a set formula_0 with two binary operations formula_12 and formula_13 such that for all formula_14 It is convenient to say that the element formula_19 is acting from the left in the expression formula_20 and acting from the right in the expression formula_8 The third and fourth rack axioms then say that these left and right actions are inverses of each other. Using this, we can eliminate either one of these actions from the definition of rack. If we eliminate the right action and keep the left one, we obtain the terse definition given initially. Many different conventions are used in the literature on racks and quandles. For example, many authors prefer to work with just the "right" action. Furthermore, the use of the symbols formula_1 and formula_13 is by no means universal: many authors use exponential notation formula_21 and formula_22 while many others write formula_23 Yet another equivalent definition of a rack is that it is a set where each element acts on the left and right as automorphisms of the rack, with the left action being the inverse of the right one. 
In this definition, the fact that each element acts as automorphisms encodes the left and right self-distributivity laws, and also these laws: formula_24 which are consequences of the definition(s) given earlier. Quandles. A quandle is defined as an idempotent rack, formula_25 such that for all formula_26 formula_27 or equivalently formula_28 Examples and applications. Every group gives a quandle where the operations come from conjugation: formula_29 In fact, every equational law satisfied by conjugation in a group follows from the quandle axioms. So, one can think of a quandle as what is left of a group when we forget multiplication, the identity, and inverses, and only remember the operation of conjugation. Every tame knot in three-dimensional Euclidean space has a 'fundamental quandle'. To define this, one can note that the fundamental group of the knot complement, or knot group, has a presentation (the Wirtinger presentation) in which the relations only involve conjugation. So, this presentation can also be used as a presentation of a quandle. The fundamental quandle is a very powerful invariant of knots. In particular, if two knots have isomorphic fundamental quandles then there is a homeomorphism of three-dimensional Euclidean space, which may be orientation reversing, taking one knot to the other. Less powerful but more easily computable invariants of knots may be obtained by counting the homomorphisms from the knot quandle to a fixed quandle formula_30 Since the Wirtinger presentation has one generator for each strand in a knot diagram, these invariants can be computed by counting ways of labelling each strand by an element of formula_25 subject to certain constraints. More sophisticated invariants of this sort can be constructed with the help of quandle cohomology. The &lt;templatestyles src="Template:Visible anchor/styles.css" /&gt;Alexander quandles are also important, since they can be used to compute the Alexander polynomial of a knot. Let formula_31 be a module over the ring formula_32 of Laurent polynomials in one variable. Then the Alexander quandle is formula_31 made into a quandle with the left action given by formula_33 Racks are a useful generalization of quandles in topology, since while quandles can represent knots on a round linear object (such as rope or a thread), racks can represent ribbons, which may be twisted as well as knotted. A quandle formula_34 is said to be involutory if for all formula_35 formula_36 or equivalently, formula_37 Any symmetric space gives an involutory quandle, where formula_38 is the result of 'reflecting formula_39 through formula_40'.
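To make the axioms concrete, the following illustrative script (not taken from the sources above) builds the conjugation quandle of the symmetric group S3 and checks idempotence, left self-distributivity, and the mutual inverseness of the left and right actions; replacing the conjugation action with a ⊲ b = tb + (1 - t)a over a finite ring would test an Alexander quandle in the same way.
```python
from itertools import permutations, product

# Elements of S3 as permutations of (0, 1, 2); composition acts right-to-left.
S3 = list(permutations(range(3)))

def compose(p, q):            # (p o q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0, 0, 0]
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def left(a, b):               # a ◁ b = a b a^{-1}  (conjugation quandle)
    return compose(compose(a, b), inverse(a))

def right(b, a):              # b ▷ a = a^{-1} b a
    return compose(compose(inverse(a), b), a)

# Idempotence: a ◁ a = a
assert all(left(a, a) == a for a in S3)

# Left self-distributivity: a ◁ (b ◁ c) = (a ◁ b) ◁ (a ◁ c)
assert all(left(a, left(b, c)) == left(left(a, b), left(a, c))
           for a, b, c in product(S3, repeat=3))

# Left and right actions are mutually inverse: (a ◁ b) ▷ a = b and a ◁ (b ▷ a) = b
assert all(right(left(a, b), a) == b and left(a, right(b, a)) == b
           for a, b in product(S3, repeat=2))

print("S3 conjugation quandle satisfies the rack and quandle axioms.")
```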
[ { "math_id": 0, "text": "\\mathrm{R}" }, { "math_id": 1, "text": "\\triangleleft" }, { "math_id": 2, "text": "a, b, c \\in \\mathrm{R}" }, { "math_id": 3, "text": "a \\triangleleft(b \\triangleleft c) = (a \\triangleleft b) \\triangleleft(a \\triangleleft c)" }, { "math_id": 4, "text": "a, b \\in \\mathrm{R}," }, { "math_id": 5, "text": "c \\in \\mathrm{R}" }, { "math_id": 6, "text": "a \\triangleleft c = b." }, { "math_id": 7, "text": "a \\triangleleft c = b" }, { "math_id": 8, "text": "b \\triangleright a." }, { "math_id": 9, "text": " a \\triangleleft c = b \\iff c = b \\triangleright a, " }, { "math_id": 10, "text": " a \\triangleleft(b \\triangleright a) = b," }, { "math_id": 11, "text": "(a \\triangleleft b) \\triangleright a = b." }, { "math_id": 12, "text": "\\triangleleft " }, { "math_id": 13, "text": "\\triangleright" }, { "math_id": 14, "text": "a, b, c \\in \\mathrm{R}\\text{:}" }, { "math_id": 15, "text": "a \\triangleleft(b \\triangleleft c) = (a \\triangleleft b) \\triangleleft(a \\triangleleft c)" }, { "math_id": 16, "text": "(c \\triangleright b) \\triangleright a = (c \\triangleright a) \\triangleright(b \\triangleright a)" }, { "math_id": 17, "text": "(a \\triangleleft b) \\triangleright a = b" }, { "math_id": 18, "text": "a \\triangleleft(b \\triangleright a) = b" }, { "math_id": 19, "text": "a \\in \\mathrm{R}" }, { "math_id": 20, "text": "a \\triangleleft b," }, { "math_id": 21, "text": "a \\triangleleft b = {}^a b" }, { "math_id": 22, "text": "b \\triangleright a = b^a," }, { "math_id": 23, "text": "b \\triangleright a = b \\star a. " }, { "math_id": 24, "text": "\\begin{align}\n a \\triangleleft(b \\triangleright c) &= (a \\triangleleft b) \\triangleright(a\\ \\triangleleft c) \\\\\n (c \\triangleleft b) \\triangleright a &= (c \\triangleright a) \\triangleleft(b \\triangleright a)\n \\end{align}" }, { "math_id": 25, "text": "\\mathrm{Q}," }, { "math_id": 26, "text": "a \\in \\mathrm{Q}" }, { "math_id": 27, "text": "a \\triangleleft a = a," }, { "math_id": 28, "text": "a \\triangleright a = a." }, { "math_id": 29, "text": "\\begin{align}\n a \\triangleleft b &= a b a^{-1} \\\\\n b \\triangleright a &= a^{-1} b a \\\\\n &= a^{-1} \\triangleleft b\n\\end{align}" }, { "math_id": 30, "text": "\\mathrm{Q}." }, { "math_id": 31, "text": "\\mathrm{A}" }, { "math_id": 32, "text": "\\mathbb{Z}[t, t^{-1}]" }, { "math_id": 33, "text": "a \\triangleleft b = tb + (1 - t)a. " }, { "math_id": 34, "text": "\\mathrm{Q}" }, { "math_id": 35, "text": "a, b \\in \\mathrm{Q}," }, { "math_id": 36, "text": " a \\triangleleft(a \\triangleleft b) = b " }, { "math_id": 37, "text": " (b \\triangleright a) \\triangleright a = b ." }, { "math_id": 38, "text": "a \\triangleleft b" }, { "math_id": 39, "text": "b" }, { "math_id": 40, "text": "a" } ]
https://en.wikipedia.org/wiki?curid=1140981
1141
Augustin-Jean Fresnel
French optical physicist (1788–1827) Augustin-Jean Fresnel (10 May 1788 – 14 July 1827) was a French civil engineer and physicist whose research in optics led to the almost unanimous acceptance of the wave theory of light, excluding any remnant of Newton's corpuscular theory, from the late 1830s  until the end of the 19th century. He is perhaps better known for inventing the catadioptric (reflective/refractive) Fresnel lens and for pioneering the use of "stepped" lenses to extend the visibility of lighthouses, saving countless lives at sea. The simpler dioptric (purely refractive) stepped lens, first proposed by Count Buffon  and independently reinvented by Fresnel, is used in screen magnifiers and in condenser lenses for overhead projectors. By expressing Huygens's principle of secondary waves and Young's principle of interference in quantitative terms, and supposing that simple colors consist of sinusoidal waves, Fresnel gave the first satisfactory explanation of diffraction by straight edges, including the first satisfactory wave-based explanation of rectilinear propagation. Part of his argument was a proof that the addition of sinusoidal functions of the same frequency but different phases is analogous to the addition of forces with different directions. By further supposing that light waves are purely transverse, Fresnel explained the nature of polarization, the mechanism of chromatic polarization, and the transmission and reflection coefficients at the interface between two transparent isotropic media. Then, by generalizing the direction-speed-polarization relation for calcite, he accounted for the directions and polarizations of the refracted rays in doubly-refractive crystals of the "biaxial" class (those for which Huygens's secondary wavefronts are not axisymmetric). The period between the first publication of his pure-transverse-wave hypothesis, and the submission of his first correct solution to the biaxial problem, was less than a year. Later, he coined the terms "linear polarization", "circular polarization", and "elliptical polarization", explained how optical rotation could be understood as a difference in propagation speeds for the two directions of circular polarization, and (by allowing the reflection coefficient to be complex) accounted for the change in polarization due to total internal reflection, as exploited in the Fresnel rhomb. Defenders of the established corpuscular theory could not match his quantitative explanations of so many phenomena on so few assumptions. Fresnel had a lifelong battle with tuberculosis, to which he succumbed at the age of 39. Although he did not become a public celebrity in his lifetime, he lived just long enough to receive due recognition from his peers, including (on his deathbed) the Rumford Medal of the Royal Society of London, and his name is ubiquitous in the modern terminology of optics and waves. After the wave theory of light was subsumed by Maxwell's electromagnetic theory in the 1860s, some attention was diverted from the magnitude of Fresnel's contribution. In the period between Fresnel's unification of physical optics and Maxwell's wider unification, a contemporary authority, Humphrey Lloyd, described Fresnel's transverse-wave theory as "the noblest fabric which has ever adorned the domain of physical science, Newton's system of the universe alone excepted."  Early life. Family. 
Augustin-Jean Fresnel (also called Augustin Jean or simply Augustin), born in Broglie, Normandy, on 10 May 1788, was the second of four sons of the architect Jacques Fresnel and his wife Augustine, "née" Mérimée. The family moved twice—in 1789/90 to Cherbourg, and in 1794  to Jacques's home town of Mathieu, where Augustine would spend 25 years as a widow, outliving two of her sons. The first son, Louis, was admitted to the École Polytechnique, became a lieutenant in the artillery, and was killed in action at Jaca, Spain. The third, Léonor, followed Augustin into civil engineering, succeeded him as secretary of the Lighthouse Commission, and helped to edit his collected works. The fourth, Fulgence Fresnel, became a linguist, diplomat, and orientalist, and occasionally assisted Augustin with negotiations. Fulgence died in Bagdad in 1855 having led a mission to explore Babylon. Madame Fresnel's younger brother, Jean François "Léonor" Mérimée, father of the writer Prosper Mérimée, was a painter who turned his attention to the chemistry of painting. He became the Permanent Secretary of the École des Beaux-Arts and (until 1814) a professor at the École Polytechnique, and was the initial point of contact between Augustin and the leading optical physicists of the day &lt;templatestyles src="Crossreference/styles.css" /&gt;. Education. The Fresnel brothers were initially home-schooled by their mother. The sickly Augustin was considered the slow one, not inclined to memorization; but the popular story that he hardly began to read until the age of eight is disputed. At the age of nine or ten he was undistinguished except for his ability to turn tree-branches into toy bows and guns that worked far too well, earning himself the title "l'homme de génie" (the man of genius) from his accomplices, and a united crackdown from their elders. In 1801, Augustin was sent to the "École Centrale" at Caen, as company for Louis. But Augustin lifted his performance: in late 1804 he was accepted into the École Polytechnique, being placed 17th in the entrance examination. As the detailed records of the École Polytechnique begin in 1808, we know little of Augustin's time there, except that he made few if any friends and—in spite of continuing poor health—excelled in drawing and geometry: in his first year he took a prize for his solution to a geometry problem posed by Adrien-Marie Legendre. Graduating in 1806, he then enrolled at the École Nationale des Ponts et Chaussées (National School of Bridges and Roads, also known as "ENPC" or "École des Ponts"), from which he graduated in 1809, entering the service of the Corps des Ponts et Chaussées as an "ingénieur ordinaire aspirant" (ordinary engineer in training). Directly or indirectly, he was to remain in the employment of the "Corps des Ponts" for the rest of his life. Religious formation. Fresnel's parents were Roman Catholics of the Jansenist sect, characterized by an extreme Augustinian view of original sin. Religion took first place in the boys' home-schooling. In 1802, his mother said: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;I pray God to give my son the grace to employ the great talents, which he has received, for his own benefit, and for the God of all. Much will be asked from him to whom much has been given, and most will be required of him who has received most. Augustin remained a Jansenist. He regarded his intellectual talents as gifts from God, and considered it his duty to use them for the benefit of others. 
According to his fellow engineer Alphonse Duleau, who helped to nurse him through his final illness, Fresnel saw the study of nature as part of the study of the power and goodness of God. He placed virtue above science and genius. In his last days he prayed for "strength of soul," not against death alone, but against "the interruption of discoveries… of which he hoped to derive useful applications."  Jansenism is considered heretical by the Roman Catholic Church, and Grattan-Guinness suggests this is why Fresnel never gained a permanent academic teaching post; his only teaching appointment was at the Athénée in the winter of 1819–20. The article on Fresnel in the "Catholic Encyclopedia" does not mention his Jansenism, but describes him as "a deeply religious man and remarkable for his keen sense of duty."  Engineering assignments. Fresnel was initially posted to the western département of Vendée. There, in 1811, he anticipated what became known as the Solvay process for producing soda ash, except that recycling of the ammonia was not considered. That difference may explain why leading chemists, who learned of his discovery through his uncle Léonor, eventually thought it uneconomic. About 1812, Fresnel was sent to Nyons, in the southern département of Drôme, to assist with the imperial highway that was to connect Spain and Italy. It is from Nyons that we have the first evidence of his interest in optics. On 15 May 1814, while work was slack due to Napoleon's defeat, Fresnel wrote a "P.S." to his brother Léonor, saying in part: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;I would also like to have papers that might tell me about the discoveries of French physicists on the polarization of light. I saw in the "Moniteur" of a few months ago that Biot had read to the Institute a very interesting memoir on the "polarization of light". Though I break my head, I cannot guess what that is. As late as 28 December he was still waiting for information, but by 10 February 1815 he had received Biot's memoir. (The "Institut de France" had taken over the functions of the French "Académie des Sciences" and other "académies" in 1795. In 1816 the "Académie des Sciences" regained its name and autonomy, but remained part of the institute.) In March 1815, perceiving Napoleon's return from Elba as "an attack on civilization", Fresnel departed without leave, hastened to Toulouse and offered his services to the royalist resistance, but soon found himself on the sick list. Returning to Nyons in defeat, he was threatened and had his windows broken. During the Hundred Days he was placed on suspension, which he was eventually allowed to spend at his mother's house in Mathieu. There he used his enforced leisure to begin his optical experiments. Contributions to physical optics. Historical context: From Newton to Biot. The appreciation of Fresnel's reconstruction of physical optics might be assisted by an overview of the fragmented state in which he found the subject. In this subsection, optical phenomena that were unexplained or whose explanations were disputed are named in bold type. The corpuscular theory of light, favored by Isaac Newton and accepted by nearly all of Fresnel's seniors, easily explained rectilinear propagation: the corpuscles obviously moved very fast, so that their paths were very nearly straight. 
The wave theory, as developed by Christiaan Huygens in his "Treatise on Light" (1690), explained rectilinear propagation on the assumption that each point crossed by a traveling wavefront becomes the source of a secondary wavefront. Given the initial position of a traveling wavefront, any later position (according to Huygens) was the common tangent surface (envelope) of the secondary wavefronts emitted from the earlier position. As the extent of the common tangent was limited by the extent of the initial wavefront, the repeated application of Huygens's construction to a plane wavefront of limited extent (in a uniform medium) gave a straight, parallel beam. While this construction indeed predicted rectilinear propagation, it was difficult to reconcile with the common observation that wavefronts on the surface of water can bend around obstructions, and with the similar behavior of sound waves—causing Newton to maintain, to the end of his life, that if light consisted of waves it would "bend and spread every way" into the shadows. Huygens's theory neatly explained the law of ordinary reflection and the law of ordinary refraction ("Snell's law"), provided that the secondary waves traveled slower in denser media (those of higher refractive index). The corpuscular theory, with the hypothesis that the corpuscles were subject to forces acting perpendicular to surfaces, explained the same laws equally well, albeit with the implication that light traveled "faster" in denser media; that implication was wrong, but could not be directly disproven with the technology of Newton's time or even Fresnel's time &lt;templatestyles src="Crossreference/styles.css" /&gt;. Similarly inconclusive was stellar aberration—that is, the apparent change in the position of a star due to the velocity of the earth across the line of sight (not to be confused with stellar parallax, which is due to the "displacement" of the earth across the line of sight). Identified by James Bradley in 1728, stellar aberration was widely taken as confirmation of the corpuscular theory. But it was equally compatible with the wave theory, as Euler noted in 1746—tacitly assuming that the aether (the supposed wave-bearing medium) near the earth was not disturbed by the motion of the earth. The outstanding strength of Huygens's theory was his explanation of the birefringence (double refraction) of "Iceland crystal" (transparent calcite), on the assumption that the secondary waves are spherical for the ordinary refraction (which satisfies Snell's law) and spheroidal for the "extraordinary" refraction (which does not). In general, Huygens's common-tangent construction implies that rays are "paths of least time" between successive positions of the wavefront, in accordance with Fermat's principle. In the special case of isotropic media, the secondary wavefronts must be spherical, and Huygens's construction then implies that the rays are perpendicular to the wavefront; indeed, the law of "ordinary" refraction can be separately derived from that premise, as Ignace-Gaston Pardies did before Huygens. Although Newton rejected the wave theory, he noticed its potential to explain colors, including the colors of "thin plates" (e.g., "Newton's rings", and the colors of skylight reflected in soap bubbles), on the assumption that light consists of "periodic" waves, with the lowest frequencies (longest wavelengths) at the red end of the spectrum, and the highest frequencies (shortest wavelengths) at the violet end. 
In 1672 he published a heavy hint to that effect, but contemporary supporters of the wave theory failed to act on it: Robert Hooke treated light as a periodic sequence of pulses but did not use frequency as the criterion of color, while Huygens treated the waves as individual pulses without any periodicity; and Pardies died young in 1673. Newton himself tried to explain colors of thin plates using the corpuscular theory, by supposing that his corpuscles had the wavelike property of alternating between "fits of easy transmission" and "fits of easy reflection", the distance between like "fits" depending on the color and the medium and, awkwardly, on the angle of refraction or reflection into that medium. More awkwardly still, this theory required thin plates to reflect only at the back surface, although "thick" plates manifestly reflected also at the front surface. It was not until 1801 that Thomas Young, in the Bakerian Lecture for that year, cited Newton's hint, and accounted for the colors of a thin plate as the combined effect of the front and back reflections, which reinforce or cancel each other according to the "wavelength" and the thickness. Young similarly explained the colors of "striated surfaces" (e.g., gratings) as the wavelength-dependent reinforcement or cancellation of reflections from adjacent lines. He described this reinforcement or cancellation as interference. Neither Newton nor Huygens satisfactorily explained diffraction—the blurring and fringing of shadows where, according to rectilinear propagation, they ought to be sharp. Newton, who called diffraction "inflexion", supposed that rays of light passing close to obstacles were bent ("inflected"); but his explanation was only qualitative. Huygens's common-tangent construction, without modifications, could not accommodate diffraction at all. Two such modifications were proposed by Young in the same 1801 Bakerian Lecture: first, that the secondary waves near the edge of an obstacle could diverge into the shadow, but only weakly, due to limited reinforcement from other secondary waves; and second, that diffraction by an edge was caused by interference between two rays: one reflected off the edge, and the other inflected while passing near the edge. The latter ray would be undeviated if sufficiently far from the edge, but Young did not elaborate on that case. These were the earliest suggestions that the degree of diffraction depends on wavelength. Later, in the 1803 Bakerian Lecture, Young ceased to regard inflection as a separate phenomenon, and produced evidence that diffraction fringes "inside" the shadow of a narrow obstacle were due to interference: when the light from one side was blocked, the internal fringes disappeared. But Young was alone in such efforts until Fresnel entered the field. Huygens, in his investigation of double refraction, noticed something that he could not explain: when light passes through two similarly oriented calcite crystals at normal incidence, the ordinary ray emerging from the first crystal suffers only the ordinary refraction in the second, while the extraordinary ray emerging from the first suffers only the extraordinary refraction in the second; but when the second crystal is rotated 90° about the incident rays, the roles are interchanged, so that the ordinary ray emerging from the first crystal suffers only the extraordinary refraction in the second, and vice versa. This discovery gave Newton another reason to reject the wave theory: rays of light evidently had "sides".
Corpuscles could have sides (or "poles", as they would later be called); but waves of light could not, because (so it seemed) any such waves would need to be longitudinal (with vibrations in the direction of propagation). Newton offered an alternative "Rule" for the extraordinary refraction, which rode on his authority through the 18th century, although he made "no known attempt to deduce it from any principles of optics, corpuscular or otherwise." In 1808, the extraordinary refraction of calcite was investigated experimentally, with unprecedented accuracy, by Étienne-Louis Malus, and found to be consistent with Huygens's spheroid construction, not Newton's "Rule". Malus, encouraged by Pierre-Simon Laplace, then sought to explain this law in corpuscular terms: from the known relation between the incident and refracted ray directions, Malus derived the corpuscular velocity (as a function of direction) that would satisfy Maupertuis's "least action" principle. But, as Young pointed out, the existence of such a velocity law was guaranteed by Huygens's spheroid, because Huygens's construction leads to Fermat's principle, which becomes Maupertuis's principle if the ray speed is replaced by the reciprocal of the particle speed! The corpuscularists had not found a "force" law that would yield the alleged velocity law, except by a circular argument in which a force acting at the "surface" of the crystal inexplicably depended on the direction of the (possibly subsequent) velocity "within" the crystal. Worse, it was doubtful that any such force would satisfy the conditions of Maupertuis's principle. In contrast, Young proceeded to show that "a medium more easily compressible in one direction than in any direction perpendicular to it, as if it consisted of an infinite number of parallel plates connected by a substance somewhat less elastic" admits spheroidal longitudinal wavefronts, as Huygens supposed. But Malus, in the midst of his experiments on double refraction, noticed something else: when a ray of light is reflected off a non-metallic surface at the appropriate angle, it behaves like "one" of the two rays emerging from a calcite crystal. It was Malus who coined the term polarization to describe this behavior, although the polarizing angle became known as Brewster's angle after its dependence on the refractive index was determined experimentally by David Brewster in 1815. Malus also introduced the term "plane of polarization". In the case of polarization by reflection, his "plane of polarization" was the plane of the incident and reflected rays; in modern terms, this is the plane "normal" to the "electric" vibration. In 1809, Malus further discovered that the intensity of light passing through "two" polarizers is proportional to the squared cosine of the angle between their planes of polarization (Malus's law), whether the polarizers work by reflection or double refraction, and that "all" birefringent crystals produce both extraordinary refraction and polarization. As the corpuscularists started trying to explain these things in terms of polar "molecules" of light, the wave-theorists had "no working hypothesis" on the nature of polarization, prompting Young to remark that Malus's observations "present greater difficulties to the advocates of the undulatory theory than any other facts with which we are acquainted." Malus died in February 1812, at the age of 36, shortly after receiving the Rumford Medal for his work on polarization.
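The two quantitative results just described, Malus's squared-cosine law and the polarizing angle whose dependence on refractive index Brewster later established, are easy to state in modern notation. The following minimal sketch is illustrative only; the refractive index of 1.5 and the sample angles are assumed values chosen for the example, not figures taken from Malus's or Brewster's measurements.

import math

def malus_intensity(i0, theta_deg):
    # Malus's law: intensity transmitted by an analyzer whose plane of
    # polarization makes the angle theta with that of the incident polarized beam.
    return i0 * math.cos(math.radians(theta_deg)) ** 2

def polarizing_angle(n):
    # Brewster's condition tan(theta_B) = n, measured from the normal,
    # for reflection off a non-metallic medium of refractive index n.
    return math.degrees(math.atan(n))

if __name__ == "__main__":
    for theta in (0, 30, 45, 60, 90):
        print(f"analyzer at {theta:2d} deg: relative intensity {malus_intensity(1.0, theta):.3f}")
    print(f"polarizing angle for n = 1.5: {polarizing_angle(1.5):.1f} deg")  # about 56.3 deg

The same squared-cosine factor reappears below in Fresnel's treatment of chromatic polarization, where Malus's law is restated in terms of amplitudes.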
In August 1811, François Arago reported that if a thin plate of mica was viewed against a white polarized backlight through a calcite crystal, the two images of the mica were of complementary colors (the overlap having the same color as the background). The light emerging from the mica was "depolarized" in the sense that there was no orientation of the calcite that made one image disappear; yet it was not ordinary ("unpolarized") light, for which the two images would be of the same color. Rotating the calcite around the line of sight changed the colors, though they remained complementary. Rotating the mica changed the "saturation" (not the hue) of the colors. This phenomenon became known as chromatic polarization. Replacing the mica with a much thicker plate of quartz, with its faces perpendicular to the optic axis (the axis of Huygens's spheroid or Malus's velocity function), produced a similar effect, except that rotating the quartz made no difference. Arago tried to explain his observations in "corpuscular" terms. In 1812, as Arago pursued further qualitative experiments and other commitments, Jean-Baptiste Biot reworked the same ground using a gypsum lamina in place of the mica, and found empirical formulae for the intensities of the ordinary and extraordinary images. The formulae contained two coefficients, supposedly representing colors of rays "affected" and "unaffected" by the plate—the "affected" rays being of the same color mix as those reflected by amorphous thin plates of proportional, but lesser, thickness. Arago protested, declaring that he had made some of the same discoveries but had not had time to write them up. In fact the overlap between Arago's work and Biot's was minimal, Arago's being only qualitative and wider in scope (attempting to include polarization by reflection). But the dispute triggered a notorious falling-out between the two men. Later that year, Biot tried to explain the observations as an oscillation of the alignment of the "affected" corpuscles at a frequency proportional to that of Newton's "fits", due to forces depending on the alignment. This theory became known as "mobile polarization". To reconcile his results with a sinusoidal oscillation, Biot had to suppose that the corpuscles emerged with one of two permitted orientations, namely the extremes of the oscillation, with probabilities depending on the phase of the oscillation. Corpuscular optics was becoming expensive on assumptions. But in 1813, Biot reported that the case of quartz was simpler: the observable phenomenon (now called optical rotation or "optical activity" or sometimes "rotary polarization") was a gradual rotation of the polarization direction with distance, and could be explained by a corresponding rotation ("not" oscillation) of the corpuscles. Early in 1814, reviewing Biot's work on chromatic polarization, Young noted that the periodicity of the color as a function of the plate thickness—including the factor by which the period exceeded that for a reflective thin plate, and even the effect of obliquity of the plate (but not the role of polarization)—could be explained by the wave theory in terms of the different propagation times of the ordinary and extraordinary waves through the plate. But Young was then the only public defender of the wave theory. In summary, in the spring of 1814, as Fresnel tried in vain to guess what polarization was, the corpuscularists thought that they knew, while the wave-theorists (if we may use the plural) literally had no idea.
Both theories claimed to explain rectilinear propagation, but the wave explanation was overwhelmingly regarded as unconvincing. The corpuscular theory could not rigorously link double refraction to surface forces; the wave theory could not yet link it to polarization. The corpuscular theory was weak on thin plates and silent on gratings; the wave theory was strong on both, but under-appreciated. Concerning diffraction, the corpuscular theory did not yield quantitative predictions, while the wave theory had begun to do so by considering diffraction as a manifestation of interference, but had only considered two rays at a time. Only the corpuscular theory gave even a vague insight into Brewster's angle, Malus's law, or optical rotation. Concerning chromatic polarization, the wave theory explained the periodicity far better than the corpuscular theory, but had nothing to say about the role of polarization; and its explanation of the periodicity was largely ignored. And Arago had founded the study of chromatic polarization, only to lose the lead, controversially, to Biot. Such were the circumstances in which Arago first heard of Fresnel's interest in optics. Rêveries. Fresnel's letters from later in 1814 reveal his interest in the wave theory, including his awareness that it explained the constancy of the speed of light and was at least compatible with stellar aberration. Eventually he compiled what he called his "rêveries" (musings) into an essay and submitted it via Léonor Mérimée to André-Marie Ampère, who did not respond directly. But on 19 December, Mérimée dined with Ampère and Arago, with whom he was acquainted through the École Polytechnique; and Arago promised to look at Fresnel's essay. In mid 1815, on his way home to Mathieu to serve his suspension, Fresnel met Arago in Paris and spoke of the wave theory and stellar aberration. He was informed that he was trying to break down open doors ("il enfonçait des portes ouvertes"), and directed to classical works on optics. Diffraction. First attempt (1815). On 12 July 1815, as Fresnel was about to leave Paris, Arago left him a note on a new topic: I do not know of any book that contains all the experiments that physicists are doing on the "diffraction" of light. M'sieur Fresnel will only be able to get to know this part of the optics by reading the work by Grimaldi, the one by Newton, the English treatise by Jordan, and the memoirs of Brougham and Young, which are part of the collection of the "Philosophical Transactions". Fresnel would not have ready access to these works outside Paris, and could not read English. But, in Mathieu—with a point-source of light made by focusing sunlight with a drop of honey, a crude micrometer of his own construction, and supporting apparatus made by a local locksmith—he began his own experiments. His technique was novel: whereas earlier investigators had projected the fringes onto a screen, Fresnel soon abandoned the screen and observed the fringes in space, through a lens with the micrometer at its focus, allowing more accurate measurements while requiring less light. Later in July, after Napoleon's final defeat, Fresnel was reinstated with the advantage of having backed the winning side. He requested a two-month leave of absence, which was readily granted because roadworks were in abeyance.
On 23 September he wrote to Arago, beginning "I think I have found the explanation and the law of colored fringes which one notices in the shadows of bodies illuminated by a luminous point." In the same paragraph, however, Fresnel implicitly acknowledged doubt about the novelty of his work: noting that he would need to incur some expense in order to improve his measurements, he wanted to know "whether this is not useless, and whether the law of diffraction has not already been established by sufficiently exact experiments." He explained that he had not yet had a chance to acquire the items on his reading lists, with the apparent exception of "Young's book", which he could not understand without his brother's help. Not surprisingly, he had retraced many of Young's steps. In a memoir sent to the institute on 15 October 1815, Fresnel mapped the external and internal fringes in the shadow of a wire. He noticed, like Young before him, that the internal fringes disappeared when the light from one side was blocked, and concluded that "the vibrations of two rays that cross each other under a very small angle can contradict each other…" But, whereas Young took the disappearance of the internal fringes as "confirmation" of the principle of interference, Fresnel reported that it was the internal fringes that first drew his attention to the principle. To explain the diffraction pattern, Fresnel constructed the internal fringes by considering the intersections of circular wavefronts emitted from the two edges of the obstruction, and the external fringes by considering the intersections between direct waves and waves reflected off the nearer edge. For the external fringes, to obtain tolerable agreement with observation, he had to suppose that the reflected wave was inverted; and he noted that the predicted paths of the fringes were hyperbolic. In the part of the memoir that most clearly surpassed Young, Fresnel explained the ordinary laws of reflection and refraction in terms of interference, noting that if two parallel rays were reflected or refracted at other than the prescribed angle, they would no longer have the same phase in a common perpendicular plane, and every vibration would be cancelled by a nearby vibration. He noted that his explanation was valid provided that the surface irregularities were much smaller than the wavelength. On 10 November, Fresnel sent a supplementary note dealing with Newton's rings and with gratings, including, for the first time, "transmission" gratings—although in that case the interfering rays were still assumed to be "inflected", and the experimental verification was inadequate because it used only two threads. As Fresnel was not a member of the institute, the fate of his memoir depended heavily on the report of a single member. The reporter for Fresnel's memoir turned out to be Arago (with Poinsot as the other reviewer). On 8 November, Arago wrote to Fresnel: I have been instructed by the Institute to examine your memoir on the diffraction of light; I have studied it carefully, and found many interesting experiments, some of which had already been done by Dr. Thomas Young, who in general regards this phenomenon in a manner rather analogous to the one you have adopted. But what neither he nor anyone had seen before you is that the "external" colored bands do not travel in a straight line as one moves away from the opaque body.
The results you have achieved in this regard seem to me very important; perhaps they can serve to prove the truth of the undulatory system, so often and so feebly combated by physicists who have not bothered to understand it. Fresnel was troubled, wanting to know more precisely where he had collided with Young. Concerning the curved paths of the "colored bands", Young had noted the hyperbolic paths of the fringes in the two-source interference pattern, corresponding roughly to Fresnel's "internal" fringes, and had described the hyperbolic fringes that appear "on the screen" within rectangular shadows. He had not mentioned the curved paths of the "external" fringes of a shadow; but, as he later explained, that was because Newton had already done so. Newton evidently thought the fringes were "caustics". Thus Arago erred in his belief that the curved paths of the fringes were fundamentally incompatible with the corpuscular theory. Arago's letter went on to request more data on the external fringes. Fresnel complied, until he exhausted his leave and was assigned to Rennes in the département of Ille-et-Vilaine. At this point Arago interceded with Gaspard de Prony, head of the École des Ponts, who wrote to Louis-Mathieu Molé, head of the Corps des Ponts, suggesting that the progress of science and the prestige of the Corps would be enhanced if Fresnel could come to Paris for a time. He arrived in March 1816, and his leave was subsequently extended through the middle of the year. Meanwhile, in an experiment reported on 26 February 1816, Arago verified Fresnel's prediction that the internal fringes were shifted if the rays on one side of the obstacle passed through a thin glass lamina. Fresnel correctly attributed this phenomenon to the lower wave velocity in the glass. Arago later used a similar argument to explain the colors in the scintillation of stars. Fresnel's updated memoir  was eventually published in the March 1816 issue of "Annales de Chimie et de Physique", of which Arago had recently become co-editor. That issue did not actually appear until May. In March, Fresnel already had competition: Biot read a memoir on diffraction by himself and his student Claude Pouillet, containing copious data and arguing that the regularity of diffraction fringes, like the regularity of Newton's rings, must be linked to Newton's "fits". But the new link was not rigorous, and Pouillet himself would become a distinguished early adopter of the wave theory. "Efficacious ray", double-mirror experiment (1816). On 24 May 1816, Fresnel wrote to Young (in French), acknowledging how little of his own memoir was new. But in a "supplement" signed on 14 July and read the next day, Fresnel noted that the internal fringes were more accurately predicted by supposing that the two interfering rays came from some distance "outside" the edges of the obstacle. To explain this, he divided the incident wavefront at the obstacle into what we now call "Fresnel zones", such that the secondary waves from each zone were spread over half a cycle when they arrived at the observation point. The zones on one side of the obstacle largely canceled out in pairs, except the first zone, which was represented by an "efficacious ray". This approach worked for the internal fringes, but the superposition of the efficacious ray and the direct ray did "not" work for the "external" fringes. 
The contribution from the "efficacious ray" was thought to be only "partly" canceled, for reasons involving the dynamics of the medium: where the wavefront was continuous, symmetry forbade oblique vibrations; but near the obstacle that truncated the wavefront, the asymmetry allowed some sideways vibration towards the geometric shadow. This argument showed that Fresnel had not (yet) fully accepted Huygens's principle, which would have permitted oblique radiation from all portions of the front. In the same supplement, Fresnel described his well-known double mirror, comprising two flat mirrors joined at an angle of slightly less than 180°, with which he produced a two-slit interference pattern from two virtual images of the same slit. A conventional double-slit experiment required a preliminary "single" slit to ensure that the light falling on the double slit was "coherent" (synchronized). In Fresnel's version, the preliminary single slit was retained, and the double slit was replaced by the double mirror—which bore no physical resemblance to the double slit and yet performed the same function. This result (which had been announced by Arago in the March issue of the "Annales") made it hard to believe that the two-slit pattern had anything to do with corpuscles being deflected as they passed near the edges of the slits. But 1816 was the "Year Without a Summer": crops failed; hungry farming families lined the streets of Rennes; the central government organized "charity workhouses" for the needy; and in October, Fresnel was sent back to Ille-et-Vilaine to supervise charity workers in addition to his regular road crew. According to Arago, with Fresnel conscientiousness was always the foremost part of his character, and he constantly performed his duties as an engineer with the most rigorous scrupulousness. The mission to defend the revenues of the state, to obtain for them the best employment possible, appeared to his eyes in the light of a question of honour. The functionary, whatever might be his rank, who submitted to him an ambiguous account, became at once the object of his profound contempt. … Under such circumstances the habitual gentleness of his manners disappeared… Fresnel's letters from December 1816 reveal his consequent anxiety. To Arago he complained of being "tormented by the worries of surveillance, and the need to reprimand…" And to Mérimée he wrote: "I find nothing more tiresome than having to manage other men, and I admit that I have no idea what I'm doing." Prize memoir (1818) and sequel. On 17 March 1817, the Académie des Sciences announced that diffraction would be the topic for the biannual physics "Grand Prix" to be awarded in 1819. The deadline for entries was set at 1 August 1818 to allow time for replication of experiments. Although the wording of the problem referred to rays and inflection and did not invite wave-based solutions, Arago and Ampère encouraged Fresnel to enter. In the fall of 1817, Fresnel, supported by de Prony, obtained a leave of absence from the new head of the Corps des Ponts, Louis Becquey, and returned to Paris.
He resumed his engineering duties in the spring of 1818; but from then on he was based in Paris, first on the Canal de l'Ourcq, and then (from May 1819) with the cadastre of the pavements. On 15 January 1818, in a different context (revisited below), Fresnel showed that the addition of sinusoidal functions of the same frequency but different phases is analogous to the addition of forces with different directions. His method was similar to the phasor representation, except that the "forces" were plane vectors rather than complex numbers; they could be added, and multiplied by scalars, but not (yet) multiplied and divided by each other. The explanation was algebraic rather than geometric. Knowledge of this method was assumed in a preliminary note on diffraction, dated 19 April 1818 and deposited on 20 April, in which Fresnel outlined the elementary theory of diffraction as found in modern textbooks. He restated Huygens's principle in combination with the superposition principle, saying that the vibration at each point on a wavefront is the sum of the vibrations that would be sent to it at that moment by all the elements of the wavefront in any of its previous positions, all elements acting separately. For a wavefront partly obstructed in a previous position, the summation was to be carried out over the unobstructed portion. In directions other than the normal to the primary wavefront, the secondary waves were weakened due to obliquity, but weakened much more by destructive interference, so that the effect of obliquity alone could be ignored. For diffraction by a straight edge, the intensity as a function of distance from the geometric shadow could then be expressed with sufficient accuracy in terms of what are now called the normalized Fresnel integrals: formula_0    formula_1 The same note included a table of the integrals, for an upper limit ranging from 0 to 5.1 in steps of 0.1, computed with a mean error of 0.0003, plus a smaller table of maxima and minima of the resulting intensity. In his final "Memoir on the diffraction of light", deposited on 29 July and bearing the Latin epigraph "Natura simplex et fecunda" ("Nature simple and fertile"), Fresnel slightly expanded the two tables without changing the existing figures, except for a correction to the first minimum of intensity. For completeness, he repeated his solution to "the problem of interference", whereby sinusoidal functions are added like vectors. He acknowledged the directionality of the secondary sources and the variation in their distances from the observation point, chiefly to explain why these things make negligible difference in the context, provided of course that the secondary sources do not radiate in the retrograde direction. Then, applying his theory of interference to the secondary waves, he expressed the intensity of light diffracted by a single straight edge (half-plane) in terms of integrals which involved the dimensions of the problem, but which could be converted to the normalized forms above. With reference to the integrals, he explained the calculation of the maxima and minima of the intensity (external fringes), and noted that the calculated intensity falls very rapidly as one moves into the geometric shadow. The last result, as Olivier Darrigol says, "amounts to a proof of the rectilinear propagation of light in the wave theory, indeed the first proof that a modern physicist would still accept."
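Fresnel's table of the normalized integrals, and the straight-edge intensity derived from them, can be reproduced with modern numerical libraries. The sketch below is a reconstruction under the usual textbook conventions rather than Fresnel's own procedure: it uses SciPy's fresnel routine, which returns S and C with the sin(pi t^2/2) and cos(pi t^2/2) integrands, and the standard knife-edge result I/I0 = ((C(v) + 1/2)^2 + (S(v) + 1/2)^2)/2, where v is the dimensionless distance from the edge of the geometric shadow; the sample values of v are illustrative.

import numpy as np
from scipy.special import fresnel  # returns (S(v), C(v)) with the pi/2 * t^2 convention

def knife_edge_intensity(v):
    # Relative intensity I/I0 for Fresnel diffraction at a straight edge (half-plane).
    # v > 0 lies in the illuminated region, v < 0 inside the geometric shadow;
    # converting v to a physical distance involves the source and screen distances
    # and the wavelength, which are omitted here.
    S, C = fresnel(v)
    return 0.5 * ((C + 0.5) ** 2 + (S + 0.5) ** 2)

if __name__ == "__main__":
    # Deep in the shadow the intensity falls toward zero, at the geometric edge it
    # is exactly 1/4, and in the lit region it oscillates about 1; the oscillations
    # are the external fringes whose maxima and minima Fresnel tabulated.
    # v = 1.22 and v = 1.87 lie roughly at the first external maximum and minimum.
    for v in np.array([-3.0, -1.0, 0.0, 1.22, 1.87, 5.0]):
        print(f"v = {v:5.2f}   I/I0 = {knife_edge_intensity(v):.4f}")

The steep fall of the computed intensity for negative v is the behaviour that, as noted above, amounts to a wave-theoretic proof of rectilinear propagation.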
For the experimental testing of his calculations, Fresnel used red light with a wavelength of 638 nm, which he deduced from the diffraction pattern in the simple case in which light incident on a single slit was focused by a cylindrical lens. For a variety of distances from the source to the obstacle and from the obstacle to the field point, he compared the calculated and observed positions of the fringes for diffraction by a half-plane, a slit, and a narrow strip—concentrating on the minima, which were visually sharper than the maxima. For the slit and the strip, he could not use the previously computed table of maxima and minima; for each combination of dimensions, the intensity had to be expressed in terms of sums or differences of Fresnel integrals and calculated from the table of integrals, and the extrema had to be calculated anew. The agreement between calculation and measurement was better than 1.5% in almost every case. Near the end of the memoir, Fresnel summed up the difference between Huygens's use of secondary waves and his own: whereas Huygens says there is light only where the secondary waves exactly agree, Fresnel says there is complete darkness only where the secondary waves exactly cancel out. The judging committee comprised Laplace, Biot, and Poisson (all corpuscularists), Gay-Lussac (uncommitted), and Arago, who eventually wrote the committee's report. Although entries in the competition were supposed to be anonymous to the judges, Fresnel's must have been recognizable by the content. There was only one other entry, of which neither the manuscript nor any record of the author has survived. That entry (identified as "no. 1") was mentioned only in the last paragraph of the judges' report, noting that the author had shown ignorance of the relevant earlier works of Young and Fresnel, used insufficiently precise methods of observation, overlooked known phenomena, and made obvious errors. In the words of John Worrall, "The competition facing Fresnel could hardly have been less stiff." We may infer that the committee had only two options: award the prize to Fresnel ("no. 2"), or withhold it. The committee deliberated into the new year. Then Poisson, exploiting a case in which Fresnel's theory gave easy integrals, predicted that if a circular obstacle were illuminated by a point-source, there should be (according to the theory) a bright spot in the center of the shadow, illuminated as brightly as the exterior. This seems to have been intended as a "reductio ad absurdum". Arago, undeterred, assembled an experiment with an obstacle 2 mm in diameter—and there, in the center of the shadow, was Poisson's spot. The unanimous report of the committee, read at the meeting of the Académie on 15 March 1819, awarded the prize to "the memoir marked no. 2, and bearing as epigraph: "Natura simplex et fecunda"." At the same meeting, after the judgment was delivered, the president of the Académie opened a sealed note accompanying the memoir, revealing the author as Fresnel. The award was announced at the public meeting of the Académie a week later, on 22 March. Arago's verification of Poisson's counter-intuitive prediction passed into folklore as if it had decided the prize. That view, however, is not supported by the judges' report, which gave the matter only two sentences in the penultimate paragraph. Neither did Fresnel's triumph immediately convert Laplace, Biot, and Poisson to the wave theory, for at least four reasons.
First, although the professionalization of science in France had established common standards, it was one thing to acknowledge a piece of research as meeting those standards, and another thing to regard it as conclusive. Second, it was possible to interpret Fresnel's integrals as rules for combining "rays". Arago even encouraged that interpretation, presumably in order to minimize resistance to Fresnel's ideas. Even Biot began teaching the Huygens-Fresnel principle without committing himself to a wave basis. Third, Fresnel's theory did not adequately explain the mechanism of generation of secondary waves or why they had any significant angular spread; this issue particularly bothered Poisson. Fourth, the question that most exercised optical physicists at that time was not diffraction, but polarization—on which Fresnel had been working, but was yet to make his critical breakthrough. Polarization. Background: Emissionism and selectionism. An "emission" theory of light was one that regarded the propagation of light as the transport of some kind of matter. While the corpuscular theory was obviously an emission theory, the converse did not follow: in principle, one could be an emissionist without being a corpuscularist. This was convenient because, beyond the ordinary laws of reflection and refraction, emissionists never managed to make testable quantitative predictions from a theory of forces acting on corpuscles of light. But they "did" make quantitative predictions from the premises that rays were countable objects, which were conserved in their interactions with matter (except absorbent media), and which had particular orientations with respect to their directions of propagation. According to this framework, polarization and the related phenomena of double refraction and partial reflection involved altering the orientations of the rays and/or selecting them according to orientation, and the state of polarization of a beam (a bundle of rays) was a question of how many rays were in what orientations: in a fully polarized beam, the orientations were all the same. This approach, which Jed Buchwald has called "selectionism", was pioneered by Malus and diligently pursued by Biot. Fresnel, in contrast, decided to introduce polarization into interference experiments. Interference of polarized light, chromatic polarization (1816–21). In July or August 1816, Fresnel discovered that when a birefringent crystal produced two images of a single slit, he could "not" obtain the usual two-slit interference pattern, even if he compensated for the different propagation times. A more general experiment, suggested by Arago, found that if the two beams of a double-slit device were separately polarized, the interference pattern appeared and disappeared as the polarization of one beam was rotated, giving full interference for parallel polarizations, but no interference for perpendicular polarizations. These experiments, among others, were eventually reported in a brief memoir published in 1819 and later translated into English. In a memoir drafted on 30 August 1816 and revised on 6 October, Fresnel reported an experiment in which he placed two matching thin laminae in a double-slit apparatus—one over each slit, with their optic axes perpendicular—and obtained two interference patterns offset in opposite directions, with perpendicular polarizations.
This, in combination with the previous findings, meant that each lamina split the incident light into perpendicularly polarized components with different velocities—just like a normal (thick) birefringent crystal, and contrary to Biot's "mobile polarization" hypothesis. Accordingly, in the same memoir, Fresnel offered his first attempt at a wave theory of chromatic polarization. When polarized light passed through a crystal lamina, it was split into ordinary and extraordinary waves (with intensities described by Malus's law), and these were perpendicularly polarized and therefore did not interfere, so that no colors were produced (yet). But if they then passed through an "analyzer" (second polarizer), their polarizations were brought into alignment (with intensities again modified according to Malus's law), and they would interfere. This explanation, by itself, predicts that if the analyzer is rotated 90°, the ordinary and extraordinary waves simply switch roles, so that if the analyzer takes the form of a calcite crystal, the two images of the lamina should be of the same hue (this issue is revisited below). But in fact, as Arago and Biot had found, they are of complementary colors. To correct the prediction, Fresnel proposed a phase-inversion rule whereby "one" of the constituent waves of "one" of the two images suffered an additional 180° phase shift on its way through the lamina. This inversion was a weakness in the theory relative to Biot's, as Fresnel acknowledged, although the rule specified which of the two images had the inverted wave. Moreover, Fresnel could deal only with special cases, because he had not yet solved the problem of superposing sinusoidal functions with arbitrary phase differences due to propagation at different velocities through the lamina. He solved that problem in a "supplement" signed on 15 January 1818  (mentioned above). In the same document, he accommodated Malus's law by proposing an underlying law: that if polarized light is incident on a birefringent crystal with its optic axis at an angle "θ" to the "plane of polarization", the ordinary and extraordinary vibrations (as functions of time) are scaled by the factors cos "θ" and sin "θ", respectively. Although modern readers easily interpret these factors in terms of perpendicular components of a "transverse" oscillation, Fresnel did not (yet) explain them that way. Hence he still needed the phase-inversion rule. He applied all these principles to a case of chromatic polarization not covered by Biot's formulae, involving "two" successive laminae with axes separated by 45°, and obtained predictions that disagreed with Biot's experiments (except in special cases) but agreed with his own. Fresnel applied the same principles to the standard case of chromatic polarization, in which "one" birefringent lamina was sliced parallel to its axis and placed between a polarizer and an analyzer. If the analyzer took the form of a thick calcite crystal with its axis in the plane of polarization, Fresnel predicted that the intensities of the ordinary and extraordinary images of the lamina were respectively proportional to formula_2 formula_3 where formula_4 is the angle from the initial plane of polarization to the optic axis of the lamina,  formula_5 is the angle from the initial plane of polarization to the plane of polarization of the final ordinary image, and formula_6 is the phase lag of the extraordinary wave relative to the ordinary wave due to the difference in propagation times through the lamina. 
The terms in formula_6 are the frequency-dependent terms and explain why the lamina must be "thin" in order to produce discernible colors: if the lamina is too thick, formula_7 will pass through too many cycles as the frequency varies through the visible range, and the eye (which divides the visible spectrum into only three bands) will not be able to resolve the cycles. From these equations it is easily verified that formula_8 for all formula_9 so that the colors are complementary. Without the phase-inversion rule, there would be a "plus" sign in front of the last term in the second equation, so that the formula_6-dependent term would be the same in both equations, implying (incorrectly) that the colors were of the same hue. These equations were included in an undated note that Fresnel gave to Biot, to which Biot added a few lines of his own. If we substitute formula_10  and  formula_11 then Fresnel's formulae can be rewritten as formula_12 formula_13 which are none other than Biot's empirical formulae of 1812, except that Biot interpreted formula_14 and formula_15 as the "unaffected" and "affected" selections of the rays incident on the lamina. If Biot's substitutions were accurate, they would imply that his experimental results were more fully explained by Fresnel's theory than by his own. Arago delayed reporting on Fresnel's works on chromatic polarization until June 1821, when he used them in a broad attack on Biot's theory. In his written response, Biot protested that Arago's attack went beyond the proper scope of a report on the nominated works of Fresnel. But Biot also claimed that the substitutions for formula_14 and formula_16 and therefore Fresnel's expressions for formula_17 and formula_18 were empirically wrong because when Fresnel's intensities of spectral colors were mixed according to Newton's rules, the squared cosine and sine functions varied too smoothly to account for the observed sequence of colors. That claim drew a written reply from Fresnel, who disputed whether the colors changed as abruptly as Biot claimed, and whether the human eye could judge color with sufficient objectivity for the purpose. On the latter question, Fresnel pointed out that different observers may give different names to the same color. Furthermore, he said, a single observer can only compare colors side by side; and even if they are judged to be the same, the identity is of sensation, not necessarily of composition. Fresnel's oldest and strongest point—that thin crystals were subject to the same laws as thick ones and did not need or allow a separate theory—Biot left unanswered.  Arago and Fresnel were seen to have won the debate. Moreover, by this time Fresnel had a new, simpler explanation of his equations on chromatic polarization. Breakthrough: Pure transverse waves (1821). In the draft memoir of 30 August 1816, Fresnel mentioned two hypotheses—one of which he attributed to Ampère—by which the non-interference of orthogonally-polarized beams could be explained if polarized light waves were "partly" transverse. But Fresnel could not develop either of these ideas into a comprehensive theory. As early as September 1816, according to his later account, he realized that the non-interference of orthogonally-polarized beams, together with the phase-inversion rule in chromatic polarization, would be most easily explained if the waves were "purely" transverse, and Ampère "had the same thought" on the phase-inversion rule. 
But that would raise a new difficulty: as natural light seemed to be "un"polarized and its waves were therefore presumed to be longitudinal, one would need to explain how the longitudinal component of vibration disappeared on polarization, and why it did not reappear when polarized light was reflected or refracted obliquely by a glass plate. Independently, on 12 January 1817, Young wrote to Arago (in English) noting that a transverse vibration would constitute a polarization, and that if two longitudinal waves crossed at a significant angle, they could not cancel without leaving a residual transverse vibration. Young repeated this idea in an article published in a supplement to the "Encyclopædia Britannica" in February 1818, in which he added that Malus's law would be explained if polarization consisted in a transverse motion. Thus Fresnel, by his own testimony, may not have been the first person to suspect that light waves could have a transverse "component", or that "polarized" waves were exclusively transverse. And it was Young, not Fresnel, who first "published" the idea that polarization depends on the orientation of a transverse vibration. But these incomplete theories had not reconciled the nature of polarization with the apparent existence of "unpolarized" light; that achievement was to be Fresnel's alone. In a note that Buchwald dates in the summer of 1818, Fresnel entertained the idea that unpolarized waves could have vibrations of the same energy and obliquity, with their orientations distributed uniformly about the wave-normal, and that the degree of polarization was the degree of "non"-uniformity in the distribution. Two pages later he noted, apparently for the first time in writing, that his phase-inversion rule and the non-interference of orthogonally-polarized beams would be easily explained if the vibrations of fully polarized waves were "perpendicular to the normal to the wave"—that is, purely transverse. But if he could account for "lack" of polarization by averaging out the transverse component, he did not also need to assume a longitudinal component. It was enough to suppose that light waves are "purely" transverse, hence "always" polarized in the sense of having a particular transverse orientation, and that the "unpolarized" state of natural or "direct" light is due to rapid and random variations in that orientation, in which case two "coherent" portions of "unpolarized" light will still interfere because their orientations will be synchronized. It is not known exactly when Fresnel made this last step, because there is no relevant documentation from 1820 or early 1821 (perhaps because he was too busy working on lighthouse-lens prototypes; see below). But he first "published" the idea in a paper on "Calcul des teintes…" ("calculation of the tints…"), serialized in Arago's "Annales" for May, June, and July 1821. In the first installment, Fresnel described "direct" (unpolarized) light as "the rapid succession of systems of waves polarized in all directions", and gave what is essentially the modern explanation of chromatic polarization, albeit in terms of the analogy between polarization and the resolution of forces in a plane, mentioning transverse waves only in a footnote. The introduction of transverse waves into the main argument was delayed to the second installment, in which he revealed the suspicion that he and Ampère had harbored since 1816, and the difficulty it raised.
He continued: It has only been for a few months that in meditating more attentively on this subject, I have realized that it was very probable that the oscillatory movements of light waves were executed solely along the plane of these waves, "for direct light as well as for polarized light". According to this new view, he wrote, "the act of polarization consists not in creating these transverse movements, but in decomposing them into two fixed perpendicular directions and in separating the two components". While selectionists could insist on interpreting Fresnel's diffraction integrals in terms of discrete, countable rays, they could not do the same with his theory of polarization. For a selectionist, the state of polarization of a beam concerned the distribution of orientations over the "population" of rays, and that distribution was presumed to be static. For Fresnel, the state of polarization of a beam concerned the variation of a displacement over "time". That displacement might be constrained but was "not" static, and rays were geometric constructions, "not" countable objects. The conceptual gap between the wave theory and selectionism had become unbridgeable. The other difficulty posed by pure transverse waves, of course, was the apparent implication that the aether was an elastic "solid", except that, unlike other elastic solids, it was incapable of transmitting longitudinal waves. The wave theory was cheap on assumptions, but its latest assumption was expensive on credulity. If that assumption was to be widely entertained, its explanatory power would need to be impressive. Partial reflection (1821). In the second installment of "Calcul des teintes" (June 1821), Fresnel supposed, by analogy with sound waves, that the density of the aether in a refractive medium was inversely proportional to the square of the wave velocity, and therefore directly proportional to the square of the refractive index. For reflection and refraction at the surface between two isotropic media of different indices, Fresnel decomposed the transverse vibrations into two perpendicular components, now known as the "s" and "p" components, which are parallel to the "surface" and the "plane" of incidence, respectively; in other words, the "s" and "p" components are respectively perpendicular and parallel to the plane of incidence. For the "s" component, Fresnel supposed that the interaction between the two media was analogous to an elastic collision, and obtained a formula for what we now call the "reflectivity": the ratio of the reflected intensity to the incident intensity. The predicted reflectivity was non-zero at all angles. The third installment (July 1821) was a short "postscript" in which Fresnel announced that he had found, by a "mechanical solution", a formula for the reflectivity of the "p" component, which predicted that "the reflectivity was zero at the Brewster angle". So polarization by reflection had been accounted for—but with the proviso that the direction of vibration in Fresnel's model was "perpendicular" to the plane of polarization as defined by Malus. (On the ensuing controversy, see "Plane of polarization".) The technology of the time did not allow the "s" and "p" reflectivities to be measured accurately enough to test Fresnel's formulae at arbitrary angles of incidence. But the formulae could be rewritten in terms of what we now call the "reflection coefficient": the signed ratio of the reflected amplitude to the incident amplitude.
Then, if the plane of polarization of the incident ray was at 45° to the plane of incidence, the tangent of the corresponding angle for the reflected ray was obtainable from the "ratio" of the two reflection coefficients, and this angle could be measured. Fresnel had measured it for a range of angles of incidence, for glass and water, and the agreement between the calculated and measured angles was better than 1.5° in all cases. Fresnel gave details of the "mechanical solution" in a memoir read to the Académie des Sciences on 7 January 1823. Conservation of energy was combined with continuity of the "tangential" vibration at the interface. The resulting formulae for the reflection coefficients and reflectivities became known as the "Fresnel equations". The reflection coefficients for the "s" and "p" polarizations are most succinctly expressed as formula_19    and    formula_20 where formula_4 and formula_21 are the angles of incidence and refraction; these equations are known respectively as "Fresnel's sine law" and "Fresnel's tangent law". By allowing the coefficients to be "complex", Fresnel even accounted for the different phase shifts of the "s" and "p" components due to total internal reflection. This success inspired James MacCullagh and Augustin-Louis Cauchy, beginning in 1836, to analyze reflection from metals by using the Fresnel equations with a complex refractive index. The same technique is applicable to non-metallic opaque media. With these generalizations, the Fresnel equations can predict the appearance of a wide variety of objects under illumination—for example, in computer graphics. Circular and elliptical polarization, optical rotation (1822). In a memoir dated 9 December 1822, Fresnel coined the terms "linear polarization" (French: "polarisation rectiligne") for the simple case in which the perpendicular components of vibration are in phase or 180° out of phase, "circular polarization" for the case in which they are of equal magnitude and a quarter-cycle (±90°) out of phase, and "elliptical polarization" for other cases in which the two components have a fixed amplitude ratio and a fixed phase difference. He then explained how optical rotation could be understood as a species of birefringence. Linearly-polarized light could be resolved into two circularly-polarized components rotating in opposite directions. If these components propagated at slightly different speeds, the phase difference between them—and therefore the direction of their linearly-polarized resultant—would vary continuously with distance. These concepts called for a redefinition of the distinction between polarized and unpolarized light. Before Fresnel, it was thought that polarization could vary in direction, and in degree (e.g., due to variation in the angle of reflection off a transparent body), and that it could be a function of color (chromatic polarization), but not that it could vary in "kind". Hence it was thought that the degree of polarization was the degree to which the light could be suppressed by an analyzer with the appropriate orientation. Light that had been converted from linear to elliptical or circular polarization (e.g., by passage through a crystal lamina, or by total internal reflection) was described as partly or fully "depolarized" because of its behavior in an analyzer.
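The sine and tangent laws quoted above, and the complex-coefficient treatment of total internal reflection, lend themselves to a brief numerical check. The sketch below is a modern restatement rather than Fresnel's own notation: it assumes the common sign convention r_s = -sin(i - t)/sin(i + t) and r_p = tan(i - t)/tan(i + t), an air-to-glass index of 1.5 for the Brewster-angle check, and the standard expression tan(delta/2) = cos(i) * sqrt(sin^2(i) - 1/n^2) / sin^2(i) for the relative s-p phase shift in total internal reflection inside glass of index n; all numerical values are illustrative.

import math

def refraction_angle(theta_i_deg, n1, n2):
    # Snell's law; returns the refraction angle in degrees (assumes no total internal reflection).
    return math.degrees(math.asin(n1 / n2 * math.sin(math.radians(theta_i_deg))))

def r_s(theta_i_deg, n1, n2):
    # Fresnel's sine law: r_s = -sin(i - t) / sin(i + t).
    i = math.radians(theta_i_deg)
    t = math.radians(refraction_angle(theta_i_deg, n1, n2))
    return -math.sin(i - t) / math.sin(i + t)

def r_p(theta_i_deg, n1, n2):
    # Fresnel's tangent law: r_p = tan(i - t) / tan(i + t).
    i = math.radians(theta_i_deg)
    t = math.radians(refraction_angle(theta_i_deg, n1, n2))
    return math.tan(i - t) / math.tan(i + t)

def tir_phase_difference(theta_i_deg, n):
    # Relative s-p phase shift (degrees) in total internal reflection, for internal
    # incidence in a medium of index n against air.
    i = math.radians(theta_i_deg)
    s2 = math.sin(i) ** 2
    return 2 * math.degrees(math.atan(math.cos(i) * math.sqrt(s2 - 1 / n ** 2) / s2))

if __name__ == "__main__":
    brewster = math.degrees(math.atan(1.5 / 1.0))  # about 56.3 deg for air -> glass
    print(f"r_p at the Brewster angle: {r_p(brewster, 1.0, 1.5):+.6f}")  # effectively zero
    print(f"r_s at the Brewster angle: {r_s(brewster, 1.0, 1.5):+.4f}")  # s is still reflected
    for deg in (48.6, 51.5, 54.6):
        print(f"internal incidence {deg:4.1f} deg -> s-p shift {tir_phase_difference(deg, 1.51):.2f} deg")

For n = 1.51 the computed s-p shift is close to 45° at both 48.6° and 54.6°, which is the property exploited in the Fresnel rhomb discussed in the subsection on total internal reflection below.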
"After" Fresnel, the defining feature of polarized light was that the perpendicular components of vibration had a fixed ratio of amplitudes and a fixed difference in phase. By that definition, elliptically or circularly polarized light is "fully" polarized although it cannot be fully suppressed by an analyzer alone. The conceptual gap between the wave theory and selectionism had widened again. Total internal reflection (1817–23). By 1817 it had been discovered by Brewster, but not adequately reported,324 that plane-polarized light was partly depolarized by total internal reflection if initially polarized at an acute angle to the plane of incidence. Fresnel rediscovered this effect and investigated it by including total internal reflection in a chromatic-polarization experiment. With the aid of his "first" theory of chromatic polarization, he found that the apparently depolarized light was a mixture of components polarized parallel and perpendicular to the plane of incidence, and that the total reflection introduced a phase difference between them. Choosing an appropriate angle of incidence (not yet exactly specified) gave a phase difference of 1/8 of a cycle (45°). Two such reflections from the "parallel faces" of "two coupled prisms" gave a phase difference of 1/4 of a cycle (90°). These findings were contained in a memoir submitted to the Académie on 10 November 1817 and read a fortnight later. An undated marginal note indicates that the two coupled prisms were later replaced by a single "parallelepiped in glass"—now known as a "Fresnel rhomb". This was the memoir whose "supplement", dated January 1818, contained the method of superposing sinusoidal functions and the restatement of Malus's law in terms of amplitudes. In the same supplement, Fresnel reported his discovery that optical rotation could be emulated by passing the polarized light through a Fresnel rhomb (still in the form of "coupled prisms"), followed by an ordinary birefringent lamina sliced parallel to its axis, with the axis at 45° to the plane of reflection of the Fresnel rhomb, followed by a second Fresnel rhomb at 90° to the first. In a further memoir read on 30 March, Fresnel reported that if polarized light was fully "depolarized" by a Fresnel rhomb—now described as a parallelepiped—its properties were not further modified by a subsequent passage through an optically rotating medium or device. The connection between optical rotation and birefringence was further explained in 1822, in the memoir on elliptical and circular polarization. This was followed by the memoir on reflection, read in January 1823, in which Fresnel quantified the phase shifts in total internal reflection, and thence calculated the precise angle at which a Fresnel rhomb should be cut in order to convert linear polarization to circular polarization. For a refractive index of 1.51, there were two solutions: about 48.6° and 54.6°.760 Double refraction. Background: Uniaxial and biaxial crystals; Biot's laws. When light passes through a slice of calcite cut perpendicular to its optic axis, the difference between the propagation times of the ordinary and extraordinary waves has a second-order dependence on the angle of incidence. If the slice is observed in a highly convergent cone of light, that dependence becomes significant, so that a chromatic-polarization experiment will show a pattern of concentric rings. 
But most minerals, when observed in this manner, show a more complicated pattern of rings involving two foci and a lemniscate curve, as if they had "two" optic axes. The two classes of minerals naturally became known as "uniaxal" and "biaxal"—or, in later literature, "uniaxial" and "biaxial". In 1813, Brewster observed the simple concentric pattern in "beryl, emerald, ruby &c." The same pattern was later observed in calcite by Wollaston, Biot, and Seebeck. Biot, assuming that the concentric pattern was the general case, tried to calculate the colors with his theory of chromatic polarization, and succeeded better for some minerals than for others. In 1818, Brewster belatedly explained why: seven of the twelve minerals employed by Biot had the lemniscate pattern, which Brewster had observed as early as 1812; and the minerals with the more complicated rings also had a more complicated law of refraction. In a uniform crystal, according to Huygens's theory, the secondary wavefront that expands from the origin in unit time is the "ray-velocity surface"—that is, the surface whose "distance" from the origin in any direction is the ray velocity in that direction. In calcite, this surface is two-sheeted, consisting of a sphere (for the ordinary wave) and an oblate spheroid (for the extraordinary wave) touching each other at opposite points of a common axis—touching at the north and south poles, if we may use a geographic analogy. But according to Malus's "corpuscular" theory of double refraction, the ray velocity was proportional to the reciprocal of that given by Huygens's theory, in which case the velocity law was of the form formula_22 where formula_23 and formula_24 were the ordinary and extraordinary ray velocities "according to the corpuscular theory", and formula_25 was the angle between the ray and the optic axis. By Malus's definition, the plane of polarization of a ray was the plane of the ray and the optic axis if the ray was ordinary, or the perpendicular plane (containing the ray) if the ray was extraordinary. In Fresnel's model, the direction of vibration was normal to the plane of polarization. Hence, for the sphere (the ordinary wave), the vibration was along the lines of latitude (continuing the geographic analogy); and for the spheroid (the extraordinary wave), the vibration was along the lines of longitude. On 29 March 1819, Biot presented a memoir in which he proposed simple generalizations of Malus's rules for a crystal with "two" axes, and reported that both generalizations seemed to be confirmed by experiment. For the velocity law, the squared sine was replaced by the "product" of the sines of the angles from the ray to the two axes ("Biot's sine law"). And for the polarization of the ordinary ray, the plane of the ray and the axis was replaced by the plane bisecting the dihedral angle between the two planes each of which contained the ray and one axis ("Biot's dihedral law"). Biot's laws meant that a biaxial crystal with axes at a small angle, cleaved in the plane of those axes, behaved nearly like a uniaxial crystal at near-normal incidence; this was fortunate because gypsum, which had been used in chromatic-polarization experiments, is biaxial. First memoir and supplements (1821–22). Until Fresnel turned his attention to biaxial birefringence, it was assumed that one of the two refractions was ordinary, even in biaxial crystals.
But, in a memoir submitted  on 19 November 1821, Fresnel reported two experiments on topaz showing that "neither refraction" was ordinary in the sense of satisfying Snell's law; that is, neither ray was the product of spherical secondary waves. The same memoir contained Fresnel's first attempt at the biaxial velocity law. For calcite, if we interchange the equatorial and polar radii of Huygens's oblate spheroid while preserving the polar direction, we obtain a "prolate" spheroid touching the sphere at the equator. A plane through the center/origin cuts this prolate spheroid in an ellipse whose major and minor semi-axes give the magnitudes of the extraordinary and ordinary ray velocities in the direction normal to the plane, and (said Fresnel) the directions of their respective vibrations. The direction of the optic axis is the normal to the plane for which the ellipse of intersection reduces to a "circle". So, for the biaxial case, Fresnel simply replaced the prolate spheroid with a triaxial ellipsoid, which was to be sectioned by a plane in the same way. In general there would be "two" planes passing through the center of the ellipsoid and cutting it in a circle, and the normals to these planes would give "two" optic axes. From the geometry, Fresnel deduced Biot's sine law (with the ray velocities replaced by their reciprocals). The ellipsoid indeed gave the correct ray velocities (although the initial experimental verification was only approximate). But it did not give the correct directions of vibration, for the biaxial case or even for the uniaxial case, because the vibrations in Fresnel's model were tangential to the wavefront—which, for an extraordinary ray, is "not" generally normal to the ray. This error (which is small if, as in most cases, the birefringence is weak) was corrected in an "extract" that Fresnel read to the Académie a week later, on 26 November. Starting with Huygens's spheroid, Fresnel obtained a 4th-degree surface which, when sectioned by a plane as above, would yield the "wave-normal velocities" for a wavefront in that plane, together with their vibration directions. For the biaxial case, he generalized the equation to obtain a surface with three unequal principal dimensions; this he subsequently called the "surface of elasticity". But he retained the earlier ellipsoid as an approximation, from which he deduced Biot's dihedral law. Fresnel's initial derivation of the surface of elasticity had been purely geometric, and not deductively rigorous. His first attempt at a "mechanical" derivation, contained in a "supplement" dated 13 January 1822, assumed that (i) there were three mutually perpendicular directions in which a displacement produced a reaction in the same direction, (ii) the reaction was otherwise a linear function of the displacement, and (iii) the radius of the surface in any direction was the square root of the component, "in that direction", of the reaction to a unit displacement in that direction. The last assumption recognized the requirement that if a wave was to maintain a fixed direction of propagation and a fixed direction of vibration, the reaction must not be outside the plane of those two directions. In the same supplement, Fresnel considered how he might find, for the biaxial case, the secondary wavefront that expands from the origin in unit time—that is, the surface that reduces to Huygens's sphere and spheroid in the uniaxial case. 
He noted that this "wave surface" ("surface de l'onde") is tangential to all possible plane wavefronts that could have crossed the origin one unit of time ago, and he listed the mathematical conditions that it must satisfy. But he doubted the feasibility of deriving the surface "from" those conditions. In a "second supplement", Fresnel eventually exploited two related facts: (i) the "wave surface" was also the ray-velocity surface, which could be obtained by sectioning the ellipsoid that he had initially mistaken for the surface of elasticity, and (ii) the "wave surface" intersected each plane of symmetry of the ellipsoid in two curves: a circle and an ellipse. Thus he found that the "wave surface" is described by the 4th-degree equation formula_26 where formula_27 and formula_28 are the propagation speeds in directions normal to the coordinate axes for vibrations along the axes (the ray and wave-normal speeds being the same in those special cases). Later commentators19 put the equation in the more compact and memorable form formula_29 Earlier in the "second supplement", Fresnel modeled the medium as an array of point-masses and found that the force-displacement relation was described by a symmetric matrix, confirming the existence of three mutually perpendicular axes on which the displacement produced a parallel force. Later in the document, he noted that in a biaxial crystal, unlike a uniaxial crystal, the directions in which there is only one wave-normal velocity are not the same as those in which there is only one ray velocity. Nowadays we refer to the former directions as the "optic" axes or "binormal" axes, and the latter as the "ray" axes or "biradial" axes &lt;templatestyles src="Crossreference/styles.css" /&gt;. Fresnel's "second supplement" was signed on 31 March 1822 and submitted the next day—less than a year after the publication of his pure-transverse-wave hypothesis, and just less than a year after the demonstration of his prototype eight-panel lighthouse lens &lt;templatestyles src="Crossreference/styles.css" /&gt;. Second memoir (1822–26). Having presented the pieces of his theory in roughly the order of discovery, Fresnel needed to rearrange the material so as to emphasize the mechanical foundations; and he still needed a rigorous treatment of Biot's dihedral law. He attended to these matters in his "second memoir" on double refraction, published in the "Recueils" of the Académie des Sciences for 1824; this was not actually printed until late 1827, a few months after his death. In this work, having established the three perpendicular axes on which a displacement produces a parallel reaction, and thence constructed the surface of elasticity, he showed that Biot's dihedral law is exact provided that the binormals are taken as the optic axes, and the wave-normal direction as the direction of propagation. As early as 1822, Fresnel discussed his perpendicular axes with Cauchy. Acknowledging Fresnel's influence, Cauchy went on to develop the first rigorous theory of elasticity of non-isotropic solids (1827), hence the first rigorous theory of transverse waves therein (1830)—which he promptly tried to apply to optics. The ensuing difficulties drove a long competitive effort to find an accurate mechanical model of the aether. 
Fresnel's own model was not dynamically rigorous; for example, it deduced the reaction to a shear strain by considering the displacement of one particle while all others were fixed, and it assumed that the stiffness determined the wave velocity as in a stretched string, whatever the direction of the wave-normal. But it was enough to enable the wave theory to do what selectionist theory could not: generate testable formulae covering a comprehensive range of optical phenomena, from "mechanical" assumptions. Photoelasticity, multiple-prism experiments (1822). In 1815, Brewster reported that colors appear when a slice of isotropic material, placed between crossed polarizers, is mechanically stressed. Brewster himself immediately and correctly attributed this phenomenon to stress-induced birefringence —now known as "photoelasticity". In a memoir read in September 1822, Fresnel announced that he had verified Brewster's diagnosis more directly, by compressing a combination of glass prisms so severely that one could actually see a double image through it. In his experiment, Fresnel lined up seven 45°–90°–45° prisms, short side to short side, with their 90° angles pointing in alternating directions. Two half-prisms were added at the ends to make the whole assembly rectangular. The prisms were separated by thin films of turpentine ("térébenthine") to suppress internal reflections, allowing a clear line of sight along the row. When the four prisms with similar orientations were compressed in a vise across the line of sight, an object viewed through the assembly produced two images with perpendicular polarizations, with an apparent spacing of 1.5mm at one metre. At the end of that memoir, Fresnel predicted that if the compressed prisms were replaced by (unstressed) monocrystalline quartz prisms with matching directions of optical rotation, and with their optic axes aligned along the row, an object seen by looking along the common optic axis would give two images, which would seem unpolarized when viewed through an analyzer but, when viewed through a Fresnel rhomb, would be polarized at ±45° to the plane of reflection of the rhomb (indicating that they were initially circularly polarized in opposite directions). This would show directly that optical rotation is a form of birefringence. In the memoir of December 1822, in which he introduced the term "circular polarization", he reported that he had confirmed this prediction using only one 14°–152°–14° prism and two glass half-prisms. But he obtained a wider separation of the images by replacing the glass half-prism with quartz half-prisms whose rotation was opposite to that of the 14°–152°–14° prism. He added in passing that one could further increase the separation by increasing the number of prisms. Reception. For the supplement to Riffault's translation of Thomson's "System of Chemistry", Fresnel was chosen to contribute the article on light. The resulting 137-page essay, titled "De la Lumière" ("On Light"), was apparently finished in June 1821 and published by February 1822. With sections covering the nature of light, diffraction, thin-film interference, reflection and refraction, double refraction and polarization, chromatic polarization, and modification of polarization by reflection, it made a comprehensive case for the wave theory to a readership that was not restricted to physicists. To examine Fresnel's first memoir and supplements on double refraction, the Académie des Sciences appointed Ampère, Arago, Fourier, and Poisson. 
Their report, of which Arago was clearly the main author, was delivered at the meeting of 19 August 1822. Then, in the words of Émile Verdet, as translated by Ivor Grattan-Guinness: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Immediately after the reading of the report, Laplace took the floor, and… proclaimed the exceptional importance of the work which had just been reported: he congratulated the author on his steadfastness and his sagacity which had led him to discover a law which had escaped the cleverest, and, anticipating somewhat the judgement of posterity, declared that he placed these researches above everything that had been communicated to the Académie for a long time. Whether Laplace was announcing his conversion to the wave theory—at the age of 73—is uncertain. Grattan-Guinness entertained the idea. Buchwald, noting that Arago failed to explain that the "ellipsoid of elasticity" did not give the correct planes of polarization, suggests that Laplace may have merely regarded Fresnel's theory as a successful generalization of Malus's ray-velocity law, embracing Biot's laws. In the following year, Poisson, who did not sign Arago's report, disputed the possibility of transverse waves in the aether. Starting from assumed equations of motion of a fluid medium, he noted that they did not give the correct results for partial reflection and double refraction—as if that were Fresnel's problem rather than his own—and that the predicted waves, even if they were initially transverse, became more longitudinal as they propagated. In reply Fresnel noted, "inter alia", that the equations in which Poisson put so much faith did not even predict viscosity. The implication was clear: given that the behavior of light had not been satisfactorily explained except by transverse waves, it was not the responsibility of the wave-theorists to abandon transverse waves in deference to pre-conceived notions about the aether; rather, it was the responsibility of the aether modelers to produce a model that accommodated transverse waves. According to Robert H. Silliman, Poisson eventually accepted the wave theory shortly before his death in 1840. Among the French, Poisson's reluctance was an exception. According to Eugene Frankel, "in Paris no debate on the issue seems to have taken place after 1825. Indeed, almost the entire generation of physicists and mathematicians who came to maturity in the 1820s—Pouillet, Savart, Lamé, Navier, Liouville, Cauchy—seem to have adopted the theory immediately." Fresnel's other prominent French opponent, Biot, appeared to take a neutral position in 1830, and eventually accepted the wave theory—possibly by 1846 and certainly by 1858. In 1826, the British astronomer John Herschel, who was working on a book-length article on light for the "Encyclopædia Metropolitana", addressed three questions to Fresnel concerning double refraction, partial reflection, and their relation to polarization. The resulting article, titled simply "Light", was highly sympathetic to the wave theory, although not entirely free of selectionist language. It was circulating privately by 1828 and was published in 1830. Meanwhile, Young's translation of Fresnel's "De la Lumière" was published in installments from 1827 to 1829. George Biddell Airy, the former Lucasian Professor at Cambridge and future Astronomer Royal, unreservedly accepted the wave theory by 1831. 
In 1834, he famously calculated the diffraction pattern of a circular aperture from the wave theory, thereby explaining the limited angular resolution of a perfect telescope &lt;templatestyles src="Crossreference/styles.css" /&gt;. By the end of the 1830s, the only prominent British physicist who held out against the wave theory was Brewster, whose objections included the difficulty of explaining photochemical effects and (in his opinion) dispersion. A German translation of "De la Lumière" was published in installments in 1825 and 1828. The wave theory was adopted by Fraunhofer in the early 1820s and by Franz Ernst Neumann in the 1830s, and then began to find favor in German textbooks. The economy of assumptions under the wave theory was emphasized by William Whewell in his "History of the Inductive Sciences", first published in 1837. In the corpuscular system, "every new class of facts requires a new supposition," whereas in the wave system, a hypothesis devised in order to explain one phenomenon is then found to explain or predict others. In the corpuscular system there is "no unexpected success, no happy coincidence, no convergence of principles from remote quarters"; but in the wave system, "all tends to unity and simplicity."  Hence, in 1850, when Foucault and Fizeau found by experiment that light travels more slowly in water than in air, in accordance with the wave explanation of refraction and contrary to the corpuscular explanation, the result came as no surprise. Lighthouses and the Fresnel lens. Fresnel was not the first person to focus a lighthouse beam using a lens. That distinction apparently belongs to the London glass-cutter Thomas Rogers, whose first lenses, 53cm in diameter and 14cm thick at the center, were installed at the Old Lower Lighthouse at Portland Bill in 1789. Further samples were installed in about half a dozen other locations by 1804. But much of the light was wasted by absorption in the glass. Nor was Fresnel the first to suggest replacing a convex lens with a series of concentric annular prisms, to reduce weight and absorption. In 1748, Count Buffon proposed grinding such prisms as steps in a single piece of glass. In 1790, the Marquis de Condorcet suggested that it would be easier to make the annular sections separately and assemble them on a frame; but even that was impractical at the time. These designs were intended not for lighthouses, but for burning glasses.609 Brewster, however, proposed a system similar to Condorcet's in 1811, and by 1820 was advocating its use in British lighthouses. Meanwhile, on 21 June 1819, Fresnel was "temporarily" seconded by the "Commission des Phares" (Commission of Lighthouses) on the recommendation of Arago (a member of the Commission since 1813), to review possible improvements in lighthouse illumination. The commission had been established by Napoleon in 1811 and placed under the Corps des Ponts—Fresnel's employer. By the end of August 1819, unaware of the Buffon-Condorcet-Brewster proposal, Fresnel made his first presentation to the commission, recommending what he called "lentilles à échelons" (lenses by steps) to replace the reflectors then in use, which reflected only about half of the incident light. One of the assembled commissioners, Jacques Charles, recalled Buffon's suggestion, leaving Fresnel embarrassed for having again "broken through an open door". But, whereas Buffon's version was biconvex and in one piece, Fresnel's was plano-convex and made of multiple prisms for easier construction. 
With an official budget of 500 francs, Fresnel approached three manufacturers. The third, François Soleil, produced the prototype. Finished in March 1820, it had a square lens panel 55 cm on a side, containing 97 polygonal (not annular) prisms—and so impressed the Commission that Fresnel was asked for a full eight-panel version. This model, completed a year later in spite of insufficient funding, had panels 76 cm square. In a public spectacle on the evening of 13 April 1821, it was demonstrated by comparison with the most recent reflectors, which it suddenly rendered obsolete. Fresnel's next lens was a rotating apparatus with eight "bull's-eye" panels, made in annular arcs by Saint-Gobain, giving eight rotating beams—to be seen by mariners as a periodic flash. Above and behind each main panel was a smaller, sloping bull's-eye panel of trapezoidal outline with trapezoidal elements. This refracted the light to a sloping plane mirror, which then reflected it horizontally, 7 degrees ahead of the main beam, increasing the duration of the flash. Below the main panels were 128 small mirrors arranged in four rings, stacked like the slats of a louver or Venetian blind. Each ring, shaped as a frustum of a cone, reflected the light to the horizon, giving a fainter steady light between the flashes. The official test, conducted on the unfinished "Arc de Triomphe" on 20 August 1822, was witnessed by the commission—and by Louis XVIII and his entourage—from 32km away. The apparatus was stored at Bordeaux for the winter, and then reassembled at Cordouan Lighthouse under Fresnel's supervision. On 25 July 1823, the world's first lighthouse Fresnel lens was lit. Soon afterwards, Fresnel started coughing up blood. In May 1824, Fresnel was promoted to secretary of the "Commission des Phares", becoming the first member of that body to draw a salary, albeit in the concurrent role of Engineer-in-Chief. He was also an examiner (not a teacher) at the École Polytechnique since 1821; but poor health, long hours during the examination season, and anxiety about judging others induced him to resign that post in late 1824, to save his energy for his lighthouse work. In the same year he designed the first "fixed" lens—for spreading light evenly around the horizon while minimizing waste above or below. Ideally the curved refracting surfaces would be segments of toroids about a common vertical axis, so that the dioptric panel would look like a cylindrical drum. If this was supplemented by reflecting (catoptric) rings above and below the refracting (dioptric) parts, the entire apparatus would look like a beehive. The second Fresnel lens to enter service was indeed a fixed lens, of third order, installed at Dunkirk by 1 February 1825. However, due to the difficulty of fabricating large toroidal prisms, this apparatus had a 16-sided polygonal plan. In 1825, Fresnel extended his fixed-lens design by adding a rotating array outside the fixed array. Each panel of the rotating array was to refract part of the fixed light from a horizontal fan into a narrow beam. Also in 1825, Fresnel unveiled the "Carte des Phares" (Lighthouse Map), calling for a system of 51 lighthouses plus smaller harbor lights, in a hierarchy of lens sizes (called "orders", the first order being the largest), with different characteristics to facilitate recognition: a constant light (from a fixed lens), one flash per minute (from a rotating lens with eight panels), and two per minute (sixteen panels). 
In late 1825, to reduce the loss of light in the reflecting elements, Fresnel proposed to replace each mirror with a catadioptric prism, through which the light would travel by refraction through the first surface, then total internal reflection off the second surface, then refraction through the third surface. The result was the lighthouse lens as we now know it. In 1826 he assembled a small model for use on the Canal Saint-Martin, but he did not live to see a full-sized version. The first fixed lens with toroidal prisms was a first-order apparatus designed by the Scottish engineer Alan Stevenson under the guidance of Léonor Fresnel, and fabricated by Isaac Cookson &amp; Co. from French glass; it entered service at the Isle of May in 1836. The first large catadioptric lenses were fixed third-order lenses made in 1842 for the lighthouses at Gravelines and Île Vierge. The first fully catadioptric "first-order" lens, installed at Ailly in 1852, gave eight rotating beams assisted by eight catadioptric panels at the top (to lengthen the flashes), plus a fixed light from below. The first fully catadioptric lens with "purely revolving" beams—also of first order—was installed at Saint-Clément-des-Baleines in 1854, and marked the completion of Augustin Fresnel's original "Carte des Phares". Production of one-piece stepped dioptric lenses—roughly as envisaged by Buffon—became practical in 1852, when John L. Gilliland of the Brooklyn Flint-Glass Company patented a method of making such lenses from press-molded glass. By the 1950s, the substitution of plastic for glass made it economic to use fine-stepped Fresnel lenses as condensers in overhead projectors. Still finer steps can be found in low-cost plastic "sheet" magnifiers. Honors. Fresnel was elected to the "Société Philomathique de Paris" in April 1819, and in 1822 became one of the editors of the Société's  "Bulletin des Sciences". As early as May 1817, at Arago's suggestion, Fresnel applied for membership of the Académie des Sciences, but received only one vote. The successful candidate on that occasion was Joseph Fourier. In November 1822, Fourier's elevation to Permanent Secretary of the Académie created a vacancy in the physics section, which was filled in February 1823 by Pierre Louis Dulong, with 36 votes to Fresnel's 20. But in May 1823, after another vacancy was left by the death of Jacques Charles,  Fresnel's election was unanimous. In 1824, Fresnel was made a "chevalier de la Légion d'honneur" (Knight of the Legion of Honour). Meanwhile, in Britain, the wave theory was yet to take hold; Fresnel wrote to Thomas Young in November 1824, saying in part: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;I am far from denying the value that I attach to the praise of English scholars, or pretending that they would not have flattered me agreeably. But for a long time this sensibility, or vanity, which is called the love of glory, has been much blunted in me: I work far less to capture the public's votes than to obtain an inner approbation which has always been the sweetest reward of my efforts. Doubtless I have often needed the sting of vanity to excite me to pursue my researches in moments of disgust or discouragement; but all the compliments I received from "MM." Arago, Laplace, and Biot never gave me as much pleasure as the discovery of a theoretical truth and the confirmation of my calculations by experiment. But "the praise of English scholars" soon followed. 
On 9 June 1825, Fresnel was made a Foreign Member of the Royal Society of London. In 1827 he was awarded the society's Rumford Medal for the year 1824, "For his Development of the Undulatory Theory as applied to the Phenomena of Polarized Light, and for his various important discoveries in Physical Optics." A monument to Fresnel at his birthplace &lt;templatestyles src="Crossreference/styles.css" /&gt; was dedicated on 14 September 1884 with a speech by Jules Jamin, Permanent Secretary of the Académie des Sciences. "FRESNEL" is among the 72 names embossed on the Eiffel Tower (on the south-east side, fourth from the left). In the 19th century, as every lighthouse in France acquired a Fresnel lens, every one acquired a bust of Fresnel, seemingly watching over the coastline that he had made safer. The lunar features "Promontorium Fresnel" and "Rimae Fresnel" were later named after him. Decline and death. Fresnel's health, which had always been poor, deteriorated in the winter of 1822–1823, increasing the urgency of his original research, and (in part) preventing him from contributing an article on polarization and double refraction for the "Encyclopædia Britannica". The memoirs on circular and elliptical polarization and optical rotation, and on the detailed derivation of the Fresnel equations and their application to total internal reflection, date from this period. In the spring he recovered enough, in his own view, to supervise the lens installation at Cordouan. Soon afterwards, it became clear that his condition was tuberculosis. In 1824, he was advised that if he wanted to live longer, he needed to scale back his activities. Perceiving his lighthouse work to be his most important duty, he resigned as an examiner at the École Polytechnique, and closed his scientific notebooks. His last note to the Académie, read on 13 June 1825, described the first radiometer and attributed the observed repulsive force to a temperature difference. Although his fundamental research ceased, his advocacy did not; as late as August or September 1826, he found the time to answer Herschel's queries on the wave theory. It was Herschel who recommended Fresnel for the Royal Society's Rumford Medal. Fresnel's cough worsened in the winter of 1826–1827, leaving him too ill to return to Mathieu in the spring. The Académie meeting of 30 April 1827 was the last that he attended. In early June he was carried to Ville-d'Avray, west of Paris. There his mother joined him. On 6 July, Arago arrived to deliver the Rumford Medal. Sensing Arago's distress, Fresnel whispered that "the most beautiful crown means little, when it is laid on the grave of a friend." Fresnel did not have the strength to reply to the Royal Society. He died eight days later, on Bastille Day. He is buried at Père Lachaise Cemetery, Paris. The inscription is partly eroded away; the legible part says, when translated, "To the memory of Augustin Jean Fresnel, member of the Institute of France". Posthumous publications. Fresnel's "second memoir" on double refraction was not printed until late 1827, a few months after his death. Until then, the best published source on his work on double refraction was an extract of that memoir, printed in 1822. His final treatment of partial reflection and total internal reflection, read to the Académie in January 1823, was thought to be lost until it was rediscovered among the papers of the deceased Joseph Fourier (1768–1830), and was printed in 1831. Until then, it was known chiefly through an extract printed in 1823 and 1825.
The memoir introducing the parallelepiped form of the Fresnel rhomb, read in March 1818, was mislaid until 1846, and then attracted such interest that it was soon republished in English. Most of Fresnel's writings on polarized light before 1821—including his first theory of chromatic polarization (submitted 7 October 1816) and the crucial "supplement" of January 1818 —were not published in full until his "Oeuvres complètes" ("complete works") began to appear in 1866. The "supplement" of July 1816, proposing the "efficacious ray" and reporting the famous double-mirror experiment, met the same fate, as did the "first memoir" on double refraction. Publication of Fresnel's collected works was itself delayed by the deaths of successive editors. The task was initially entrusted to Félix Savary, who died in 1841. It was restarted twenty years later by the Ministry of Public Instruction. Of the three editors eventually named in the "Oeuvres", Sénarmont died in 1862, Verdet in 1866, and Léonor Fresnel in 1869, by which time only two of the three volumes had appeared. At the beginning of vol. 3 (1870), the completion of the project is described in a long footnote by "J. Lissajous." Not included in the "Oeuvres"  are two short notes by Fresnel on magnetism, which were discovered among Ampère's manuscripts.104 In response to Ørsted's discovery of electromagnetism in 1820, Ampère initially supposed that the field of a permanent magnet was due to a macroscopic circulating current. Fresnel suggested instead that there was a "microscopic" current circulating around each particle of the magnet. In his first note, he argued that microscopic currents, unlike macroscopic currents, would explain why a hollow cylindrical magnet does not lose its magnetism when cut longitudinally. In his second note, dated 5 July 1821, he further argued that a macroscopic current had the counterfactual implication that a permanent magnet should be hot, whereas microscopic currents circulating around the molecules might avoid the heating mechanism.101–104 He was not to know that the fundamental units of permanent magnetism are even smaller than molecules &lt;templatestyles src="Crossreference/styles.css" /&gt;. The two notes, together with Ampère's acknowledgment, were eventually published in 1885. Lost works. Fresnel's essay "Rêveries" of 1814 has not survived. The article "Sur les Différents Systèmes relatifs à la Théorie de la Lumière" ("On the Different Systems relating to the Theory of Light"), which Fresnel wrote for the newly launched English journal "European Review", was received by the publisher's agent in Paris in September 1824. The journal failed before Fresnel's contribution could be published. Fresnel tried unsuccessfully to recover the manuscript. The editors of his collected works were unable to find it, and concluded that it was probably lost. Unfinished work. Aether drag and aether density. In 1810, Arago found experimentally that the degree of refraction of starlight does not depend on the direction of the earth's motion relative to the line of sight. In 1818, Fresnel showed that this result could be explained by the wave theory, on the hypothesis that if an object with refractive index formula_30 moved at velocity formula_31 relative to the external aether (taken as stationary), then the velocity of light inside the object gained the additional component formula_32. 
He supported that hypothesis by supposing that if the density of the external aether was taken as unity, the density of the internal aether was formula_33, of which the excess, namely formula_34, was dragged along at velocity formula_31, whence the "average" velocity of the internal aether was formula_32. The factor in parentheses, which Fresnel originally expressed in terms of wavelengths, became known as the "Fresnel drag coefficient". &lt;templatestyles src="Crossreference/styles.css" /&gt; In his analysis of double refraction, Fresnel supposed that the different refractive indices in different directions within the "same medium" were due to a directional variation in elasticity, not density (because the concept of mass per unit volume is not directional). But in his treatment of partial reflection, he supposed that the different refractive indices of "different media" were due to different aether densities, not different elasticities. Dispersion. The analogy between light waves and transverse waves in elastic solids does not predict "dispersion"—that is, the frequency-dependence of the speed of propagation, which enables prisms to produce spectra and causes lenses to suffer from chromatic aberration. Fresnel, in "De la Lumière" and in the second supplement to his first memoir on double refraction, suggested that dispersion could be accounted for if the particles of the medium exerted forces on each other over distances that were significant fractions of a wavelength. Later, more than once, Fresnel referred to the demonstration of this result as being contained in a note appended to his "second memoir" on double refraction. No such note appeared in print, and the relevant manuscripts found after his death showed only that, around 1824, he was comparing refractive indices (measured by Fraunhofer) with a theoretical formula, the meaning of which was not fully explained. In the 1830s, Fresnel's suggestion was taken up by Cauchy, Baden Powell, and Philip Kelland, and it was found to be tolerably consistent with the variation of refractive indices with wavelength over the visible spectrum for a variety of transparent media &lt;templatestyles src="Crossreference/styles.css" /&gt;. These investigations were enough to show that the wave theory was at least compatible with dispersion; if the model of dispersion was to be accurate over a wider range of frequencies, it needed to be modified so as to take account of resonances within the medium &lt;templatestyles src="Crossreference/styles.css" /&gt;. Conical refraction. The analytical complexity of Fresnel's derivation of the ray-velocity surface was an implicit challenge to find a shorter path to the result. This was answered by MacCullagh in 1830, and by William Rowan Hamilton in 1832. Legacy. Within a century of Fresnel's initial stepped-lens proposal, more than 10,000 lights with Fresnel lenses were protecting lives and property around the world. Concerning the other benefits, the science historian Theresa H. Levitt has remarked: &lt;templatestyles src="Template:Blockquote/styles.css" /&gt;Everywhere I looked, the story repeated itself. The moment a Fresnel lens appeared at a location was the moment that region became linked into the world economy. In the history of physical optics, Fresnel's successful revival of the wave theory nominates him as the pivotal figure between Newton, who held that light consisted of corpuscles, and James Clerk Maxwell, who established that light waves are electromagnetic. 
Whereas Albert Einstein described Maxwell's work as "the most profound and the most fruitful that physics has experienced since the time of Newton,"  commentators of the era between Fresnel and Maxwell made similarly strong statements about Fresnel: What Whewell called the "true theory" has since undergone two major revisions. The first, by Maxwell, specified the physical fields whose variations constitute the waves of light. Without the benefit of this knowledge, Fresnel managed to construct the world's first coherent theory of light, showing in retrospect that his methods are applicable to multiple types of waves. The second revision, initiated by Einstein's explanation of the photoelectric effect, supposed that the energy of light waves was divided into quanta, which were eventually identified with particles called photons. But photons did not exactly correspond to Newton's corpuscles; for example, Newton's explanation of ordinary refraction required the corpuscles to travel faster in media of higher refractive index, which photons do not. Neither did photons displace waves; rather, they led to the paradox of wave–particle duality. Moreover, the phenomena studied by Fresnel, which included nearly all the optical phenomena known at his time, are still most easily explained in terms of the "wave" nature of light. So it was that, as late as 1927, the astronomer Eugène Michel Antoniadi declared Fresnel to be "the dominant figure in optics."  Explanatory notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; General and cited references. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "C(x) = \\!\\int_0^x \\!\\cos\\big(\\tfrac{1}{2}\\pi z^2\\big)\\,dz" }, { "math_id": 1, "text": "S(x) = \\!\\int_0^x \\!\\sin\\big(\\tfrac{1}{2}\\pi z^2\\big)\\,dz\\,." }, { "math_id": 2, "text": "I_o = \\cos^2i\\,\\cos^2(i{-}s) + \\sin^2i\\,\\sin^2(i{-}s) + \\tfrac{1}{2}\\sin 2i\\,\\sin 2(i{-}s)\\cos\\phi\\,," }, { "math_id": 3, "text": "I_e = \\cos^2i\\,\\sin^2(i{-}s) + \\sin^2i\\,\\cos^2(i{-}s) - \\tfrac{1}{2}\\sin 2i\\,\\sin 2(i{-}s)\\cos\\phi\\,," }, { "math_id": 4, "text": "i" }, { "math_id": 5, "text": "s" }, { "math_id": 6, "text": "\\phi" }, { "math_id": 7, "text": "\\cos\\phi" }, { "math_id": 8, "text": "\\,I_o+I_e=1\\," }, { "math_id": 9, "text": "\\phi," }, { "math_id": 10, "text": "U=\\cos^2\\tfrac{\\phi}{2}" }, { "math_id": 11, "text": "A=\\sin^2\\tfrac{\\phi}{2}\\,," }, { "math_id": 12, "text": " \\!I_o = U\\cos^2 s + A\\cos^2(2i-s)\\,," }, { "math_id": 13, "text": " I_e = U\\sin^2 s + A\\sin^2(2i-s)\\,," }, { "math_id": 14, "text": "U" }, { "math_id": 15, "text": "A" }, { "math_id": 16, "text": "A," }, { "math_id": 17, "text": "I_o" }, { "math_id": 18, "text": "I_e," }, { "math_id": 19, "text": "r_s=-\\frac{\\sin(i-r)}{\\sin(i+r)}" }, { "math_id": 20, "text": "r_p=\\frac{\\tan(i-r)}{\\tan(i+r)}\\,," }, { "math_id": 21, "text": "r" }, { "math_id": 22, "text": "v_o^{2\\!}-v_e^2 = k\\sin^2\\theta \\,," }, { "math_id": 23, "text": "v_o" }, { "math_id": 24, "text": "v_e" }, { "math_id": 25, "text": "\\theta" }, { "math_id": 26, "text": "r^2\\big(a^2x^{2\\!}+ b^2y^{2\\!}+ c^2z^2\\big) - a^2\\big(b^{2\\!} + c^2\\big)x^2 - b^2\\big(c^{2\\!} + a^2\\big)y^2 - c^2\\big(a^{2\\!} + b^2\\big)z^2 + a^2b^2c^2 =\\, 0\\,," }, { "math_id": 27, "text": "\\,r^2 = x^{2\\!} + y^{2\\!} + z^2,\\," }, { "math_id": 28, "text": "\\,a,b,c\\," }, { "math_id": 29, "text": "\\frac{x^2}{r^2-a^2} + \\frac{y^2}{r^2-b^2} + \\frac{z^2}{r^2-c^2} \\,=\\, 1\\,." }, { "math_id": 30, "text": "n" }, { "math_id": 31, "text": "v" }, { "math_id": 32, "text": "\\,v(1-1/n^2)" }, { "math_id": 33, "text": "n^2" }, { "math_id": 34, "text": "\\,n^2{-}1\\," } ]
https://en.wikipedia.org/wiki?curid=1141
1141106
Siegel upper half-space
Set of complex matrices with positive definite imaginary part In mathematics, the Siegel upper half-space of degree "g" (or genus "g") (also called the Siegel upper half-plane) is the set of "g" × "g" symmetric matrices over the complex numbers whose imaginary part is positive definite. It was introduced by Siegel (1939). It is the symmetric space associated to the symplectic group Sp(2"g", R). The Siegel upper half-space has properties as a complex manifold that generalize the properties of the upper half-plane, which is the Siegel upper half-space in the special case "g" = 1. The group of automorphisms preserving the complex structure of the manifold is isomorphic to the symplectic group Sp(2"g", R). Just as the two-dimensional hyperbolic metric is the unique (up to scaling) metric on the upper half-plane whose isometry group is the complex automorphism group SL(2, R) = Sp(2, R), the Siegel upper half-space has only one metric up to scaling whose isometry group is Sp(2"g", R). Writing a generic matrix "Z" in the Siegel upper half-space in terms of its real and imaginary parts as "Z" = "X" + "iY", all metrics with isometry group Sp(2"g", R) are proportional to formula_0 The Siegel upper half-plane can be identified with the set of tame almost complex structures compatible with a symplectic structure formula_1 on the underlying formula_2-dimensional real vector space formula_3, that is, the set of formula_4 such that formula_5 and formula_6 for all vectors formula_7. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
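Membership in the Siegel upper half-space is just symmetry plus positive definiteness of the imaginary part, so it is easy to test numerically. The sketch below (an illustration only; the function names and the example matrices are made up) checks membership and evaluates the metric expression above for a small symmetric displacement dZ; in the case g = 1 it reduces to the familiar hyperbolic |dz|²/y² on the ordinary upper half-plane.

```python
import numpy as np

def in_siegel_upper_half_space(Z, tol=1e-12):
    """Z must be complex symmetric with positive definite imaginary part."""
    Z = np.asarray(Z, dtype=complex)
    symmetric = np.allclose(Z, Z.T, atol=tol)
    pos_def = np.all(np.linalg.eigvalsh(Z.imag) > tol)   # Im(Z) is real symmetric
    return bool(symmetric and pos_def)

def siegel_metric(Z, dZ):
    """ds^2 = tr(Y^{-1} dZ Y^{-1} conj(dZ)) for a small symmetric displacement dZ."""
    Y_inv = np.linalg.inv(np.asarray(Z, dtype=complex).imag)
    dZ = np.asarray(dZ, dtype=complex)
    return np.trace(Y_inv @ dZ @ Y_inv @ dZ.conj()).real

Z = np.array([[1 + 2j, 0.5 + 0.1j],
              [0.5 + 0.1j, -1 + 3j]])
print(in_siegel_upper_half_space(Z))             # True: symmetric, Im(Z) positive definite
# g = 1: the metric reduces to |dz|^2 / y^2 on the upper half-plane.
print(siegel_metric([[2j]], [[0.01 + 0.02j]]))   # 1.25e-4 = (0.01^2 + 0.02^2) / 2^2
```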
[ { "math_id": 0, "text": "d s^2 = \\text{tr}(Y^{-1} dZ Y^{-1} d \\bar{Z})." }, { "math_id": 1, "text": "\\omega" }, { "math_id": 2, "text": "2n" }, { "math_id": 3, "text": "V" }, { "math_id": 4, "text": "J \\in Hom(V)" }, { "math_id": 5, "text": "J^2 = -1" }, { "math_id": 6, "text": " \\omega(Jv, v) > 0 " }, { "math_id": 7, "text": "v \\ne 0" } ]
https://en.wikipedia.org/wiki?curid=1141106
1141150
Rouché's theorem
Theorem about zeros of holomorphic functions Rouché's theorem, named after Eugène Rouché, states that for any two complex-valued functions f and "g" holomorphic inside some region formula_0 with closed contour formula_1, if |"g"("z")| &lt; |"f"("z")| on formula_1, then "f" and "f" + "g" have the same number of zeros inside formula_0, where each zero is counted as many times as its multiplicity. This theorem assumes that the contour formula_1 is simple, that is, without self-intersections. Rouché's theorem is an easy consequence of a stronger symmetric Rouché's theorem described below. Usage. The theorem is usually used to simplify the problem of locating zeros, as follows. Given an analytic function, we write it as the sum of two parts, one of which is simpler and grows faster than (thus dominates) the other part. We can then locate the zeros by looking at only the dominating part. For example, the polynomial formula_2 has exactly 5 zeros in the disk formula_3 since formula_4 for every formula_5, and formula_6, the dominating part, has five zeros in the disk. Geometric explanation. It is possible to provide an informal explanation of Rouché's theorem. Let "C" be a closed, simple curve (i.e., not self-intersecting). Let "h"("z") = "f"("z") + "g"("z"). If "f" and "g" are both holomorphic on the interior of "C", then "h" must also be holomorphic on the interior of "C". Then, with the conditions imposed above, the Rouche's theorem in its original (and not symmetric) form says that &lt;templatestyles src="Block indent/styles.css"/&gt;If |"f"("z")| &gt; |"h"("z") − "f"("z")|, for every "z" in "C," then "f" and "h" have the same number of zeros in the interior of "C". Notice that the condition |"f"("z")| &gt; |"h"("z") − "f"("z")| means that for any "z", the distance from "f"("z") to the origin is larger than the length of "h"("z") − "f"("z"), which in the following picture means that for each point on the blue curve, the segment joining it to the origin is larger than the green segment associated with it. Informally we can say that the blue curve "f"("z") is always closer to the red curve "h"("z") than it is to the origin. The previous paragraph shows that "h"("z") must wind around the origin exactly as many times as "f"("z"). The index of both curves around zero is therefore the same, so by the argument principle, "f"("z") and "h"("z") must have the same number of zeros inside C. One popular, informal way to summarize this argument is as follows: If a person were to walk a dog on a leash around and around a tree, such that the distance between the person and the tree is always greater than the length of the leash, then the person and the dog go around the tree the same number of times. Applications. Bounding roots. Consider the polynomial formula_7 with formula_8. By the quadratic formula it has two zeros at formula_9. Rouché's theorem can be used to obtain some hint about their positions. Since formula_10 Rouché's theorem says that the polynomial has exactly one zero inside the disk formula_11. Since formula_12 is clearly outside the disk, we conclude that the zero is formula_13. In general, a polynomial formula_14. If formula_15 for some formula_16, then by Rouche's theorem, the polynomial has exactly formula_17 roots inside formula_18. This sort of argument can be useful in locating residues when one applies Cauchy's residue theorem. Fundamental theorem of algebra. Rouché's theorem can also be used to give a short proof of the fundamental theorem of algebra. 
Let formula_19 and choose formula_20 so large that: formula_21 Since formula_22 has formula_23 zeros inside the disk formula_24 (because formula_25), it follows from Rouché's theorem that formula_26 also has the same number of zeros inside the disk. One advantage of this proof over the others is that it shows not only that a polynomial must have a zero but the number of its zeros is equal to its degree (counting, as usual, multiplicity). Another use of Rouché's theorem is to prove the open mapping theorem for analytic functions. We refer to the article for the proof. Symmetric version. A stronger version of Rouché's theorem was published by Theodor Estermann in 1962. It states: let formula_27 be a bounded region with continuous boundary formula_1. Two holomorphic functions formula_28 have the same number of roots (counting multiplicity) in formula_0, if the strict inequality formula_29 holds on the boundary formula_30 The original version of Rouché's theorem then follows from this symmetric version applied to the functions formula_31 together with the trivial inequality formula_32 (in fact this inequality is strict since formula_33 for some formula_34 would imply formula_35). The statement can be understood intuitively as follows. By considering formula_36 in place of formula_37, the condition can be rewritten as formula_38 for formula_39. Since formula_40 always holds by the triangle inequality, this is equivalent to saying that formula_41 on formula_1, which in turn means that for formula_34 the functions formula_42 and formula_43 are non-vanishing and formula_44. Intuitively, if the values of formula_45 and formula_37 never pass through the origin and never point in the same direction as formula_46 circles along formula_1, then formula_42 and formula_43 must wind around the origin the same number of times. Proof of the symmetric form of Rouché's theorem. Let formula_47 be a simple closed curve whose image is the boundary formula_1. The hypothesis implies that "f" has no roots on formula_1, hence by the argument principle, the number "Nf"("K") of zeros of "f" in "K" is formula_48 i.e., the winding number of the closed curve formula_49 around the origin; similarly for "g". The hypothesis ensures that "g"("z") is not a negative real multiple of "f"("z") for any "z" = "C"("x"), thus 0 does not lie on the line segment joining "f"("C"("x")) to "g"("C"("x")), and formula_50 is a homotopy between the curves formula_49 and formula_51 avoiding the origin. The winding number is homotopy-invariant: the function formula_52 is continuous and integer-valued, hence constant. This shows formula_53 References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
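The argument-principle characterization used in the proof above also gives a simple numerical check of Rouché-style zero counts. The following sketch (illustrative only; the function name and the sampling density are arbitrary choices) counts zeros inside a circle by tracking the winding of f around the origin, and reproduces the example from the Usage section above: z^5 + 3z^3 + 7 and its dominating part z^5 both have five zeros in |z| < 2.

```python
import numpy as np

def zeros_inside(f, center=0.0, radius=1.0, samples=20000):
    """Count zeros of a holomorphic f inside |z - center| = radius via the
    argument principle: the winding number of f around the origin along the circle.
    Assumes f has no zeros on the circle itself."""
    t = np.linspace(0.0, 2.0 * np.pi, samples + 1)
    w = f(center + radius * np.exp(1j * t))
    phases = np.unwrap(np.angle(w))              # continuous branch of arg(f(z))
    return int(round((phases[-1] - phases[0]) / (2.0 * np.pi)))

print(zeros_inside(lambda z: z**5, radius=2))                # 5
print(zeros_inside(lambda z: z**5 + 3*z**3 + 7, radius=2))   # 5, as Rouché's theorem predicts
```

The unwrapped phase is used instead of integrating f'/f directly; with a dense enough sampling the two give the same integer.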
[ { "math_id": 0, "text": "K" }, { "math_id": 1, "text": "\\partial K" }, { "math_id": 2, "text": "z^5 + 3z^3 + 7" }, { "math_id": 3, "text": "|z| < 2" }, { "math_id": 4, "text": "|3z^3 + 7| \\le 31 < 32 = |z^5|" }, { "math_id": 5, "text": "|z| = 2" }, { "math_id": 6, "text": "z^5" }, { "math_id": 7, "text": "z^2 + 2az + b^2" }, { "math_id": 8, "text": "a > b > 0" }, { "math_id": 9, "text": "-a \\pm \\sqrt{a^2 - b^2}" }, { "math_id": 10, "text": "|z^2 + b^2| \\le 2b^2 < 2a|z| \\text{ for all } |z| = b," }, { "math_id": 11, "text": "|z| < b" }, { "math_id": 12, "text": "-a - \\sqrt{a^2 - b^2}" }, { "math_id": 13, "text": "-a + \\sqrt{a^2 - b^2}" }, { "math_id": 14, "text": "f(z) = a_n z^n + \\cdots + a_0" }, { "math_id": 15, "text": "|a_k| r^k > \\sum_{j\\neq k}|a_j| r^j" }, { "math_id": 16, "text": "r > 0, k \\in 0:n" }, { "math_id": 17, "text": "k" }, { "math_id": 18, "text": "B(0, r)" }, { "math_id": 19, "text": "p(z) = a_0 + a_1z + a_2 z^2 + \\cdots + a_n z^n, \\quad a_n \\ne 0" }, { "math_id": 20, "text": "R > 0" }, { "math_id": 21, "text": "|a_0 + a_1z + \\cdots + a_{n-1} z^{n-1}| \\le \\sum_{j=0}^{n - 1} |a_j| R^j < |a_n| R^n = |a_n z^n| \\text{ for } |z| = R." }, { "math_id": 22, "text": "a_n z^n" }, { "math_id": 23, "text": "n" }, { "math_id": 24, "text": "|z| < R" }, { "math_id": 25, "text": "R>0" }, { "math_id": 26, "text": "p" }, { "math_id": 27, "text": "K\\subset G" }, { "math_id": 28, "text": "f,\\,g\\in\\mathcal H(G)" }, { "math_id": 29, "text": "|f(z)-g(z)|<|f(z)|+|g(z)| \\qquad \\left(z\\in \\partial K\\right)" }, { "math_id": 30, "text": "\\partial K." }, { "math_id": 31, "text": "f+g,f" }, { "math_id": 32, "text": "|f(z)+g(z)| \\ge 0" }, { "math_id": 33, "text": "f(z)+g(z) = 0" }, { "math_id": 34, "text": "z\\in\\partial K" }, { "math_id": 35, "text": "|g(z)| = |f(z)|" }, { "math_id": 36, "text": "-g" }, { "math_id": 37, "text": "g" }, { "math_id": 38, "text": "|f(z) + g(z)|<|f(z)|+|g(z)|" }, { "math_id": 39, "text": "z\\in \\partial K" }, { "math_id": 40, "text": "|f(z) + g(z)| \\leq |f(z)|+|g(z)|" }, { "math_id": 41, "text": "|f(z) + g(z)| \\neq |f(z)|+|g(z)|" }, { "math_id": 42, "text": "f(z)" }, { "math_id": 43, "text": "g(z)" }, { "math_id": 44, "text": "\\arg{f(z)} \\neq \\arg{g(z)}" }, { "math_id": 45, "text": "f" }, { "math_id": 46, "text": "z" }, { "math_id": 47, "text": "C\\colon[0,1]\\to\\mathbb C" }, { "math_id": 48, "text": "\\frac1{2\\pi i}\\oint_C\\frac{f'(z)}{f(z)}\\,dz=\\frac1{2\\pi i}\\oint_{f\\circ C} \\frac{dz}z =\\mathrm{Ind}_{f\\circ C}(0)," }, { "math_id": 49, "text": "f\\circ C" }, { "math_id": 50, "text": "H_t(x) = (1-t)f(C(x)) + t g(C(x))" }, { "math_id": 51, "text": "g\\circ C" }, { "math_id": 52, "text": "I(t)=\\mathrm{Ind}_{H_t}(0)=\\frac1{2\\pi i}\\oint_{H_t}\\frac{dz}z" }, { "math_id": 53, "text": "N_f(K)=\\mathrm{Ind}_{f\\circ C}(0)=\\mathrm{Ind}_{g\\circ C}(0)=N_g(K)." } ]
https://en.wikipedia.org/wiki?curid=1141150
11413257
Moving sofa problem
Geometry question on motion planning &lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in mathematics: What is the largest area of a shape that can be maneuvered through a unit-width L-shaped corridor? In mathematics, the moving sofa problem or sofa problem is a two-dimensional idealization of real-life furniture-moving problems and asks for the rigid two-dimensional shape of the largest area that can be maneuvered through an L-shaped planar region with legs of unit width. The area thus obtained is referred to as the sofa constant. The exact value of the sofa constant is an open problem. The currently leading solution, by Joseph L. Gerver, has a value of approximately 2.2195 and is thought to be close to the optimal, based upon subsequent study and theoretical bounds. History. The first formal publication was by the Austrian-Canadian mathematician Leo Moser in 1966, although there had been many informal mentions before that date. Bounds. Work has been done to prove that the sofa constant (A) cannot be below or above specific values (lower bounds and upper bounds). Lower. A lower bound on the sofa constant can be proven by finding a specific shape of a high area and a path for moving it through the corner. formula_0 is an obvious lower bound. This comes from a sofa that is a half-disk of unit radius, which can slide up one passage into the corner, rotate within the corner around the center of the disk, and then slide out the other passage. In 1968, John Hammersley stated a lower bound of formula_1. This can be achieved using a shape resembling a telephone handset, consisting of two quarter-disks of radius 1 on either side of a 1 by formula_2 rectangle from which a half-disk of radius formula_3 has been removed. In 1992, Joseph L. Gerver of Rutgers University described a sofa with 18 curve sections, each taking a smooth analytic form. This further increased the lower bound for the sofa constant to approximately 2.2195 (sequence in the OEIS). Upper. Hammersley stated an upper bound on the sofa constant of at most formula_4. Yoav Kallus and Dan Romik published a new upper bound in 2018, capping the sofa constant at formula_5. Their approach involves rotating the corridor (rather than the sofa) through a finite sequence of distinct angles (rather than continuously) and using a computer search to find translations for each rotated copy so that the intersection of all of the copies has a connected component with as large an area as possible. As they show, this provides a valid upper bound for the optimal sofa, which can be made more accurate using more rotation angles. Five carefully chosen rotation angles lead to the stated upper bound. Ambidextrous sofa. A variant of the sofa problem asks the shape of the largest area that can go around both left and right 90-degree corners in a corridor of unit width (where the left and right corners are spaced sufficiently far apart that one is fully negotiated before the other is encountered). A lower bound of area approximately 1.64495521 has been described by Dan Romik. 18 curve sections also describe his sofa. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
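The lower bounds quoted above are easy to reproduce. The short computation below (illustrative only, with made-up variable names) rebuilds Hammersley's "telephone handset" area from its description, two unit quarter-disks plus a 1 by 4/π rectangle minus a half-disk of radius 2/π, and compares it with the other bounds mentioned in this section.

```python
from math import pi

# Hammersley's sofa, assembled from the description above.
quarter_disks = 2 * (pi * 1**2) / 4          # two quarter-disks of radius 1
rectangle     = 1 * (4 / pi)                 # 1 x (4/pi) rectangle
cutout        = (pi * (2 / pi)**2) / 2       # half-disk of radius 2/pi removed
hammersley = quarter_disks + rectangle - cutout

print(hammersley)              # 2.2074..., equal to pi/2 + 2/pi
print(pi / 2 + 2 / pi)         # same value, the closed form quoted above
print(pi / 2)                  # 1.5707..., the half-disk lower bound
print(2 * 2**0.5)              # 2.8284..., Hammersley's upper bound
```

Gerver's shape, by contrast, has no such elementary closed form; its area of about 2.2195 comes from the 18 analytic curve sections mentioned above.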
[ { "math_id": 0, "text": "A \\geq \\pi/2 \\approx 1.57" }, { "math_id": 1, "text": "A \\geq \\pi/2 + 2/\\pi \\approx 2.2074" }, { "math_id": 2, "text": "4/\\pi" }, { "math_id": 3, "text": "2/\\pi" }, { "math_id": 4, "text": "2\\sqrt{2} \\approx 2.8284" }, { "math_id": 5, "text": "2.37" } ]
https://en.wikipedia.org/wiki?curid=11413257
11415890
Edmonds matrix
In graph theory, the Edmonds matrix formula_0 of a balanced bipartite graph formula_1 with sets of vertices formula_2 and formula_3 is defined by formula_4 where the "xij" are indeterminates. One application of the Edmonds matrix of a bipartite graph is that the graph admits a perfect matching if and only if the polynomial det("A") in the "xij" is not identically zero. Furthermore, the number of perfect matchings is equal to the number of monomials in the polynomial det("A"), and is also equal to the permanent of formula_0. In addition, the rank of formula_0 is equal to the maximum matching size of formula_5. The Edmonds matrix is named after Jack Edmonds. The Tutte matrix is a generalisation to non-bipartite graphs.
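A small symbolic computation makes the determinant criterion concrete. The sketch below (illustrative; the example graph and the helper name are made up) builds the Edmonds matrix of a 3 × 3 bipartite graph with sympy and checks that det(A) is non-zero exactly because the graph has perfect matchings, with one monomial per matching.

```python
import sympy as sp

def edmonds_matrix(n, edges):
    """Edmonds matrix of a balanced bipartite graph with parts {u_0..u_{n-1}} and {v_0..v_{n-1}}."""
    return sp.Matrix(n, n, lambda i, j: sp.Symbol(f"x_{i}{j}") if (i, j) in edges else 0)

# Edges of a small bipartite graph: u_i -- v_j for each pair listed.
edges = {(0, 0), (0, 1), (1, 1), (1, 2), (2, 0), (2, 2)}
A = edmonds_matrix(3, edges)
det = sp.expand(A.det())

print(det)                            # x_00*x_11*x_22 + x_01*x_12*x_20, not identically zero
print(len(det.as_ordered_terms()))    # 2 monomials = 2 perfect matchings
```

Because each monomial comes from a distinct system of edges, no cancellation occurs, which is why counting monomials counts matchings.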
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "G = (U, V, E)" }, { "math_id": 2, "text": "U = \\{u_1, u_2, \\dots , u_n \\}" }, { "math_id": 3, "text": "V = \\{v_1, v_2, \\dots , v_n\\}" }, { "math_id": 4, "text": "A_{ij} = \\left\\{ \\begin{array}{ll}\n x_{ij} & (u_i, v_j) \\in E \\\\\n 0 & (u_i, v_j) \\notin E\n\\end{array}\\right." }, { "math_id": 5, "text": "G" } ]
https://en.wikipedia.org/wiki?curid=11415890
11417634
Amenable Banach algebra
In mathematics, specifically in functional analysis, a Banach algebra, "A", is amenable if all bounded derivations from "A" into dual Banach "A"-bimodules are inner (that is, of the form formula_0 for some formula_1 in the dual module). An equivalent characterization is that "A" is amenable if and only if it has a virtual diagonal. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
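The defining formula for an inner derivation is easy to illustrate in finite dimensions, even though amenability itself is a statement about all dual bimodules and is most interesting for infinite-dimensional algebras. The toy check below (purely illustrative; the setting of 2 × 2 complex matrices and the function names are assumptions of the example, not part of the article) builds the map a ↦ a·x − x·a and verifies numerically that it satisfies the Leibniz rule D(ab) = D(a)b + aD(b), i.e. that every such map really is a derivation.

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_derivation(x):
    """The map a -> a @ x - x @ a determined by a fixed element x."""
    return lambda a: a @ x - x @ a

def random_matrix():
    return rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))

x = random_matrix()
D = inner_derivation(x)
a, b = random_matrix(), random_matrix()

# Leibniz rule: D(ab) = D(a) b + a D(b), so the map is a (bounded) derivation.
print(np.allclose(D(a @ b), D(a) @ b + a @ D(b)))   # True
```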
[ { "math_id": 0, "text": "a\\mapsto a.x-x.a" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "L^1(G)" }, { "math_id": 3, "text": "\\theta" }, { "math_id": 4, "text": "\\overline{\\theta(A)}" } ]
https://en.wikipedia.org/wiki?curid=11417634
11421646
Prime k-tuple
Repeatable pattern of differences between prime numbers In number theory, a prime k-tuple is a finite collection of values representing a repeatable pattern of differences between prime numbers. For a k-tuple ("a", "b", …), the positions where the k-tuple matches a pattern in the prime numbers are given by the set of integers n such that all of the values ("n" + "a", "n" + "b", …) are prime. Typically the first value in the k-tuple is 0 and the rest are distinct positive even numbers. Named patterns. Several of the shortest "k"-tuples are known by other common names: OEIS sequence OEIS:  covers 7-tuples ("prime septuplets") and contains an overview of related sequences, e.g. the three sequences corresponding to the three admissible 8-tuples ("prime octuplets"), and the union of all 8-tuples. The first term in these sequences corresponds to the first prime in the smallest prime constellation shown below. Admissibility. In order for a k-tuple to have infinitely many positions at which all of its values are prime, there cannot exist a prime p such that the tuple includes every different possible value modulo p. For, if such a prime p existed, then no matter which value of n was chosen, one of the values formed by adding n to the tuple would be divisible by p, so there could only be finitely many prime placements (only those including p itself). For example, the numbers in a k-tuple cannot take on all three values 0, 1, and 2 modulo 3; otherwise the resulting numbers would always include a multiple of 3 and therefore could not all be prime unless one of the numbers is 3 itself. A k-tuple that satisfies this condition (i.e. it does not have a p for which it covers all the different values modulo p) is called admissible. It is conjectured that every admissible k-tuple matches infinitely many positions in the sequence of prime numbers. However, there is no admissible tuple for which this has been proven except the 1-tuple (0). Nevertheless, Yitang Zhang proved in 2013 that there exists at least one 2-tuple which matches infinitely many positions; subsequent work showed that such a 2-tuple exists with values differing by 246 or less that matches infinitely many positions. Positions matched by inadmissible patterns. Although (0, 2, 4) is not admissible it does produce the single set of primes, (3, 5, 7). Some inadmissible k-tuples have more than one all-prime solution. This cannot happen for a k-tuple that includes all values modulo 3, so to have this property a k-tuple must cover all values modulo a larger prime, implying that there are at least five numbers in the tuple. The shortest inadmissible tuple with more than one solution is the 5-tuple (0, 2, 8, 14, 26), which has two solutions: (3, 5, 11, 17, 29) and (5, 7, 13, 19, 31), where all values mod 5 are included in both cases. Prime constellations. The diameter of a k-tuple is the difference of its largest and smallest elements. An admissible prime k-tuple with the smallest possible diameter d (among all admissible k-tuples) is a prime constellation. For all "n" ≥ "k" this will always produce consecutive primes. (Recall that all n are integers for which the values ("n" + "a", "n" + "b", …) are prime.) This means that, for large n: formula_0 where pn is the nth prime number. The first few prime constellations are: The diameter d as a function of k is in the OEIS. A prime constellation is sometimes referred to as a prime k-tuplet, but some authors reserve that term for instances that are not part of longer k-tuplets. 
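The admissibility test and the matching positions described above are simple finite computations: only primes p ≤ k need to be checked, since a k-tuple has at most k elements and so cannot cover all residues modulo a larger prime. The sketch below (with illustrative function names) implements that test and then searches for the positions matched by the quadruplet pattern (0, 2, 6, 8).

```python
from sympy import isprime, primerange

def is_admissible(ktuple):
    """No prime p <= k may see every residue class mod p among the offsets."""
    for p in primerange(2, len(ktuple) + 1):
        if len({d % p for d in ktuple}) == p:
            return False
    return True

def matched_positions(ktuple, limit):
    """All n <= limit such that n + d is prime for every offset d."""
    return [n for n in range(2, limit + 1) if all(isprime(n + d) for d in ktuple)]

print(is_admissible((0, 2, 4)))              # False: 0, 2, 4 cover all residues mod 3
print(is_admissible((0, 2, 6, 8)))           # True
print(matched_positions((0, 2, 6, 8), 200))  # [5, 11, 101, 191]
```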
The first Hardy–Littlewood conjecture predicts that the asymptotic frequency of any prime constellation can be calculated. While the conjecture is unproven, it is considered likely to be true. If that is the case, it implies that the second Hardy–Littlewood conjecture, in contrast, is false. Prime arithmetic progressions. A prime k-tuple of the form (0, "n", 2"n", 3"n", …, ("k" − 1)"n") is said to be a prime arithmetic progression. In order for such a k-tuple to meet the admissibility test, n must be a multiple of the primorial of k. Skewes numbers. The Skewes numbers for prime "k"-tuples are an extension of the definition of Skewes' number to prime "k"-tuples, based on the first Hardy–Littlewood conjecture. Let formula_1 denote a prime k-tuple, formula_2 the number of primes p below x such that formula_3 are all prime, let formula_4 and let formula_5 denote its Hardy–Littlewood constant (see first Hardy–Littlewood conjecture). Then the first prime p that violates the Hardy–Littlewood inequality for the k-tuple P, i.e., such that formula_6 (if such a prime exists) is the "Skewes number for P". The table below shows the currently known Skewes numbers for prime "k"-tuples. The Skewes number (if it exists) for sexy primes is still unknown. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "p_{n+k-1} - p_n \\geq d" }, { "math_id": 1, "text": "P = (p,\\ p+i_1,\\ p+i_2,\\ \\dots\\ ,\\ p+i_k)" }, { "math_id": 2, "text": "\\pi_P(x)" }, { "math_id": 3, "text": "p,\\ p+i_1,\\ p+i_2,\\ \\dots\\ ,\\ p+i_k" }, { "math_id": 4, "text": "\\operatorname{li}_P(x) = \\int_2^x \\frac{dt}{(\\ln t)^{k + 1}}" }, { "math_id": 5, "text": "C_P" }, { "math_id": 6, "text": "\\pi_P(p) > C_P \\operatorname{li}_P(p), " } ]
https://en.wikipedia.org/wiki?curid=11421646
11422488
Go and mathematics
Calculations of the game complexity of go The game of Go is one of the most popular games in the world. As a result of its elegant and simple rules, the game has long been an inspiration for mathematical research. Shen Kuo, an 11th-century Chinese scholar, estimated in his "Dream Pool Essays" that the number of possible board positions is around 10^172. In more recent years, research of the game by John H. Conway led to the development of the surreal numbers and contributed to the development of combinatorial game theory (with Go Infinitesimals being a specific example of its use in Go). Computational complexity. Generalized Go is played on "n" × "n" boards, and the computational complexity of determining the winner in a given position of generalized Go depends crucially on the ko rules. Go is “almost” in PSPACE, since in normal play, moves are not reversible, and it is only through capture that there is the possibility of the repeating patterns necessary for a harder complexity. Without ko. Without ko, Go is PSPACE-hard. This is proved by reducing True Quantified Boolean Formula, which is known to be PSPACE-complete, to generalized geography, to planar generalized geography, to planar generalized geography with maximum degree 3, and finally to Go positions. Go with superko is not known to be in PSPACE. Though actual games seem never to last longer than formula_0 moves, in general it is not known whether there is a polynomial bound on the length of Go games. If there were such a bound, Go would be PSPACE-complete. As it currently stands, it might be PSPACE-complete, EXPTIME-complete, or even EXPSPACE-complete. Japanese ko rule. Japanese ko rules state that only the basic ko, that is, a move that reverts the board to the situation one move previously, is forbidden. Longer repetitive situations are allowed, thus potentially allowing a game to loop forever, such as the triple ko, where there are three kos at the same time, allowing a cycle of 12 moves. With Japanese ko rules, Go is EXPTIME-complete. Superko rule. The superko rule (also called the positional superko rule) states that a repetition of any board position that has previously occurred is forbidden. This is the ko rule used in most Chinese and US rulesets. It is an open problem what the complexity class of Go is under the superko rule. Though Go with the Japanese ko rule is EXPTIME-complete, both the lower and the upper bounds of Robson’s EXPTIME-completeness proof break when the superko rule is added. It is known that it is at least PSPACE-hard, since the proof of the PSPACE-hardness of Go does not rely on the ko rule, or lack of the ko rule. It is also known that Go is in EXPSPACE. Robson showed that if the superko rule, that is, “no previous position may ever be recreated”, is added to certain two-player games that are EXPTIME-complete, then the new games would be EXPSPACE-complete. Intuitively, this is because an exponential amount of space is required even to determine the legal moves from a position, because the game history leading up to a position could be exponentially long. As a result, superko variants (moves that repeat a previous board position are not allowed) of generalized chess and checkers are EXPSPACE-complete, since generalized chess and checkers are EXPTIME-complete. However, this result does not apply to Go. Complexity of certain Go configurations. A Go endgame begins when the board is divided into areas that are isolated from all other local areas by living stones, such that each local area has a polynomial size canonical game tree. 
In the language of combinatorial game theory, it happens when a Go game decomposes into a sum of subgames with polynomial size canonical game trees. With that definition, Go endgames are PSPACE-hard. This is proven by converting the Quantified Boolean Formula problem, which is PSPACE-complete, into a sum of small (with polynomial size canonical game trees) Go subgames. Note that the paper does not prove that Go endgames are in PSPACE, so they might not be PSPACE-complete. Determining which side wins a ladder capturing race is PSPACE-complete, whether the Japanese ko rule or the superko rule is in place. This is proven by simulating QBF, known to be PSPACE-complete, with ladders that bounce around the board like light beams. Legal positions. Since each location on the board can be either empty, black, or white, there are a total of 3^("n"^2) possible board positions on a square board with side length "n"; however not all of them are legal. Tromp and Farnebäck derived a recursive formula for the number of legal positions formula_1 of a rectangular board with sides "m" and "n". The exact number formula_2 was obtained in 2016. They also found an asymptotic formula formula_3, where formula_4, formula_5 and formula_6. It has been estimated that the observable universe contains around 10^80 atoms, far fewer than the number of possible legal positions of the regular board size (m = n = 19). As the board gets larger, the percentage of the positions that are legal decreases. Game tree complexity. The computer scientist Victor Allis notes that typical games between experts last about 150 moves, with an average of about 250 choices per move, suggesting a game-tree complexity of 10^360. For the number of "theoretically possible" games, including games impossible to play in practice, Tromp and Farnebäck give lower and upper bounds of 10^(10^48) and 10^(10^171) respectively. The lower bound was improved to a googolplex by Walraet and Tromp. The most commonly quoted number for the number of possible games, 10^700, is derived from a simple permutation of 361 moves, or 361! ≈ 10^768. Another common derivation is to assume "N" intersections and "L" longest game for "N"^"L" total games. For example, 400 moves, as seen in some professional games, would be one out of 361^400 or about 1 × 10^1023 possible games. The total number of possible games is a function both of the size of the board and the number of moves played. While most games last less than 400 or even 200 moves, many more are possible. The total number of possible games can be estimated from the board size in a number of ways, some more rigorous than others. The simplest, a permutation of the board size, "N"^"L", fails to include illegal captures and positions. Taking "N" as the board size (19 × 19 = 361) and "L" as the longest game, "N"^"L" forms an upper limit. A more accurate limit is presented in the Tromp/Farnebäck paper. 10^700 is thus an overestimate of the number of possible games that can be played in 200 moves and an underestimate of the number of games that can be played in 361 moves. Since there are about 31 million seconds in a year, it would take about 2+1⁄4 years, playing 16 hours a day at one move per second, to play 47 million moves. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
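As a rough numerical illustration of the asymptotic formula for legal positions quoted above, the following Python sketch evaluates log10 of "L"(19,19) from the constants "A", "B" and "C"; it is a plausibility check only, not a substitute for the exact 2016 computation.

```python
import math

# Asymptotic estimate L ~ A * B**(m+n) * C**(m*n) for the 19 x 19 board,
# using the constants quoted above.
A = 0.8506399258457145
B = 0.96553505933837387
C = 2.975734192043357249381

m = n = 19
log10_L = math.log10(A) + (m + n) * math.log10(B) + m * n * math.log10(C)
print(log10_L)   # ≈ 170.3, consistent with L(19,19) ≈ 2.08 × 10^170 (171 digits)
```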
[ { "math_id": 0, "text": "n^2" }, { "math_id": 1, "text": "L(m,n)" }, { "math_id": 2, "text": "L(19,19)" }, { "math_id": 3, "text": "L\\approx AB^{m+n}C^{mn}" }, { "math_id": 4, "text": "A\\approx 0.8506399258457145" }, { "math_id": 5, "text": "B\\approx 0.96553505933837387" }, { "math_id": 6, "text": "C\\approx 2.975734192043357249381" } ]
https://en.wikipedia.org/wiki?curid=11422488
11424466
Modified Maddrey's discriminant function
Maddrey's discriminant function (DF) is the traditional model for evaluating the severity and prognosis of alcoholic hepatitis and for assessing the efficacy of steroid treatment in alcoholic hepatitis. The Maddrey DF score is a predictive statistical model that compares the subject's DF score with 30-day or 90-day mortality prognosis. The subject's Maddrey DF score is determined by blood analysis. The modified Maddrey's discriminant function was originally described by Maddrey and Boitnott to predict prognosis in alcoholic hepatitis. It is calculated by a simple formula using prothrombin time and serum bilirubin concentration: formula_0 Prospective studies have shown that it is useful in predicting short-term prognosis, especially mortality within 30 days. A value of more than 32 implies a poor outcome, with one-month mortality ranging between 35% and 45%. Corticosteroid therapy or pentoxifylline have been used, with mixed results, for patients in whom a value greater than 32 indicates increased mortality. To calculate the Maddrey discriminant function using SI units for bilirubin, such as micromoles per litre, divide the bilirubin value by 17. Reference list. &lt;templatestyles src="Reflist/styles.css" /&gt;
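A minimal sketch of the calculation described above, assuming bilirubin is already expressed in mg/dL; the numbers in the example call are illustrative only and are not taken from the article.

```python
def maddrey_df(pt_seconds, control_seconds, bilirubin_mg_dl):
    """Modified Maddrey discriminant function, as given by the formula above.
    For SI bilirubin (micromoles per litre), divide by 17 before passing it in."""
    return 4.6 * (pt_seconds - control_seconds) + bilirubin_mg_dl

# Hypothetical values: PT 25 s versus a 12 s control, bilirubin 8 mg/dL.
print(maddrey_df(25.0, 12.0, 8.0))   # 67.8, above the poor-prognosis threshold of 32
```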
[ { "math_id": 0, "text": "\\left(4.6 \\times \\left(\\hbox{prothrombin time} - \\hbox{control time}\\right)\\right) + \\hbox{serum bilirubin in mg/dl}" } ]
https://en.wikipedia.org/wiki?curid=11424466
1142504
K1
K1, K.I, K01, K 1 or K-1 can refer to: &lt;templatestyles src="Template:TOC_right/styles.css" /&gt; Music. K. 1 can designate: See also. &lt;templatestyles src="Dmbox/styles.css" /&gt; Topics referred to by the same term. This page lists articles associated with the same title formed as a letter–number combination.
[ { "math_id": 0, "text": " K_1(R)" }, { "math_id": 1, "text": "R" } ]
https://en.wikipedia.org/wiki?curid=1142504
1142737
Hebesphenomegacorona
89th Johnson solid (21 faces) In geometry, the hebesphenomegacorona is a Johnson solid with 18 equilateral triangles and 3 squares as its faces. Properties. The hebesphenomegacorona was named by Norman Johnson, who used the prefix "hebespheno-" to refer to a blunt wedge-like complex formed by three adjacent "lunes"—a square with equilateral triangles attached on its opposite sides. The suffix "-megacorona" refers to a crownlike complex of 12 triangles. By joining both complexes together, the resulting polyhedron has 18 equilateral triangles and 3 squares, making 21 faces. All of its faces are regular polygons, categorizing the hebesphenomegacorona as a Johnson solid—a convex polyhedron in which all of its faces are regular polygons—enumerated as the 89th Johnson solid formula_0. It is an elementary polyhedron, meaning it cannot be separated by a plane into two smaller regular-faced polyhedra. The surface area of a hebesphenomegacorona with edge length formula_1 can be determined by adding the areas of its faces, 18 equilateral triangles and 3 squares: formula_2 and its volume is formula_3. Cartesian coordinates. Let formula_4 be the second smallest positive root of the polynomial formula_5 Then, Cartesian coordinates of a hebesphenomegacorona with edge length 2 are given by the union of the orbits of the points formula_6 under the action of the group generated by reflections about the xz-plane and the yz-plane. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
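The constants quoted above can be checked numerically. The following Python/NumPy sketch (variable names are ours) recovers the root "a" ≈ 0.21684 from the degree-10 polynomial and the surface-area coefficient ≈ 10.7942.

```python
import numpy as np

# Coefficients of the degree-10 polynomial quoted above, highest power first.
coeffs = [26880, 35328, -25600, -39680, 6112, 13696,
          2128, -1808, -1119, 494, -47]
roots = np.roots(coeffs)
positive_real = sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
a = positive_real[1]                     # second smallest positive root
print(a)                                 # ≈ 0.21684, the value quoted above

print((6 + 9 * np.sqrt(3)) / 2)          # ≈ 10.7942, the surface-area coefficient
```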
[ { "math_id": 0, "text": " J_{89} " }, { "math_id": 1, "text": " a " }, { "math_id": 2, "text": " \\frac{6 + 9\\sqrt{3}}{2}a^2 \\approx 10.7942a^2, " }, { "math_id": 3, "text": " 2.9129a^3 " }, { "math_id": 4, "text": " a \\approx 0.21684 " }, { "math_id": 5, "text": " \\begin{align} &26880x^{10} + 35328x^9 - 25600x^8 - 39680x^7 + 6112x^6 \\\\ &\\quad {}+ 13696x^5 + 2128x^4 - 1808x^3 - 1119x^2 + 494x - 47 \\end{align}" }, { "math_id": 6, "text": " \\begin{align} &\\left(1,1,2\\sqrt{1-a^2}\\right),\\ \\left(1+2a,1,0\\right),\\ \\left(0,1+\\sqrt{2}\\sqrt{\\frac{2a-1}{a-1}},-\\frac{2a^2+a-1}{\\sqrt{1-a^2}}\\right),\\ \\left(1,0,-\\sqrt{3-4a^2}\\right), \\\\ &\\left(0,\\frac{\\sqrt{2(3-4a^2)(1-2a)}+\\sqrt{1+a}}{2(1-a)\\sqrt{1+a}},\\frac{(2a-1)\\sqrt{3-4a^2}}{2(1-a)}-\\frac{\\sqrt{2(1-2a)}}{2(1-a)\\sqrt{1+a}}\\right) \\end{align}" } ]
https://en.wikipedia.org/wiki?curid=1142737
1142743
Sphenomegacorona
88th Johnson solid (18 faces) In geometry, the sphenomegacorona is a Johnson solid with 16 equilateral triangles and 2 squares as its faces. Properties. The sphenomegacorona was named by in which he used the prefix "spheno-" referring to a wedge-like complex formed by two adjacent "lunes"—a square with equilateral triangles attached on its opposite sides. The suffix "-megacorona" refers to a crownlike complex of 12 triangles, contrasted with the smaller triangular complex that makes the sphenocorona. By joining both complexes, the resulting polyhedron has 16 equilateral triangles and 2 squares, making 18 faces. All of its faces are regular polygons, categorizing the sphenomegacorona as a Johnson solid—a convex polyhedron in which all of the faces are regular polygons—enumerated as the 88th Johnson solid formula_0. It is an elementary polyhedron, meaning it cannot be separated by a plane into two small regular-faced polyhedra. The surface area of a sphenomegacorona formula_1 is the total of polygonal faces' area—16 equilateral triangles and 2 squares. The volume of a sphenomegacorona is obtained by finding the root of a polynomial, and its decimal expansion—denoted as formula_2—is given by . With edge length formula_3, its surface area and volume can be formulated as: formula_4 Cartesian coordinates. Let formula_5 be the smallest positive root of the polynomial formula_6 Then, Cartesian coordinates of a sphenomegacorona with edge length 2 are given by the union of the orbits of the points formula_7 under the action of the group generated by reflections about the xz-plane and the yz-plane. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " J_{88} " }, { "math_id": 1, "text": " A " }, { "math_id": 2, "text": " \\xi " }, { "math_id": 3, "text": " a " }, { "math_id": 4, "text": " \\begin{align}\n A &= \\left(2+4\\sqrt{3}\\right)a^2 &\\approx 8.928a^2, \\\\\n V &= \\xi a^3 &\\approx 1.948a^3.\n\\end{align}\n" }, { "math_id": 5, "text": " k \\approx 0.59463 " }, { "math_id": 6, "text": " 1680 x^{16}- 4800 x^{15} - 3712 x^{14} + 17216 x^{13}+ 1568 x^{12} - 24576 x^{11} + 2464 x^{10} + 17248 x^9 -3384 x^8 - 5584 x^7 + 2000 x^6+ 240 x^5- 776 x^4+ 304 x^3 + 200 x^2 - 56 x -23. " }, { "math_id": 7, "text": "\\begin{align} &\\left(0,1,2\\sqrt{1-k^2}\\right),\\,(2k,1,0),\\,\\left(0,\\frac{\\sqrt{3-4k^2}}{\\sqrt{1-k^2}}+1,\\frac{1-2k^2}{\\sqrt{1-k^2}}\\right), \\\\\n&\\left(1,0,-\\sqrt{2+4k-4k^2}\\right),\\,\\left(0,\\frac{\\sqrt{3-4k^2}\\left(2k^2-1\\right)}{\\left(k^2-1\\right)\\sqrt{1-k^2}}+1,\\frac{2k^4-1}{\\left(1-k^2\\right)^\\frac32}\\right) \\end{align}" } ]
https://en.wikipedia.org/wiki?curid=1142743
1142750
Sphenocorona
86th Johnson solid (14 faces) In geometry, the sphenocorona is a Johnson solid with 12 equilateral triangles and 2 squares as its faces. Properties. The sphenocorona was named by in which he used the prefix "spheno-" referring to a wedge-like complex formed by two adjacent "lunes"—a square with equilateral triangles attached on its opposite sides. The suffix "-corona" refers to a crownlike complex of 8 equilateral triangles. By joining both complexes together, the resulting polyhedron has 12 equilateral triangles and 2 squares, making 14 faces. A convex polyhedron in which all faces are regular polygons is called a Johnson solid. The sphenocorona is among them, enumerated as the 86th Johnson solid formula_0. It is an elementary polyhedron, meaning it cannot be separated by a plane into two small regular-faced polyhedra. The surface area of a sphenocorona with edge length formula_1 can be calculated as: formula_2 and its volume as: formula_3 Cartesian coordinates. Let formula_4 be the smallest positive root of the quartic polynomial formula_5. Then, Cartesian coordinates of a sphenocorona with edge length 2 are given by the union of the orbits of the points formula_6 under the action of the group generated by reflections about the xz-plane and the yz-plane. Variations. The sphenocorona is also the vertex figure of the isogonal n-gonal double antiprismoid where n is an odd number greater than one, including the grand antiprism with pairs of trapezoid rather than square faces. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
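The numerical values quoted above are easy to verify with a short Python/NumPy sketch (ours, not from a cited source):

```python
import numpy as np

# The closed-form volume coefficient quoted above.
vol_coeff = 0.5 * np.sqrt(1 + 3 * np.sqrt(1.5) + np.sqrt(13 + 3 * np.sqrt(6)))
print(vol_coeff)                                       # ≈ 1.51535

# k ≈ 0.85273 should (nearly) be a root of 60x^4 - 48x^3 - 100x^2 + 56x + 23.
print(np.polyval([60, -48, -100, 56, 23], 0.85273))    # ≈ 0
```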
[ { "math_id": 0, "text": " J_{86} " }, { "math_id": 1, "text": " a " }, { "math_id": 2, "text": " A=\\left(2+3\\sqrt{3}\\right)a^2\\approx7.19615a^2," }, { "math_id": 3, "text": "\\left(\\frac{1}{2}\\sqrt{1 + 3 \\sqrt{\\frac{3}{2}} + \\sqrt{13 + 3 \\sqrt{6}}}\\right)a^3\\approx1.51535a^3." }, { "math_id": 4, "text": " k \\approx 0.85273 " }, { "math_id": 5, "text": " 60x^4 - 48x^3 - 100x^2 + 56x + 23 " }, { "math_id": 6, "text": " \\left(0,1,2\\sqrt{1-k^2}\\right),\\,(2k,1,0),\\left(0,1+\\frac{\\sqrt{3-4k^2}}{\\sqrt{1-k^2}},\\frac{1-2k^2}{\\sqrt{1-k^2}}\\right),\\,\\left(1,0,-\\sqrt{2+4k-4k^2}\\right)" } ]
https://en.wikipedia.org/wiki?curid=1142750
11432
Full moon
Lunar phase: completely illuminated disc The full moon is the lunar phase when the Moon appears fully illuminated from Earth's perspective. This occurs when Earth is located between the Sun and the Moon (when the ecliptic longitudes of the Sun and Moon differ by 180°). This means that the lunar hemisphere facing Earth—the near side—is completely sunlit and appears as an approximately circular disk. The full moon occurs roughly once a month. The time interval between a full moon and the next repetition of the same phase, a synodic month, averages about 29.53 days. Because of irregularities in the moon's orbit, the new and full moons may fall up to thirteen hours either side of their mean. If the calendar date is not locally determined through observation of the new moon at the beginning of the month there is the potential for a further twelve hours difference depending on the time zone. Potential discrepancies also arise from whether the calendar day is considered to begin in the evening or at midnight. It is normal for the full moon to fall on the fourteenth or the fifteenth of the month according to whether the start of the month is reckoned from the appearance of the new moon or from the conjunction. A will also exhibit variations depending on the intercalation system used. Because a calendar month consists of a whole number of days, a month in a lunar calendar may be either 29 or 30 days long. Characteristics. A full moon is often thought of as an event of a full night's duration, although its phase seen from Earth continuously waxes or wanes, and is full only at the instant when waxing ends and waning begins. For any given location, about half of these maximum full moons may be visible, while the other half occurs during the day, when the full moon is below the horizon. As the Moon's orbit is inclined by 5.145° from the ecliptic, it is not generally perfectly opposite from the Sun during full phase, therefore a full moon is in general not perfectly full except on nights with a lunar eclipse as the Moon crosses the ecliptic at opposition from the Sun. Many almanacs list full moons not only by date, but also by their exact time, usually in Coordinated Universal Time (UTC). Typical monthly calendars that include lunar phases may be offset by one day when prepared for a different time zone. The full moon is generally a suboptimal time for astronomical observation of the Moon because shadows vanish. It is a poor time for other observations because the bright sunlight reflected by the Moon, amplified by the opposition surge, then outshines many stars. Moon phases. There are eight phases of the moon, which vary from partial to full illumination. The moon phases are also called lunar phases. These stages have different names that come from its shape and size at each phase. For example, the crescent moon is 'banana' shaped, and the half-moon is D-shaped. When the moon is nearly full, it is called a gibbous moon. The crescent and gibbous moons each last approximately a week. Each phase is also described in accordance to its position on the full 29.5-day cycle. The eight phases of the moon in order: Formula. 
The date and approximate time of a specific full moon (assuming a circular orbit) can be calculated from the following equation: formula_0 where "d" is the number of days since 1 January 2000 00:00:00 in the Terrestrial Time scale used in astronomical ephemerides; for Universal Time (UT) add the following approximate correction to "d": formula_1 days where "N" is the number of full moons since the first full moon of 2000. The true time of a full moon may differ from this approximation by up to about 14.5 hours as a result of the non-circularity of the Moon's orbit. See New moon for an explanation of the formula and its parameters. The age and apparent size of the full moon vary in a cycle of just under 14 synodic months, which has been referred to as a full moon cycle. Lunar eclipses. When the Moon moves into Earth's shadow, a lunar eclipse occurs, during which all or part of the Moon's face may appear reddish due to the Rayleigh scattering of blue wavelengths and the refraction of sunlight through Earth's atmosphere. Lunar eclipses happen only during a full moon and around points on its orbit where the satellite may pass through the planet's shadow. A lunar eclipse does not occur every month because the Moon's orbit is inclined 5.145° with respect to the ecliptic plane of Earth; thus, the Moon usually passes north or south of Earth's shadow, which is mostly restricted to this plane of reference. Lunar eclipses happen only when the full moon occurs around either node of its orbit (ascending or descending). Therefore, a lunar eclipse occurs about every six months, and often two weeks before or after a solar eclipse, which occurs during a new moon around the opposite node. In folklore and tradition. In Buddhism, Vesak is celebrated on the full moon day of the Vaisakha month, marking the birth, enlightenment, and the death of the Buddha. In Arabic, badr (بدر ) means 'full moon', but it is often translated as 'white moon', referring to The White Days, the three days when the full moon is celebrated. Full moons are traditionally associated with insomnia (inability to sleep), insanity (hence the terms "lunacy" and "lunatic") and various "magical phenomena" such as lycanthropy. Psychologists, however, have found that there is no strong evidence for effects on human behavior around the time of a full moon. They find that studies are generally not consistent, with some showing a positive effect and others showing a negative effect. In one instance, the 23 December 2000 issue of the "British Medical Journal" published two studies on dog bite admission to hospitals in England and Australia. The study of the Bradford Royal Infirmary found that dog bites were twice as common during a full moon, whereas the study conducted by the public hospitals in Australia found that they were less likely. The symbol of the Triple Goddess is drawn with the circular image of the full moon in the center flanked by a left facing crescent and right facing crescent, on either side, representing a maiden, mother and crone archetype. Full moon names. Historically, month names are names of moons (lunations, not necessarily full moons) in lunisolar calendars. Since the introduction of the solar Julian calendar in the Roman Empire, and later the Gregorian calendar worldwide, people no longer perceive month names as "moon" names. The traditional Old English month names were equated with the names of the Julian calendar from an early time, soon after Christianization, according to the testimony of Bede around AD 700. 
Some full moons have developed new names in modern times, such as "blue moon", as well as "harvest moon" and "hunter's moon" for the full moons of autumn. Lunar eclipses occur only at a full moon and often cause a reddish hue on the near side of the Moon. This full moon has been called a blood moon in popular culture. Harvest and hunter's moons. The "harvest moon" and the "hunter's moon" are traditional names for the full moons in late summer and in the autumn in the Northern Hemisphere, usually in September and October, respectively. People may celebrate these occurrences in festivities such as the Chinese Mid-Autumn Festival, which is as important as the Chinese New Year. The "harvest moon" (also known as the "barley moon" or "full corn moon") is the full moon nearest to the autumnal equinox (22 or 23 September), occurring anytime within two weeks before or after that date. The "hunter's moon" is the full moon following it. The names are recorded from the early 18th century. The "Oxford English Dictionary" entry for "harvest moon" cites a 1706 reference, and for "hunter's moon" a 1710 edition of "The British Apollo", which attributes the term to "the country people" ("The Country People call this the Hunters-Moon.") The names became traditional in American folklore, where they are now often popularly attributed to Native Americans. The Feast of the Hunters' Moon is a yearly festival in West Lafayette, Indiana, held in late September or early October each year since 1968. In 2010 the harvest moon occurred on the night of the equinox itself (some 5&lt;templatestyles src="Fraction/styles.css" /&gt;1⁄2 hours after the moment of equinox) for the first time since 1991, after a period known as the Metonic cycle. All full moons rise around the time of sunset. Since the Moon moves eastward among the stars faster than the Sun, lunar culmination is delayed by about 50.47 minutes (on average) each day, thus causing moonrise to occur later each day. Due to the high lunar standstill, the harvest and hunter's moons of 2007 were special because the time difference between moonrises on successive evenings was much shorter than average. The moon rose about 30 minutes later from one night to the next, as seen from about 40° N or S latitude (because the full moon of September 2007 rose in the northeast rather than in the east). Hence, no long period of darkness occurred between sunset and moonrise for several days after the full moon, thus lengthening the time in the evening when there is enough twilight and moonlight to work to get the harvest in. Native American. Various 18th and 19th century writers gave what were claimed to be Native American or First Nations moon names. These were not the names of the full moons as such, but were the names of lunar months beginning with each new moon. According to Jonathan Carver in 1778, "Some nations among them reckon their years by moons, and make them consist of twelve synodical or lunar months, observing, when thirty moons have waned, to add a supernumerary one, which they term the lost moon; and then begin to count as before." Carver gave the names of the lunar months (starting from the first after the March equinox) as Worm, Plants, Flowers, Hot, Buck, Sturgeon, Corn, Travelling, Beaver, Hunting, Cold, Snow. Carver's account was reproduced verbatim in Events in Indian History (1841), but completely different lists were given by Eugene Vetromile (1856) and Peter Jones (1861). 
In a book on Native American culture published in 1882, Richard Irving Dodge stated: There is a difference among authorities as to whether or not the moons themselves are named. Brown gives names for nine moons corresponding to months. Maximillian gives the names of twelve moons; and Belden, who lived many years among the Sioux, asserts that "the Indians compute their time very much as white men do, only they use moons instead of months to designate the seasons, each answering to some month in our calendar." Then follows a list of twelve moons with Indian and English names. While I cannot contradict so positive and minute a statement of one so thoroughly in a position to know, I must assert with equal positiveness that I have never met any wild Indians, of the Sioux or other Plains tribes, who had a permanent, common, conventional name for any moon. The looseness of Belden's general statement, that "Indians compute time like white people," when his only particularization of similarity is between the months and moons, is in itself sufficient to render the whole statement questionable. My experience is that the Indian, in attempting to fix on a particular moon, will designate it by some natural and well-known phenomenon which culminates during that moon. But two Indians of the same tribe may fix on different designations ; and even the same Indian, on different occasions, may give different names to the same moon. Thus, an Indian of the middle Plains will to-day designate a spring moon as "the moon when corn is planted;" to-morrow, speaking of the same moon, he may call it "the moon when the buffalo comes." Moreover, though there are thirteen moons in our year, no observer has ever given an Indian name to the thirteenth. My opinion is, that if any of the wild tribes have given conventional names to twelve moons, it is not an indigenous idea, but borrowed from the whites. Jonathan Carver's list of purportedly Native American month names was adopted in the 19th century by the Improved Order of Red Men, an all-white U.S. fraternal organization. They called the month of January "Cold moon", the rest being Snow, Worm, Plant, Flower, Hot, Buck, Sturgeon, Corn, Travelling, Beaver and Hunting moon. They numbered years from the time of Columbus's arrival in America. In "The American Boy's Book of Signs, Signals and Symbols" (1918), Daniel Carter Beard wrote: "The Indians' Moons naturally vary in the different parts of the country, but by comparing them all and striking an average as near as may be, the moons are reduced to the following." He then gave a list that had two names for each lunar month, again quite different from earlier lists that had been published. The 1937 "Maine Farmers' Almanac" published a list of full moon names that it said "were named by our early English ancestors as follows": Winter Moons: Moon after Yule, Wolf Moon, Lenten Moon&lt;br&gt; Spring Moons: Egg Moon, Milk Moon, Flower Moon&lt;br&gt; Summer Moons: Hay Moon, Grain Moon, Fruit Moon&lt;br&gt; Fall Moons: Harvest Moon, Hunter's Moon, Moon before Yule&lt;br&gt; It also mentioned blue moon. These were considered in some quarters to be Native American full moon names, and some were adopted by colonial Americans. The "Farmers' Almanac" (since 1955 published in Maine, but not the same publication as the "Maine Farmers' Almanac") continues to print such names. Such names have gained currency in American folklore. 
They appeared in print more widely outside of the almanac tradition from the 1990s in popular publications about the Moon. "Mysteries of the Moon" by Patricia Haddock ("Great Mysteries Series", Greenhaven Press, 1992) gave an extensive list of such names along with the individual tribal groups they were supposedly associated with. Haddock supposes that certain "Colonial American" moon names were adopted from Algonquian languages (which were formerly spoken in the territory of New England), while others are based in European tradition (e.g. the Colonial American names for the May moon, "Milk Moon", "Mother's Moon", "Hare Moon" have no parallels in the supposed native names, while the name of November, "Beaver Moon" is supposedly based in an Algonquian language). Many other names have been reported. These have passed into modern mythology, either as full-moon names, or as names for lunar months. Deanna J. Conway's "Moon Magick: Myth &amp; Magick, Crafts &amp; Recipes, Rituals &amp; Spells" (1995) gave as headline names for the lunar months (from January): Wolf, Ice, Storm, Growing, Hare, Mead, Hay, Corn, Harvest, Blood, Snow, Cold. Conway also gave multiple alternative names for each month, e.g. the first lunar month after the winter solstice could be called the Wolf, Quiet, Snow, Cold, Chaste or Disting Moon, or the Moon of Little Winter.19 For the last lunar month Conway offered the names Cold, Oak or Wolf Moon, or Moon of Long Nights, Long Night's Moon, Aerra Geola (Month Before Yule), Wintermonat (Winter Month), Heilagmanoth (Holy Month), Big Winter Moon, Moon of Popping Trees.247 Conway did not cite specific sources for most of the names she listed, but some have gained wider currency as full-moon names, such as Pink Moon for a full moon in April, 77 Long Night's Moon for the last in December and Ice Moon for the first full moon of January or February. Hindu full moon festivals. In Hinduism, most festivals are celebrated on auspicious days. Many Hindu festivals are celebrated on days with a full moon night, called the "purnima". Different parts of India celebrate the same festival with different names, as listed below: Lunar and lunisolar calendars. Most pre-modern calendars the world over were lunisolar, combining the solar year with the lunation by means of intercalary months. The Julian calendar abandoned this method in favour of a purely solar reckoning while conversely the 7th-century Islamic calendar opted for a purely lunar one. A continuing lunisolar calendar is the Hebrew calendar. Evidence of this is noted in the dates of Passover and Easter in Judaism and Christianity, respectively. Passover falls on the full moon on 15 Nisan of the Hebrew calendar. The date of the Jewish Rosh Hashana and Sukkot festivals along with all other Jewish holidays are dependent on the dates of the new moons. Intercalary months. In lunisolar calendars, an intercalary month occurs seven times in the 19 years of the Metonic cycle, or on average every 2.7 years (19/7). In the Hebrew calendar this is noted with a periodic extra month of Adar in the early spring. Meetings arranged to coincide with full moon. Before the days of good street lighting and car headlights, several organisations arranged their meetings for full moon, so that it would be easier for their members to walk, or ride home. Examples include the Lunar Society of Birmingham, several Masonic societies, including Warren Lodge No. 
32, USA and Masonic Hall, York, Western Australia, and several New Zealand local authorities, including Awakino, Ohura and Whangarei County Councils and Maori Hill and Wanganui East Borough Councils. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
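As an illustration of the approximation given in the Formula section above, the following Python sketch estimates the date of the "N"-th full moon after the start of 2000; it applies the quoted UT correction and should be read as a rough approximation (true times can differ by up to about 14.5 hours).

```python
from datetime import datetime, timedelta, timezone

def full_moon_estimate(N):
    """Approximate UT date of the N-th full moon after the first full moon of 2000,
    using the circular-orbit formula and UT correction from the Formula section."""
    d = 20.362000 + 29.530588861 * N + 102.026e-12 * N ** 2   # days, Terrestrial Time
    d += -0.000739 - 235e-12 * N ** 2                          # approximate TT -> UT correction
    return datetime(2000, 1, 1, tzinfo=timezone.utc) + timedelta(days=d)

print(full_moon_estimate(0))     # about 21 January 2000
print(full_moon_estimate(300))   # a full moon roughly 24 years later (late April 2024)
```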
[ { "math_id": 0, "text": " d = 20.362000+ 29.530588861 \\times N + 102.026 \\times 10^{-12} \\times N^2" }, { "math_id": 1, "text": "-0.000739 - (235 \\times 10^{-12})\\times N^2" } ]
https://en.wikipedia.org/wiki?curid=11432
11434205
Mass flux
Vector quantity describing mass flow rate through a given area In physics and engineering, mass flux is the rate of mass flow per unit of area. Its SI units are kg m−2 s−1. The common symbols are "j", "J", "q", "Q", "φ", or Φ (Greek lowercase or capital Phi), sometimes with subscript "m" to indicate mass is the flowing quantity. Mass flux can also refer to an alternate form of flux in Fick's law that includes the molecular mass, or in Darcy's law that includes the mass density. Less commonly the defining equation for mass flux in this article is used interchangeably with the defining equation in mass flow rate. For example, "Fluid Mechanics, Schaum's et al" uses the definition of mass flux as the equation in the mass flow rate article. Definition. Mathematically, mass flux is defined as the limit formula_0 where formula_1 is the mass current (flow of mass m per unit time t) and A is the area through which the mass flows. For mass flux as a vector j"m", the surface integral of it over a surface "S", followed by an integral over the time duration "t"1 to "t"2, gives the total amount of mass flowing through the surface in that time ("t"2 − "t"1): formula_2 The area required to calculate the flux is real or imaginary, flat or curved, either as a cross-sectional area or a surface. For example, for substances passing through a filter or a membrane, the real surface is the (generally curved) surface area of the filter, macroscopically - ignoring the area spanned by the holes in the filter/membrane. The spaces would be cross-sectional areas. For liquids passing through a pipe, the area is the cross-section of the pipe, at the section considered. The vector area is a combination of the magnitude of the area through which the mass passes through, "A", and a unit vector normal to the area, formula_3. The relation is formula_4. If the mass flux j"m" passes through the area at an angle θ to the area normal formula_3, then formula_5 where · is the dot product of the unit vectors. That is, the component of mass flux passing through the surface (i.e. normal to it) is "jm" cos "θ". While the component of mass flux passing tangential to the area is given by "jm" sin "θ", there is "no" mass flux actually passing "through" the area in the tangential direction. The "only" component of mass flux passing normal to the area is the cosine component. Example. Consider a pipe of flowing water. Suppose the pipe has a constant cross section and we consider a straight section of it (not at any bends/junctions), and the water is flowing steadily at a constant rate, under standard conditions. The area "A" is the cross-sectional area of the pipe. Suppose the pipe has radius "r" = 2 cm = 2 × 10−2 m. The area is then formula_6 To calculate the mass flux "jm" (magnitude), we also need the amount of mass of water transferred through the area and the time taken. Suppose a volume "V" = 1.5 L = 1.5 × 10−3 m3 passes through in time "t" = 2 s. Assuming the density of water is "ρ" = 1000 kg m−3, we have: formula_7 (since initial volume passing through the area was zero, final is V, so corresponding mass is m), so the mass flux is formula_8 Substituting the numbers gives: formula_9 which is approximately 596.8 kg s−1 m−2. Equations for fluids. Alternative equation. Using the vector definition, mass flux is also equal to: formula_10 where: Sometimes this equation may be used to define jm as a vector. Mass and molar fluxes for composite fluids. Mass fluxes. In the case fluid is not pure, i.e. 
is a mixture of substances (technically contains a number of component substances), the mass fluxes must be considered separately for each component of the mixture. When describing fluid flow (i.e. flow of matter), mass flux is appropriate. When describing particle transport (movement of a large number of particles), it is useful to use an analogous quantity, called the molar flux. Using mass, the mass flux of component "i" is formula_11 The barycentric mass flux of component "i" is formula_12 where formula_13 is the average mass velocity of all the components in the mixture, given by formula_14 where The average is taken over the velocities of the components. Molar fluxes. If we replace density ρ by the "molar density", concentration c, we have the molar flux analogues. The molar flux is the number of moles per unit time per unit area, generally: formula_15 So the molar flux of component "i" is (number of moles per unit time per unit area): formula_16 and the barycentric molar flux of component "i" is formula_17 where formula_13 this time is the average molar velocity of all the components in the mixture, given by: formula_18 Usage. Mass flux appears in some equations in hydrodynamics, in particular the continuity equation: formula_19 which is a statement of the mass conservation of fluid. In hydrodynamics, mass can only flow from one place to another. Molar flux occurs in Fick's first law of diffusion: formula_20 where D is the diffusion coefficient. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
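The pipe example above can be checked with a few lines of Python, using the same numbers as in the example:

```python
import math

# The pipe example above: j_m = rho * V / (pi * r^2 * t)
rho = 1000.0   # kg m^-3, density of water
V = 1.5e-3     # m^3, volume passing through the cross-section
r = 2e-2       # m, pipe radius
t = 2.0        # s, duration

j_m = rho * V / (math.pi * r ** 2 * t)
print(j_m)     # ≈ 596.8 kg m^-2 s^-1, i.e. (3 / (16 * pi)) * 1e4
```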
[ { "math_id": 0, "text": "j_m = \\lim_{A \\to 0} \\frac{I_m}{A}," }, { "math_id": 1, "text": "I_m = \\lim_{\\Delta t \\to 0} \\frac{\\Delta m}{\\Delta t} = \\frac{dm}{dt}" }, { "math_id": 2, "text": "m=\\int_{t_1}^{t_2} \\iint_S \\mathbf{j}_m \\cdot\\mathbf{\\hat{n}} \\, dA \\, dt." }, { "math_id": 3, "text": "\\mathbf{\\hat{n}}" }, { "math_id": 4, "text": "\\mathbf{A} = A \\mathbf{\\hat{n}}" }, { "math_id": 5, "text": "\\mathbf{j}_m \\cdot \\mathbf{\\hat{n}} = j_m\\cos\\theta" }, { "math_id": 6, "text": "A = \\pi r^2." }, { "math_id": 7, "text": "\\begin{align}\n\\Delta m &= \\rho \\Delta V \\\\\nm_2 - m_1 &= \\rho ( V_2 - V_1) \\\\\nm &= \\rho V \\\\\n\\end{align}" }, { "math_id": 8, "text": "j_m = \\frac{\\Delta m}{ A \\Delta t} = \\frac{\\rho V}{ \\pi r^2 t}." }, { "math_id": 9, "text": " j_m = \\frac{1000 \\times \\left(1.5 \\times 10^{-3}\\right)}{ \\pi \\times \\left(2 \\times 10^{-2}\\right)^2 \\times 2} = \\frac{3}{16\\pi}\\times 10^4," }, { "math_id": 10, "text": "\\mathbf{j}_{\\rm m} = \\rho \\mathbf{u}" }, { "math_id": 11, "text": "\\mathbf{j}_{{\\rm m}, \\, i} = \\rho_i \\mathbf{u}_i." }, { "math_id": 12, "text": "\\mathbf{j}_{{\\rm m}, \\, i} = \\rho \\left ( \\mathbf{u}_i - \\langle \\mathbf{u} \\rangle \\right )," }, { "math_id": 13, "text": " \\langle \\mathbf{u} \\rangle " }, { "math_id": 14, "text": " \\langle \\mathbf{u} \\rangle = \\frac{1}{\\rho}\\sum_i \\rho_i \\mathbf{u}_i = \\frac{1}{\\rho}\\sum_i \\mathbf{j}_{{\\rm m}, \\, i} " }, { "math_id": 15, "text": "\\mathbf{j}_{\\rm n} = c \\mathbf{u}." }, { "math_id": 16, "text": "\\mathbf{j}_{{\\rm n}, \\, i} = c_i \\mathbf{u}_i " }, { "math_id": 17, "text": "\\mathbf{j}_{{\\rm n}, \\, i} = c \\left ( \\mathbf{u}_i - \\langle \\mathbf{u} \\rangle \\right )," }, { "math_id": 18, "text": " \\langle \\mathbf{u} \\rangle = \\frac{1}{n}\\sum_i c_i \\mathbf{u}_i = \\frac{1}{c}\\sum_i \\mathbf{j}_{{\\rm n}, \\, i}." }, { "math_id": 19, "text": "\\nabla \\cdot \\mathbf{j}_{\\rm m} + \\frac{\\partial \\rho}{\\partial t} = 0," }, { "math_id": 20, "text": "\\nabla \\cdot \\mathbf{j}_{\\rm n} = -\\nabla \\cdot D \\nabla n" } ]
https://en.wikipedia.org/wiki?curid=11434205
11435202
Equable shape
A two-dimensional equable shape (or perfect shape) is one whose area is numerically equal to its perimeter. For example, a right-angled triangle with sides 5, 12 and 13 has an area and a perimeter that both have the unitless numerical value 30. Scaling and units. An area cannot be equal to a length except relative to a particular unit of measurement. For example, if a shape has an area of 5 square yards and a perimeter of 5 yards, then it has an area of 45 square feet and a perimeter of 15 feet (since 3 feet = 1 yard and hence 9 square feet = 1 square yard). Moreover, contrary to what the name implies, changing the size while leaving the shape intact changes an "equable shape" into a non-equable shape. However, its common use as GCSE coursework has led to its being an accepted concept. For any shape, there is a similar equable shape: if a shape "S" has perimeter "p" and area "A", then scaling "S" by a factor of "p/A" leads to an equable shape. Alternatively, one may find equable shapes by setting up and solving an equation in which the area equals the perimeter. In the case of the square, for instance, this equation is formula_0 Solving this yields that "x" = 4, so a 4 × 4 square is equable. Tangential polygons. A tangential polygon is a polygon in which the sides are all tangent to a common circle. Every tangential polygon may be triangulated by drawing edges from the circle's center to the polygon's vertices, forming a collection of triangles that all have height equal to the circle's radius; it follows from this decomposition that the total area of a tangential polygon equals half the perimeter times the radius. Thus, a tangential polygon is equable if and only if its inradius is two. All triangles are tangential, so in particular the equable triangles are exactly the triangles with inradius two. Integer dimensions. Combining restrictions that a shape be equable and that its dimensions be integers is significantly more restrictive than either restriction on its own. For instance, there are infinitely many Pythagorean triples describing integer-sided right triangles, and there are infinitely many equable right triangles with non-integer sides; however, there are only two equable integer right triangles, with side lengths (5,12,13) and (6,8,10). More generally, the problem of finding all equable triangles with integer sides (that is, equable Heronian triangles) was considered by B. Yates in 1858. As W. A. Whitworth and D. Biddle proved in 1904, there are exactly three solutions, beyond the right triangles already listed, with sides (6,25,29), (7,15,20), and (9,10,17). The only equable rectangles with integer sides are the 4 × 4 square and the 3 × 6 rectangle. An integer rectangle is a special type of polyomino, and more generally there exist polyominoes with equal area and perimeter for any even integer area greater than or equal to 16. For smaller areas, the perimeter of a polyomino must exceed its area. Equable solids. In three dimensions, a shape is equable when its surface area is numerically equal to its volume. An example is a cube with side length six. As with equable shapes in two dimensions, an equable solid may be found by scaling any solid by an appropriate factor. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
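The claim above that (5, 12, 13) and (6, 8, 10) are the only equable integer right triangles can be checked by brute force; this short Python sketch (ours) searches all leg lengths below 100 and finds exactly those two triples.

```python
# Brute-force search for equable integer right triangles (area equal to perimeter).
solutions = []
for a in range(1, 100):
    for b in range(a, 100):
        c_squared = a * a + b * b
        c = int(round(c_squared ** 0.5))
        if c * c == c_squared and a * b / 2 == a + b + c:
            solutions.append((a, b, c))
print(solutions)   # [(5, 12, 13), (6, 8, 10)]
```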
[ { "math_id": 0, "text": "\\displaystyle x^2 = 4x." } ]
https://en.wikipedia.org/wiki?curid=11435202
11435667
List of taxa named by Ruiz and Pavón
Hipólito Ruiz López and José Antonio Pavón Jiménez are jointly cited as the authors of many botanical names. Between 1779 and 1788 these Spanish botanists (together with the French botanist Joseph Dombey) visited Chile, Peru and other South American countries. Their standard author abbreviations are "Ruiz" and "Pav.", so that they are now jointly cited as "Ruiz &amp; Pav." "Ruiz y Pavón" is the Spanish form of the Latin "Ruiz et Pavón"; both mean "Ruiz and Pavón". Published works. Ruiz and Pavón jointly published: Taxa of Ruiz &amp; Pavón. Although Ruiz and Pavón named plant genera after one another, Cavanilles was working independently at about the same time and honoured each of them with the genus names "Pavonia" Cav. (in family Malvaceae), and "Ruizia" Cav. (in family Sterculiaceae), which were published before their own work. "Pavonia" Ruiz &amp; Pav. and "Ruizia" Pav.(in family Monimiaceae) are thus homonyms. Key : A = species is accepted  |  S = species is a synonym  |  U = species is unresolved Genera ("A" — "E"). &lt;templatestyles src="Div col/styles.css"/&gt; * Note: According to the Plant List entry for "Sphaerine distichophylla" the binomial authority is '(Spreng.) Herb.'; while the Tropicos entry for the same is given as '(Ruiz &amp; Pav.) Herb.' * Note: According to the Plant List entry for "Sphaerine distichophylla" the binomial authority is '(Spreng.) Herb.'; while the Tropicos entry for the same is given as '(Ruiz &amp; Pav.) Herb.' * Note: According to Tropicos, "Solanum viridiflorum" is the basionym of "Cyphomandra viridiflora", and its synonym (see here); on The Plant List entry for "Cyphomandra viridiflora", "S. viridiflorum" is not included as a synonym (Retrieved February 8, 2012) * Note: According to the Plant List entry for "Sphaerine distichophylla" the binomial authority is '(Spreng.) Herb.'; while the Tropicos entry for the same is given as '(Ruiz &amp; Pav.) Herb.' * Note: according to The Plant List entry for "Lomatia dentata" the binomial authority is simply 'R.Br.', while the GRIN entry for "L. dentata" shows '(Ruiz &amp; Pav.) R.Br.' (Retrieved February 1, 2012) Genera ("F" — "J"). &lt;templatestyles src="Div col/styles.css"/&gt; Genera ("K" — "O"). &lt;templatestyles src="Div col/styles.css"/&gt; * Note: The Plant List entry for "Laurelia sempervirens" gives the binomial authority as simply 'Tul.', while the Tropicos entry for "L. sempervirens" shows '(Ruiz &amp; Pav.) Tul.' (Retrieved February 1, 2012) * Note: according to The Plant List entry for "Lomatia dentata" the binomial authority is simply 'R.Br.', while the GRIN entry for "L. dentata" shows '(Ruiz &amp; Pav.) R.Br.' (Retrieved February 1, 2012) Genera ("P" — "T"). &lt;templatestyles src="Div col/styles.css"/&gt; * Note: The Plant List entry for "Laurelia sempervirens" gives the binomial authority as simply 'Tul.', while the Tropicos entry for "L. sempervirens" shows '(Ruiz &amp; Pav.) Tul.' (Retrieved February 1, 2012) * Note: According to the Plant List entry for "Vestia foetida" the binomial authority is shown as simply 'Hoffmanns.'; while the GRIN entry for the same is given as '(Ruiz &amp; Pav.) Hoffmanns.' * Note: According to Tropicos, "Solanum viridiflorum" is the basionym of "Pionandra viridiflora", but is not its synonym (see here); according to The Plant List entry for "Solanum viarum", "S. viridiflorum" is synonymous with "S. viarum". 
(Retrieved February 8, 2012) * Note: According to Tropicos, "Solanum viridiflorum" is the basionym of "Cyphomandra viridiflora", and its synonym (see here); but on The Plant List entry for "Solanum viridiflorum", "C. viridiflora" is not included as a synonym. (Retrieved February 8, 2012) * Note: According to the Plant List entry for "Sphaerine distichophylla" the binomial authority is '(Spreng.) Herb.'; while the Tropicos entry for the same is given as '(Ruiz &amp; Pav.) Herb.' Genera ("U" — "Z"). &lt;templatestyles src="Div col/styles.css"/&gt; * Note: According to the Plant List entry for "Vestia foetida" the binomial authority is shown as simply 'Hoffmanns.'; while the GRIN entry for the same is given as '(Ruiz &amp; Pav.) Hoffmanns.' References. Footnotes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\neq" } ]
https://en.wikipedia.org/wiki?curid=11435667
11436087
Complement (group theory)
In mathematics, especially in the area of algebra known as group theory, a complement of a subgroup "H" in a group "G" is a subgroup "K" of "G" such that formula_0 Equivalently, every element of "G" has a unique expression as a product "hk" where "h" ∈ "H" and "k" ∈ "K". This relation is symmetrical: if "K" is a complement of "H", then "H" is a complement of "K". Neither "H" nor "K" need be a normal subgroup of "G". Relation to other products. Complements generalize both the direct product (where the subgroups "H" and "K" are normal in "G"), and the semidirect product (where one of "H" or "K" is normal in "G"). The product corresponding to a general complement is called the internal Zappa–Szép product. When "H" and "K" are nontrivial, complement subgroups factor a group into smaller pieces. Existence. As previously mentioned, complements need not exist. A "p"-complement is a complement to a Sylow "p"-subgroup. Theorems of Frobenius and Thompson describe when a group has a normal "p"-complement. Philip Hall characterized finite soluble groups amongst finite groups as those with "p"-complements for every prime "p"; these "p"-complements are used to form what is called a Sylow system. A Frobenius complement is a special type of complement in a Frobenius group. A complemented group is one where every subgroup has a complement.
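A concrete check of the definition in a small group may help. The following Python sketch (our own construction, not from the article) verifies that in "G" = "S"3 the subgroup "H" = "A"3 and the two-element subgroup "K" generated by a transposition are complements of one another.

```python
from itertools import permutations

def compose(p, q):
    """Composition of permutations given as tuples: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(len(q)))

G = set(permutations(range(3)))          # S_3, permutations of {0, 1, 2}
H = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}    # A_3: the identity and the two 3-cycles
K = {(0, 1, 2), (1, 0, 2)}               # identity and the transposition swapping 0 and 1

HK = {compose(h, k) for h in H for k in K}
print(HK == G)                           # True:  G = HK
print(H & K == {(0, 1, 2)})              # True:  H ∩ K = {e}
```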
[ { "math_id": 0, "text": "G = HK = \\{ hk : h\\in H, k\\in K\\} \\text{ and } H\\cap K = \\{e\\}." } ]
https://en.wikipedia.org/wiki?curid=11436087
11437896
Sylvester's criterion
Criterion of positive definiteness of a matrix In mathematics, Sylvester’s criterion is a necessary and sufficient criterion to determine whether a Hermitian matrix is positive-definite. Sylvester's criterion states that a "n" × "n" Hermitian matrix "M" is positive-definite if and only if all the following matrices have a positive determinant: In other words, all of the "leading" principal minors must be positive. By using appropriate permutations of rows and columns of "M", it can also be shown that the positivity of "any" nested sequence of "n" principal minors of "M" is equivalent to "M" being positive-definite. An analogous theorem holds for characterizing positive-semidefinite Hermitian matrices, except that it is no longer sufficient to consider only the "leading" principal minors as illustrated by the Hermitian matrix formula_1 A Hermitian matrix "M" is positive-semidefinite if and only if "all" principal minors of "M" are nonnegative. Proof for the case of positive definite matrices. Suppose formula_2is formula_3 Hermitian matrix formula_4. Let formula_5 be the principal minor matrices, i.e. the formula_6 upper left corner matrices. It will be shown that if formula_7 is positive definite, then the principal minors are positive; that is, formula_8 for all formula_9. formula_10 is positive definite. Indeed, choosing formula_11 we can notice that formula_12 Equivalently, the eigenvalues of formula_10 are positive, and this implies that formula_13 since the determinant is the product of the eigenvalues. To prove the reverse implication, we use induction. The general form of an formula_14 Hermitian matrix is formula_15, where formula_2 is an formula_16 Hermitian matrix, formula_17 is a vector and formula_18 is a real constant. Suppose the criterion holds for formula_2. Assuming that all the principal minors of formula_19 are positive implies that formula_20, formula_21, and that formula_2 is positive definite by the inductive hypothesis. Denote formula_22 then formula_23 By completing the squares, this last expression is equal to formula_24 formula_25 where formula_26 (note that formula_27 exists because the eigenvalues of formula_2 are all positive.) The first term is positive by the inductive hypothesis. We now examine the sign of the second term. By using the block matrix determinant formula formula_28 on formula_29 we obtain formula_30, which implies formula_31. Consequently, formula_32 Proof for the case of positive semidefinite matrices. Let formula_2 be an "n" x "n" Hermitian matrix. Suppose formula_2 is semidefinite. Essentially the same proof as for the case that formula_2 is strictly positive definite shows that all principal minors (not necessarily the leading principal minors) are non-negative. For the reverse implication, it suffices to show that if formula_2 has all non-negative principal minors, then for all "t&gt;0", all leading principal minors of the Hermitian matrix formula_33 are strictly positive, where formula_34 is the "n"x"n" identity matrix. Indeed, from the positive definite case, we would know that the matrices formula_33 are strictly positive definite. Since the limit of positive definite matrices is always positive semidefinite, we can take formula_35 to conclude. 
To show this, let formula_10 be the "k"th leading principal submatrix of formula_36 We know that formula_37 is a polynomial in "t", related to the characteristic polynomial formula_38 via formula_39 We use the identity in Characteristic polynomial#Properties to write formula_40 where formula_41 is the trace of the "j"th exterior power of formula_42 From Minor_(linear_algebra)#Multilinear_algebra_approach, we know that the entries in the matrix expansion of formula_43 (for "j &gt; 0") are just the minors of formula_42 In particular, the diagonal entries are the principal minors of formula_10, which of course are also principal minors of formula_2, and are thus non-negative. Since the trace of a matrix is the sum of the diagonal entries, it follows that formula_44 Thus the coefficient of formula_45 in formula_46 is non-negative for all "j &gt; 0." For "j = 0", it is clear that the coefficient is 1. In particular, formula_47 for all "t &gt; 0", which is what was required to show. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
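The criterion is easy to apply numerically. The following Python/NumPy sketch (the function is ours) tests the leading principal minors of a Hermitian matrix; the second matrix is the counterexample shown above, whose leading principal minors are non-negative even though it is not positive semidefinite, and the strict test correctly rejects it.

```python
import numpy as np

def is_positive_definite_sylvester(M, tol=1e-12):
    """Sylvester's criterion: a Hermitian matrix is positive definite iff every
    leading principal minor is strictly positive."""
    M = np.asarray(M)
    return all(np.linalg.det(M[:k, :k]).real > tol
               for k in range(1, M.shape[0] + 1))

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])          # a well-known positive-definite matrix
B = np.array([[0.0, 0.0, -1.0],
              [0.0, -1.0, 0.0],
              [-1.0, 0.0, 0.0]])          # the counterexample matrix shown above

print(is_positive_definite_sylvester(A))  # True
print(is_positive_definite_sylvester(B))  # False
```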
[ { "math_id": 0, "text": "{}\\quad\\vdots" }, { "math_id": 1, "text": "\n\\begin{pmatrix}\n0&0&-1\\\\\n0&-1&0\\\\\n-1&0&0\n\\end{pmatrix}\\quad\\text{with eigenvectors}\\quad\n\\begin{pmatrix}\n0\\\\1\\\\0\n\\end{pmatrix},\\quad\n\\begin{pmatrix}\n1\\\\0\\\\1\n\\end{pmatrix}\\quad\\text{and}\\quad\n\\begin{pmatrix}\n1\\\\0\\\\-1\n\\end{pmatrix}.\n" }, { "math_id": 2, "text": "M_n" }, { "math_id": 3, "text": "n\\times n" }, { "math_id": 4, "text": "M_n^{\\dagger}=M_n" }, { "math_id": 5, "text": "M_k,k=1,\\ldots n" }, { "math_id": 6, "text": "k\\times k" }, { "math_id": 7, "text": "M_n " }, { "math_id": 8, "text": "\\det M_k>0 " }, { "math_id": 9, "text": " k " }, { "math_id": 10, "text": "M_k" }, { "math_id": 11, "text": " x=\\left(\n\\begin{array}{c} x_1\\\\ \\vdots\\\\ x_k\\\\ 0\\\\ \\vdots\\\\0\\end{array}\n\\right) = \n\\left(\n\\begin{array}{c}\n\\vec{x}\\\\\n0\\\\\n\\vdots\\\\\n0\n\\end{array}\n\\right)\n" }, { "math_id": 12, "text": "0<x^{\\dagger} M_n x=\\vec{x}^{\\dagger}M_k\\vec{x}. " }, { "math_id": 13, "text": "\\det M_k>0" }, { "math_id": 14, "text": "(n+1) \\times (n+1)" }, { "math_id": 15, "text": "M_{n+1}= \\left( \\begin{array}{cc}M_n&\\vec{v}\\\\ \\vec{v}^{\\dagger}&d\\end{array}\\right) \\qquad (*)" }, { "math_id": 16, "text": "n \\times n" }, { "math_id": 17, "text": "\\vec{v}" }, { "math_id": 18, "text": "d" }, { "math_id": 19, "text": "M_{n+1}" }, { "math_id": 20, "text": "\\det M_{n+1}>0" }, { "math_id": 21, "text": "\\det M_n>0" }, { "math_id": 22, "text": "x=\\left( \\begin{array}{c}\\vec{x}\\\\ x_{n+1}\\end{array} \\right) " }, { "math_id": 23, "text": "x^{\\dagger}M_{n+1}x=\\vec{x}^{\\dagger}M_n\\vec{x}+x_{n+1}\\vec{x}^{\\dagger}\\vec{v}+\\bar{x}_{n+1}\\vec{v}^{\\dagger}\\vec{x}+d|x_{n+1}|^2 " }, { "math_id": 24, "text": "(\\vec{x}^{\\dagger}+\\vec{v}^{\\dagger}M_n^{-1}\\bar{x}_{n+1})M_n(\\vec{x}+x_{n+1}M_n^{-1}\\vec{v})-|x_{n+1}|^2\\vec{v}^{\\dagger}M_n^{-1}\\vec{v}+d|x_{n+1}|^2 " }, { "math_id": 25, "text": "=(\\vec{x}+\\vec{c})^{\\dagger}M_n(\\vec{x}+\\vec{c})+|x_{n+1}|^2(d-\\vec{v}^{\\dagger}M_n^{-1}\\vec{v}) " }, { "math_id": 26, "text": "\\vec{c}=x_{n+1}M_n^{-1}\\vec{v}" }, { "math_id": 27, "text": "M_n^{-1}" }, { "math_id": 28, "text": "\\det \\left(\\begin{array}{cc}A&B\\\\C&D\\end{array} \\right)=\\det A\\det(D-CA^{-1}B)" }, { "math_id": 29, "text": "(*)" }, { "math_id": 30, "text": "\\det M_{n+1}=\\det M_n(d-\\vec{v}^{\\dagger}M_n^{-1}\\vec{v})>0" }, { "math_id": 31, "text": "d-\\vec{v}^{\\dagger}M_n^{-1}\\vec{v}>0" }, { "math_id": 32, "text": "x^{\\dagger}M_{n+1}x>0." }, { "math_id": 33, "text": "M_n+tI_n" }, { "math_id": 34, "text": "I_n" }, { "math_id": 35, "text": "t \\to 0" }, { "math_id": 36, "text": "M_n." }, { "math_id": 37, "text": "q_k(t) = \\det(M_k + tI_k)" }, { "math_id": 38, "text": "p_{M_k}" }, { "math_id": 39, "text": "q_k(t) = (-1)^kp_{M_k}(-t)." }, { "math_id": 40, "text": "q_k(t) = \\sum_{j=0}^k t^{k-j} \\operatorname{tr}\\left(\\textstyle\\bigwedge^j M_k\\right)," }, { "math_id": 41, "text": "\\operatorname{tr}\\left(\\bigwedge^j M_k\\right)" }, { "math_id": 42, "text": "M_k." }, { "math_id": 43, "text": "\\bigwedge^j M_k" }, { "math_id": 44, "text": "\\operatorname{tr}\\left(\\textstyle\\bigwedge^j M_k\\right) \\geq 0." }, { "math_id": 45, "text": " t^{k-j}" }, { "math_id": 46, "text": "q_k(t)" }, { "math_id": 47, "text": "q_k(t) > 0" } ]
https://en.wikipedia.org/wiki?curid=11437896
11439
Faster-than-light
Propagation of information or matter faster than the speed of light Faster-than-light (superluminal or supercausal) travel and communication are the conjectural propagation of matter or information faster than the speed of light (c). The special theory of relativity implies that only particles with zero rest mass (i.e., photons) may travel "at" the speed of light, and that nothing may travel faster. Particles whose speed exceeds that of light (tachyons) have been hypothesized, but their existence would violate causality and would imply time travel. The scientific consensus is that they do not exist. According to all observations and current scientific theories, matter travels at slower-than-light (subluminal) speed with respect to the locally distorted spacetime region. Speculative faster-than-light concepts include the Alcubierre drive, Krasnikov tubes, traversable wormholes, and quantum tunneling. Some of these proposals find loopholes around general relativity, such as by expanding or contracting space to make the object appear to be travelling faster than "c". Such proposals are still widely believed to be impossible as they still violate current understandings of causality, and they all require fanciful mechanisms to work (such as requiring exotic matter). <templatestyles src="Template:TOC limit/styles.css" /> Superluminal travel of non-information. In the context of this article, "faster-than-light" means the transmission of information or matter faster than "c", a constant equal to the speed of light in vacuum, which is 299,792,458 m/s (by definition of the metre) or about 186,282.397 miles per second. This is not quite the same as traveling faster than light, since: Neither of these phenomena violates special relativity or creates problems with causality, and thus neither qualifies as faster-than-light as described here. In the following examples, certain influences may appear to travel faster than light, but they do not convey energy or information faster than light, so they do not violate special relativity. Daily sky motion. For an earth-bound observer, objects in the sky complete one revolution around the Earth in one day. Proxima Centauri, the nearest star outside the Solar System, is about four and a half light-years away. In this frame of reference, in which Proxima Centauri is perceived to be moving in a circular trajectory with a radius of about four and a half light-years, it could be described as having a speed many times greater than "c" as the rim speed of an object moving in a circle is a product of the radius and angular speed. It is also possible, on a geostatic view, for objects such as comets to vary their speed from subluminal to superluminal and vice versa simply because the distance from the Earth varies. Comets may have orbits which take them out to more than 1000 AU. The circumference of a circle with a radius of 1000 AU is greater than one light day. In other words, a comet at such a distance is superluminal in a geostatic, and therefore non-inertial, frame. Light spots and shadows. If a laser beam is swept across a distant object, the spot of laser light can easily be made to move across the object at a speed greater than "c". Similarly, a shadow projected onto a distant object can be made to move across the object faster than "c". In neither case does the light travel from the source to the object faster than "c", nor does any information travel faster than light. Closing speeds. 
The rate at which two objects in motion in a single frame of reference get closer together is called the mutual or closing speed. This may approach twice the speed of light, as in the case of two particles travelling at close to the speed of light in opposite directions with respect to the reference frame. Imagine two fast-moving particles approaching each other from opposite sides of a particle accelerator of the collider type. The closing speed would be the rate at which the distance between the two particles is decreasing. From the point of view of an observer standing at rest relative to the accelerator, this rate will be slightly less than twice the speed of light. Special relativity does not prohibit this. It tells us that it is wrong to use Galilean relativity to compute the velocity of one of the particles, as would be measured by an observer traveling alongside the other particle. That is, special relativity gives the correct velocity-addition formula for computing such relative velocity. It is instructive to compute the relative velocity of particles moving at "v" and −"v" in the accelerator frame, which corresponds to a closing speed of 2"v" > "c". Expressing the speeds in units of "c", "β" = "v"/"c": formula_0 A brief numerical illustration of this formula is given at the end of this article. Proper speeds. If a spaceship travels to a planet one light-year (as measured in the Earth's rest frame) away from Earth at high speed, the time taken to reach that planet could be less than one year as measured by the traveller's clock (although it will always be more than one year as measured by a clock on Earth). The value obtained by dividing the distance traveled, as determined in the Earth's frame, by the time taken, measured by the traveller's clock, is known as a proper speed or a proper velocity. There is no limit on the value of a proper speed as a proper speed does not represent a speed measured in a single inertial frame. A light signal that left the Earth at the same time as the traveller would always get to the destination before the traveller would. Phase velocities above "c". The phase velocity of an electromagnetic wave, when traveling through a medium, can routinely exceed "c", the vacuum velocity of light. For example, this occurs in most glasses at X-ray frequencies. However, the phase velocity of a wave corresponds to the propagation speed of a theoretical single-frequency (purely monochromatic) component of the wave at that frequency. Such a wave component must be infinite in extent and of constant amplitude (otherwise it is not truly monochromatic), and so cannot convey any information. Thus a phase velocity above "c" does not imply the propagation of signals with a velocity above "c". Group velocities above "c". The group velocity of a wave may also exceed "c" in some circumstances. In such cases, which typically at the same time involve rapid attenuation of the intensity, the maximum of the envelope of a pulse may travel with a velocity above "c". However, even this situation does not imply the propagation of signals with a velocity above "c", even though one may be tempted to associate pulse maxima with signals. The latter association has been shown to be misleading, because the information on the arrival of a pulse can be obtained before the pulse maximum arrives. 
For example, if some mechanism allows the full transmission of the leading part of a pulse while strongly attenuating the pulse maximum and everything behind (distortion), the pulse maximum is effectively shifted forward in time, while the information on the pulse does not come faster than "c" without this effect. However, group velocity can exceed "c" in some parts of a Gaussian beam in vacuum (without attenuation). The diffraction causes the peak of the pulse to propagate faster, while overall power does not. Cosmic expansion. According to Hubble's law, the expansion of the universe causes distant galaxies to recede from us faster than the speed of light. However, the recession speed associated with Hubble's law, defined as the rate of increase in proper distance per interval of cosmological time, is not a velocity in a relativistic sense. Moreover, in general relativity, velocity is a local notion, and there is not even a unique definition for the relative velocity of a cosmologically distant object. Faster-than-light cosmological recession speeds are entirely a coordinate effect. There are many galaxies visible in telescopes with redshift numbers of 1.4 or higher. All of these have cosmological recession speeds greater than the speed of light. Because the Hubble parameter is decreasing with time, there can actually be cases where a galaxy that is receding from us faster than light does manage to emit a signal which reaches us eventually. However, because the expansion of the universe is accelerating, it is projected that most galaxies will eventually cross a type of cosmological event horizon where any light they emit past that point will never be able to reach us at any time in the infinite future, because the light never reaches a point where its "peculiar velocity" towards us exceeds the expansion velocity away from us (these two notions of velocity are also discussed in Comoving and proper distances#Uses of the proper distance). The current distance to this cosmological event horizon is about 16 billion light-years, meaning that a signal from an event happening at present would eventually be able to reach us in the future if the event was less than 16 billion light-years away, but the signal would never reach us if the event was more than 16 billion light-years away. Astronomical observations. Apparent superluminal motion is observed in many radio galaxies, blazars, quasars, and recently also in microquasars. The effect was predicted by Martin Rees before it was observed, and can be explained as an optical illusion caused by the object partly moving in the direction of the observer, when the speed calculations assume it does not. The phenomenon does not contradict the theory of special relativity. Corrected calculations show these objects have velocities close to the speed of light (relative to our reference frame). They are the first examples of large amounts of mass moving at close to the speed of light. Earth-bound laboratories have only been able to accelerate small numbers of elementary particles to such speeds. Quantum mechanics. Certain phenomena in quantum mechanics, such as quantum entanglement, might give the superficial impression of allowing communication of information faster than light. According to the no-communication theorem these phenomena do not allow true communication; they only let two observers in different locations see the same system simultaneously, without any way of controlling what either sees. 
Wavefunction collapse can be viewed as an epiphenomenon of quantum decoherence, which in turn is nothing more than an effect of the underlying local time evolution of the wavefunction of a system and "all" of its environment. Since the underlying behavior does not violate local causality or allow FTL communication, it follows that neither does the additional effect of wavefunction collapse, whether real "or" apparent. The uncertainty principle implies that individual photons may travel for short distances at speeds somewhat faster (or slower) than "c", even in vacuum; this possibility must be taken into account when enumerating Feynman diagrams for a particle interaction. However, it was shown in 2011 that a single photon may not travel faster than "c". In quantum mechanics, virtual particles may travel faster than light, and this phenomenon is related to the fact that static field effects (which are mediated by virtual particles in quantum terms) may travel faster than light (see section on static fields above). However, macroscopically these fluctuations average out, so that photons do travel in straight lines over long (i.e., non-quantum) distances, and they do travel at the speed of light on average. Therefore, this does not imply the possibility of superluminal information transmission. There have been various reports in the popular press of experiments on faster-than-light transmission in optics — most often in the context of a kind of quantum tunnelling phenomenon. Usually, such reports deal with a phase velocity or group velocity faster than the vacuum velocity of light. However, as stated above, a superluminal phase velocity cannot be used for faster-than-light transmission of information. Hartman effect. The Hartman effect is the tunneling effect through a barrier where the tunneling time tends to a constant for large barriers. This could, for instance, be the gap between two prisms. When the prisms are in contact, the light passes straight through, but when there is a gap, the light is refracted. There is a non-zero probability that the photon will tunnel across the gap rather than follow the refracted path. However, it has been claimed that the Hartman effect cannot actually be used to violate relativity by transmitting signals faster than "c", also because the tunnelling time "should not be linked to a velocity since evanescent waves do not propagate". The evanescent waves in the Hartman effect are due to virtual particles and a non-propagating static field, as mentioned in the sections above for gravity and electromagnetism. Casimir effect. In physics, the Casimir–Polder force is a physical force exerted between separate objects due to resonance of vacuum energy in the intervening space between the objects. This is sometimes described in terms of virtual particles interacting with the objects, owing to the mathematical form of one possible way of calculating the strength of the effect. Because the strength of the force falls off rapidly with distance, it is only measurable when the distance between the objects is extremely small. Because the effect is due to virtual particles mediating a static field effect, it is subject to the comments about static fields discussed above. EPR paradox. The EPR paradox refers to a famous thought experiment of Albert Einstein, Boris Podolsky and Nathan Rosen that was realized experimentally for the first time by Alain Aspect in 1981 and 1982 in the Aspect experiment. 
In this experiment, the two measurements of an entangled state are correlated even when the measurements are distant from the source and each other. However, no information can be transmitted this way; the answer to whether or not the measurement actually affects the other quantum system comes down to which interpretation of quantum mechanics one subscribes to. An experiment performed in 1997 by Nicolas Gisin has demonstrated quantum correlations between particles separated by over 10 kilometers. But as noted earlier, the non-local correlations seen in entanglement cannot actually be used to transmit classical information faster than light, so that relativistic causality is preserved. The situation is akin to sharing a synchronized coin flip, where the second person to flip their coin will always see the opposite of what the first person sees, but neither has any way of knowing whether they were the first or second flipper, without communicating classically. See No-communication theorem for further information. A 2008 quantum physics experiment also performed by Nicolas Gisin and his colleagues has determined that in any hypothetical non-local hidden-variable theory, the speed of the quantum non-local connection (what Einstein called "spooky action at a distance") is at least 10,000 times the speed of light. Delayed choice quantum eraser. The delayed-choice quantum eraser is a version of the EPR paradox in which the observation (or not) of interference after the passage of a photon through a double slit experiment depends on the conditions of observation of a second photon entangled with the first. The characteristic of this experiment is that the observation of the second photon can take place at a later time than the observation of the first photon, which may give the impression that the measurement of the later photons "retroactively" determines whether the earlier photons show interference or not. However, the interference pattern can only be seen by correlating the measurements of both members of every pair, so it cannot be observed until both photons have been measured; this ensures that an experimenter watching only the photons going through the slit does not obtain information about the other photons in a faster-than-light or backwards-in-time manner. Superluminal communication. Faster-than-light communication is, according to relativity, equivalent to time travel. What we measure as the speed of light in vacuum (or near vacuum) is actually the fundamental physical constant "c". This means that all inertial and, for the coordinate speed of light, non-inertial observers, regardless of their relative velocity, will always measure zero-mass particles such as photons traveling at "c" in vacuum. This result means that measurements of time and velocity in different frames are no longer related simply by constant shifts, but are instead related by Poincaré transformations. These transformations have important implications: Justifications. Casimir vacuum and quantum tunnelling. Special relativity postulates that the speed of light in vacuum is invariant in inertial frames. That is, it will be the same from any frame of reference moving at a constant speed. The equations do not specify any particular value for the speed of light, which is an experimentally determined quantity for a fixed unit of length. Since 1983, the SI unit of length (the meter) has been defined using the speed of light. The experimental determination has been made in vacuum. 
However, the vacuum we know is not the only possible vacuum which can exist. The vacuum has energy associated with it, called simply the vacuum energy, which could perhaps be altered in certain cases. When vacuum energy is lowered, light itself has been predicted to go faster than the standard value "c". This is known as the Scharnhorst effect. Such a vacuum can be produced by bringing two perfectly smooth metal plates together at near atomic diameter spacing. It is called a Casimir vacuum. Calculations imply that light will go faster in such a vacuum by a minuscule amount: for a photon traveling between two plates that are 1 micrometer apart, the photon's speed would increase by only about one part in 10³⁶. Accordingly, there has as yet been no experimental verification of the prediction. A recent analysis argued that the Scharnhorst effect cannot be used to send information backwards in time with a single set of plates since the plates' rest frame would define a "preferred frame" for FTL signaling. However, with multiple pairs of plates in motion relative to one another the authors noted that they had no arguments that could "guarantee the total absence of causality violations", and invoked Hawking's speculative chronology protection conjecture which suggests that feedback loops of virtual particles would create "uncontrollable singularities in the renormalized quantum stress-energy" on the boundary of any potential time machine, and thus would require a theory of quantum gravity to fully analyze. Other authors argue that Scharnhorst's original analysis, which seemed to show the possibility of faster-than-"c" signals, involved approximations which may be incorrect, so that it is not clear whether this effect could actually increase signal speed at all. It was later claimed by Eckle "et al." that particle tunneling does indeed occur in zero real time. Their tests involved tunneling electrons, where the group argued a relativistic prediction for tunneling time should be 500–600 attoseconds (an attosecond is one quintillionth, 10⁻¹⁸, of a second). All that could be measured was 24 attoseconds, which is the limit of the test accuracy. Again, though, other physicists believe that tunneling experiments in which particles appear to spend anomalously short times inside the barrier are in fact fully compatible with relativity, although there is disagreement about whether the explanation involves reshaping of the wave packet or other effects. Give up (absolute) relativity. Because of the strong empirical support for special relativity, any modifications to it must necessarily be quite subtle and difficult to measure. The best-known attempt is doubly special relativity, which posits that the Planck length is also the same in all reference frames, and is associated with the work of Giovanni Amelino-Camelia and João Magueijo. There are speculative theories that claim inertia is produced by the combined mass of the universe (e.g., Mach's principle), which implies that the rest frame of the universe might be "preferred" by conventional measurements of natural law. If confirmed, this would imply special relativity is an approximation to a more general theory, but since the relevant comparison would (by definition) be outside the observable universe, it is difficult to imagine (much less construct) experiments to test this hypothesis. Despite this difficulty, such experiments have been proposed. Spacetime distortion. 
Although the theory of special relativity forbids objects to have a relative velocity greater than light speed, and general relativity reduces to special relativity in a local sense (in small regions of spacetime where curvature is negligible), general relativity does allow the space between distant objects to expand in such a way that they have a "recession velocity" which exceeds the speed of light, and it is thought that galaxies which are at a distance of more than about 14 billion light-years from us today have a recession velocity which is faster than light. Miguel Alcubierre theorized that it would be possible to create a warp drive, in which a ship would be enclosed in a "warp bubble" where the space at the front of the bubble is rapidly contracting and the space at the back is rapidly expanding, with the result that the bubble can reach a distant destination much faster than a light beam moving outside the bubble, but without objects inside the bubble locally traveling faster than light. However, several objections raised against the Alcubierre drive appear to rule out the possibility of actually using it in any practical fashion. Another possibility predicted by general relativity is the traversable wormhole, which could create a shortcut between arbitrarily distant points in space. As with the Alcubierre drive, travelers moving through the wormhole would not "locally" move faster than light travelling through the wormhole alongside them, but they would be able to reach their destination (and return to their starting location) faster than light traveling outside the wormhole. Gerald Cleaver and Richard Obousy, a professor and student of Baylor University, theorized that manipulating the extra spatial dimensions of string theory around a spaceship with an extremely large amount of energy would create a "bubble" that could cause the ship to travel faster than the speed of light. To create this bubble, the physicists believe manipulating the 10th spatial dimension would alter the dark energy in three large spatial dimensions: height, width and length. Cleaver said positive dark energy is currently responsible for speeding up the expansion rate of our universe as time moves on. Lorentz symmetry violation. The possibility that Lorentz symmetry may be violated has been seriously considered in the last two decades, particularly after the development of a realistic effective field theory that describes this possible violation, the so-called Standard-Model Extension. This general framework has allowed experimental searches by ultra-high energy cosmic-ray experiments and a wide variety of experiments in gravity, electrons, protons, neutrons, neutrinos, mesons, and photons. The breaking of rotation and boost invariance causes direction dependence in the theory as well as unconventional energy dependence that introduces novel effects, including Lorentz-violating neutrino oscillations and modifications to the dispersion relations of different particle species, which naturally could make particles move faster than light. In some models of broken Lorentz symmetry, it is postulated that the symmetry is still built into the most fundamental laws of physics, but that spontaneous symmetry breaking of Lorentz invariance shortly after the Big Bang could have left a "relic field" throughout the universe which causes particles to behave differently depending on their velocity relative to the field; however, there are also some models where Lorentz symmetry is broken in a more fundamental way. 
If Lorentz symmetry can cease to be a fundamental symmetry at the Planck scale or at some other fundamental scale, it is conceivable that particles with a critical speed different from the speed of light could be the ultimate constituents of matter. In current models of Lorentz symmetry violation, the phenomenological parameters are expected to be energy-dependent. Therefore, as widely recognized, existing low-energy bounds cannot be applied to high-energy phenomena; however, many searches for Lorentz violation at high energies have been carried out using the Standard-Model Extension. Lorentz symmetry violation is expected to become stronger as one gets closer to the fundamental scale. Superfluid theories of physical vacuum. In this approach, the physical vacuum is viewed as a quantum superfluid which is essentially non-relativistic, whereas Lorentz symmetry is not an exact symmetry of nature but rather the approximate description valid only for the small fluctuations of the superfluid background. Within the framework of the approach, a theory was proposed in which the physical vacuum is conjectured to be a quantum Bose liquid whose ground-state wavefunction is described by the logarithmic Schrödinger equation. It was shown that the relativistic gravitational interaction arises as the small-amplitude collective excitation mode whereas relativistic elementary particles can be described by the particle-like modes in the limit of low momenta. The important fact is that at very high velocities the behavior of the particle-like modes becomes distinct from the relativistic one – they can reach the speed of light limit at finite energy; also, faster-than-light propagation is possible without requiring moving objects to have imaginary mass. FTL neutrino flight results. MINOS experiment. In 2007 the MINOS collaboration reported results measuring the flight-time of 3 GeV neutrinos yielding a speed exceeding that of light at 1.8-sigma significance. However, those measurements were considered to be statistically consistent with neutrinos traveling at the speed of light. After the detectors for the project were upgraded in 2012, MINOS corrected their initial result and found agreement with the speed of light. Further measurements are going to be conducted. OPERA neutrino anomaly. On September 22, 2011, a preprint from the OPERA Collaboration indicated detection of 17 and 28 GeV muon neutrinos, sent 730 kilometers (454 miles) from CERN near Geneva, Switzerland to the Gran Sasso National Laboratory in Italy, traveling faster than light by a relative amount of approximately 1 in 40,000, a statistic with 6.0-sigma significance. On 17 November 2011, a second follow-up experiment by OPERA scientists confirmed their initial results. However, scientists were skeptical about the results of these experiments, the significance of which was disputed. In March 2012, the ICARUS collaboration failed to reproduce the OPERA results with their equipment, detecting neutrino travel time from CERN to the Gran Sasso National Laboratory indistinguishable from the speed of light. Later the OPERA team reported two flaws in their equipment set-up that had caused errors far outside their original confidence interval: a fiber-optic cable attached improperly, which caused the apparently faster-than-light measurements, and a clock oscillator ticking too fast. Tachyons. In special relativity, it is impossible to accelerate an object to the speed of light, or for a massive object to move at the speed of light. 
However, it might be possible for an object to exist which always moves faster than light. The hypothetical elementary particles with this property are called tachyons or tachyonic particles. Attempts to quantize them failed to produce faster-than-light particles, and instead illustrated that their presence leads to an instability. Various theorists have suggested that the neutrino might have a tachyonic nature, while others have disputed the possibility. General relativity. General relativity was developed after special relativity to include concepts like gravity. It maintains the principle that no object can accelerate to the speed of light in the reference frame of any coincident observer. However, it permits distortions in spacetime that allow an object to move faster than light from the point of view of a distant observer. One such distortion is the Alcubierre drive, which can be thought of as producing a ripple in spacetime that carries an object along with it. Another possible system is the wormhole, which connects two distant locations as though by a shortcut. Both distortions would need to create a very strong curvature in a highly localized region of space-time and their gravity fields would be immense. To counteract the unstable nature, and prevent the distortions from collapsing under their own 'weight', one would need to introduce hypothetical exotic matter or negative energy. General relativity also recognizes that any means of faster-than-light travel could also be used for time travel. This raises problems with causality. Many physicists believe that the above phenomena are impossible and that future theories of gravity will prohibit them. One theory states that stable wormholes are possible, but that any attempt to use a network of wormholes to violate causality would result in their decay. In string theory, Eric G. Gimon and Petr Hořava have argued that in a supersymmetric five-dimensional Gödel universe, quantum corrections to general relativity effectively cut off regions of spacetime with causality-violating closed timelike curves. In particular, in the quantum theory a smeared supertube is present that cuts the spacetime in such a way that, although in the full spacetime a closed timelike curve passed through every point, no complete curves exist on the interior region bounded by the tube. In fiction and popular culture. FTL travel is a common plot device in science fiction. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
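The velocity-addition formula quoted in the "Closing speeds" section above can be evaluated numerically. The following Python sketch is illustrative only (the function name and the sample speeds are not from the article); it shows that while the closing speed in the accelerator frame approaches 2"c", the relative speed measured by either particle never reaches "c".

```python
# Special-relativistic velocity addition for the "closing speeds" example:
# beta_rel = 2*beta / (1 + beta**2), with all speeds expressed in units of c.
def relative_speed(beta: float) -> float:
    """Speed (in units of c) of one particle as measured from the other,
    when both move at speed beta*c in opposite directions in the lab frame."""
    return 2 * beta / (1 + beta ** 2)

for beta in (0.5, 0.9, 0.99, 0.999):
    print(f"lab speed {beta}c -> closing speed {2 * beta}c, "
          f"relative speed {relative_speed(beta):.6f}c")
# The closing speed in the lab frame can approach 2c, but the relative speed
# measured by either particle stays strictly below c.
```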
[ { "math_id": 0, "text": "\\beta_\\text{rel} = \\frac{\\beta + \\beta}{1 + \\beta ^2} = \\frac{2\\beta}{1 + \\beta^2} \\leq 1." } ]
https://en.wikipedia.org/wiki?curid=11439
11439770
Plane mirror
Mirror with a flat reflecting surface A plane mirror is a mirror with a flat (planar) reflective surface. For light rays striking a plane mirror, the angle of reflection equals the angle of incidence. The angle of incidence is the angle between the incident ray and the surface normal (an imaginary line perpendicular to the surface). Likewise, the angle of reflection is the angle between the reflected ray and the normal. As a consequence, a collimated beam of light does not spread out after reflection from a plane mirror, except for diffraction effects. A plane mirror makes an image of objects in front of the mirror; these images appear to be behind the plane in which the mirror lies. A straight line drawn from part of an object to the corresponding part of its image makes a right angle with, and is bisected by, the surface of the plane mirror. The image formed by a plane mirror is virtual (meaning that the light rays do not actually come from the image), not real (a real image is one from which the light rays do actually come). It is always upright, and of the same shape and size as the object it is reflecting. A virtual image is a copy of an object formed at the location from which the light rays appear to come. Strictly speaking, the image formed in the mirror is a perverted image (perversion, i.e. front–back reversal); it is a common misconception to confuse a perverted image with a laterally inverted one. If a person is reflected in a plane mirror, the image of their right hand appears to be the left hand of the image. Plane mirrors are the only type of mirror for which an object always produces an image that is virtual, erect and of the same size as the object, irrespective of the object's shape, size and distance from the mirror; other types of mirror (concave and convex) can produce such an image only under specific conditions. The focal length of a plane mirror is infinite; its optical power is zero. Using the mirror equation, where formula_0 is the object distance, formula_1 is the image distance, and formula_2 is the focal length: formula_3 Since formula_4, formula_5 and therefore formula_6 Concave and convex mirrors (spherical mirrors) are also able to produce images similar to those of a plane mirror. However, the images they form are of the same size as the object only under specific conditions, rather than under all conditions as for a plane mirror. In a convex mirror, the virtual image formed is always diminished, whereas in a concave mirror, when the object is placed between the focus and the pole, an enlarged virtual image is formed. Therefore, in applications where a virtual image of the same size is required, a plane mirror is preferred over spherical mirrors. Preparation. A plane mirror is made using a highly reflecting and polished surface, such as a silver or aluminium surface, in a process called silvering. After silvering, a thin layer of red lead oxide is applied at the back of the mirror. The reflecting surface reflects most of the light striking it as long as the surface remains uncontaminated by tarnishing or oxidation. Most modern plane mirrors are designed with a thin piece of plate glass that protects and strengthens the mirror surface and helps prevent tarnishing. Historically, mirrors were simply flat pieces of polished copper, obsidian, brass, or a precious metal. Mirrors made from liquid also exist, as the elements gallium and mercury are both highly reflective in their liquid state. Relation to curved mirrors. 
Mathematically, a plane mirror can be considered to be the limit of either a concave or a convex spherical curved mirror as the radius, and therefore the focal length, becomes infinite. References. <templatestyles src="Reflist/styles.css" />
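To illustrate the limit described above, the following Python sketch (an added example; the function name and numbers are illustrative) evaluates the mirror equation for increasingly large focal lengths, using the sign convention in which a virtual image has a negative image distance.

```python
# Image distance from the mirror equation 1/d_o + 1/d_i = 1/f, showing that as f
# grows the image distance approaches -d_o, i.e. a virtual image as far behind the
# mirror as the object is in front of it.
def image_distance(d_o: float, f: float) -> float:
    return 1.0 / (1.0 / f - 1.0 / d_o)

d_o = 1.0  # object 1 m in front of the mirror
for f in (10.0, 1e3, 1e6, 1e9):  # ever flatter mirrors (focal length in metres)
    print(f"f = {f:>10.0e} m  ->  d_i = {image_distance(d_o, f):+.6f} m")
# As f -> infinity, d_i -> -1.0 m, matching the plane-mirror result d_i = -d_o.
```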
[ { "math_id": 0, "text": "d_0" }, { "math_id": 1, "text": "d_i" }, { "math_id": 2, "text": "f" }, { "math_id": 3, "text": "\\frac{1}{d_0}+\\frac{1}{d_i}=\\frac{1}{f}" }, { "math_id": 4, "text": "[\\frac{1}{f}=0]" }, { "math_id": 5, "text": "\\frac{1}{d_0}=-\\frac{1}{d_i}" }, { "math_id": 6, "text": "-d_0=d_i" } ]
https://en.wikipedia.org/wiki?curid=11439770
11441746
Classifying space for O(n)
In mathematics, the classifying space for the orthogonal group O("n") may be constructed as the Grassmannian of "n"-planes in an infinite-dimensional real space formula_0. Cohomology ring. The cohomology ring of formula_1 with coefficients in the field formula_2 of two elements is generated by the Stiefel–Whitney classes: formula_3 Infinite classifying space. The canonical inclusions formula_4 induce canonical inclusions formula_5 on their respective classifying spaces. Their respective colimits are denoted as: formula_6 formula_7 formula_8 is indeed the classifying space of formula_9.
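As a concrete low-rank illustration (an added example, not part of the original text, relying on the standard identification of the classifying space for O(1) with infinite real projective space), the case "n" = 1 reads as follows:

```latex
% Illustrative special case n = 1: BO(1) is the Grassmannian of lines in R^infinity,
% i.e. infinite real projective space, and the cohomology ring is a polynomial ring
% on the first Stiefel–Whitney class.
\operatorname{BO}(1) \simeq \mathbb{RP}^{\infty}, \qquad
H^{*}(\operatorname{BO}(1);\mathbb{Z}_{2}) \cong \mathbb{Z}_{2}[w_{1}], \qquad \deg w_{1} = 1.
```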
[ { "math_id": 0, "text": "\\mathbb{R}^\\infty" }, { "math_id": 1, "text": "\\operatorname{BO}(n)" }, { "math_id": 2, "text": "\\mathbb{Z}_2" }, { "math_id": 3, "text": "H^*(\\operatorname{BO}(n);\\mathbb{Z}_2)\n=\\mathbb{Z}_2[w_1,\\ldots,w_n]." }, { "math_id": 4, "text": "\\operatorname{O}(n)\\hookrightarrow\\operatorname{O}(n+1)" }, { "math_id": 5, "text": "\\operatorname{BO}(n)\\hookrightarrow\\operatorname{BO}(n+1)" }, { "math_id": 6, "text": "\\operatorname{O}\n:=\\lim_{n\\rightarrow\\infty}\\operatorname{O}(n);" }, { "math_id": 7, "text": "\\operatorname{BO}\n:=\\lim_{n\\rightarrow\\infty}\\operatorname{BO}(n)." }, { "math_id": 8, "text": "\\operatorname{BO}" }, { "math_id": 9, "text": "\\operatorname{O}" } ]
https://en.wikipedia.org/wiki?curid=11441746
11442879
Phosphoribosylaminoimidazole carboxylase
Enzyme involved in purine synthesis The enzyme Phosphoribosylaminoimidazole carboxylase, or AIR carboxylase (EC 4.1.1.21), is involved in nucleotide biosynthesis and in particular in purine biosynthesis. It catalyzes the conversion of 5'-phosphoribosyl-5-aminoimidazole ("AIR") into 5'-phosphoribosyl-4-carboxy-5-aminoimidazole ("CAIR") as described in the reaction: 5-aminoimidazole ribonucleotide + CO2 formula_0 5'-phosphoribosyl-4-carboxy-5-aminoimidazole + 2 H+ In plants and fungi. Phosphoribosylaminoimidazole carboxylase is a fusion protein in plants and fungi, but consists of two non-interacting proteins in bacteria, PurK and PurE. The crystal structure of PurE indicates a unique quaternary structure that confirms the octameric nature of the enzyme. In "Escherichia coli". In the bacterium "Escherichia coli" the reaction is catalyzed in two steps carried out by two separate enzymes, PurK and PurE. PurK, "N"5-carboxyaminoimidazole ribonucleotide synthetase, catalyzes the conversion of 5-aminoimidazole ribonucleotide ("AIR"), ATP, and bicarbonate to "N"5-carboxyaminoimidazole ribonucleotide ("N5-CAIR"), ADP, and phosphate. PurE, "N"5-carboxyaminoimidazole ribonucleotide mutase, converts N5-CAIR to CAIR, the sixth step of "de novo" purine biosynthesis. In the presence of high concentrations of bicarbonate, PurE is reportedly able to convert AIR to CAIR directly and without ATP. Some members of this family contain two copies of this domain. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=11442879
11444387
Tyrosine aminotransferase
Mammalian protein found in Homo sapiens Tyrosine aminotransferase (or tyrosine transaminase) is an enzyme present in the liver that catalyzes the conversion of tyrosine to 4-hydroxyphenylpyruvate. L-tyrosine + 2-oxoglutarate formula_0 4-hydroxyphenylpyruvate + L-glutamate In humans, the tyrosine aminotransferase protein is encoded by the "TAT" gene. A deficiency of the enzyme in humans can result in what is known as type II tyrosinemia, wherein there is an abundance of tyrosine as a result of tyrosine failing to undergo an aminotransferase reaction to form 4-hydroxyphenylpyruvate. Mechanism. The three main molecules involved in the chemical reaction catalyzed by the tyrosine aminotransferase enzyme are the amino acid tyrosine, the prosthetic group pyridoxal phosphate, and the resulting product 4-hydroxyphenylpyruvate. Each side of the dimer protein includes pyridoxal phosphate (PLP) bonded to the Lys280 residue of the tyrosine aminotransferase molecule. The amine group of tyrosine attacks the alpha carbon of the imine bonded to Lys280, forming a tetrahedral complex and then displacing the enzyme lysine (LYS-ENZ). This process, in which the imine group bonded to PLP is exchanged, is known as transimination. The newly formed PLP-TYR molecule is then attacked by a base. A possible candidate for this base is the Lys280 that was just displaced from PLP, which sequesters the newly formed amino group of the PLP-TYR molecule. In the similar mechanism of aspartate transaminase, the lysine that forms the initial imine to PLP later acts as the base that attacks the tyrosine in transimination. The electrons left behind from the loss of the proton move down to form a new double bond to the imine, which in turn pushes the already double-bonded electrons through PLP, and these end up as a lone pair on the positively charged nitrogen in the six-membered ring of the molecule. Water attacks the alpha carbon of the imine of PLP-TYR and, through acyl substitution, displaces the nitrogen of PLP, forming pyridoxamine phosphate (PMP) and 4-hydroxyphenylpyruvate. PMP is then regenerated into PLP by transferring its amine group to alpha-ketoglutarate, reforming its aldehyde functional group. This is followed by another substitution reaction with the Lys280 residue to reform its imine linkage to the enzyme, forming ENZ-PLP. Active site. As a dimer, tyrosine aminotransferase has two identical active sites. Lys280 is attached to PLP, which is held in place via two nonpolar amino acid side chains: phenylalanine and isoleucine. The PLP is also held in place by hydrogen bonding to surrounding molecules, mainly through its phosphate group. Pathology. Tyrosinemia is the most common metabolic disease associated with tyrosine aminotransferase. The disease results from a deficiency in hepatic tyrosine aminotransferase. Tyrosinemia type II (Richner-Hanhart syndrome, RHS) is a disease of autosomal recessive inheritance characterized by keratitis, palmoplantar hyperkeratosis, mental retardation, and elevated blood tyrosine levels. Keratitis in tyrosinemia type II patients is caused by the deposition of tyrosine crystals in the cornea and results in corneal inflammation. The TAT gene is located on human chromosome 16q22-24 and extends over 10.9 kilobases (kb) containing 12 exons, and its 3.0 kb mRNA codes for a 454-amino acid protein of 50.4 kDa. Twelve different TAT gene mutations have been reported. 
References. &lt;templatestyles src="Reflist/styles.css" /&gt; Molecular graphics images were produced using the UCSF Chimera package from the Resource for Biocomputing, Visualization, and Informatics at the University of California, San Francisco (supported by NIH P41 RR-01081).
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=11444387
1144487
Encephalization quotient
Relative brain size measure Encephalization quotient (EQ), encephalization level (EL), or just encephalization is a relative brain size measure that is defined as the ratio between observed and predicted brain mass for an animal of a given size, based on nonlinear regression on a range of reference species. It has been used as a proxy for intelligence and thus as a possible way of comparing the intelligence levels of different species. For this purpose, it is a more refined measurement than the raw brain-to-body mass ratio, as it takes into account allometric effects. Expressed as a formula, the relationship has been developed for mammals and may not yield relevant results when applied outside this group. Perspective on intelligence measures. Encephalization quotient was developed in an attempt to provide a way of correlating an animal's physical characteristics with perceived intelligence. It improved on the previous measure, the raw brain-to-body mass ratio, and so it has persisted. Subsequent work, notably Roth, found EQ to be flawed and suggested brain size was a better predictor, but that has problems as well. Currently the best predictor for intelligence across all animals is forebrain neuron count. This was not seen earlier because neuron counts were previously inaccurate for most animals. For example, human brain neuron count was given as 100 billion for decades before Herculano-Houzel found a more reliable method of counting brain cells. It could have been anticipated that EQ might be superseded because of both the number of exceptions and the growing complexity of the formulae it used. (See the rest of this article.) The simplicity of counting neurons has replaced it. The concept in EQ of comparing the brain capacity exceeding that required for body sense and motor activity may yet live on to provide an even better prediction of intelligence, but that work has not been done yet. Variance in brain sizes. Body size accounts for 80–90% of the variance in brain size between species, and the relationship is described by an allometric equation: the regression of the logarithms of brain size on body size. The distance of a species from the regression line is a measure of its encephalization. The scales are logarithmic; the distance, or residual, is an encephalization quotient (EQ), the ratio of actual brain size to expected brain size. Encephalization is a characteristic of a species. The rules by which brain size relates to the number of brain neurons have varied in evolution, so not all mammalian brains are necessarily built as larger or smaller versions of the same plan, with proportionately larger or smaller numbers of neurons. Similarly sized brains, such as those of a cow or a chimpanzee, might in that scenario contain very different numbers of neurons, just as a very large cetacean brain might contain fewer neurons than a gorilla brain. Size comparison between the human brain and non-primate brains, larger or smaller, might simply be inadequate and uninformative – and our view of the human brain as an outlier, a special oddity, may have been based on the mistaken assumption that all brains are made the same (Herculano-Houzel, 2012). Limitations and possible improvements over EQ. There is a distinction between brain parts that are necessary for the maintenance of the body and those that are associated with improved cognitive functions. These brain parts, although functionally different, all contribute to the overall weight of the brain. 
Jerison (1973) has for this reason considered 'extra neurons', neurons that contribute strictly to cognitive capacities, as more important indicators of intelligence than pure EQ. Gibson et al. (2001) reasoned that bigger brains generally contain more 'extra neurons' and thus are better predictors of cognitive abilities than pure EQ among primates. Factors such as the recent evolution of the cerebral cortex and different degrees of brain folding (gyrification), which increases the surface area (and volume) of the cortex, are positively correlated to intelligence in humans. In a meta-analysis, Deaner et al. (2007) tested absolute brain size (ABS), cortex size, cortex-to-brain ratio, EQ, and corrected relative brain size (cRBS) against global cognitive capacities. They found that, after normalization, only ABS and neocortex size showed significant correlation to cognitive abilities. In primates, ABS, neocortex size, and N (the number of cortical neurons) correlated fairly well with cognitive abilities. However, there were inconsistencies found for N. According to the authors, these inconsistencies were the result of the faulty assumption that N increases linearly with the size of the cortical surface. This notion is incorrect because the assumption does not take into account the variability in cortical thickness and cortical neuron density, which should influence N. According to Cairo (2011), EQ has flaws in its design when considering individual data points rather than a species as a whole. It is inherently biased given that the cranial volumes of an obese and an underweight individual would be roughly similar, but their body masses would be drastically different. Another difference of this nature is a lack of accounting for sexual dimorphism. For example, the female human generally has smaller cranial volume than the male; however, this does not mean that a female and male of the same body mass would have different cognitive abilities. Considering all of these flaws, EQ should not be viewed as a valid metric for intraspecies comparison. The notion that encephalization quotient corresponds to intelligence has been disputed by Roth and Dicke (2012). They consider the absolute number of cortical neurons and neural connections as better correlates of cognitive ability. According to Roth and Dicke (2012), mammals with relatively high cortex volume and neuron packing density (NPD) are more intelligent than mammals with the same brain size. The human brain stands out from the rest of the mammalian and vertebrate taxa because of its large cortical volume and high NPD, conduction velocity, and cortical parcellation. All aspects of human intelligence are found, at least in primitive form, in other nonhuman primates, mammals, or vertebrates, with the exception of syntactical language. Roth and Dicke consider syntactical language an "intelligence amplifier". Brain-body size relationship. Brain size usually increases with body size in animals (is positively correlated), i.e. large animals usually have larger brains than smaller animals. The relationship is not linear, however. Generally, small mammals have relatively larger brains than big ones. Mice have a direct brain/body size ratio similar to humans (1/40), while elephants have a comparatively small brain/body size (1/560), despite being quite intelligent animals. Treeshrews have a brain/body mass ratio of 1/10. Several reasons for this trend are possible, one of which is that neural cells have a relatively constant size. 
Some brain functions, like the brain pathway responsible for a basic task like drawing breath, are basically similar in a mouse and an elephant. Thus, the same amount of brain matter can govern breathing in a large or a small body. While not all control functions are independent of body size, some are, and hence large animals need comparatively less brain than small animals. This phenomenon can be described by an equation formula_0 where formula_1 and formula_2 are brain and body weights respectively, and formula_3 is called the cephalization factor. To determine the value of this factor, the brain and body weights of various mammals were plotted against each other, and a curve of this form was chosen as the best fit to that data. The cephalization factor and the subsequent encephalization quotient were developed by H. J. Jerison in the late 1960s. The formula for the curve varies, but an empirical fitting of the formula to a sample of mammals gives formula_4 As this formula is based on data from mammals, it should be applied to other animals with caution. For some of the other vertebrate classes the power of 3/4 rather than 2/3 is sometimes used, and for many groups of invertebrates the formula may give no meaningful results at all. Calculation. Snell's equation of simple allometry is formula_5 where formula_1 is the weight of the brain, formula_3 is the cephalization factor, formula_2 is body weight, and formula_6 is the exponential constant. The "encephalization quotient" (EQ) is the coefficient formula_3 in Snell's allometry equation, usually normalized with respect to a reference species. In the following table, the coefficients have been normalized with respect to the value for the cat, which is therefore attributed an EQ of 1. Another way to calculate encephalization quotient is by dividing the actual weight of an animal's brain by its predicted weight according to Jerison's formula (a worked numerical sketch of this calculation is given at the end of this article). This measurement of approximate intelligence is more accurate for mammals than for other classes and phyla of Animalia. EQ and intelligence in mammals. Intelligence in animals is hard to establish, but the larger the brain is relative to the body, the more brain weight might be available for more complex cognitive tasks. The EQ formula, as opposed to the method of simply measuring raw brain weight or brain weight to body weight, makes for a ranking of animals that coincides better with observed complexity of behaviour. A primary reason for the use of EQ instead of a simple brain to body mass ratio is that smaller animals tend to have a higher proportional brain mass, but do not show the same indications of higher cognition as animals with a high EQ. Grey floor. The driving theorization behind the development of EQ is that an animal of a certain size requires a minimum number of neurons for basic functioning, sometimes referred to as a grey floor. There is also a limit to how large an animal's brain can grow given its body size – due to limitations like gestation period, energetics, and the need to physically support the encephalized region throughout maturation. When normalizing a standard brain size for a group of animals, a slope can be determined to show what a species' expected brain to body mass ratio would be. Species with brain to body mass ratios below this standard are nearing the grey floor, and do not need extra grey matter. Species which fall above this standard have more grey matter than is necessary for basic functions. 
Presumably these extra neurons are used for higher cognitive processes. Taxonomic trends. Mean EQ for mammals is around 1, with carnivorans, cetaceans and primates above 1, and insectivores and herbivores below. Large mammals tend to have the highest EQs of all animals, while small mammals and avians have similar EQs. This reflects two major trends. One is that brain matter is extremely costly in terms of energy needed to sustain it. Animals with nutrient rich diets tend to have higher EQs, which is necessary for the energetically costly tissue of brain matter. Not only is it metabolically demanding to grow throughout embryonic and postnatal development, it is costly to maintain as well. Arguments have been made that some carnivores may have higher EQs due to their relatively enriched diets, as well as the cognitive capacity required for effectively hunting prey. One example of this is the brain size of a wolf, which is about 30% larger than that of a similarly sized domestic dog, potentially reflecting the different needs of their respective ways of life. Dietary trends. Of the animals demonstrating the highest EQs (see associated table), many are primarily frugivores, including apes, macaques, and proboscideans. This dietary categorization is significant for inferring the pressures which drive higher EQs. Specifically, frugivores must utilize a complex, trichromatic map of visual space to locate and pick ripe fruits and are able to provide for the high energetic demands of increased brain mass. Trophic level—"height" on the food chain—is yet another factor that has been correlated with EQ in mammals. Eutheria with either high AB (absolute brain-mass) or high EQ occupy positions at high trophic levels. Eutheria low on the network of food chains can only develop a high RB (relative brain-mass) so long as they have small body masses. This presents an interesting conundrum for intelligent small animals, who have behaviors radically different from intelligent large animals. According to Steinhausen "et al." (2016): Animals with high RB [relative brain-mass] usually have (1) a short life span, (2) reach sexual maturity early, and (3) have short and frequent gestations. Moreover, males of species with high RB also have few potential sexual partners. In contrast, animals with high EQs have (1) a high number of potential sexual partners, (2) delayed sexual maturity, and (3) rare gestations with small litter sizes. Sociality. Another factor previously thought to have great impact on brain size is sociality and flock size. This was a long-standing theory until the correlation between frugivory and EQ was shown to be more statistically significant. While no longer the predominant inference as to selection pressure for high EQ, the social brain hypothesis still has some support. For example, dogs (a social species) have a higher EQ than cats (a mostly solitary species). Animals with very large flock size and/or complex social systems consistently score high EQ, with dolphins and orcas having the highest EQ of all cetaceans, and humans with their extremely large societies and complex social life topping the list by a good margin. Comparisons with non-mammalian animals. Birds generally have lower EQ than mammals, but parrots and particularly the corvids show remarkably complex behaviour and high learning ability. Their brains are at the high end of the bird spectrum, but low compared to mammals. 
Bird cell size is on the other hand generally smaller than that of mammals, which may mean more brain cells and hence synapses per volume, allowing for more complex behaviour from a smaller brain. Both bird intelligence and brain anatomy are however very different from those of mammals, making direct comparison difficult. Manta rays have the highest EQ among fish, and either octopuses or jumping spiders have the highest among invertebrates. Despite the jumping spider having a huge brain for its size, it is minuscule in absolute terms, and humans have a much higher EQ despite having a lower raw brain-to-body weight ratio. Mean EQs for reptiles are about one tenth of those of mammals. EQ in birds (and estimated EQ in other dinosaurs) generally also falls below that of mammals, possibly due to lower thermoregulation and/or motor control demands. Estimation of brain size in "Archaeopteryx" (one of the oldest known ancestors of birds), shows it had an EQ well above the reptilian range, and just below that of living birds. Biologist Stephen Jay Gould has noted that if one looks at vertebrates with very low encephalization quotients, their brains are slightly less massive than their spinal cords. Theoretically, intelligence might correlate with the absolute amount of brain an animal has after subtracting the weight of the spinal cord from the brain. This formula is useless for invertebrates because they do not have spinal cords or, in some cases, central nervous systems. EQ in paleoneurology. Behavioral complexity in living animals can to some degree be observed directly, making the predictive power of the encephalization quotient less relevant. It is however central in paleoneurology, where the endocast of the brain cavity and estimated body weight of an animal is all one has to work from. The behavior of extinct mammals and dinosaurs is typically investigated using EQ formulas. Encephalization quotient is also used in estimating evolution of intelligent behavior in human ancestors. This technique can help in mapping the development of behavioral complexities during human evolution. However, this technique is only limited to when there are both cranial and post-cranial remains associated with individual fossils, to allow for brain to body size comparisons. For example, remains of one Middle Pleistocene human fossil from Jinniushan province in northern China has allowed scientists to study the relationship between brain and body size using the Encephalization Quotient. Researchers obtained an EQ of 4.150 for the Jinniushan fossil, and then compared this value with preceding Middle Pleistocene estimates of EQ at 3.7770. The difference in EQ estimates has been associated with a rapid increase in encephalization in Middle Pleistocene hominins. Paleo-neurological comparisons between Neanderthals and anatomically modern "Homo sapiens" (AMHS) via Encephalization quotient often rely on the use of endocasts, but this method has many drawbacks. For example, endocasts do not provide any information regarding the internal organization of the brain. Furthermore, endocasts are often unclear in terms of the preservation of their boundaries, and it becomes hard to measure where exactly a certain structure starts and ends. If endocasts themselves are not reliable, then the value for brain size used to calculate the EQ could also be unreliable. 
Additionally, previous studies have suggested that Neanderthals have the same encephalization quotient as modern humans, although their post-crania suggest that they weighed more than modern humans. Because EQ relies on values from both postcrania and crania, the margin for error increases in relying on this proxy in paleo-neurology because of the inherent difficulty in obtaining accurate brain and body mass measurements from the fossil record. EQ of livestock animals. The EQ of livestock farm animals such as the domestic pig may be significantly lower than their apparent intelligence would suggest. According to Minervini et al. (2016), the brain of the domestic pig is rather small compared to the mass of the animal. The tremendous increase in body weight imposed by industrial farming significantly influences brain-to-body weight measures, including the EQ. The EQ of the domestic adult pig is just 0.38, yet pigs can use visual information seen in a mirror to find food, show evidence of self-recognition when presented with their reflections, and there is evidence suggesting that pigs are as socially complex as many other highly intelligent animals, possibly sharing a number of cognitive capacities related to social complexity. History. The concept of encephalization has been a key evolutionary trend throughout human evolution, and consequently an important area of study. Over the course of hominin evolution, brain size has seen an overall increase from 400 cm3 to 1400 cm3. Furthermore, the genus "Homo" is specifically defined by a significant increase in brain size. The earliest "Homo" species were larger in brain size as compared to contemporary "Australopithecus" counterparts, with which they co-inhabited parts of Eastern and Southern Africa. Throughout modern history, humans have been fascinated by the large relative size of our brains, trying to connect brain sizes to overall levels of intelligence. Early brain studies were focused on the field of phrenology, which was pioneered by Franz Joseph Gall in 1796 and remained a prevalent discipline throughout the early 19th century. Specifically, phrenologists paid attention to the external morphology of the skull, trying to relate certain lumps to corresponding aspects of personality. They further measured physical brain size in order to equate larger brain sizes to greater levels of intelligence. Today, however, phrenology is considered a pseudoscience. Among ancient Greek philosophers, Aristotle in particular believed that after the heart, the brain was the second most important organ of the body. He also focused on the size of the human brain, writing in 335 BCE that "of all the animals, man has the brain largest in proportion to his size." In 1861, French neurologist Paul Broca tried to make a connection between brain size and intelligence. Through observational studies, he noticed that people working in what he deemed to be more complex fields had larger brains than people working in less complex fields. Also, in 1871, Charles Darwin wrote in his book "The Descent of Man": "No one, I presume, doubts that the large proportion which the size of man's brain bears to his body, compared to the same proportion in the gorilla or orang, is closely connected with his mental powers." The concept of quantifying encephalization is also not a recent phenomenon. In 1889, Sir Francis Galton, through a study on college students, attempted to quantify the relationship between brain size and intelligence.
Due to Hitler's racial policies during World War II, studies on brain size and intelligence temporarily gained a negative reputation. However, with the advent of imaging techniques such as the fMRI and PET scan, several scientific studies were launched to suggest a relationship between encephalization and advanced cognitive abilities. Harry J. Jerison, who invented the formula for encephalization quotient, believed that brain size was proportional to the ability of humans to process information. With this belief, a higher level of encephalization equated to a higher ability to process information. A larger brain could mean a number of different things, including a larger cerebral cortex, a greater number of neuronal associations, or a greater number of neurons overall. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "C = E/S^{2/3}," }, { "math_id": 1, "text": "E" }, { "math_id": 2, "text": "S" }, { "math_id": 3, "text": "C" }, { "math_id": 4, "text": "\n \\frac{w(\\text{brain})}{1~\\text{g}} = 0.12 \\left(\\frac{w(\\text{body})}{1~\\text{g}}\\right)^{\\frac{2}{3}}.\n" }, { "math_id": 5, "text": "E = CS^r," }, { "math_id": 6, "text": "r" } ]
https://en.wikipedia.org/wiki?curid=1144487
1144509
Fick principle
Principle applied to the measurement of blood flow to an organ The Fick principle states that blood flow to an organ can be calculated using a marker substance if the following information is known: the amount of the marker substance taken up (or released) by the organ per unit time, and the concentrations of the marker substance in the arterial blood entering the organ and in the venous blood leaving it. Developed by Adolf Eugen Fick (1829–1901), the Fick principle has been applied to the measurement of cardiac output. Its underlying principles may also be applied in a variety of clinical situations. In Fick's original method, the "organ" was the entire human body and the marker substance was oxygen. The first published mention was in conference proceedings from July 9, 1870 from a lecture he gave at that conference; it is this publication that articles most often cite for Fick's contribution. The principle may be applied in different ways. For example, if the blood flow to an organ is known, together with the arterial and venous concentrations of the marker substance, the uptake of marker substance by the organ may then be calculated. Variables. In Fick's original method, the following variables are measured: the oxygen consumption of the body per minute, the oxygen content of arterial blood ("Ca"), and the oxygen content of mixed venous blood ("Cv"). Equation. From these values, we know that: formula_0 where "CO" is the cardiac output, "Ca" is the oxygen content of arterial blood and "Cv" is the oxygen content of mixed venous blood. This allows us to say formula_1 and hence calculate cardiac output. Note that ("Ca" – "Cv") is also known as the arteriovenous oxygen difference. Assumed Fick determination. In reality, this method is rarely used due to the difficulty of collecting and analysing the gas concentrations. However, by using an assumed value for oxygen consumption, cardiac output can be closely approximated without the cumbersome and time-consuming oxygen consumption measurement. This is sometimes called an assumed Fick determination. A commonly used value for O2 consumption at rest is O2 per minute per square meter of body surface area. Underlying principles. The Fick principle relies on the observation that the total uptake of (or release of) a substance by the peripheral tissues is equal to the product of the blood flow to the peripheral tissues and the arterial-venous concentration difference (gradient) of the substance. In the determination of cardiac output, the substance most commonly measured is the oxygen content of blood thus giving the arteriovenous oxygen difference, and the flow calculated is the flow across the pulmonary system. This gives a simple way to calculate the cardiac output: formula_2 Assuming there is no intracardiac shunt, the pulmonary blood flow equals the systemic blood flow. Measurement of the arterial and venous oxygen content of blood involves the sampling of blood from the pulmonary artery (low oxygen content) and from the pulmonary vein (high oxygen content). In practice, sampling of peripheral arterial blood is a surrogate for pulmonary venous blood. Determination of the oxygen consumption of the peripheral tissues is more complex. The calculation of the arterial and venous oxygen concentration of the blood is a straightforward process. Almost all oxygen in the blood is bound to hemoglobin molecules in the red blood cells. Measuring the content of hemoglobin in the blood and the percentage of saturation of hemoglobin (the oxygen saturation of the blood) is a simple process and is readily available to physicians. Using the fact that each gram of hemoglobin can carry 1.34 mL of O2, the oxygen content of the blood (either arterial or venous) can be estimated by the following formula: formula_3 Assuming a hemoglobin concentration of and an oxygen saturation of 99%, the oxygen concentration of arterial blood is approximately of O2 per L. The saturation of mixed venous blood is approximately 75% in health.
Using this value in the above equation, the oxygen concentration of mixed venous blood is approximately of O2 per L. Therefore, using the assumed Fick determination, the approximated cardiac output for an average man (1.9 m2 body surface area) is: Cardiac Output = ( O2/minute × 1.9) / ( O2/L − O2/L) = Cardiac output may also be estimated with the Fick principle using production of carbon dioxide as a marker substance. Use in renal physiology. The principle can also be used in renal physiology to calculate renal blood flow. In this context, it is not oxygen which is measured, but a marker such as para-aminohippurate. However, the principles are essentially the same. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
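The assumed Fick determination described above can be condensed into a short Python sketch. The constants 1.34 mL of O2 per gram of haemoglobin and 0.0032 mL per dL per torr come from the oxygen-content formula given earlier; the remaining inputs (oxygen consumption of 125 mL per minute per square metre, body surface area 1.9 m2, haemoglobin 15 g/dL, arterial and mixed venous saturations of 99% and 75%) are illustrative assumptions rather than values stated in this article.

def oxygen_content(hb_g_per_dl, saturation, po2_torr):
    # Oxygen content in mL O2 per dL of blood, per the formula above.
    return hb_g_per_dl * 1.34 * saturation + 0.0032 * po2_torr

def cardiac_output(vo2_ml_per_min, ca_ml_per_l, cv_ml_per_l):
    # Fick principle: cardiac output (L/min) = oxygen consumption / arteriovenous O2 difference.
    return vo2_ml_per_min / (ca_ml_per_l - cv_ml_per_l)

# Assumed illustrative inputs (see lead-in): VO2 = 125 mL/min per m2, BSA = 1.9 m2,
# Hb = 15 g/dL, arterial saturation 99% at PO2 ~100 torr, venous saturation 75% at PO2 ~40 torr.
vo2 = 125 * 1.9
ca = oxygen_content(15, 0.99, 100) * 10   # convert mL/dL to mL/L
cv = oxygen_content(15, 0.75, 40) * 10
print(round(cardiac_output(vo2, ca, cv), 1), "L/min")   # roughly 4.7 L/min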
[ { "math_id": 0, "text": "\\ce{\\dot VO2} = (CO \\times\\ C_a) - (CO \\times\\ C_v)" }, { "math_id": 1, "text": "CO = \\frac\\ce{\\dot VO2}{C_a - C_v}" }, { "math_id": 2, "text": " \\text{Cardiac Output} = \\frac {\\text{oxygen consumption}} {\\text{arteriovenous oxygen difference}} " }, { "math_id": 3, "text": " \\text{Oxygen Content of blood} = \\left [\\text{Hb} \\right] \\left ( \\text{g/dl} \\right ) \\ \\times\\ 1.34 \\left ( \\text{mL}\\ \\ce{O2} /\\text{g of Hb} \\right ) \\times\\ O_2^{\\text{saturation fraction}} +\\ 0.0032\\ \\times\\ P_\\ce{O2} (\\text{torr}) " } ]
https://en.wikipedia.org/wiki?curid=1144509
11448016
BrownBoost
Boosting algorithm BrownBoost is a boosting algorithm that may be robust to noisy datasets. BrownBoost is an adaptive version of the boost by majority algorithm. As is true for all boosting algorithms, BrownBoost is used in conjunction with other machine learning methods. BrownBoost was introduced by Yoav Freund in 2001. Motivation. AdaBoost performs well on a variety of datasets; however, it can be shown that AdaBoost does not perform well on noisy data sets. This is a result of AdaBoost's focus on examples that are repeatedly misclassified. In contrast, BrownBoost effectively "gives up" on examples that are repeatedly misclassified. The core assumption of BrownBoost is that noisy examples will be repeatedly mislabeled by the weak hypotheses and non-noisy examples will be correctly labeled frequently enough to not be "given up on." Thus only noisy examples will be "given up on," whereas non-noisy examples will contribute to the final classifier. In turn, if the final classifier is learned from the non-noisy examples, the generalization error of the final classifier may be much better than if learned from noisy and non-noisy examples. The user of the algorithm can set the amount of error to be tolerated in the training set. Thus, if the training set is noisy (say 10% of all examples are assumed to be mislabeled), the booster can be told to accept a 10% error rate. Since the noisy examples may be ignored, only the true examples will contribute to the learning process. Algorithm description. BrownBoost uses a non-convex potential loss function, thus it does not fit into the AdaBoost framework. The non-convex optimization provides a method to avoid overfitting noisy data sets. However, in contrast to boosting algorithms that analytically minimize a convex loss function (e.g. AdaBoost and LogitBoost), BrownBoost solves a system of two equations and two unknowns using standard numerical methods. The only parameter of BrownBoost (formula_0 in the algorithm) is the "time" the algorithm runs. The theory of BrownBoost states that each hypothesis takes a variable amount of time (formula_1 in the algorithm) which is directly related to the weight given to the hypothesis formula_2. The time parameter in BrownBoost is analogous to the number of iterations formula_3 in AdaBoost. A larger value of formula_0 means that BrownBoost will treat the data as if it were less noisy and therefore will give up on fewer examples. Conversely, a smaller value of formula_0 means that BrownBoost will treat the data as more noisy and give up on more examples. During each iteration of the algorithm, a hypothesis is selected with some advantage over random guessing. The weight of this hypothesis formula_2 and the "amount of time passed" formula_1 during the iteration are simultaneously solved in a system of two non-linear equations ( 1. uncorrelated hypothesis w.r.t example weights and 2. hold the potential constant) with two unknowns (weight of hypothesis formula_2 and time passed formula_1). This can be solved by bisection (as implemented in the JBoost software package) or Newton's method (as described in the original paper by Freund). Once these equations are solved, the margins of each example (formula_4 in the algorithm) and the amount of time remaining formula_5 are updated appropriately. This process is repeated until there is no time remaining. The initial potential is defined to be formula_6. Since a constraint of each iteration is that the potential be held constant, the final potential is formula_7. 
Thus the final error is "likely" to be near formula_8. However, the final potential function is not the 0–1 loss error function. For the final error to be exactly formula_8, the variance of the loss function must decrease linearly w.r.t. time to form the 0–1 loss function at the end of boosting iterations. This is not yet discussed in the literature and is not in the definition of the algorithm below. The final classifier is a linear combination of weak hypotheses and is evaluated in the same manner as most other boosting algorithms. BrownBoost learning algorithm definition.
Input: formula_9 training examples formula_10 where formula_11, and the "time" parameter formula_0.
Initialise: formula_12 (the value of formula_12 is the amount of time remaining in the game), and formula_13 formula_14, where formula_13 is the margin held by example formula_16 at iteration formula_15.
While formula_17:
Set the weight of every example: formula_18, where formula_13 is the margin of example formula_16.
Find a weak hypothesis formula_19 such that formula_20.
Find values formula_21 that satisfy the equation formula_22 (i.e. the new hypothesis is uncorrelated with the updated example weights, formula_23, where the updated weights formula_24 are computed from the new margins and the remaining time) and that hold the potential constant: formula_25, where formula_26.
Update the margins for each example: formula_27.
Update the time remaining: formula_28.
Output: formula_29
Empirical results. In preliminary experimental results with noisy datasets, BrownBoost outperformed AdaBoost's generalization error; however, LogitBoost performed as well as BrownBoost. An implementation of BrownBoost can be found in the open source software JBoost. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
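The loop defined above can be sketched in Python as follows. This is only one possible reading of the algorithm, not the reference implementation (JBoost solves the two equations by bisection rather than a generic root finder); the weak_learner interface, the fsolve starting point and the round limit are assumptions made for this sketch.

import numpy as np
from scipy.optimize import fsolve
from scipy.special import erf

def solve_alpha_t(r, s, hy, c):
    # Solve the two BrownBoost conditions for (alpha, t):
    # (1) the chosen hypothesis is uncorrelated with the updated example weights,
    # (2) the total potential is held constant.
    def conditions(v):
        alpha, t = v
        m = r + alpha * hy + s - t
        eq1 = np.sum(hy * np.exp(-m ** 2 / c))
        eq2 = np.sum(erf(m / np.sqrt(c)) - erf((r + s) / np.sqrt(c)))
        return [eq1, eq2]
    return fsolve(conditions, x0=[0.1, 0.1])   # starting point is an assumption of this sketch

def brownboost(X, y, weak_learner, c, max_rounds=200):
    s, r = c, np.zeros(len(y))                 # remaining time and per-example margins
    ensemble = []                              # list of (alpha, hypothesis) pairs
    for _ in range(max_rounds):
        if s <= 0:
            break
        w = np.exp(-(r + s) ** 2 / c)          # example weights
        h = weak_learner(X, y, w)              # assumed to return a callable mapping X -> {-1, +1}
        hy = h(X) * y
        alpha, t = solve_alpha_t(r, s, hy, c)
        r, s = r + alpha * hy, s - t           # advance margins, burn time
        ensemble.append((alpha, h))
    return lambda Xq: np.sign(sum(a * g(Xq) for a, g in ensemble))

A production implementation would additionally guard against non-convergence of the root finder and against steps with t not greater than zero, which this sketch does not attempt.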
[ { "math_id": 0, "text": "c" }, { "math_id": 1, "text": "t" }, { "math_id": 2, "text": "\\alpha" }, { "math_id": 3, "text": "T" }, { "math_id": 4, "text": "r_i(x_j)" }, { "math_id": 5, "text": "s" }, { "math_id": 6, "text": "\\frac{1}{m}\\sum_{j=1}^m 1-\\mbox{erf}(\\sqrt{c}) = 1-\\mbox{erf}(\\sqrt{c})" }, { "math_id": 7, "text": "\\frac{1}{m}\\sum_{j=1}^m 1-\\mbox{erf}(r_i(x_j)/\\sqrt{c}) = 1-\\mbox{erf}(\\sqrt{c})" }, { "math_id": 8, "text": "1-\\mbox{erf}(\\sqrt{c})" }, { "math_id": 9, "text": "m" }, { "math_id": 10, "text": "(x_{1},y_{1}),\\ldots,(x_{m},y_{m})" }, { "math_id": 11, "text": "x_{j} \\in X,\\, y_{j} \\in Y = \\{-1, +1\\}" }, { "math_id": 12, "text": "s=c" }, { "math_id": 13, "text": "r_i(x_j) = 0" }, { "math_id": 14, "text": "\\forall j" }, { "math_id": 15, "text": "i" }, { "math_id": 16, "text": "x_j" }, { "math_id": 17, "text": "s > 0" }, { "math_id": 18, "text": "W_{i}(x_j) = e^{- \\frac{(r_i(x_j)+s)^2}{c}}" }, { "math_id": 19, "text": "h_i : X \\to \\{-1,+1\\}" }, { "math_id": 20, "text": "\\sum_j W_i(x_j) h_i(x_j) y_j > 0" }, { "math_id": 21, "text": "\\alpha, t" }, { "math_id": 22, "text": "\\sum_j h_i(x_j) y_j e^{-\\frac{(r_i(x_j)+\\alpha h_i(x_j) y_j + s - t)^2}{c}} = 0" }, { "math_id": 23, "text": "E_{W_{i+1}}[h_i(x_j) y_j]=0" }, { "math_id": 24, "text": "W_{i+1} = \\exp\\left(\\frac{\\cdots}{\\cdots}\\right)" }, { "math_id": 25, "text": " \\sum \\left(\\Phi\\left(r_i(x_j) + \\alpha h(x_j) y_j + s - t\\right) - \\Phi\\left( r_i(x_j) + s \\right) \\right) = 0 " }, { "math_id": 26, "text": " \\Phi(z) = 1-\\mbox{erf}(z/\\sqrt{c}) " }, { "math_id": 27, "text": "r_{i+1}(x_j) = r_i(x_j) + \\alpha h(x_j) y_j" }, { "math_id": 28, "text": "s = s - t" }, { "math_id": 29, "text": "H(x) = \\textrm{sign}\\left( \\sum_i \\alpha_{i} h_{i}(x) \\right)" } ]
https://en.wikipedia.org/wiki?curid=11448016
11450209
Antiparallel lines
In geometry, two lines formula_0 and formula_1 are antiparallel with respect to a given line formula_2 if they each make congruent angles with formula_2 in opposite senses. More generally, lines formula_0 and formula_1 are "antiparallel" with respect to another pair of lines formula_3 and formula_4 if they are antiparallel with respect to the angle bisector of formula_3 and formula_5 In any cyclic quadrilateral, any two opposite sides are antiparallel with respect to the other two sides. Conic sections. In an oblique cone, there are exactly two families of parallel planes whose sections with the cone are circles. One of these families is parallel to the fixed generating circle and the other is called by Apollonius the subcontrary sections. If one looks at the triangles formed by the diameters of the circular sections (both families) and the vertex of the cone (triangles ABC and ADB), they are all similar. That is, if CB and BD are antiparallel with respect to lines AB and AC, then all sections of the cone parallel to either one of these circles will be circles. This is Book 1, Proposition 5 in Apollonius. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "l_1" }, { "math_id": 1, "text": "l_2" }, { "math_id": 2, "text": "m" }, { "math_id": 3, "text": "m_1" }, { "math_id": 4, "text": "m_2" }, { "math_id": 5, "text": "m_2." } ]
https://en.wikipedia.org/wiki?curid=11450209
11451347
10-Formyltetrahydrofolate
&lt;templatestyles src="Chembox/styles.css"/&gt; Chemical compound 10-Formyltetrahydrofolate (10-CHO-THF) is a form of tetrahydrofolate that acts as a donor of formyl groups in anabolism. In these reactions 10-CHO-THF is used as a substrate in formyltransferase reactions. Functions. Two equivalents of 10-CHO-THF are required in purine biosynthesis through the pentose phosphate pathway, where 10-CHO-THF is a substrate for phosphoribosylaminoimidazolecarboxamide formyltransferase. 10-CHO-THF is required for the formylation of methionyl-tRNA formyltransferase to give fMet-tRNA. Formation from methenyltetrahydrofolate. 10-CHO-THF is produced from methylenetetrahydrofolate (CH2H4F) via a two step process. The first step generates 5,10-methenyltetrahydrofolate: CH2H4F + NAD+ formula_0 CH2H2F + NADH + H+ In the second step 5,10-methenyltetrahydrofolate undergoes hydrolysis: CH2H2F + H2O formula_0 CHO-H4F + The latter is equivalently written: 5,10-methenyltetrahydrofolate + H2O formula_0 10-formyltetrahydrofolate 10-CHO-THF is also produced by the reaction ATP + formate + tetrahydrofolate formula_0 ADP + phosphate + 10-formyltetrahydrofolate This reaction is catalyzed by formate-tetrahydrofolate ligase. It can be converted back into tetrahydrofolate (THF) by formyltetrahydrofolate dehydrogenase or THF and formate by formyltetrahydrofolate deformylase. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=11451347
1145374
Snubber
Device used to suppress some phenomenon A snubber is a device used to suppress ("snub") a phenomenon such as voltage transients in electrical systems, pressure transients in fluid systems (caused by for example water hammer) or excess force or rapid movement in mechanical systems. Electrical systems. Snubbers are frequently used in electrical systems with an inductive load where the sudden interruption of current flow leads to a large counter-electromotive force: a rise in voltage across the current switching device that opposes the change in current, in accordance with Faraday's law. This transient can be a source of electromagnetic interference (EMI) in other circuits. Additionally, if the voltage generated across the device is beyond what the device is intended to tolerate, it may damage or destroy it. The snubber provides a short-term alternative current path around the current switching device so that the inductive element may be safely discharged. Inductive elements are often unintentional, arising from the current loops implied by physical circuitry like long and/or tortuous wires. While current switching is everywhere, snubbers will generally only be required where a major current path is switched, such as in power supplies. Snubbers are also often used to prevent arcing across the contacts of relays and switches, or electrical interference, or the welding of the contacts that can occur (see also arc suppression). Resistor-capacitor (RC). A simple RC snubber uses a small resistor (R) in series with a small capacitor (C). This combination can be used to suppress the rapid rise in voltage across a thyristor, preventing the erroneous turn-on of the thyristor; it does this by limiting the rate of rise in voltage ( formula_0 ) across the thyristor to a value which will not trigger it. An appropriately designed RC snubber can be used with either DC or AC loads. This sort of snubber is commonly used with inductive loads such as electric motors. The voltage across a capacitor cannot change instantaneously, so a decreasing transient current will flow through it for a fraction of a second, allowing the voltage across the switch to increase more slowly when the switch is opened. Determination of voltage rating can be difficult owing to the nature of transient waveforms, and may be defined simply by the power rating of the snubber components and the application. RC snubbers can be made discretely and are also built as a single component (see also Boucherot cell). Diodes. When the current flowing is DC, a simple rectifier diode is often employed as a snubber. The snubber diode is wired in parallel with an inductive load (such as a relay coil or electric motor). The diode is installed so that it does not conduct under normal conditions. When the external driving current is interrupted, the inductor current flows instead through the diode. The stored energy of the inductor is then gradually dissipated by the diode voltage drop and the resistance of the inductor itself. One disadvantage of using a simple rectifier diode as a snubber is that the diode allows current to continue flowing for some time, causing the inductor to remain active for slightly longer than desired. When such a snubber is utilized in a relay, this effect may cause a significant delay in the "drop out", or disengagement, of the actuator. The diode must immediately enter into forward conduction mode as the driving current is interrupted. 
Most ordinary diodes, even "slow" power silicon diodes, are able to turn on very quickly, in contrast to their slow reverse recovery time. These are sufficient for snubbing electromechanical devices such as relays and motors. In high-speed cases, where the switching is faster than 10 nanoseconds, such as in certain switching power regulators, "fast", "ultrafast", or Schottky diodes may be required. Resistor-capacitor-diode. More sophisticated designs use a diode with an RC network. Solid-state devices. In some DC circuits, a varistor made of inexpensive metal oxide, called a metal oxide varistor (MOV), is used. They may be unipolar or bipolar, like two inverse-series silicon Zener diodes; they are prone to wear out after about a dozen maximum-rated joules of energy absorption (such as from lightning surges), but are suitable for lower-energy transients. Now with lower series resistance (Rs) in semiconductors they are generally called transient voltage suppressors (TVS), or surge protection devices (SPD). Transient voltage suppressors (TVS) may be used instead of the simple diode. The coil diode clamp makes the relay turn off slower ( formula_1 ) and thus increases contact arcing if used with a motor load, which also needs a snubber. The diode clamp works well for coasting a uni-directional motor to a stop, but for bi-directional motors, a bipolar TVS is used. A higher voltage Zener-like TVS may make the relay open faster than it would with a simple rectifier diode clamp, as R is higher while the voltage rises to the clamp level. A Zener diode connected to ground will protect against positive transients that go over the Zener's breakdown voltage, and will protect against negative transients greater than a normal forward diode drop. Transient-voltage-suppression diodes are like silicon controlled rectifiers (SCRs) which trigger from overvoltage then clamp like Darlington transistors for lower voltage drop over a longer time period. In AC circuits a rectifier diode snubber cannot be used; if a simple RC snubber is not adequate a more complex bidirectional snubber design must be used. Mechanical and hydraulic systems. Snubbers for pipes and equipment are used to control movement during abnormal conditions such as earthquakes, turbine trips, safety valve closure, relief valve closure, or hydraulic fuse closure. Snubbers allow for free thermal movement of a component during regular conditions, but restrain the component in irregular conditions. A hydraulic snubber allows for pipe deflection under normal operating conditions. When subjected to an impulse load, the snubber becomes activated and acts as a restraint in order to restrict pipe movement. A mechanical snubber uses mechanical means to provide the restraint force. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
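The trade-off described above between a plain diode clamp and a higher-voltage TVS clamp can be illustrated with a rough calculation. All component values below (coil inductance and resistance, drop-out threshold, clamp voltage) are assumed for illustration, and the models are idealised: with a diode the coil current decays exponentially with the time constant formula_1, while a clamp held at a fixed voltage removes the stored current roughly linearly.

import math

# Assumed illustrative values for a small relay coil driven from 12 V:
L = 0.5             # coil inductance, henries
R = 200.0           # coil resistance, ohms
I0 = 12.0 / R       # steady-state coil current, amperes
I_drop = 0.3 * I0   # assumed drop-out current threshold

# Simple rectifier diode clamp: current decays exponentially with tau = L / R.
tau = L / R
t_diode = tau * math.log(I0 / I_drop)

# Higher-voltage TVS clamp (assumed 48 V): the stored current is ramped down
# at roughly di/dt = -V_clamp / L, ignoring the coil resistance for simplicity.
V_clamp = 48.0
t_tvs = L * (I0 - I_drop) / V_clamp

print(f"diode clamp: ~{t_diode * 1e3:.2f} ms, TVS clamp: ~{t_tvs * 1e3:.2f} ms")

With these assumed numbers the TVS releases the relay several times faster than the plain diode, which is the behaviour the paragraph above describes qualitatively.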
[ { "math_id": 0, "text": "dV/dt" }, { "math_id": 1, "text": "T=L/R" } ]
https://en.wikipedia.org/wiki?curid=1145374
11454768
Bowyer–Watson algorithm
In computational geometry, the Bowyer–Watson algorithm is a method for computing the Delaunay triangulation of a finite set of points in any number of dimensions. The algorithm can also be used to obtain a Voronoi diagram of the points, which is the dual graph of the Delaunay triangulation. Description. The Bowyer–Watson algorithm is an incremental algorithm. It works by adding points, one at a time, to a valid Delaunay triangulation of a subset of the desired points. After every insertion, any triangles whose circumcircles contain the new point are deleted, leaving a star-shaped polygonal hole which is then re-triangulated using the new point. By using the connectivity of the triangulation to efficiently locate triangles to remove, the algorithm can take "O(N log N)" operations to triangulate N points, although special degenerate cases exist where this goes up to "O(N2)". History. The algorithm is sometimes known just as the Bowyer Algorithm or the Watson Algorithm. Adrian Bowyer and David Watson devised it independently of each other at the same time, and each published a paper on it in the same issue of "The Computer Journal" (see below). Pseudocode. The following pseudocode describes a basic implementation of the Bowyer-Watson algorithm. Its time complexity is formula_0. Efficiency can be improved in a number of ways. For example, the triangle connectivity can be used to locate the triangles which contain the new point in their circumcircle, without having to check all of the triangles - by doing so we can decrease time complexity to formula_1. Pre-computing the circumcircles can save time at the expense of additional memory usage. And if the points are uniformly distributed, sorting them along a space filling Hilbert curve prior to insertion can also speed point location.
function BowyerWatson (pointList)
   // pointList is a set of coordinates defining the points to be triangulated
   triangulation := empty triangle mesh data structure
   add super-triangle to triangulation // must be large enough to completely contain all the points in pointList
   for each point in pointList do // add all the points one at a time to the triangulation
      badTriangles := empty set
      for each triangle in triangulation do // first find all the triangles that are no longer valid due to the insertion
         if point is inside circumcircle of triangle
            add triangle to badTriangles
      polygon := empty set
      for each triangle in badTriangles do // find the boundary of the polygonal hole
         for each edge in triangle do
            if edge is not shared by any other triangles in badTriangles
               add edge to polygon
      for each triangle in badTriangles do // remove them from the data structure
         remove triangle from triangulation
      for each edge in polygon do // re-triangulate the polygonal hole
         newTri := form a triangle from edge to point
         add newTri to triangulation
   for each triangle in triangulation // done inserting points, now clean up
      if triangle contains a vertex from original super-triangle
         remove triangle from triangulation
   return triangulation
References. &lt;templatestyles src="Reflist/styles.css" /&gt;
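For readers who want something executable, the following Python sketch follows the pseudocode above directly. It rescans every triangle for each inserted point, so it reflects the basic formula_0 implementation rather than the connectivity-accelerated variant, and it assumes the input points are in general position (no duplicates, no three collinear points); the super-triangle scaling factor of 20 is an arbitrary choice for the sketch.

def _circumcircle_contains(tri, p):
    # True if point p lies strictly inside the circumcircle of triangle tri.
    (ax, ay), (bx, by), (cx, cy) = tri
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax*ax + ay*ay)*(by - cy) + (bx*bx + by*by)*(cy - ay) + (cx*cx + cy*cy)*(ay - by)) / d
    uy = ((ax*ax + ay*ay)*(cx - bx) + (bx*bx + by*by)*(ax - cx) + (cx*cx + cy*cy)*(bx - ax)) / d
    r2 = (ax - ux) ** 2 + (ay - uy) ** 2
    return (p[0] - ux) ** 2 + (p[1] - uy) ** 2 < r2

def _edges(tri):
    a, b, c = tri
    return [frozenset((a, b)), frozenset((b, c)), frozenset((c, a))]

def bowyer_watson(points):
    # points: list of (x, y) tuples; returns a list of Delaunay triangles (3-tuples of points).
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    span = max(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    midx, midy = (max(xs) + min(xs)) / 2.0, (max(ys) + min(ys)) / 2.0
    # Super-triangle comfortably containing every input point (the factor 20 is arbitrary).
    super_tri = ((midx - 20 * span, midy - span),
                 (midx, midy + 20 * span),
                 (midx + 20 * span, midy - span))
    triangulation = [super_tri]
    for point in points:
        bad = [t for t in triangulation if _circumcircle_contains(t, point)]
        # Boundary of the polygonal hole: edges belonging to exactly one bad triangle.
        polygon = [e for t in bad for e in _edges(t)
                   if sum(e in _edges(other) for other in bad) == 1]
        triangulation = [t for t in triangulation if t not in bad]
        for edge in polygon:
            a, b = tuple(edge)
            triangulation.append((a, b, point))
    # Finally discard every triangle that still uses a super-triangle vertex.
    return [t for t in triangulation if not any(v in super_tri for v in t)]

# A triangle with one interior point: the interior point gets joined to each side.
print(len(bowyer_watson([(0, 0), (2, 0), (1, 2), (1, 0.8)])))   # prints 3

The strict "less than" in the circumcircle test means exactly cocircular points are not treated as conflicting, which is one common way of handling that degenerate case.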
[ { "math_id": 0, "text": "O(n^2)" }, { "math_id": 1, "text": "O(n \\log n)" } ]
https://en.wikipedia.org/wiki?curid=11454768
1145733
BIBO stability
When a system's outputs are bounded for every bounded input In signal processing, specifically control theory, bounded-input, bounded-output (BIBO) stability is a form of stability for signals and systems that take inputs. If a system is BIBO stable, then the output will be bounded for every input to the system that is bounded. A signal is bounded if there is a finite value formula_0 such that the signal magnitude never exceeds formula_1, that is For discrete-time signals: formula_2 For continuous-time signals: formula_3 Time-domain condition for linear time-invariant systems. Continuous-time necessary and sufficient condition. For a continuous time linear time-invariant (LTI) system, the condition for BIBO stability is that the impulse response, formula_4 , be absolutely integrable, i.e., its L1 norm exists. formula_5 Discrete-time sufficient condition. For a discrete time LTI system, the condition for BIBO stability is that the impulse response be absolutely summable, i.e., its formula_6 norm exists. formula_7 Proof of sufficiency. Given a discrete time LTI system with impulse response formula_8 the relationship between the input formula_9 and the output formula_10 is formula_11 where formula_12 denotes convolution. Then it follows by the definition of convolution formula_13 Let formula_14 be the maximum value of formula_15, i.e., the formula_16-norm. formula_17 formula_18 (by the triangle inequality) formula_19 If formula_20 is absolutely summable, then formula_21 and formula_22 So if formula_20 is absolutely summable and formula_23 is bounded, then formula_24 is bounded as well because formula_25. The proof for continuous-time follows the same arguments. Frequency-domain condition for linear time-invariant systems. Continuous-time signals. For a rational and continuous-time system, the condition for stability is that the region of convergence (ROC) of the Laplace transform includes the imaginary axis. When the system is causal, the ROC is the open region to the right of a vertical line whose abscissa is the real part of the "largest pole", or the pole that has the greatest real part of any pole in the system. The real part of the largest pole defining the ROC is called the abscissa of convergence. Therefore, all poles of the system must be in the strict left half of the s-plane for BIBO stability. This stability condition can be derived from the above time-domain condition as follows: formula_26 where formula_27 and formula_28 The region of convergence must therefore include the imaginary axis. Discrete-time signals. For a rational and discrete time system, the condition for stability is that the region of convergence (ROC) of the z-transform includes the unit circle. When the system is causal, the ROC is the open region outside a circle whose radius is the magnitude of the pole with largest magnitude. Therefore, all poles of the system must be inside the unit circle in the z-plane for BIBO stability. This stability condition can be derived in a similar fashion to the continuous-time derivation: formula_29 where formula_30 and formula_31. The region of convergence must therefore include the unit circle. Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
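As a numerical illustration of the discrete-time condition, take the first-order impulse response h[n] = a^n for n ≥ 0 (an assumed example, not one taken from the text above): the absolute sum converges to 1/(1 − |a|) when |a| < 1, so the system is BIBO stable exactly in that case. A short Python check:

def abs_sum(a, terms=200):
    # Partial sum of |h[n]| for h[n] = a**n, n >= 0, truncated after `terms` samples.
    return sum(abs(a) ** n for n in range(terms))

for a in (0.5, 0.9, 1.0, 1.1):
    verdict = "BIBO stable (absolutely summable)" if abs(a) < 1 else "not BIBO stable"
    print(f"a = {a}: partial sum = {abs_sum(a):.4g} -> {verdict}")
# For |a| < 1 the partial sum is already close to 1 / (1 - |a|); for |a| >= 1 it keeps
# growing as more terms are added, so no finite L1 norm exists.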
[ { "math_id": 0, "text": "B > 0" }, { "math_id": 1, "text": "B" }, { "math_id": 2, "text": "\\exists B \\forall n(\\ |y[n]| \\leq B) \\quad n \\in \\mathbb{Z}" }, { "math_id": 3, "text": "\\exists B \\forall t(\\ |y(t)| \\leq B) \\quad t \\in \\mathbb{R}" }, { "math_id": 4, "text": " h(t)" }, { "math_id": 5, "text": " \\int_{-\\infty}^\\infty \\left|h(t)\\right|\\,\\mathord{\\operatorname{d}}t = \\| h \\|_1 \\in \\mathbb{R}" }, { "math_id": 6, "text": "\\ell^1" }, { "math_id": 7, "text": "\\ \\sum_{n=-\\infty}^\\infty |h[n]| = \\| h \\|_1 \\in \\mathbb{R}" }, { "math_id": 8, "text": "\\ h[n]" }, { "math_id": 9, "text": "\\ x[n]" }, { "math_id": 10, "text": "\\ y[n]" }, { "math_id": 11, "text": "\\ y[n] = h[n] * x[n]" }, { "math_id": 12, "text": "*" }, { "math_id": 13, "text": "\\ y[n] = \\sum_{k=-\\infty}^\\infty h[k] x[n-k]" }, { "math_id": 14, "text": "\\| x \\|_{\\infty}" }, { "math_id": 15, "text": "\\ |x[n]|" }, { "math_id": 16, "text": "L_{\\infty}" }, { "math_id": 17, "text": "\\left|y[n]\\right| = \\left|\\sum_{k=-\\infty}^\\infty h[n-k] x[k]\\right|" }, { "math_id": 18, "text": "\\le \\sum_{k=-\\infty}^\\infty \\left|h[n-k]\\right| \\left|x[k]\\right|" }, { "math_id": 19, "text": "\n\\begin{align}\n& \\le \\sum_{k=-\\infty}^\\infty \\left|h[n-k]\\right| \\| x \\|_\\infty \\\\\n& = \\| x \\|_{\\infty} \\sum_{k=-\\infty}^\\infty \\left|h[n-k]\\right| \\\\\n& = \\| x \\|_{\\infty} \\sum_{k=-\\infty}^\\infty \\left|h[k]\\right|\n\\end{align}\n" }, { "math_id": 20, "text": "h[n]" }, { "math_id": 21, "text": "\\sum_{k=-\\infty}^{\\infty}{\\left|h[k]\\right|} = \\| h \\|_1 \\in \\mathbb{R}" }, { "math_id": 22, "text": "\\| x \\|_\\infty \\sum_{k=-\\infty}^\\infty \\left|h[k]\\right| = \\| x \\|_\\infty \\| h \\|_1" }, { "math_id": 23, "text": "\\left|x[n]\\right|" }, { "math_id": 24, "text": "\\left|y[n]\\right|" }, { "math_id": 25, "text": "\\| x \\|_{\\infty} \\| h \\|_1 \\in \\mathbb{R}" }, { "math_id": 26, "text": "\n\\begin{align}\n\\int_{-\\infty}^\\infty \\left|h(t)\\right| \\, dt\n& = \\int_{-\\infty}^\\infty \\left|h(t)\\right| \\left| e^{-j \\omega t }\\right| \\, dt \\\\\n& = \\int_{-\\infty}^\\infty \\left|h(t) (1 \\cdot e)^{-j \\omega t} \\right| \\, dt \\\\\n& = \\int_{-\\infty}^\\infty \\left|h(t) (e^{\\sigma + j \\omega})^{- t} \\right| \\, dt \\\\\n& = \\int_{-\\infty}^\\infty \\left|h(t) e^{-s t} \\right| \\, dt\n\\end{align}\n" }, { "math_id": 27, "text": "s = \\sigma + j \\omega" }, { "math_id": 28, "text": "\\operatorname{Re}(s) = \\sigma = 0." }, { "math_id": 29, "text": "\n\\begin{align}\n\\sum_{n = -\\infty}^\\infty \\left|h[n]\\right|\n& = \\sum_{n = -\\infty}^\\infty \\left|h[n]\\right| \\left| e^{-j \\omega n} \\right| \\\\\n& = \\sum_{n = -\\infty}^\\infty \\left|h[n] (1 \\cdot e)^{-j \\omega n} \\right| \\\\\n& =\\sum_{n = -\\infty}^\\infty \\left|h[n] (r e^{j \\omega})^{-n} \\right| \\\\\n& = \\sum_{n = -\\infty}^\\infty \\left|h[n] z^{- n} \\right|\n\\end{align}\n" }, { "math_id": 30, "text": "z = r e^{j \\omega}" }, { "math_id": 31, "text": "r = |z| = 1" } ]
https://en.wikipedia.org/wiki?curid=1145733
1145820
Online codes
In computer science, online codes are an example of rateless erasure codes. These codes can encode a message into a number of symbols such that knowledge of any fraction of them allows one to recover the original message (with high probability). "Rateless" codes produce an arbitrarily large number of symbols which can be broadcast until the receivers have enough symbols. The online encoding algorithm consists of several phases. First the message is split into "n" fixed size message blocks. Then the "outer encoding" is an erasure code which produces auxiliary blocks that are appended to the message blocks to form a composite message. From this the inner encoding generates check blocks. Upon receiving a certain number of check blocks some fraction of the composite message can be recovered. Once enough has been recovered the outer decoding can be used to recover the original message. Detailed discussion. Online codes are parameterised by the block size and two scalars, "q" and "ε". The authors suggest "q"=3 and ε=0.01. These parameters set the balance between the complexity and performance of the encoding. A message of "n" blocks can be recovered, with high probability, from (1+3ε)"n" check blocks. The probability of failure is (ε/2)q+1. Outer encoding. Any erasure code may be used as the outer encoding, but the author of online codes suggest the following. For each message block, pseudo-randomly choose "q" auxiliary blocks (from a total of 0.55"q"ε"n" auxiliary blocks) to attach it to. Each auxiliary block is then the XOR of all the message blocks which have been attached to it. Inner encoding. The inner encoding takes the composite message and generates a stream of check blocks. A check block is the XOR of all the blocks from the composite message that it is attached to. The "degree" of a check block is the number of blocks that it is attached to. The degree is determined by sampling a random distribution, "p", which is defined as: formula_0 formula_1 formula_2 for formula_3 Once the degree of the check block is known, the blocks from the composite message which it is attached to are chosen uniformly. Decoding. Obviously the decoder of the inner stage must hold check blocks which it cannot currently decode. A check block can only be decoded when all but one of the blocks which it is attached to are known. The graph to the left shows the progress of an inner decoder. The x-axis plots the number of check blocks received and the dashed line shows the number of check blocks which cannot currently be used. This climbs almost linearly at first as many check blocks with degree &gt; 1 are received but unusable. At a certain point, some of the check blocks are suddenly usable, resolving more blocks which then causes more check blocks to be usable. Very quickly the whole file can be decoded. As the graph also shows the inner decoder falls just shy of decoding everything for a little while after having received "n" check blocks. The outer encoding ensures that a few elusive blocks from the inner decoder are not an issue, as the file can be recovered without them.
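The inner-encoding degree distribution defined above is straightforward to compute and sample. The Python sketch below builds the distribution from the stated formulas and then produces one check block as the XOR of a uniformly chosen set of composite-message blocks; representing blocks as integers and XORing them with ^ is a simplification chosen for the sketch, as is the guard for messages shorter than the sampled degree.

import math
import random

def degree_distribution(eps):
    # Returns (F, [p_1, ..., p_F]) for the online-codes inner encoding.
    F = math.ceil(math.log(eps ** 2 / 4) / math.log(1 - eps / 2))
    p1 = 1 - (1 + 1 / F) / (1 + eps)
    p = [p1] + [(1 - p1) * F / ((F - 1) * i * (i - 1)) for i in range(2, F + 1)]
    return F, p      # the probabilities sum to 1, up to floating-point rounding

def make_check_block(composite, p, rng=random):
    # One check block: the XOR of `degree` blocks chosen uniformly from the composite message.
    degree = rng.choices(range(1, len(p) + 1), weights=p, k=1)[0]
    degree = min(degree, len(composite))       # guard for toy-sized messages
    chosen = rng.sample(range(len(composite)), degree)
    block = 0
    for index in chosen:
        block ^= composite[index]
    return chosen, block

F, p = degree_distribution(0.01)
composite = [random.getrandbits(32) for _ in range(100)]   # toy composite message of 100 blocks
print(F, round(sum(p), 6), make_check_block(composite, p)[0])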
[ { "math_id": 0, "text": "F=\\left\\lceil\\frac{\\ln(\\epsilon^2/4)}{\\ln(1-\\epsilon/2)}\\right\\rceil" }, { "math_id": 1, "text": "p_1=1-\\frac{1+1/F}{1+\\epsilon}" }, { "math_id": 2, "text": "p_i=\\frac{(1-p_1)F}{(F-1)i(i-1)}" }, { "math_id": 3, "text": "2\\le i\\le F" } ]
https://en.wikipedia.org/wiki?curid=1145820
1145848
Pairing function
Function uniquely mapping two numbers into a single number In mathematics, a pairing function is a process to uniquely encode two natural numbers into a single natural number. Any pairing function can be used in set theory to prove that integers and rational numbers have the same cardinality as natural numbers. Definition. A pairing function is a bijection formula_0 More generally, a pairing function on a set "A" is a function that maps each pair of elements from "A" into an element of "A", such that any two pairs of elements of "A" are associated with different elements of "A," or a bijection from formula_1 to "A". Hopcroft and Ullman pairing function. Hopcroft and Ullman (1979) define the following pairing function: formula_2, where formula_3. This is the same as the Cantor pairing function below, shifted to exclude 0 (i.e., formula_4, formula_5, and formula_6). Cantor pairing function. The Cantor pairing function is a primitive recursive pairing function formula_7 defined by formula_8 where formula_9. It can also be expressed as formula_10. It is also strictly monotonic w.r.t. each argument, that is, for all formula_11, if formula_12, then formula_13; similarly, if formula_14, then formula_15. The statement that this is the only quadratic pairing function is known as the Fueter–Pólya theorem. Whether this is the only polynomial pairing function is still an open question. When we apply the pairing function to "k"1 and "k"2 we often denote the resulting number as ⟨"k"1, "k"2⟩. This definition can be inductively generalized to the formula_16 for formula_17 as formula_18 with the base case defined above for a pair: formula_19 Inverting the Cantor pairing function. Let formula_20 be an arbitrary natural number. We will show that there exist unique values formula_21 such that formula_22 and hence that the function "π(x, y)" is invertible. It is helpful to define some intermediate values in the calculation: formula_23 formula_24 formula_25 where "t" is the triangle number of "w". If we solve the quadratic equation formula_26 for "w" as a function of "t", we get formula_27 which is a strictly increasing and continuous function when "t" is non-negative real. Since formula_28 we get that formula_29 and thus formula_30 where ⌊ ⌋ is the floor function. So to calculate "x" and "y" from "z", we do: formula_31 formula_32 formula_33 formula_34 Since the Cantor pairing function is invertible, it must be one-to-one and onto. Examples. To calculate "π"(47, 32): 47 + 32 = 79, 79 + 1 = 80, 79 × 80 = 6320, 6320 ÷ 2 = 3160, 3160 + 32 = 3192, so "π"(47, 32) = 3192. To find "x" and "y" such that "π"("x", "y") = 1432: 8 × 1432 = 11456, 11456 + 1 = 11457, √11457 ≈ 107.037, 107.037 − 1 = 106.037, 106.037 ÷ 2 = 53.019, ⌊53.019⌋ = 53, so "w" = 53; 53 + 1 = 54, 53 × 54 = 2862, 2862 ÷ 2 = 1431, so "t" = 1431; 1432 − 1431 = 1, so "y" = 1; 53 − 1 = 52, so "x" = 52; thus "π"(52, 1) = 1432. Derivation. The graphical shape of Cantor's pairing function, a diagonal progression, is a standard trick in working with infinite sequences and countability. The algebraic rules of this diagonal-shaped function can verify its validity for a range of polynomials, of which a quadratic will turn out to be the simplest, using the method of induction. Indeed, this same technique can also be followed to try and derive any number of other functions for any variety of schemes for enumerating the plane. A pairing function can usually be defined inductively – that is, given the "n"th pair, what is the ("n"+1)th pair?
The way Cantor's function progresses diagonally across the plane can be expressed as formula_35. The function must also define what to do when it hits the boundaries of the 1st quadrant – Cantor's pairing function resets back to the x-axis to resume its diagonal progression one step further out, or algebraically: formula_36. Also we need to define the starting point, what will be the initial step in our induction method: "π"(0, 0) = 0. Assume that there is a quadratic 2-dimensional polynomial that can fit these conditions (if there were not, one could just repeat by trying a higher-degree polynomial). The general form is then formula_37. Plug in our initial and boundary conditions to get "f" = 0 and: formula_38, so we can match our "k" terms to get "b" = "a", "d" = 1 − "a", and "e" = 1 + "a". So every parameter can be written in terms of "a" except for "c", and we have a final equation, our diagonal step, that will relate them: formula_39 Expand and match terms again to get fixed values for "a" and "c", and thus all parameters: "a" = "b" = "d" = 1/2, "c" = 1, "e" = 3/2, "f" = 0. Therefore formula_40 is the Cantor pairing function, and we also demonstrated through the derivation that this satisfies all the conditions of induction. Other pairing functions. The function formula_41 is a pairing function. In 1990, Regan proposed the first known pairing function that is computable in linear time and with constant space (as the previously known examples can only be computed in linear time if multiplication can be too, which is doubtful). In fact, both this pairing function and its inverse can be computed with finite-state transducers that run in real time. In the same paper, the author proposed two more monotone pairing functions that can be computed online in linear time and with logarithmic space; the first can also be computed offline with zero space. In 2001, Pigeon proposed a pairing function based on bit-interleaving, defined recursively as: formula_42 where formula_43 and formula_44 are the least significant bits of "i" and "j" respectively. In 2006, Szudzik proposed a "more elegant" pairing function defined by the expression: formula_45 Which can be unpaired using the expression: formula_46 (Qualitatively, it assigns consecutive numbers to pairs along the edges of squares.) This pairing function orders SK combinator calculus expressions by depth. This method is the mere application to formula_47 of the idea, found in most textbooks on Set Theory, used to establish formula_48 for any infinite cardinal formula_49 in ZFC. Define on formula_50 the binary relation formula_51 formula_52 is then shown to be a well-ordering such that every element has formula_53 predecessors, which implies that formula_48. It follows that formula_54 is isomorphic to formula_55 and the pairing function above is nothing more than the enumeration of integer couples in increasing order. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
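A direct Python transcription of the Cantor pairing function and of the inversion procedure above, checked against the two worked examples from the Examples section:

from math import isqrt

def cantor_pair(x, y):
    # pi(x, y) = (x + y)(x + y + 1)/2 + y
    return (x + y) * (x + y + 1) // 2 + y

def cantor_unpair(z):
    # w = floor((sqrt(8z + 1) - 1) / 2), t = w(w + 1)/2, y = z - t, x = w - y
    w = (isqrt(8 * z + 1) - 1) // 2
    t = w * (w + 1) // 2
    y = z - t
    return w - y, y

assert cantor_pair(47, 32) == 3192          # first worked example
assert cantor_unpair(1432) == (52, 1)       # second worked example
assert all(cantor_unpair(cantor_pair(x, y)) == (x, y) for x in range(50) for y in range(50))

Using the integer square root keeps the inversion exact for arbitrarily large z, avoiding the floating-point rounding that the decimal walk-through above glosses over.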
[ { "math_id": 0, "text": "\\pi:\\mathbb{N} \\times \\mathbb{N} \\to \\mathbb{N}." }, { "math_id": 1, "text": "A^2" }, { "math_id": 2, "text": "\\langle i, j\\rangle := \\frac{1}{2}(i+j-2)(i+j-1) + i" }, { "math_id": 3, "text": "i, j\\in\\{1, 2, 3, \\dots \\}" }, { "math_id": 4, "text": "i=k_2+1" }, { "math_id": 5, "text": "j=k_1+1" }, { "math_id": 6, "text": "\\langle i, j\\rangle - 1 = \\pi(k_2,k_1)" }, { "math_id": 7, "text": "\\pi:\\mathbb{N} \\times \\mathbb{N} \\to \\mathbb{N}" }, { "math_id": 8, "text": "\\pi(k_1,k_2) := \\frac{1}{2}(k_1 + k_2)(k_1 + k_2 + 1)+k_2" }, { "math_id": 9, "text": "k_1, k_2\\in\\{0, 1, 2, 3, \\dots\\}" }, { "math_id": 10, "text": "\\pi(x, y) := \\frac{x^2 + x + 2xy + 3y + y^2}{2}" }, { "math_id": 11, "text": "k_1, k_1', k_2, k_2' \\in \\mathbb{N}" }, { "math_id": 12, "text": "k_1 < k_{1}'" }, { "math_id": 13, "text": "\\pi(k_1, k_2) < \\pi(k_1', k_2)" }, { "math_id": 14, "text": "k_2 < k_{2}'" }, { "math_id": 15, "text": "\\pi(k_1, k_2) < \\pi(k_1, k_2')" }, { "math_id": 16, "text": "\\pi^{(n)}:\\mathbb{N}^n \\to \\mathbb{N}" }, { "math_id": 17, "text": "n > 2" }, { "math_id": 18, "text": "\\pi^{(n)}(k_1, \\ldots, k_{n-1}, k_n) := \\pi ( \\pi^{(n-1)}(k_1, \\ldots, k_{n-1}) , k_n)" }, { "math_id": 19, "text": "\\pi^{(2)}(k_1,k_2) := \\pi(k_1,k_2)." }, { "math_id": 20, "text": "z \\in \\mathbb{N}" }, { "math_id": 21, "text": "x, y \\in \\mathbb{N}" }, { "math_id": 22, "text": " z = \\pi(x, y) = \\frac{(x + y + 1)(x + y)}{2} + y " }, { "math_id": 23, "text": " w = x + y \\!" }, { "math_id": 24, "text": " t = \\frac{1}{2}w(w + 1) = \\frac{w^2 + w}{2} " }, { "math_id": 25, "text": " z = t + y \\!" }, { "math_id": 26, "text": " w^2 + w - 2t = 0 \\!" }, { "math_id": 27, "text": " w = \\frac{\\sqrt{8t + 1} - 1}{2} " }, { "math_id": 28, "text": " t \\leq z = t + y < t + (w + 1) = \\frac{(w + 1)^2 + (w + 1)}{2} " }, { "math_id": 29, "text": " w \\leq \\frac{\\sqrt{8z + 1} - 1}{2} < w + 1 " }, { "math_id": 30, "text": " w = \\left\\lfloor \\frac{\\sqrt{8z + 1} - 1}{2} \\right\\rfloor. " }, { "math_id": 31, "text": " w = \\left\\lfloor \\frac{\\sqrt{8z + 1} - 1}{2} \\right\\rfloor " }, { "math_id": 32, "text": " t = \\frac{w^2 + w}{2} " }, { "math_id": 33, "text": " y = z - t \\!" }, { "math_id": 34, "text": " x = w - y. \\!" 
}, { "math_id": 35, "text": "\\pi(x,y)+1 = \\pi(x-1,y+1)" }, { "math_id": 36, "text": "\\pi(0,k)+1 = \\pi(k+1,0)" }, { "math_id": 37, "text": "\\pi(x,y) = ax^2+by^2+cxy+dx+ey+f" }, { "math_id": 38, "text": "bk^2+ek+1 = a(k+1)^2+d(k+1)" }, { "math_id": 39, "text": "\\begin{align}\n \\pi(x,y)+1 &= a(x^2+y^2) + cxy + (1-a)x + (1+a)y + 1 \\\\\n &= a((x-1)^2+(y+1)^2) + c(x-1)(y+1) + (1-a)(x-1) + (1+a)(y+1).\n \\end{align}" }, { "math_id": 40, "text": "\\begin{align}\n \\pi(x,y) &= \\frac{1}{2}(x^2+y^2) + xy + \\frac{1}{2}x + \\frac{3}{2}y \\\\ \n &= \\frac{1}{2}(x+y)(x+y+1) + y,\n \\end{align}" }, { "math_id": 41, "text": "P_2(x, y):= 2^x(2y + 1) - 1" }, { "math_id": 42, "text": "\\langle i,j\\rangle_{P}=\\begin{cases}\nT & \\text{if}\\ i=j=0;\\\\\n\\langle\\lfloor i/2\\rfloor,\\lfloor j/2\\rfloor\\rangle_{P}:i_0:j_0&\\text{otherwise,}\n\\end{cases}" }, { "math_id": 43, "text": "i_0" }, { "math_id": 44, "text": "j_0" }, { "math_id": 45, "text": "\\operatorname{ElegantPair}[x, y] := \\begin{cases}\ny^2 + x&\\text{if}\\ x < y,\\\\\nx^2 + x + y&\\text{if}\\ x \\ge y.\\\\\n\\end{cases}" }, { "math_id": 46, "text": "\\operatorname{ElegantUnpair}[z] := \\begin{cases}\n\\left\\{ z - \\lfloor\\sqrt{z}\\rfloor^2, \\lfloor\\sqrt{z}\\rfloor \\right\\}\n& \\text{if }z - \\lfloor\\sqrt{z}\\rfloor^2 < \\lfloor\\sqrt{z}\\rfloor,\n\\\\\n\\left\\{ \\lfloor\\sqrt{z}\\rfloor, z - \\lfloor\\sqrt{z}\\rfloor^2 - \\lfloor\\sqrt{z}\\rfloor \\right\\}\n& \\text{if }z - \\lfloor\\sqrt{z}\\rfloor^2\\geq\\lfloor\\sqrt{z}\\rfloor.\n\\end{cases}" }, { "math_id": 47, "text": "\\N" }, { "math_id": 48, "text": "\\kappa^2=\\kappa" }, { "math_id": 49, "text": "\\kappa" }, { "math_id": 50, "text": "\\kappa\\times\\kappa" }, { "math_id": 51, "text": "(\\alpha,\\beta)\\preccurlyeq(\\gamma,\\delta) \\text{ if either } \\begin{cases}\n(\\alpha,\\beta) = (\\gamma,\\delta),\\\\[4pt]\n\\max(\\alpha,\\beta) < \\max(\\gamma,\\delta),\\\\[4pt]\n\\max(\\alpha,\\beta) = \\max(\\gamma,\\delta)\\ \\text{and}\\ \\alpha<\\gamma,\\text{ or}\\\\[4pt]\n\\max(\\alpha,\\beta) = \\max(\\gamma,\\delta)\\ \\text{and}\\ \\alpha=\\gamma\\ \\text{and}\\ \\beta<\\delta.\n\\end{cases}" }, { "math_id": 52, "text": "\\preccurlyeq" }, { "math_id": 53, "text": "{}<\\kappa" }, { "math_id": 54, "text": "(\\N\\times\\N,\\preccurlyeq)" }, { "math_id": 55, "text": "(\\N,\\leqslant)" } ]
https://en.wikipedia.org/wiki?curid=1145848
1145930
Lucas–Carmichael number
In mathematics, a Lucas–Carmichael number is a positive composite integer "n" such that if "p" is a prime factor of "n", then "p" + 1 is a factor of "n" + 1, and such that "n" is square-free. The first condition resembles Korselt's criterion for Carmichael numbers, where -1 is replaced with +1. The second condition eliminates from consideration some trivial cases like cubes of prime numbers, such as 8 or 27, which otherwise would be Lucas–Carmichael numbers (since "n"3 + 1 = ("n" + 1)("n"2 − "n" + 1) is always divisible by "n" + 1). They are named after Édouard Lucas and Robert Carmichael. Properties. The smallest Lucas–Carmichael number is 399 = 3 × 7 × 19. It is easy to verify that 3+1, 7+1, and 19+1 are all factors of 399+1 = 400. The smallest Lucas–Carmichael number with 4 factors is 8855 = 5 × 7 × 11 × 23. The smallest Lucas–Carmichael number with 5 factors is 588455 = 5 × 7 × 17 × 23 × 43. It is not known whether any Lucas–Carmichael number is also a Carmichael number. Thomas Wright proved in 2016 that there are infinitely many Lucas–Carmichael numbers. If we let formula_0 denote the number of Lucas–Carmichael numbers up to formula_1, Wright showed that there exists a positive constant formula_2 such that formula_3. List of Lucas–Carmichael numbers. The first few Lucas–Carmichael numbers (sequence in the OEIS) and their prime factors are listed below. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
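The defining conditions translate directly into a short Python check (composite, square-free, and "p" + 1 dividing "n" + 1 for every prime factor "p"); the trial-division factorisation below is only meant for small numbers:

def prime_factorization(n):
    # Map of prime -> exponent by trial division (adequate for small n).
    factors, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def is_lucas_carmichael(n):
    factors = prime_factorization(n)
    if len(factors) < 2:                       # needs at least two distinct primes (rules out primes and prime powers)
        return False
    if any(e > 1 for e in factors.values()):   # square-free
        return False
    return all((n + 1) % (p + 1) == 0 for p in factors)

print([n for n in range(2, 10000) if is_lucas_carmichael(n)][:3])   # [399, 935, 2015]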
[ { "math_id": 0, "text": " N(X)" }, { "math_id": 1, "text": " X" }, { "math_id": 2, "text": "K" }, { "math_id": 3, "text": " N(X) \\gg X^{K/\\left( \\log\\log \\log X\\right)^2}" } ]
https://en.wikipedia.org/wiki?curid=1145930
1145955
NL (complexity)
&lt;templatestyles src="Unsolved/styles.css" /&gt; Unsolved problem in computer science: In computational complexity theory, NL (Nondeterministic Logarithmic-space) is the complexity class containing decision problems that can be solved by a nondeterministic Turing machine using a logarithmic amount of memory space. NL is a generalization of L, the class for logspace problems on a deterministic Turing machine. Since any deterministic Turing machine is also a nondeterministic Turing machine, we have that L is contained in NL. NL can be formally defined in terms of the computational resource nondeterministic space (or NSPACE) as NL = NSPACE(log "n"). Important results in complexity theory allow us to relate this complexity class with other classes, telling us about the relative power of the resources involved. Results in the field of algorithms, on the other hand, tell us which problems can be solved with this resource. Like much of complexity theory, many important questions about NL are still open (see Unsolved problems in computer science). Occasionally NL is referred to as RL due to its probabilistic definition below; however, this name is more frequently used to refer to randomized logarithmic space, which is not known to equal NL. NL-complete problems. Several problems are known to be NL-complete under log-space reductions, including ST-connectivity and 2-satisfiability. ST-connectivity asks, for nodes "S" and "T" in a directed graph, whether "T" is reachable from "S". 2-satisfiability asks, given a propositional formula of which each clause is the disjunction of two literals, if there is a variable assignment that makes the formula true. An example instance, where formula_0 indicates "not", might be: formula_1 Containments. It is known that NL is contained in P, since there is a polynomial-time algorithm for 2-satisfiability, but it is not known whether NL P or whether L NL. It is known that NL co-NL, where co-NL is the class of languages whose complements are in NL. This result (the Immerman–Szelepcsényi theorem) was independently discovered by Neil Immerman and Róbert Szelepcsényi in 1987; they received the 1995 Gödel Prize for this work. In circuit complexity, NL can be placed within the NC hierarchy. In Papadimitriou 1994, Theorem 16.1, we have: formula_2. More precisely, NL is contained in AC1. It is known that NL is equal to ZPL, the class of problems solvable by randomized algorithms in logarithmic space and unbounded time, with no error. It is not, however, known or believed to be equal to RLP or ZPLP, the polynomial-time restrictions of RL and ZPL, which some authors refer to as RL and ZPL. We can relate NL to deterministic space using Savitch's theorem, which tells us that any nondeterministic algorithm can be simulated by a deterministic machine in at most quadratically more space. From Savitch's theorem, we have directly that: formula_3 This was the strongest deterministic-space inclusion known in 1994 (Papadimitriou 1994 Problem 16.4.10, "Symmetric space"). Since larger space classes are not affected by quadratic increases, the nondeterministic and deterministic classes are known to be equal, so that for example we have PSPACE NPSPACE. Alternative definitions. Probabilistic definition. Suppose "C" is the complexity class of decision problems solvable in logarithmithic space with probabilistic Turing machines that never accept incorrectly but are allowed to reject incorrectly less than 1/3 of the time; this is called "one-sided error". 
The constant 1/3 is arbitrary; any "x" with 0 ≤ "x" &lt; 1/2 would suffice. It turns out that "C" = NL. Notice that "C", unlike its deterministic counterpart L, is not limited to polynomial time, because although it has a polynomial number of configurations it can use randomness to escape an infinite loop. If we do limit it to polynomial time, we get the class RL, which is contained in but not known or believed to equal NL. There is a simple algorithm that establishes that "C" = NL. Clearly "C" is contained in NL, since a nondeterministic machine can guess the coin flips of the probabilistic machine, and some sequence of flips leads to acceptance exactly when the probabilistic machine can accept, which it never does incorrectly. To show that NL is contained in "C", we simply take an NL algorithm and choose a random computation path of length "n", and execute this 2^"n" times. Because no computation path exceeds length "n", and because there are 2^"n" computation paths in all, we have a good chance of hitting the accepting one (bounded below by a constant). The only problem is that we don't have room in log space for a binary counter that goes up to 2^"n". To get around this we replace it with a "randomized" counter, which simply flips "n" coins and stops and rejects if they all land on heads. Since this event has probability 2^−"n", we expect to take 2^"n" steps on average before stopping. It only needs to keep a running total of the number of heads in a row it sees, which it can count in log space. Because of the Immerman–Szelepcsényi theorem, according to which NL is closed under complements, the one-sided error in these probabilistic computations can be replaced by zero-sided error. That is, these problems can be solved by probabilistic Turing machines that use logarithmic space and never make errors. The corresponding complexity class that also requires the machine to use only polynomial time is called ZPLP. Thus, when we only look at space, it seems that randomization and nondeterminism are equally powerful. Certificate definition. NL can equivalently be characterised by certificates, analogous to classes such as NP. Consider a deterministic logarithmic-space bounded Turing machine that has an additional read-only read-once input tape. A language is in NL if and only if such a Turing machine accepts any word in the language for an appropriate choice of certificate in its additional input tape, and rejects any word not in the language regardless of the certificate. Cem Say and Abuzer Yakaryılmaz have proven that the deterministic logarithmic-space Turing machine in the statement above can be replaced by a bounded-error probabilistic constant-space Turing machine that is allowed to use only a constant number of random bits. Descriptive complexity. There is a simple logical characterization of NL: it contains precisely those languages expressible in first-order logic with an added transitive closure operator. Closure properties. The class NL is closed under the operations complementation, union, and therefore intersection, concatenation, and Kleene star. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
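The NL-completeness of 2-satisfiability and the containment NL ⊆ P mentioned above can be made concrete with a small program. The following Python sketch is our own illustration (function and variable names are not from any cited source, and it is a polynomial-time test rather than a log-space one): it decides the example 2-SAT instance by building the implication graph and checking that no variable shares a strongly connected component with its negation.

```python
# Illustrative sketch: 2-SAT via the implication graph and Kosaraju's SCC algorithm.
from collections import defaultdict

def two_sat(n, clauses):
    """n variables (1..n); each clause is a pair of literals, e.g. (1, -3)
    meaning (x1 OR NOT x3).  Returns True iff the formula is satisfiable."""
    graph, rgraph = defaultdict(list), defaultdict(list)
    for a, b in clauses:
        # (a OR b)  ==  (NOT a -> b) AND (NOT b -> a)
        graph[-a].append(b); rgraph[b].append(-a)
        graph[-b].append(a); rgraph[a].append(-b)

    lits = [v for i in range(1, n + 1) for v in (i, -i)]
    seen, order = set(), []
    def dfs1(u):
        seen.add(u)
        for w in graph[u]:
            if w not in seen:
                dfs1(w)
        order.append(u)
    for u in lits:
        if u not in seen:
            dfs1(u)

    comp = {}
    def dfs2(u, label):
        comp[u] = label
        for w in rgraph[u]:
            if w not in comp:
                dfs2(w, label)
    for u in reversed(order):
        if u not in comp:
            dfs2(u, u)

    # Unsatisfiable iff some variable lies in the same SCC as its negation.
    return all(comp[i] != comp[-i] for i in range(1, n + 1))

# The example instance from the text: (x1 OR NOT x3) AND (NOT x2 OR x3) AND (NOT x1 OR NOT x2)
print(two_sat(3, [(1, -3), (-2, 3), (-1, -2)]))   # True
```

For the instance above it reports satisfiable; for example x1 = true, x2 = false, x3 = true satisfies all three clauses.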
[ { "math_id": 0, "text": " \\neg " }, { "math_id": 1, "text": "(x_1 \\vee \\neg x_3) \\wedge (\\neg x_2 \\vee x_3) \\wedge (\\neg x_1 \\vee \\neg x_2)" }, { "math_id": 2, "text": "\\mathsf{NC_1 \\subseteq L \\subseteq NL \\subseteq NC_2}" }, { "math_id": 3, "text": "\\mathsf{NL \\subseteq SPACE}(\\log^2 n) \\ \\ \\ \\ \\text{equivalently, } \\mathsf{NL \\subseteq L}^2." } ]
https://en.wikipedia.org/wiki?curid=1145955
1145989
Lights Out (game)
1995 electronic game Lights Out is an electronic game released by Tiger Electronics in 1995. The game consists of a 5 by 5 grid of lights. When the game starts, a random number or a stored pattern of these lights is switched on. Pressing any of the lights will toggle it and the adjacent lights. The goal of the puzzle is to switch all the lights off, preferably with as few button presses as possible. "Merlin", a similar electronic game, was released by Parker Brothers in the 1970s with similar rules on a 3 by 3 grid. Another similar game was produced by Vulcan Electronics in 1983 under the name "XL-25". Tiger Toys also produced a cartridge version of "Lights Out" for its Game com handheld game console in 1997, shipped free with the console. A number of new puzzles similar to "Lights Out" have been released, such as "Lights Out 2000" (5×5 with multiple colors), "Lights Out Cube" (six 3×3 faces with effects across edges), and "Lights Out Deluxe" (6×6). Inventors. "Lights Out" was created by a group of people including Avi Olti, Gyora Benedek, Zvi Herman, Revital Bloomberg, Avi Weiner and Michael Ganor. The members of the group together and individually also invented several other games, such as Hidato, NimX, iTop and many more. Gameplay. The game consists of a 5 by 5 grid of lights. When the game starts, a random number or a stored pattern of these lights is switched on. Pressing any of the lights will toggle it and the four adjacent lights. The goal of the puzzle is to switch all the lights off, preferably in as few button presses as possible. Mathematics. If a light is on, it must be toggled an odd number of times to be turned off. If a light is off, it must be toggled an even number of times (including none at all) for it to remain off. Several conclusions are used for the game's strategy. Firstly, the order in which the lights are pressed does not matter, as the result will be the same. Secondly, in a minimal solution, each light needs to be pressed no more than once, because pressing a light twice is equivalent to not pressing it at all. In 1998, Marlow Anderson and Todd Feil used linear algebra to prove that not all configurations are solvable and also to prove that there are exactly four winning scenarios, not including redundant moves, for any solvable 5×5 problem. The 5×5 grid of "Lights Out" can be represented as a 25x1 column vector with a 1 and 0 signifying a light in its on and off state respectively. Each entry is an element of Z2, the field of integers modulo 2. Anderson and Feil found that in order for a configuration to be solvable (deriving the null vector from the original configuration) it must be orthogonal to the two vectors N1 and N2 below (pictured as a 5×5 array but not to be confused with matrices). formula_0 In addition, they found that N1 and N2 can be used to find three additional solutions from a solution, and that these four solutions are the only four solutions (excluding redundant moves) to the starting given configuration. These four solutions are X, X + N1, X + N2, and X + N1 + N2 where X is a solution to the starting given configuration. An introduction into this method was published by Robert Eisele. Light chasing. "Light chasing" is a method similar to Gaussian elimination which always solves the puzzle (if a solution exists), although with the possibility of many redundant steps. In this approach, rows are manipulated one at a time starting with the top row. All the lights are disabled in the row by toggling the adjacent lights in the row directly below. 
The same method is then used on the consecutive rows up to the last one. The last row is solved separately, depending on its active lights. Corresponding lights (see table below) in the top row are toggled and the initial algorithm is run again, resulting in a solution. Tables and strategies for other board sizes are generated by playing "Lights Out" with a blank board and observing the result of bringing a particular light from the top row down to the bottom row. Further results. Once a single solution is found, a solution with the minimum number of moves can be determined through elimination of redundant sets of button presses that have no cumulative effect. Existence of solutions has been proved for a wide variety of board configurations, such as hexagonal, while solutions to n-by-n boards for n≤200 have been explicitly constructed. There exists a solution for every N×N case. It is solvable on any undirected graph, where clicking on one vertex flips its value and its neighbours. More generally if the action matrix is symmetric then its diagonal is always solvable. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
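The linear-algebra description above translates directly into a solver: writing the board as a vector over Z2 and each button press as a fixed 25-entry column, a solution is any x with Ax = b. The sketch below is our own illustration (not Anderson and Feil's code; names are ours); it performs Gaussian elimination over Z2 and returns one set of buttons to press, or None for the unsolvable configurations.

```python
# Sketch: solve a 5x5 Lights Out board over Z_2.
# Pressing button j toggles light j and its orthogonal neighbours, so the
# all-off state is reachable iff the start state b solves A x = b over Z_2.

N = 5

def press_effect(r, c):
    """Column of the button matrix: which lights button (r, c) toggles."""
    col = [0] * (N * N)
    for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
        rr, cc = r + dr, c + dc
        if 0 <= rr < N and 0 <= cc < N:
            col[rr * N + cc] = 1
    return col

def solve(board):
    """board: 25 entries of 0/1 (1 = light on).  Returns a list of buttons to
    press (each at most once), or None if the configuration is unsolvable."""
    # Augmented matrix [A | b] over Z_2, one row per light.
    A = [[press_effect(j // N, j % N)[i] for j in range(N * N)] + [board[i]]
         for i in range(N * N)]
    where = [-1] * (N * N)          # pivot row used for each column
    row = 0
    for col in range(N * N):
        piv = next((r for r in range(row, N * N) if A[r][col]), None)
        if piv is None:
            continue
        A[row], A[piv] = A[piv], A[row]
        for r in range(N * N):
            if r != row and A[r][col]:
                A[r] = [a ^ b for a, b in zip(A[r], A[row])]
        where[col] = row
        row += 1
    x = [A[where[c]][-1] if where[c] != -1 else 0 for c in range(N * N)]
    # Verify the solution (catches the unsolvable configurations, which exist for 5x5).
    for i in range(N * N):
        if sum(press_effect(j // N, j % N)[i] * x[j] for j in range(N * N)) % 2 != board[i]:
            return None
    return [j for j in range(N * N) if x[j]]

print(solve([1] * 25))   # the all-on board is solvable
```

Free columns are set to zero, so adding the vectors N1 and N2 described above to a returned solution yields the other three solutions (excluding redundant moves).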
[ { "math_id": 0, "text": "N_1 = \\begin{pmatrix} 0 & 1 & 1 & 1 & 0 \\\\ 1 & 0 & 1 & 0 & 1 \\\\ 1 & 1 & 0 & 1 & 1 \\\\ 1 & 0 & 1 & 0 & 1 \\\\ 0 & 1 & 1 & 1 & 0 \\end{pmatrix}, N_2 = \\begin{pmatrix} 1 & 0 & 1 & 0 & 1 \\\\ 1 & 0 & 1 & 0 & 1 \\\\ 0 & 0 & 0 & 0 & 0 \\\\ 1 & 0 & 1 & 0 & 1 \\\\ 1 & 0 & 1 & 0 & 1 \\end{pmatrix} " } ]
https://en.wikipedia.org/wiki?curid=1145989
1146188
Erasure code
Code added to allow recovery of lost data In coding theory, an erasure code is a forward error correction (FEC) code under the assumption of bit erasures (rather than bit errors), which transforms a message of "k" symbols into a longer message (code word) with "n" symbols such that the original message can be recovered from a subset of the "n" symbols. The fraction "r" = "k"/"n" is called the code rate. The fraction "k’/k", where "k’" denotes the number of symbols required for recovery, is called reception efficiency. The recovery algorithm expects that it is known which of the "n" symbols are lost. History. Erasure coding was invented by Irving Reed and Gustave Solomon in 1960. There are many different erasure coding schemes. The most popular erasure codes are Reed-Solomon coding, Low-density parity-check code (LDPC codes), and Turbo codes. As of 2023, modern data storage systems can be designed to tolerate the complete failure of a few disks without data loss, using one of 3 approaches: While technically RAID can be seen as a kind of erasure code, "RAID" is generally applied to an array attached to a single host computer (which is a single point of failure), while "erasure coding" generally implies multiple hosts, sometimes called a Redundant Array of Inexpensive Servers (RAIS). The erasure code allows operations to continue when any one of those hosts stops. Compared to block-level RAID systems, object storage erasure coding has some significant differences that make it more resilient. Optimal erasure codes. Optimal erasure codes have the property that any "k" out of the "n" code word symbols are sufficient to recover the original message (i.e., they have optimal reception efficiency). Optimal erasure codes are maximum distance separable codes (MDS codes). Parity check. Parity check is the special case where "n "= "k" + 1. From a set of "k" values formula_0, a checksum is computed and appended to the "k" source values: formula_1 The set of "k" + 1 values formula_2 is now consistent with regard to the checksum. If one of these values, formula_3, is erased, it can be easily recovered by summing the remaining variables: formula_4 RAID 5 is a widely-used application of the parity check erasure code. Polynomial oversampling. Example: Err-mail ("k" = 2). In the simple case where "k" = 2, redundancy symbols may be created by sampling different points along the line between the two original symbols. This is pictured with a simple example, called err-mail: Alice wants to send her telephone number (555629) to Bob using err-mail. Err-mail works just like e-mail, except Instead of asking Bob to acknowledge the messages she sends, Alice devises the following scheme. Bob knows that the form of "f"("k") is formula_5, where "a" and "b" are the two parts of the telephone number. Now suppose Bob receives "D=777" and "E=851". Bob can reconstruct Alice's phone number by computing the values of "a" and "b" from the values ("f"(4) and "f"(5)) he has received. Bob can perform this procedure using any two err-mails, so the erasure code in this example has a rate of 40%. Note that Alice cannot encode her telephone number in just one err-mail, because it contains six characters, and that the maximum length of one err-mail message is five characters. If she sent her phone number in pieces, asking Bob to acknowledge receipt of each piece, at least four messages would have to be sent anyway (two from Alice, and two acknowledgments from Bob). 
So the erasure code in this example, which requires five messages, is quite economical. This example is a little bit contrived. For truly generic erasure codes that work over any data set, we would need something other than the "f"("i") given. General case. The linear construction above can be generalized to polynomial interpolation. Additionally, points are now computed over a finite field. First we choose a finite field "F" with order at least "n", but usually a power of 2. The sender numbers the data symbols from 0 to "k" − 1 and sends them. He then constructs a (Lagrange) polynomial "p"("x") of order "k" such that "p"("i") is equal to data symbol "i". He then sends "p"("k"), ..., "p"("n" − 1). The receiver can now also use polynomial interpolation to recover the lost packets, provided he receives "k" symbols successfully. If the order of "F" is less than 2^"b", where "b" is the number of bits in a symbol, then multiple polynomials can be used. The sender can construct symbols "k" to "n" − 1 'on the fly', i.e., distribute the workload evenly between transmission of the symbols. If the receiver wants to do his calculations 'on the fly', he can construct a new polynomial "q", such that "q"("i") = "p"("i") if symbol "i" &lt; "k" was received successfully and "q"("i") = 0 when symbol "i" &lt; "k" was not received. Now let "r"("i") = "p"("i") − "q"("i"). Firstly we know that "r"("i") = 0 if symbol "i" &lt; "k" has been received successfully. Secondly, if symbol "i" ≥ "k" has been received successfully, then "r"("i") = "p"("i") − "q"("i") can be calculated. So we have enough data points to construct "r" and evaluate it to find the lost packets. So both the sender and the receiver require "O"("n" ("n" − "k")) operations and only "O"("n" − "k") space for operating 'on the fly'. Real world implementation. This process is implemented by Reed–Solomon codes, with code words constructed over a finite field using a Vandermonde matrix. Most practical erasure codes are systematic codes: each one of the original "k" symbols can be found copied, unencoded, as one of the "n" message symbols. (Erasure codes that support secret sharing never use a systematic code.) Near-optimal erasure codes. "Near-optimal erasure codes" require (1 + ε)"k" symbols to recover the message (where ε &gt; 0). Reducing ε can be done at the cost of CPU time. "Near-optimal erasure codes" trade correction capabilities for computational complexity: practical algorithms can encode and decode with linear time complexity. Fountain codes (also known as "rateless erasure codes") are notable examples of "near-optimal erasure codes". They can transform a "k" symbol message into a practically infinite encoded form, i.e., they can generate an arbitrary number of redundancy symbols that can all be used for error correction. Receivers can start decoding after they have received slightly more than "k" encoded symbols. Regenerating codes address the issue of rebuilding (also called repairing) lost encoded fragments from existing encoded fragments. This issue occurs in distributed storage systems where communication to maintain encoded redundancy is a problem. Applications of erasure coding in storage systems. Erasure coding is now standard practice for reliable data storage. In particular, various implementations of Reed-Solomon erasure coding are used by Apache Hadoop, the RAID-6 built into Linux, Microsoft Azure, Facebook cold storage, and Backblaze Vaults. The classical way to recover from failures in storage systems was to use replication. 
However, replication incurs significant overhead in terms of wasted bytes. Therefore, increasingly large storage systems, such as those used in data centers use erasure-coded storage. The most common form of erasure coding used in storage systems is Reed-Solomon (RS) code, an advanced mathematics formula used to enable regeneration of missing data from pieces of known data, called parity blocks. In a ("k", "m") RS code, a given set of "k" data blocks, called "chunks", are encoded into ("k" + "m") chunks. The total set of chunks comprises a "stripe". The coding is done such that as long as at least "k" out of ("k" + "m") chunks are available, one can recover the entire data. This means a ("k", "m") RS-encoded storage can tolerate up to "m" failures. Example: In RS (10, 4) code, which is used in Facebook for their HDFS, 10 MB of user data is divided into ten 1MB blocks. Then, four additional 1 MB parity blocks are created to provide redundancy. This can tolerate up to 4 concurrent failures. The storage overhead here is 14/10 = 1.4X. In the case of a fully replicated system, the 10 MB of user data will have to be replicated 4 times to tolerate up to 4 concurrent failures. The storage overhead in that case will be 50/10 = 5 times. This gives an idea of the lower storage overhead of erasure-coded storage compared to full replication and thus the attraction in today's storage systems. Initially, erasure codes were used to reduce the cost of storing "cold" (rarely-accessed) data efficiently; but erasure codes can also be used to improve performance serving "hot" (more-frequently-accessed) data. RAID N+M divides data blocks across N+M drives, and can recover all the data when any M drives fail. In particular, RAID 7.3 refers to triple-parity RAID, and can recover all the data when any 3 drives fail. Examples. Here are some examples of implementations of the various codes: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
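The two toy constructions above, the parity check and the err-mail interpolation, fit in a few lines of code. The sketch below is our own illustration (the function names are ours, not from any implementation mentioned in the text); it recovers a single erased value from the parity symbol and recovers Alice's telephone number from the two err-mails Bob actually received.

```python
# Sketch of the two toy erasure codes described above.

def parity_encode(values):
    """Append the parity symbol so that the k+1 values sum to zero."""
    return values + [-sum(values)]

def parity_recover(values, erased_index):
    """Recover the single erased value by summing the surviving ones."""
    return -sum(v for i, v in enumerate(values) if i != erased_index)

def errmail_encode(a, b, n=5):
    """Alice's scheme: sample f(i) = a + (b - a)(i - 1) at i = 1..n."""
    return {i: a + (b - a) * (i - 1) for i in range(1, n + 1)}

def errmail_decode(received):
    """Bob's side: any two received points (i, f(i)) determine a and b."""
    (i1, y1), (i2, y2) = sorted(received.items())[:2]
    slope = (y2 - y1) / (i2 - i1)        # equals b - a
    a = y1 - slope * (i1 - 1)
    return round(a), round(a + slope)

# Parity check: erase the 3rd of k+1 = 5 values and recover it.
code = parity_encode([3, 1, 4, 1])
print(parity_recover(code, 2))           # 4

# Err-mail: Alice's number 555629 split as a = 555, b = 629;
# Bob only receives f(4) = 777 and f(5) = 851.
print(errmail_decode({4: 777, 5: 851}))  # (555, 629)
```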
[ { "math_id": 0, "text": "\\{v_i\\}_{1\\leq i \\leq k}" }, { "math_id": 1, "text": "v_{k+1}= - \\sum_{i=1}^k v_i." }, { "math_id": 2, "text": "\\{v_i\\}_{1\\leq i \\leq k+1}" }, { "math_id": 3, "text": "v_e" }, { "math_id": 4, "text": "v_{e}= - \\sum_{i=1 ,i \\neq e }^{k+1}v_i." }, { "math_id": 5, "text": "f(i) = a + (b-a)(i-1)" }, { "math_id": 6, "text": "f(i) = 555 + 74(i-1)" }, { "math_id": 7, "text": "f(1) = 555" }, { "math_id": 8, "text": "f(2) = 629" } ]
https://en.wikipedia.org/wiki?curid=1146188
1146267
Geometric quantization
Recipe for constructing a quantum analog of a classical physical theory In mathematical physics, geometric quantization is a mathematical approach to defining a quantum theory corresponding to a given classical theory. It attempts to carry out quantization, for which there is in general no exact recipe, in such a way that certain analogies between the classical theory and the quantum theory remain manifest. For example, the similarity between the Heisenberg equation in the Heisenberg picture of quantum mechanics and the Hamilton equation in classical physics should be built in. Origins. One of the earliest attempts at a natural quantization was Weyl quantization, proposed by Hermann Weyl in 1927. Here, an attempt is made to associate a quantum-mechanical observable (a self-adjoint operator on a Hilbert space) with a real-valued function on classical phase space. The position and momentum in this phase space are mapped to the generators of the Heisenberg group, and the Hilbert space appears as a group representation of the Heisenberg group. In 1946, H. J. Groenewold considered the product of a pair of such observables and asked what the corresponding function would be on the classical phase space. This led him to discover the phase-space star-product of a pair of functions. The modern theory of geometric quantization was developed by Bertram Kostant and Jean-Marie Souriau in the 1970s. One of the motivations of the theory was to understand and generalize Kirillov's orbit method in representation theory. Types. The geometric quantization procedure falls into the following three steps: prequantization, polarization, and metaplectic correction. Prequantization produces a natural Hilbert space together with a quantization procedure for observables that exactly transforms Poisson brackets on the classical side into commutators on the quantum side. Nevertheless, the prequantum Hilbert space is generally understood to be "too big". The idea is that one should then select a Poisson-commuting set of "n" variables on the 2"n"-dimensional phase space and consider functions (or, more properly, sections) that depend only on these "n" variables. The "n" variables can be either real-valued, resulting in a position-style Hilbert space, or complex analytic, producing something like the Segal–Bargmann space. A polarization is a coordinate-independent description of such a choice of "n" Poisson-commuting functions. The metaplectic correction (also known as the half-form correction) is a technical modification of the above procedure that is necessary in the case of real polarizations and often convenient for complex polarizations. Prequantization. Suppose formula_0 is a symplectic manifold with symplectic form formula_1. Suppose at first that formula_1 is exact, meaning that there is a globally defined "symplectic potential" formula_2 with formula_3. We can consider the "prequantum Hilbert space" of square-integrable functions on formula_4 (with respect to the Liouville volume measure). For each smooth function formula_5 on formula_4, we can define the Kostant–Souriau prequantum operator formula_6. where formula_7 is the Hamiltonian vector field associated to formula_5. More generally, suppose formula_0 has the property that the integral of formula_8 over any closed surface is an integer. Then we can construct a line bundle formula_9 with connection whose curvature 2-form is formula_10. 
In that case, the prequantum Hilbert space is the space of square-integrable sections of formula_9, and we replace the formula for formula_11 above with formula_12, with formula_13 the connection. The prequantum operators satisfy formula_14 for all smooth functions formula_5 and formula_15. The construction of the preceding Hilbert space and the operators formula_11 is known as "prequantization". Polarization. The next step in the process of geometric quantization is the choice of a polarization. A polarization is a choice, at each point in formula_4, of a Lagrangian subspace of the complexified tangent space of formula_4. The subspaces should form an integrable distribution, meaning that the commutator of two vector fields lying in the subspace at each point should also lie in the subspace at each point. The "quantum" (as opposed to prequantum) Hilbert space is the space of sections of formula_9 that are covariantly constant in the direction of the polarization. The idea is that in the quantum Hilbert space, the sections should be functions of only formula_16 variables on the formula_17-dimensional classical phase space. If formula_5 is a function for which the associated Hamiltonian flow preserves the polarization, then formula_11 will preserve the quantum Hilbert space. The assumption that the flow of formula_5 preserve the polarization is a strong one. Typically not very many functions will satisfy this assumption. Half-form correction. The half-form correction—also known as the metaplectic correction—is a technical modification to the above procedure that is necessary in the case of real polarizations to obtain a nonzero quantum Hilbert space; it is also often useful in the complex case. The line bundle formula_9 is replaced by the tensor product of formula_9 with the square root of the canonical bundle of formula_9. In the case of the vertical polarization, for example, instead of considering functions formula_18 of formula_19 that are independent of formula_20, one considers objects of the form formula_21. The formula for formula_11 must then be supplemented by an additional Lie derivative term. In the case of a complex polarization on the plane, for example, the half-form correction allows the quantization of the harmonic oscillator to reproduce the standard quantum mechanical formula for the energies, formula_22, with the "formula_23" coming courtesy of the half-forms. Poisson manifolds. Geometric quantization of Poisson manifolds and symplectic foliations has also been developed. For instance, this is the case of partially integrable and superintegrable Hamiltonian systems and non-autonomous mechanics. Example. In the case that the symplectic manifold is the 2-sphere, it can be realized as a coadjoint orbit in formula_24. Assuming that the area of the sphere is an integer multiple of formula_25, we can perform geometric quantization and the resulting Hilbert space carries an irreducible representation of SU(2). In the case that the area of the sphere is formula_25, we obtain the two-dimensional spin-1/2 representation. Generalization. More generally, this technique leads to deformation quantization, where the ★-product is taken to be a deformation of the algebra of functions on a symplectic manifold or Poisson manifold. However, as a natural quantization scheme (a functor), Weyl's map is not satisfactory. For example, the Weyl map of the classical angular-momentum-squared is not just the quantum angular momentum squared operator, but it further contains a constant term 3ħ²/2. 
(This extra term is actually physically significant, since it accounts for the nonvanishing angular momentum of the ground-state Bohr orbit in the hydrogen atom.) As a mere representation change, however, Weyl's map underlies the alternate phase-space formulation of conventional quantum mechanics. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Reflist/styles.css" /&gt; Sources. &lt;templatestyles src="Refbegin/styles.css" /&gt;
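The prequantization commutator identity quoted above can be checked symbolically in the simplest setting. The following SymPy sketch is our own illustration: it works on the plane with coordinates (x, p), symplectic potential θ = p dx, and one particular, explicitly stated sign convention for the Hamiltonian vector field and the Poisson bracket; with those choices it confirms [Q(f), Q(g)] = iħQ({f, g}) for sample observables.

```python
# Sketch: verify the prequantum commutation relation on R^2 with coordinates
# (x, p), symplectic potential theta = p dx, and the sign conventions
#   {f, g} = f_x g_p - f_p g_x,   X_f = f_p d/dx - f_x d/dp.
# With these choices Q(f) = -i*hbar*X_f - theta(X_f) + f satisfies
# [Q(f), Q(g)] = i*hbar*Q({f, g}) exactly.

import sympy as sp

x, p, hbar = sp.symbols('x p hbar', real=True)
psi = sp.Function('psi')(x, p)

def poisson(f, g):
    return sp.diff(f, x) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, x)

def Q(f, w):
    """Kostant-Souriau prequantum operator applied to a wavefunction w."""
    Xf_x, Xf_p = sp.diff(f, p), -sp.diff(f, x)      # components of X_f
    Xf_w = Xf_x * sp.diff(w, x) + Xf_p * sp.diff(w, p)
    theta_Xf = p * Xf_x                              # theta = p dx
    return -sp.I * hbar * Xf_w - theta_Xf * w + f * w

def check(f, g):
    lhs = Q(f, Q(g, psi)) - Q(g, Q(f, psi))          # [Q(f), Q(g)] psi
    rhs = sp.I * hbar * Q(poisson(f, g), psi)
    return sp.simplify(lhs - rhs) == 0

print(check(x, p))          # True: recovers [Q(x), Q(p)] = i*hbar
print(check(x**2, x*p))     # True for more general polynomial observables
```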
[ { "math_id": 0, "text": "(M,\\omega)" }, { "math_id": 1, "text": "\\omega" }, { "math_id": 2, "text": "\\theta" }, { "math_id": 3, "text": "d\\theta=\\omega" }, { "math_id": 4, "text": "M" }, { "math_id": 5, "text": "f" }, { "math_id": 6, "text": "Q(f):= - i\\hbar\\left( X_f +\\frac{1}{i\\hbar}\\theta(X_f)\\right) +f" }, { "math_id": 7, "text": "X_f" }, { "math_id": 8, "text": "\\omega/(2\\pi\\hbar)" }, { "math_id": 9, "text": "L" }, { "math_id": 10, "text": "\\omega/\\hbar" }, { "math_id": 11, "text": "Q(f)" }, { "math_id": 12, "text": "Q(f)= - i\\hbar\\nabla_{X_f}+f" }, { "math_id": 13, "text": "\\nabla" }, { "math_id": 14, "text": "[Q(f),Q(g)]=i\\hbar Q(\\{ f,g\\} )" }, { "math_id": 15, "text": "g" }, { "math_id": 16, "text": "n" }, { "math_id": 17, "text": "2n" }, { "math_id": 18, "text": "f(x)" }, { "math_id": 19, "text": "x" }, { "math_id": 20, "text": "p" }, { "math_id": 21, "text": "f(x)\\sqrt{dx}" }, { "math_id": 22, "text": "(n+1/2)\\hbar\\omega" }, { "math_id": 23, "text": "+1/2" }, { "math_id": 24, "text": "\\mathfrak{su}(2)^*" }, { "math_id": 25, "text": "2\\pi\\hbar" } ]
https://en.wikipedia.org/wiki?curid=1146267
1146292
Kac–Moody algebra
Lie algebra that can be defined by generators and relations through a generalized Cartan matrix In mathematics, a Kac–Moody algebra (named for Victor Kac and Robert Moody, who independently and simultaneously discovered them in 1968) is a Lie algebra, usually infinite-dimensional, that can be defined by generators and relations through a generalized Cartan matrix. These algebras form a generalization of finite-dimensional semisimple Lie algebras, and many properties related to the structure of a Lie algebra such as its root system, irreducible representations, and connection to flag manifolds have natural analogues in the Kac–Moody setting. A class of Kac–Moody algebras called affine Lie algebras is of particular importance in mathematics and theoretical physics, especially two-dimensional conformal field theory and the theory of exactly solvable models. Kac discovered an elegant proof of certain combinatorial identities, the Macdonald identities, which is based on the representation theory of affine Kac–Moody algebras. Howard Garland and James Lepowsky demonstrated that Rogers–Ramanujan identities can be derived in a similar fashion. History of Kac–Moody algebras. The initial construction by Élie Cartan and Wilhelm Killing of finite dimensional simple Lie algebras from the Cartan integers was type dependent. In 1966 Jean-Pierre Serre showed that relations of Claude Chevalley and Harish-Chandra, with simplifications by Nathan Jacobson, give a defining presentation for the Lie algebra. One could thus describe a simple Lie algebra in terms of generators and relations using data from the matrix of Cartan integers, which is naturally positive definite. "Almost simultaneously in 1967, Victor Kac in the USSR and Robert Moody in Canada developed what was to become Kac–Moody algebra. Kac and Moody noticed that if Wilhelm Killing's conditions were relaxed, it was still possible to associate to the Cartan matrix a Lie algebra which, necessarily, would be infinite dimensional." – A. J. Coleman In his 1967 thesis, Robert Moody considered Lie algebras whose Cartan matrix is no longer positive definite. This still gave rise to a Lie algebra, but one which is now infinite dimensional. Simultaneously, Z-graded Lie algebras were being studied in Moscow where I. L. Kantor introduced and studied a general class of Lie algebras including what eventually became known as Kac–Moody algebras. Victor Kac was also studying simple or nearly simple Lie algebras with polynomial growth. A rich mathematical theory of infinite dimensional Lie algebras evolved. An account of the subject, which also includes works of many others, is given in (Kac 1990). See also (Seligman 1987). Introduction. Given an "n"×"n" generalized Cartan matrix formula_0, one can construct a Lie algebra formula_1 defined by generators formula_2, formula_3, and formula_4 and relations given by: formula_5 for all formula_6; formula_7 and formula_8; formula_9, where formula_10 is the Kronecker delta; and, if formula_11 (so that formula_12), then formula_13 and formula_14, where formula_15 is the adjoint representation. Under a "symmetrizability" assumption, formula_1 identifies with the derived subalgebra formula_17 of the affine Kac-Moody algebra formula_18 defined below. Definition. Assume we are given an formula_19 generalized Cartan matrix "C" = ("cij") of rank "r". For every such formula_20, there exists a unique (up to isomorphism) "realization" of formula_20, i.e. 
a triple formula_21) where formula_22 is a complex vector space, formula_23 is a subset of elements of formula_22, and formula_24 is a subset of the dual space formula_25 satisfying the following three conditions: the set formula_23 is linearly independent, the set formula_24 is linearly independent, and formula_26; moreover, the dimension of formula_22 is 2"n" − "r". The formula_27 are analogous to the simple roots of a semi-simple Lie algebra, and the formula_28 to the simple coroots. Then we define the "Kac-Moody algebra" associated to formula_20 as the Lie algebra formula_29 defined by generators formula_2 and formula_4 and the elements of formula_22 and relations formula_30 for formula_31; formula_32 and formula_34 for formula_33; and formula_35. A real (possibly infinite-dimensional) Lie algebra is also considered a Kac–Moody algebra if its complexification is a Kac–Moody algebra. Root-space decomposition of a Kac–Moody algebra. formula_22 is the analogue of a Cartan subalgebra for the Kac–Moody algebra formula_16. If formula_36 is an element of formula_16 such that formula_37 for some formula_38, then formula_39 is called a root vector and formula_40 is a root of formula_16. (The zero functional is not considered a root by convention.) The set of all roots of formula_16 is often denoted by formula_41 and sometimes by formula_42. For a given root formula_40, one denotes by formula_43 the root space of formula_40; that is, formula_44. It follows from the defining relations of formula_16 that formula_45 and formula_46. Also, if formula_47 and formula_48, then formula_49 by the Jacobi identity. A fundamental result of the theory is that any Kac–Moody algebra can be decomposed into the direct sum of formula_22 and its root spaces, that is formula_50, and that every root formula_40 can be written as formula_51 with all the formula_52 being integers of the same sign. Types of Kac–Moody algebras. Properties of a Kac–Moody algebra are controlled by the algebraic properties of its generalized Cartan matrix "C". In order to classify Kac–Moody algebras, it is enough to consider the case of an "indecomposable" matrix "C", that is, assume that there is no decomposition of the set of indices "I" into a disjoint union of non-empty subsets "I"1 and "I"2 such that "C""ij" = 0 for all "i" in "I"1 and "j" in "I"2. Any decomposition of the generalized Cartan matrix leads to the direct sum decomposition of the corresponding Kac–Moody algebra: formula_53 where the two Kac–Moody algebras in the right hand side are associated with the submatrices of "C" corresponding to the index sets "I"1 and "I"2. An important subclass of Kac–Moody algebras corresponds to "symmetrizable" generalized Cartan matrices "C", which can be decomposed as "DS", where "D" is a diagonal matrix with positive integer entries and "S" is a symmetric matrix. Under the assumptions that "C" is symmetrizable and indecomposable, the Kac–Moody algebras are divided into three classes: finite type, in which "S" is positive definite and one recovers the finite-dimensional simple Lie algebras; affine type, in which "S" is positive semidefinite but not positive definite, giving the affine Lie algebras; and indefinite type otherwise. Symmetrizable indecomposable generalized Cartan matrices of finite and affine type have been completely classified. They correspond to Dynkin diagrams and affine Dynkin diagrams. Little is known about the Kac–Moody algebras of indefinite type, although the groups corresponding to these Kac–Moody algebras were constructed over arbitrary fields by Jacques Tits. Among the Kac–Moody algebras of indefinite type, most work has focused on those of hyperbolic type, for which the matrix "S" is indefinite, but for each proper subset of "I", the corresponding submatrix is positive definite or positive semidefinite. Hyperbolic Kac–Moody algebras have rank at most 10, and they have been completely classified. There are infinitely many of rank 2, and 238 of ranks between 3 and 10. Citations. 
&lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
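The generators-and-relations description above can be made concrete in the smallest finite-type case. The sketch below is our own illustration (the realization inside sl(3, C) and the names are ours): it takes the Cartan matrix C = [[2, −1], [−1, 2]], realizes it with the usual matrix units, and numerically checks [h_i, h_j] = 0, [h_i, e_j] = c_ij e_j, [h_i, f_j] = −c_ij f_j, [e_i, f_j] = δ_ij h_i and the Serre-type relations ad(e_i)^(1−c_ij)(e_j) = 0.

```python
# Sketch: numerically check the Chevalley-Serre-type relations for the
# generalized Cartan matrix C = [[2, -1], [-1, 2]], realized inside sl(3, C)
# with e_1 = E_12, e_2 = E_23, f_i = e_i^T, h_1 = E_11 - E_22, h_2 = E_22 - E_33.

import numpy as np

def E(i, j):
    m = np.zeros((3, 3))
    m[i, j] = 1.0
    return m

def ad(a, b):                       # ad(a)(b) = [a, b]
    return a @ b - b @ a

C = [[2, -1], [-1, 2]]
e = [E(0, 1), E(1, 2)]
f = [E(1, 0), E(2, 1)]
h = [E(0, 0) - E(1, 1), E(1, 1) - E(2, 2)]

ok = True
for i in range(2):
    for j in range(2):
        ok &= np.allclose(ad(h[i], h[j]), 0)                # [h_i, h_j] = 0
        ok &= np.allclose(ad(h[i], e[j]), C[i][j] * e[j])   # [h_i, e_j] = c_ij e_j
        ok &= np.allclose(ad(h[i], f[j]), -C[i][j] * f[j])  # [h_i, f_j] = -c_ij f_j
        ok &= np.allclose(ad(e[i], f[j]),                   # [e_i, f_j] = delta_ij h_i
                          h[i] if i == j else 0 * e[0])
        if i != j:
            X = e[j]
            for _ in range(1 - C[i][j]):                    # apply ad(e_i) (1 - c_ij) times
                X = ad(e[i], X)
            ok &= np.allclose(X, 0)                         # Serre-type relation
print(ok)    # True
```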
[ { "math_id": 0, "text": "C = \\begin{pmatrix} c_{ij}\\end{pmatrix}" }, { "math_id": 1, "text": "\\mathfrak{g}'(C)" }, { "math_id": 2, "text": "e_i" }, { "math_id": 3, "text": "h_i" }, { "math_id": 4, "text": "f_i \\left(i \\in \\{1, \\ldots, n\\}\\right)" }, { "math_id": 5, "text": "\\left[h_i, h_j\\right] = 0\\ " }, { "math_id": 6, "text": "i, j \\in \\{1, \\ldots, n\\}" }, { "math_id": 7, "text": "\\left[h_i, e_j\\right] = c_{ij}e_j" }, { "math_id": 8, "text": "\\left[h_i, f_j\\right] = -c_{ij}f_j" }, { "math_id": 9, "text": "\\left[e_i, f_j\\right] = \\delta_{ij}h_i " }, { "math_id": 10, "text": " \\delta_{ij}" }, { "math_id": 11, "text": "i \\neq j" }, { "math_id": 12, "text": "c_{ij} \\leq 0" }, { "math_id": 13, "text": "\\textrm{ad}(e_i)^{1-c_{ij}}(e_j) = 0" }, { "math_id": 14, "text": "\\operatorname{ad}(f_i)^{1-c_{ij}}(f_j) = 0" }, { "math_id": 15, "text": "\\operatorname{ad}: \\mathfrak{g}\\to\\operatorname{End}(\\mathfrak{g}),\\operatorname{ad}(x)(y) = [x, y]," }, { "math_id": 16, "text": "\\mathfrak{g}" }, { "math_id": 17, "text": "\\mathfrak{g}'(C) = [\\mathfrak{g}(C), \\mathfrak{g}(C)]" }, { "math_id": 18, "text": "\\mathfrak{g}(C)" }, { "math_id": 19, "text": "n \\times n" }, { "math_id": 20, "text": "C" }, { "math_id": 21, "text": "(\\mathfrak{h}, \\{\\alpha_i\\}_{i = 1}^n, \\{\\alpha_i^\\vee\\}_{i = 1}^n, " }, { "math_id": 22, "text": "\\mathfrak{h}" }, { "math_id": 23, "text": "\\{\\alpha_i^\\vee\\}_{i = 1}^n" }, { "math_id": 24, "text": "\\{\\alpha_i\\}_{i = 1}^n" }, { "math_id": 25, "text": "\\mathfrak{h}^*" }, { "math_id": 26, "text": "1 \\leq i, j \\leq n, \\alpha_i\\left(\\alpha_j^\\vee\\right) = C_{ji}" }, { "math_id": 27, "text": "\\alpha_i" }, { "math_id": 28, "text": "\\alpha_i^\\vee" }, { "math_id": 29, "text": "\\mathfrak{g} := \\mathfrak{g}(C)" }, { "math_id": 30, "text": "\\left[h, h'\\right] = 0\\ " }, { "math_id": 31, "text": "h,h' \\in \\mathfrak{h}" }, { "math_id": 32, "text": "\\left[h, e_i\\right] = \\alpha_i(h)e_i" }, { "math_id": 33, "text": "h \\in \\mathfrak{h}" }, { "math_id": 34, "text": "\\left[h, f_i\\right] = -\\alpha_i(h)f_i" }, { "math_id": 35, "text": "\\left[e_i, f_j\\right] = \\delta_{ij}\\alpha_i^\\vee " }, { "math_id": 36, "text": "x\\neq 0" }, { "math_id": 37, "text": "\\forall h\\in\\mathfrak{h}, [h, x] = \\lambda(h)x" }, { "math_id": 38, "text": "\\lambda\\in\\mathfrak{h}^*\\backslash\\{0\\}" }, { "math_id": 39, "text": "x" }, { "math_id": 40, "text": "\\lambda" }, { "math_id": 41, "text": "\\Delta" }, { "math_id": 42, "text": "R" }, { "math_id": 43, "text": "\\mathfrak{g}_\\lambda" }, { "math_id": 44, "text": "\\mathfrak{g}_\\lambda = \\{x\\in\\mathfrak{g}:\\forall h\\in\\mathfrak{h}, [h,x] = \\lambda(h)x\\}" }, { "math_id": 45, "text": "e_i\\in\\mathfrak{g}_{\\alpha_i}" }, { "math_id": 46, "text": "f_i\\in\\mathfrak{g}_{-\\alpha_i}" }, { "math_id": 47, "text": "x_1\\in\\mathfrak{g}_{\\lambda_1}" }, { "math_id": 48, "text": "x_2\\in\\mathfrak{g}_{\\lambda_2}" }, { "math_id": 49, "text": "\\left[x_1, x_2\\right]\\in\\mathfrak{g}_{\\lambda_1+\\lambda_2}" }, { "math_id": 50, "text": "\\mathfrak{g} = \\mathfrak{h}\\oplus\\bigoplus_{\\lambda\\in\\Delta} \\mathfrak{g}_\\lambda" }, { "math_id": 51, "text": "\\lambda = \\sum_{i=1}^n z_i\\alpha_i" }, { "math_id": 52, "text": "z_i" }, { "math_id": 53, "text": "\\mathfrak{g}(C) \\simeq \\mathfrak{g}\\left(C_1\\right) \\oplus \\mathfrak{g}\\left(C_2\\right)," } ]
https://en.wikipedia.org/wiki?curid=1146292
1146294
Current algebra
Infinite dimensional Lie algebra occurring in quantum field theory Certain commutation relations among the current density operators in quantum field theories define an infinite-dimensional Lie algebra called a current algebra. Mathematically these are Lie algebras consisting of smooth maps from a manifold into a finite dimensional Lie algebra. History. The original current algebra, proposed in 1964 by Murray Gell-Mann, described weak and electromagnetic currents of the strongly interacting particles, hadrons, leading to the Adler–Weisberger formula and other important physical results. The basic concept, in the era just preceding quantum chromodynamics, was that even without knowing the Lagrangian governing hadron dynamics in detail, exact kinematical information – the local symmetry – could still be encoded in an algebra of currents. The commutators involved in current algebra amount to an infinite-dimensional extension of the Jordan map, where the quantum fields represent infinite arrays of oscillators. Current algebraic techniques are still part of the shared background of particle physics when analyzing symmetries and indispensable in discussions of the Goldstone theorem. Example. In a non-Abelian Yang–Mills symmetry, where V and A are flavor-current and axial-current 0th components (charge densities), respectively, the paradigm of a current algebra is formula_0 and formula_1 where f are the structure constants of the Lie algebra. To get meaningful expressions, these must be normal ordered. The algebra resolves to a direct sum of two algebras, L and R, upon defining formula_2 whereupon formula_3 Conformal field theory. For the case where space is a one-dimensional circle, current algebras arise naturally as a central extension of the loop algebra, known as Kac–Moody algebras or, more specifically, affine Lie algebras. In this case, the commutator and normal ordering can be given a very precise mathematical definition in terms of integration contours on the complex plane, thus avoiding some of the formal divergence difficulties commonly encountered in quantum field theory. When the Killing form of the Lie algebra is contracted with the current commutator, one obtains the energy–momentum tensor of a two-dimensional conformal field theory. When this tensor is expanded as a Laurent series, the resulting algebra is called the Virasoro algebra. This calculation is known as the Sugawara construction. The general case is formalized as the vertex operator algebra. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\bigl[\\ V^a(\\vec{x}),\\ V^b(\\vec{y})\\ \\bigr] = i\\ f^{ab}_c\\ \\delta(\\vec{x}-\\vec{y})\\ V^c(\\vec{x})\\ , " }, { "math_id": 1, "text": "\n\\bigl[\\ V^a(\\vec{x}),\\ A^b(\\vec{y})\\ \\bigr] = i\\ f^{ab}_c\\ \\delta(\\vec{x} - \\vec{y})\\ A^c(\\vec{x})\\ ,\\qquad\n\\bigl[\\ A^a(\\vec{x}),\\ A^b(\\vec{y})\\ \\bigr] = i\\ f^{ab}_c\\ \\delta(\\vec{x} - \\vec{y})\\ V^c(\\vec{x}) ~," }, { "math_id": 2, "text": " L^a(\\vec{x})\\equiv \\tfrac{1}{2}\\bigl(\\ V^a(\\vec{x}) - A^a(\\vec{x})\\ \\bigr)\\ , \\qquad R^a(\\vec{x}) \\equiv \\tfrac{1}{2}\\bigl(\\ V^a(\\vec{x}) + A^a(\\vec{x})\\ \\bigr)\\ ," }, { "math_id": 3, "text": " \\bigl[\\ L^a(\\vec{x}),\\ L^b(\\vec{y})\\ \\bigr]= i\\ f^{ab}_c\\ \\delta(\\vec{x}-\\vec{y})\\ L^c(\\vec{x})\\ ,\\quad\n\\bigl[\\ L^a(\\vec{x}),\\ R^b(\\vec{y})\\ \\bigr] = 0, \\quad\n\\bigl[\\ R^a(\\vec{x}),\\ R^b(\\vec{y})\\ \\bigr] = i\\ f^{ab}_c\\ \\delta(\\vec{x}-\\vec{y})\\ R^c(\\vec{x})~.\n" } ]
https://en.wikipedia.org/wiki?curid=1146294
1146338
Bogoliubov transformation
Mathematical operation in quantum optics, general relativity and other areas of physics In theoretical physics, the Bogoliubov transformation, also known as the Bogoliubov–Valatin transformation, was independently developed in 1958 by Nikolay Bogolyubov and John George Valatin for finding solutions of BCS theory in a homogeneous system. The Bogoliubov transformation is an isomorphism of either the canonical commutation relation algebra or canonical anticommutation relation algebra. This induces an autoequivalence on the respective representations. The Bogoliubov transformation is often used to diagonalize Hamiltonians, which yields the stationary solutions of the corresponding Schrödinger equation. The Bogoliubov transformation is also important for understanding the Unruh effect, Hawking radiation, Davies-Fulling radiation (moving mirror model), pairing effects in nuclear physics, and many other topics. The Bogoliubov transformation is often used to diagonalize Hamiltonians, "with" a corresponding transformation of the state function. Operator eigenvalues calculated with the diagonalized Hamiltonian on the transformed state function thus are the same as before. Single bosonic mode example. Consider the canonical commutation relation for bosonic creation and annihilation operators in the harmonic oscillator basis formula_0 Define a new pair of operators formula_1 formula_2 for complex numbers "u" and "v", where the latter is the Hermitian conjugate of the first. The Bogoliubov transformation is the canonical transformation mapping the operators formula_3 and formula_4 to formula_5 and formula_6. To find the conditions on the constants "u" and "v" such that the transformation is canonical, the commutator is evaluated, namely, formula_7 It is then evident that formula_8 is the condition for which the transformation is canonical. Since the form of this condition is suggestive of the hyperbolic identity formula_9 the constants u and v can be readily parametrized as formula_10 formula_11 This is interpreted as a linear symplectic transformation of the phase space. By comparing to the Bloch–Messiah decomposition, the two angles formula_12 and formula_13 correspond to the orthogonal symplectic transformations (i.e., rotations) and the squeezing factor formula_14 corresponds to the diagonal transformation. Applications. The most prominent application is by Nikolai Bogoliubov himself in the context of superfluidity. Other applications comprise Hamiltonians and excitations in the theory of antiferromagnetism. When calculating quantum field theory in curved spacetimes the definition of the vacuum changes, and a Bogoliubov transformation between these different vacua is possible. This is used in the derivation of Hawking radiation. Bogoliubov transforms are also used extensively in quantum optics, particularly when working with gaussian unitaries (such as beamsplitters, phase shifters, and squeezing operations). Fermionic mode. For the anticommutation relations formula_15 the Bogoliubov transformation is constrained by formula_16. Therefore, the only non-trivial possibility is formula_17 corresponding to particle–antiparticle interchange (or particle–hole interchange in many-body systems) with the possible inclusion of a phase shift. 
Thus, for a single particle, the transformation can only be implemented (1) for a Dirac fermion, where particle and antiparticle are distinct (as opposed to a Majorana fermion or chiral fermion), or (2) for multi-fermionic systems, in which there is more than one type of fermion. Applications. The most prominent application is again by Nikolai Bogoliubov himself, this time for the BCS theory of superconductivity. The point where the necessity to perform a Bogoliubov transform becomes obvious is that in the mean-field approximation the Hamiltonian of the system can be written in both cases as a sum of bilinear terms in the original creation and destruction operators, involving finite formula_18 terms, i.e. one must go beyond the usual Hartree–Fock method. In particular, in the mean-field Bogoliubov–de Gennes Hamiltonian formalism with a superconducting pairing term such as formula_19, the Bogoliubov transformed operators formula_20 annihilate and create quasiparticles (each with well-defined energy, momentum and spin but in a quantum superposition of electron and hole state), and have coefficients formula_21 and formula_22 given by eigenvectors of the Bogoliubov–de Gennes matrix. Also in nuclear physics, this method is applicable, since it may describe the "pairing energy" of nucleons in a heavy element. Multimode example. The Hilbert space under consideration is equipped with these operators, and henceforth describes a higher-dimensional quantum harmonic oscillator (usually an infinite-dimensional one). The ground state of the corresponding Hamiltonian is annihilated by all the annihilation operators: formula_23 All excited states are obtained as linear combinations of the ground state excited by some creation operators: formula_24 One may redefine the creation and the annihilation operators by a linear redefinition: formula_25 where the coefficients formula_26 must satisfy certain rules to guarantee that the annihilation operators and the creation operators formula_27, defined by the Hermitian conjugate equation, have the same commutators for bosons and anticommutators for fermions. The equation above defines the Bogoliubov transformation of the operators. The ground state annihilated by all formula_28 is different from the original ground state formula_29, and they can be viewed as the Bogoliubov transformations of one another using the operator–state correspondence. They can also be defined as squeezed coherent states. The BCS wave function is an example of a squeezed coherent state of fermions. Unified matrix description. Because Bogoliubov transformations are linear recombinations of operators, it is more convenient and insightful to write them in terms of matrix transformations. If a pair of annihilators formula_30 transforms as formula_31 where formula_32 is a formula_33 matrix, then naturally formula_34 For fermion operators, the requirement of the commutation relations is reflected in two requirements for the form of the matrix formula_32: formula_35 and formula_36 For boson operators, the commutation relations require formula_37 and formula_38 These conditions can be written uniformly as formula_39 where formula_40 and where formula_41 applies to fermions and bosons, respectively. Diagonalizing a quadratic Hamiltonian using matrix description. Bogoliubov transformation lets us diagonalize a quadratic Hamiltonian formula_42 by just diagonalizing the matrix formula_43. In the notations above, it is important to distinguish the operator formula_44 and the numeric matrix formula_45. 
This fact can be seen by rewriting formula_44 as formula_46 and formula_47 if and only if formula_32 diagonalizes formula_43, i.e. formula_48. Useful properties of Bogoliubov transformations are listed below. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. The whole topic, and many specific applications, are treated in a number of standard textbooks.
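The canonical conditions and the unified matrix form described above are easy to verify numerically. The following sketch is our own illustration (the particular values of r, θ1, θ2 are arbitrary): it checks that the bosonic parametrization u = e^(iθ1) cosh r, v = e^(iθ2) sinh r satisfies U Γ− U† = Γ−, and that a fermionic matrix with |u|² + |v|² = 1 satisfies U Γ+ U† = Γ+.

```python
# Sketch: check the unified condition U * Gamma * U^dagger = Gamma for a
# bosonic and a fermionic Bogoliubov matrix, using the parametrizations above.

import numpy as np

r, th1, th2 = 0.7, 0.3, 1.1

# Bosonic mode: u = e^{i th1} cosh r, v = e^{i th2} sinh r, so |u|^2 - |v|^2 = 1.
u = np.exp(1j * th1) * np.cosh(r)
v = np.exp(1j * th2) * np.sinh(r)
U_b = np.array([[u, v], [v.conjugate(), u.conjugate()]])
G_minus = np.diag([1.0, -1.0])

# Fermionic mode: any u, v with |u|^2 + |v|^2 = 1 (here a rotation-like choice).
uf, vf = np.cos(r) * np.exp(1j * th1), np.sin(r) * np.exp(1j * th2)
U_f = np.array([[uf, vf], [-vf.conjugate(), uf.conjugate()]])
G_plus = np.eye(2)

print(np.allclose(U_b @ G_minus @ U_b.conj().T, G_minus))   # True  (bosons)
print(np.allclose(U_f @ G_plus @ U_f.conj().T, G_plus))     # True  (fermions)
print(np.isclose(abs(u)**2 - abs(v)**2, 1.0),               # canonical conditions
      np.isclose(abs(uf)**2 + abs(vf)**2, 1.0))
```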
[ { "math_id": 0, "text": "\\left [ \\hat{a}, \\hat{a}^\\dagger \\right ] = 1." }, { "math_id": 1, "text": "\\hat{b} = u \\hat{a} + v \\hat{a}^\\dagger," }, { "math_id": 2, "text": "\\hat{b}^\\dagger = u^* \\hat{a}^\\dagger + v^* \\hat{a}," }, { "math_id": 3, "text": "\\hat{a}" }, { "math_id": 4, "text": "\\hat{a}^\\dagger" }, { "math_id": 5, "text": "\\hat{b}" }, { "math_id": 6, "text": "\\hat{b}^\\dagger" }, { "math_id": 7, "text": "\\left [ \\hat{b}, \\hat{b}^\\dagger \\right ]\n = \\left [ u \\hat{a} + v \\hat{a}^\\dagger , u^* \\hat{a}^\\dagger + v^* \\hat{a} \\right ]\n = \\cdots = \\left ( |u|^2 - |v|^2 \\right ) \\left [ \\hat{a}, \\hat{a}^\\dagger \\right ]. " }, { "math_id": 8, "text": "|u|^2 - |v|^2 = 1" }, { "math_id": 9, "text": "\\cosh^2 x - \\sinh^2 x = 1," }, { "math_id": 10, "text": "u = e^{i \\theta_1} \\cosh r," }, { "math_id": 11, "text": "v = e^{i \\theta_2} \\sinh r." }, { "math_id": 12, "text": "\\theta_1" }, { "math_id": 13, "text": "\\theta_2" }, { "math_id": 14, "text": "r" }, { "math_id": 15, "text": "\\left\\{ \\hat{a}, \\hat{a}\\right\\} = 0, \\left\\{ \\hat{a}, \\hat{a}^\\dagger \\right\\} = 1," }, { "math_id": 16, "text": "uv=0, |u|^2+|v|^2=1" }, { "math_id": 17, "text": "u=0, |v|=1," }, { "math_id": 18, "text": "\\langle a_i^+a_j^+\\rangle" }, { "math_id": 19, "text": "\\Delta a_i^+a_j^+ + \\text{h.c.}" }, { "math_id": 20, "text": "b, b^\\dagger" }, { "math_id": 21, "text": "u" }, { "math_id": 22, "text": "v" }, { "math_id": 23, "text": "\\forall i \\qquad a_i |0\\rangle = 0." }, { "math_id": 24, "text": "\\prod_{k=1}^n a_{i_k}^\\dagger |0\\rangle." }, { "math_id": 25, "text": "a'_i = \\sum_j (u_{ij} a_j + v_{ij} a^\\dagger_j)," }, { "math_id": 26, "text": "u_{ij},v_{ij}" }, { "math_id": 27, "text": "a^{\\prime\\dagger}_i" }, { "math_id": 28, "text": "a'_i" }, { "math_id": 29, "text": "|0\\rangle" }, { "math_id": 30, "text": "(a , b)" }, { "math_id": 31, "text": "\n\\begin{pmatrix}\n\\alpha\\\\\n\\beta\n\\end{pmatrix}\n=\nU\n\\begin{pmatrix}\na\\\\\nb\n\\end{pmatrix}\n" }, { "math_id": 32, "text": "U" }, { "math_id": 33, "text": "2\\times2" }, { "math_id": 34, "text": "\n\\begin{pmatrix}\n\\alpha^\\dagger\\\\\n\\beta^\\dagger\n\\end{pmatrix}\n=\nU^*\n\\begin{pmatrix}\na^\\dagger\\\\\nb^\\dagger\n\\end{pmatrix}\n" }, { "math_id": 35, "text": "\nU=\n\\begin{pmatrix}\nu & v\\\\\n-v^* & u^*\n\\end{pmatrix}\n" }, { "math_id": 36, "text": "\n|u|^2 + |v|^2 = 1\n" }, { "math_id": 37, "text": "\nU=\n\\begin{pmatrix}\nu & v\\\\\nv^* & u^*\n\\end{pmatrix}\n" }, { "math_id": 38, "text": "\n|u|^2 - |v|^2 = 1\n" }, { "math_id": 39, "text": "\nU \\Gamma_\\pm U^\\dagger = \\Gamma_\\pm\n" }, { "math_id": 40, "text": "\n\\Gamma_\\pm = \n\\begin{pmatrix}\n1 & 0\\\\\n0 & \\pm1\n\\end{pmatrix}\n" }, { "math_id": 41, "text": "\\Gamma_\\pm" }, { "math_id": 42, "text": "\n\\hat{H} =\n\\begin{pmatrix}\na^\\dagger & b^\\dagger\n\\end{pmatrix}\nH\n\\begin{pmatrix}\na \\\\ b\n\\end{pmatrix}\n" }, { "math_id": 43, "text": "\\Gamma_\\pm H" }, { "math_id": 44, "text": "\\hat{H}" }, { "math_id": 45, "text": "H" }, { "math_id": 46, "text": "\n\\hat{H} =\n\\begin{pmatrix}\n\\alpha^\\dagger & \\beta^\\dagger\n\\end{pmatrix}\n\\Gamma_\\pm U (\\Gamma_\\pm H) U^{-1}\n\\begin{pmatrix}\n\\alpha \\\\ \\beta\n\\end{pmatrix}\n" }, { "math_id": 47, "text": "\\Gamma_\\pm U (\\Gamma_\\pm H) U^{-1}=D" }, { "math_id": 48, "text": "U (\\Gamma_\\pm H) U^{-1} = \\Gamma_\\pm D" } ]
https://en.wikipedia.org/wiki?curid=1146338
11463665
Iteratively reweighted least squares
Method for solving certain optimization problems The method of iteratively reweighted least squares (IRLS) is used to solve certain optimization problems with objective functions of the form of a "p"-norm: formula_0 by an iterative method in which each step involves solving a weighted least squares problem of the form: formula_1 IRLS is used to find the maximum likelihood estimates of a generalized linear model, and in robust regression to find an M-estimator, as a way of mitigating the influence of outliers in an otherwise normally-distributed data set, for example, by minimizing the least absolute errors rather than the least square errors. One of the advantages of IRLS over linear programming and convex programming is that it can be used with Gauss–Newton and Levenberg–Marquardt numerical algorithms. Examples. "L"1 minimization for sparse recovery. IRLS can be used for ℓ"1 minimization and smoothed ℓ"p minimization, "p" &lt; 1, in compressed sensing problems. It has been proved that the algorithm has a linear rate of convergence for "ℓ"1 norm and superlinear for "ℓ""t" with "t" &lt; 1, under the restricted isometry property, which is generally a sufficient condition for sparse solutions. "Lp" norm linear regression. To find the parameters β = ("β"1, …,"β""k")T which minimize the "Lp" norm for the linear regression problem, formula_2 the IRLS algorithm at step "t" + 1 involves solving the weighted linear least squares problem: formula_3 where "W"("t") is the diagonal matrix of weights, usually with all elements set initially to: formula_4 and updated after each iteration to: formula_5 In the case "p" = 1, this corresponds to least absolute deviation regression (in this case, the problem would be better approached by use of linear programming methods, so the result would be exact) and the formula is: formula_6 To avoid dividing by zero, regularization must be done, so in practice the formula is: formula_7 where formula_8 is some small value, like 0.0001. Note the use of formula_8 in the weighting function is equivalent to the Huber loss function in robust estimation. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
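The update rule above is short enough to state directly in code. The following sketch is our own illustration (not a library implementation; names are ours): it runs IRLS for the "Lp" regression problem with the regularized weights w_i = max(δ, |y_i − X_i β|)^(p−2), and uses p = 1 to fit a line robustly in the presence of one gross outlier.

```python
# Sketch: iteratively reweighted least squares for L_p regression,
# using the regularized weights w_i = max(delta, |y_i - X_i beta|)**(p - 2).

import numpy as np

def irls(X, y, p=1.0, delta=1e-4, n_iter=50):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]      # ordinary LS start (all w_i = 1)
    for _ in range(n_iter):
        resid = np.abs(y - X @ beta)
        w = np.maximum(resid, delta) ** (p - 2)      # for p = 1: w = 1 / max(delta, |r|)
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return beta

# Least-absolute-deviation fit (p = 1) of a line with one gross outlier.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = 2.0 + 3.0 * x + 0.01 * rng.standard_normal(30)
y[5] += 10.0                                         # outlier
X = np.column_stack([np.ones_like(x), x])
print(irls(X, y, p=1.0))                             # close to [2, 3]
```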
[ { "math_id": 0, "text": "\\mathop{\\operatorname{arg\\,min}}_{\\boldsymbol\\beta} \\sum_{i=1}^n \\big| y_i - f_i (\\boldsymbol\\beta) \\big|^p, " }, { "math_id": 1, "text": "\\boldsymbol\\beta^{(t+1)} = \\underset{\\boldsymbol\\beta} {\\operatorname{arg\\,min}} \\sum_{i=1}^n w_i (\\boldsymbol\\beta^{(t)}) \\big| y_i - f_i (\\boldsymbol\\beta) \\big|^2. " }, { "math_id": 2, "text": "\n\\underset{\\boldsymbol \\beta}{ \\operatorname{arg\\,min} }\n \\big\\| \\mathbf y - X \\boldsymbol \\beta \\|_p\n =\n\\underset{\\boldsymbol \\beta}{ \\operatorname{arg\\,min} }\n \\sum_{i=1}^n \\left| y_i - X_i \\boldsymbol\\beta \\right|^p ,\n" }, { "math_id": 3, "text": "\n\\boldsymbol\\beta^{(t+1)}\n =\n\\underset{\\boldsymbol\\beta}{ \\operatorname{arg\\,min} }\n \\sum_{i=1}^n w_i^{(t)} \\left| y_i - X_i \\boldsymbol\\beta \\right|^2\n =\n(X^{\\rm T} W^{(t)} X)^{-1} X^{\\rm T} W^{(t)} \\mathbf{y},\n" }, { "math_id": 4, "text": "w_i^{(0)} = 1" }, { "math_id": 5, "text": "w_i^{(t)} = \\big|y_i - X_i \\boldsymbol \\beta ^{(t)} \\big|^{p-2}." }, { "math_id": 6, "text": "w_i^{(t)} = \\frac{1}{\\big|y_i - X_i \\boldsymbol \\beta ^{(t)} \\big|}." }, { "math_id": 7, "text": "w_i^{(t)} = \\frac 1 {\\max\\left\\{\\delta, \\left|y_i - X_i \\boldsymbol \\beta ^{(t)} \\right|\\right\\} }." }, { "math_id": 8, "text": "\\delta" } ]
https://en.wikipedia.org/wiki?curid=11463665
11464110
Secondary measure
In mathematics, the secondary measure associated with a measure of positive density ρ, when there is one, is a measure of positive density μ, turning the secondary polynomials associated with the orthogonal polynomials for ρ into an orthogonal system. Introduction. Under certain assumptions, it is possible to obtain the existence of a secondary measure and even to express it. For example, this can be done when working in the Hilbert space "L"2([0, 1], R, ρ) formula_0 with formula_1 in the general case, or: formula_2 when ρ satisfies a Lipschitz condition. This map φ is called the reducer of ρ. More generally, μ and ρ are linked by their Stieltjes transformation with the following formula: formula_3 in which "c"1 is the moment of order 1 of the measure ρ. Secondary measures and the theory around them may be used to derive traditional formulas of analysis concerning the Gamma function, the Riemann zeta function, and the Euler–Mascheroni constant. They have also allowed the explicit evaluation of various integrals and series, although this tends to be difficult a priori. Finally they make it possible to solve integral equations of the form formula_4 where "g" is the unknown function, and lead to theorems of convergence towards the Chebyshev and Dirac measures. The broad outlines of the theory. Let ρ be a measure of positive density on an interval I, admitting moments of any order. From this, a family {"Pn"} of orthogonal polynomials for the inner product induced by ρ can be created. Let {"Qn"} be the sequence of the secondary polynomials associated with the family "P". Under certain conditions there is a measure for which the family "Q" is orthogonal. This measure, which can be made explicit in terms of ρ, is called the secondary measure associated with the initial measure ρ. When ρ is a probability density function, a sufficient condition for μ to be a secondary measure associated with ρ while admitting moments of any order is that its Stieltjes transformation is given by an equality of the type formula_5 where "a" is an arbitrary constant and "c"1 indicates the moment of order 1 of ρ. For "a" = 1, one obtains the measure known as the secondary measure. For "n" ≥ 1 the norm of the polynomial "Pn" for ρ coincides exactly with the norm of the associated secondary polynomial "Qn" when using the measure μ. In this important case, and if the space generated by the orthogonal polynomials is dense in "L"2("I", R, ρ), the operator "T"ρ defined by formula_6 creating the secondary polynomials can be extended to a linear map from "L"2("I", R, ρ) to "L"2("I", R, μ), and becomes isometric if restricted to the hyperplane "H"ρ of functions orthogonal to "P"0 = 1. For arbitrary functions square integrable for ρ, a more general covariance formula is obtained: formula_7 The theory continues by introducing the concept of reducible measure, meaning that the quotient ρ/μ is an element of "L"2("I", R, μ). The following results are then established: formula_8. formula_9 defined on the polynomials extends to an isometry "S"ρ linking the closure of the space of these polynomials in "L"2("I", R, ρ²μ⁻¹) to the hyperplane "H"ρ provided with the norm induced by ρ. Finally the two operators are also connected, provided the images in question are defined, by the fundamental formula of composition: formula_10 Case of the Lebesgue measure and some other examples. The Lebesgue measure on the standard interval [0, 1] is obtained by taking the constant density ρ("x") = 1. 
The associated orthogonal polynomials are the Legendre polynomials, which can be written explicitly as formula_11 The norm of "Pn" is formula_12 The three-term recurrence relation reads formula_13 The reducer of this Lebesgue measure is given by formula_14 The associated secondary measure is then formula_15. If the Legendre polynomials are normalized, the Fourier coefficients of the reducer φ with respect to this orthonormal system vanish for even indices and are given by formula_16 for odd index "n". The Laguerre polynomials are linked to the density ρ("x") = "e−x" on the interval "I" = [0, ∞). They can be written explicitly as formula_17 and are normalized. The associated reducer is defined by formula_18 The Fourier coefficients of the reducer φ with respect to the Laguerre polynomials are given by formula_19 The coefficient "Cn"(φ) is nothing other than the negative of the sum of the entries of the row of index "n" in the table of the harmonic triangular numbers of Leibniz. The Hermite polynomials are linked to the Gaussian density formula_20 on "I" = R. They can be written explicitly as formula_21 and are normalized. The associated reducer is defined by formula_22 The Fourier coefficients of the reducer φ with respect to the system of Hermite polynomials vanish for even indices and are given by formula_23 for odd index "n". The Chebyshev measure of the second form. This is defined by the density formula_24 on the interval [0, 1]. It is the only measure that coincides with its normalised secondary measure on this standard interval, and under certain conditions it occurs as the limit of the sequence of normalized secondary measures of a given density. Examples of non-reducible measures. The Jacobi measure on (0, 1) of density formula_25 and the Chebyshev measure on (−1, 1) of the first form of density formula_26. Sequence of secondary measures. The secondary measure μ associated with a probability density function ρ has its moment of order 0 given by the formula formula_27 where "c"1 and "c"2 denote the respective moments of order 1 and 2 of ρ. The process can be iterated by 'normalizing' μ, defining ρ1 = μ/"d"0, which in turn becomes a probability density, naturally called the normalised secondary measure associated with ρ. From ρ1 a normalised secondary measure ρ2 can be created, then ρ3 from ρ2, and so on. A sequence of successive secondary measures is thus created from ρ0 = ρ, with ρ"n"+1 the normalised secondary measure deduced from ρ"n". The density ρ"n" can be made explicit by using the orthogonal polynomials "Pn" for ρ, the secondary polynomials "Qn" and the associated reducer φ. This gives the formula formula_28 The coefficient formula_29 is easily obtained from the leading coefficients of the polynomials "P""n"−1 and "Pn". The reducer φ"n" associated with ρ"n", as well as the orthogonal polynomials corresponding to ρ"n", can also be made explicit. The evolution of these densities as the index tends to infinity can be related to the support of the measure on the standard interval [0, 1]: Let formula_30 be the classic three-term recurrence relation. If formula_31 then the sequence {ρ"n"} converges completely towards the Chebyshev density of the second form formula_32. These limit conditions are satisfied by a very broad class of traditional densities. A derivation of the sequence of secondary measures and convergence can be found in. Equinormal measures. 
Two measures leading to the same normalised secondary density are called equinormal. It is remarkable that the elements of a given class that have the same moment of order 1 are connected by a homotopy. More precisely, if the density function ρ has its moment of order 1 equal to "c"1, then the densities equinormal with ρ are given by a formula of the type formula_33 with "t" ranging over an interval containing ]0, 1]. If μ is the secondary measure of ρ, that of ρ"t" is "t"μ. The reducer of ρ"t" is formula_34 where "G"("x") denotes the reducer of μ. The orthogonal polynomials for the measure ρ"t" are given, for "n" ≥ 1, by the formula formula_35 with "Qn" the secondary polynomial associated with "Pn". It is also remarkable that, in the sense of distributions, the limit of ρ"t" as "t" tends to 0 from above is the Dirac measure concentrated at "c"1. For example, the densities equinormal with the Chebyshev measure of the second form are defined by formula_36 with "t" ranging over ]0, 2]; the value "t" = 2 gives the Chebyshev measure of the first form. Applications. In the formulas below "G" is Catalan's constant, γ is Euler's constant, β2"n" is the Bernoulli number of order 2"n", "H"2"n"+1 is the harmonic number of order 2"n"+1 and Ei is the exponential integral function. formula_37 formula_38 formula_39 Here the notation formula_40 indicates the 2-periodic function coinciding with formula_41 on (−1, 1). formula_42 formula_43 formula_44 formula_45 formula_46 formula_47 formula_48 formula_49 formula_50 If the measure ρ is reducible, with φ the associated reducer, one has the equality formula_51 If the measure ρ is reducible, with μ the associated secondary measure, then if "f" is square integrable for μ, and if "g" is square integrable for ρ and orthogonal to "P"0 = 1, the following equivalence holds: formula_52 Here "c"1 indicates the moment of order 1 of ρ and "T"ρ the operator formula_53 In addition, the sequence of secondary measures has applications in quantum mechanics, where it gives rise to the "sequence of residual spectral densities" for specialized Pauli–Fierz Hamiltonians. This also provides a physical interpretation for the sequence of secondary measures. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
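The Lebesgue example above lends itself to a quick numerical check. The following Python sketch (the evaluation point "z" and the use of SciPy quadrature are incidental choices, not part of the theory) verifies that the explicit density μ("x") = 1/(ln²("x"/(1 − "x")) + π²) satisfies the Stieltjes relation "S"μ("z") = "z" − "c"1 − 1/"S"ρ("z") for ρ = 1 on [0, 1], for which "c"1 = 1/2 and "S"ρ("z") = ln("z"/("z" − 1)).

```python
import numpy as np
from scipy.integrate import quad

def mu(x):
    # claimed secondary density of the Lebesgue measure on [0, 1]
    return 1.0 / (np.log(x / (1.0 - x)) ** 2 + np.pi ** 2)

def stieltjes(density, z):
    # S(z) = integral of density(t) / (z - t) over [0, 1], for z outside [0, 1]
    return quad(lambda t: density(t) / (z - t), 0.0, 1.0, limit=200)[0]

z = 2.5                              # any point outside [0, 1] will do
s_rho = np.log(z / (z - 1.0))        # Stieltjes transform of rho = 1 on [0, 1]
lhs = stieltjes(mu, z)               # S_mu(z) computed by direct integration
rhs = z - 0.5 - 1.0 / s_rho          # z - c_1 - 1/S_rho(z)
print(lhs, rhs)                      # the two values agree to quadrature accuracy
```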
[ { "math_id": 0, "text": " \\forall x \\in [0,1], \\qquad \\mu(x)=\\frac{\\rho(x)}{\\frac{\\varphi^2(x)}{4} + \\pi^2\\rho^2(x)}" }, { "math_id": 1, "text": " \\varphi(x) = \\lim_{\\varepsilon \\to 0^+} 2\\int_0^1\\frac{(x-t)\\rho(t)}{(x-t)^2+\\varepsilon^2} \\, dt " }, { "math_id": 2, "text": " \\varphi(x) = 2\\rho(x)\\text{ln}\\left(\\frac{x}{1-x}\\right) - 2 \\int_0^1\\frac{\\rho(t)-\\rho(x)}{t-x} \\, dt" }, { "math_id": 3, "text": "S_{\\mu}(z)=z-c_1-\\frac{1}{S_{\\rho}(z)}" }, { "math_id": 4, "text": "f(x)=\\int_0^1\\frac{g(t)-g(x)}{t-x}\\rho(t)\\,dt" }, { "math_id": 5, "text": "S_{\\mu}(z)=a\\left(z-c_1-\\frac{1}{S_{\\rho}(z)}\\right)," }, { "math_id": 6, "text": "f(x) \\mapsto \\int_I \\frac{f(t)-f(x)}{t-x}\\rho (t)dt" }, { "math_id": 7, "text": " \\langle f/g \\rangle_\\rho - \\langle f/1 \\rangle_\\rho\\times \\langle g/1\\rangle_\\rho = \\langle T_\\rho(f)/T_\\rho (g) \\rangle_\\mu." }, { "math_id": 8, "text": " \\langle f/\\varphi \\rangle_\\rho = \\langle T_\\rho (f)/1 \\rangle_\\rho" }, { "math_id": 9, "text": "f\\mapsto \\varphi\\times f -T_\\rho (f)" }, { "math_id": 10, "text": "T_\\rho\\circ S_\\rho \\left( f\\right)=\\frac{\\rho}{\\mu}\\times (f)." }, { "math_id": 11, "text": "P_n(x)=\\frac{d^n}{dx^n}\\left(x^n(1-x)^n\\right)." }, { "math_id": 12, "text": "\\frac{n!}{\\sqrt{2n+1}}." }, { "math_id": 13, "text": "2(2n+1)XP_n(X)=-P_{n+1}(X)+(2n+1)P_n(X)-n^2P_{n-1}(X)." }, { "math_id": 14, "text": "\\varphi(x)=2\\ln\\left(\\frac{x}{1-x}\\right)." }, { "math_id": 15, "text": "\\mu(x)=\\frac{1}{\\ln^2\\left(\\frac{x}{1-x}\\right)+\\pi^2}" }, { "math_id": 16, "text": "C_n(\\varphi)=-\\frac{4\\sqrt{2n+1}}{n(n+1)}" }, { "math_id": 17, "text": "L_n(x)=\\frac{e^x}{n!}\\frac{d^n}{dx^n}(x^ne^{-x})=\\sum_{k=0}^{n}\\binom{n}{k}(-1)^k\\frac{x^k}{k!}" }, { "math_id": 18, "text": "\\varphi(x)=2\\left (\\ln(x)-\\int_0^{\\infty}e^{-t}\\ln|x-t|dt\\right )." }, { "math_id": 19, "text": "C_n(\\varphi)=-\\frac{1}{n}\\sum_{k=0}^{n-1}\\frac{1}{\\binom{n-1}{k}}." }, { "math_id": 20, "text": "\\rho(x)=\\frac{e^{-\\frac{x^2}{2}}}{\\sqrt{2\\pi}}" }, { "math_id": 21, "text": "H_n(x)=\\frac{1}{\\sqrt{n!}}e^{\\frac{x^2}{2}}\\frac{d^n}{dx^n}\\left(e^{-\\frac{x^2}{2}}\\right)" }, { "math_id": 22, "text": "\\varphi(x)=-\\frac{2}{\\sqrt{2\\pi}}\\int_{-\\infty}^{\\infty}te^{-\\frac{t^2}{2}}\\ln|x-t|\\,dt." }, { "math_id": 23, "text": "C_n(\\varphi)=(-1)^{\\frac{n+1}{2}}\\frac{\\left(\\frac{n-1}{2}\\right)!}{\\sqrt{n!}}" }, { "math_id": 24, "text": "\\rho(x)=\\frac{8}{\\pi}\\sqrt{x(1-x)}" }, { "math_id": 25, "text": "\\rho(x)=\\frac{2}{\\pi}\\sqrt{\\frac{1-x}{x}}." }, { "math_id": 26, "text": "\\rho(x)=\\frac{1}{\\pi\\sqrt{1-x^2}}." }, { "math_id": 27, "text": "d_0 = c_2 -c_1^2, " }, { "math_id": 28, "text": "\\rho_n(x)=\\frac{1}{d_0^{n-1}} \\frac{\\rho(x)}{\\left(P_{n-1}(x) \\frac{\\varphi(x)}{2}-Q_{n-1}(x)\\right)^2 + \\pi^2\\rho^2(x) P_{n-1}^2(x)}." 
}, { "math_id": 29, "text": "d_0^{n-1}" }, { "math_id": 30, "text": "xP_n (x)=t_nP_{n+1}(x)+s_nP_n(x)+t_{n-1}P_{n-1}(x)" }, { "math_id": 31, "text": "\\lim_{n \\mapsto \\infty} t_n=\\tfrac{1}{4}, \\quad \\lim_{n \\mapsto \\infty} s_n =\\tfrac{1}{2}," }, { "math_id": 32, "text": "\\rho_{tch}(x)=\\frac{8}{\\pi}\\sqrt{x(1-x)}" }, { "math_id": 33, "text": "\\rho_{t}(x)=\\frac{t\\rho(x)}{\\left (\\tfrac{1}{2}(t-1)(x-c_1)\\varphi(x)-t\\right )^2+\\pi^2\\rho^2(x)(t-1)^2(x-c_1)^2}," }, { "math_id": 34, "text": "\\varphi_t(x)=\\frac{2 (x-c_1)-tG(x)}{\\left((x-c_1)-t\\tfrac{1}{2}G(x) \\right)^2+t^2\\pi^2\\mu^2(x)}" }, { "math_id": 35, "text": "P_n^t(x)=\\frac{tP_n(x)+(1-t)(x-c_1)Q_n(x)}{\\sqrt{t}}" }, { "math_id": 36, "text": "\\rho_t(x)=\\frac{2t\\sqrt{1-x^2}}{\\pi\\left[t^2+4(1-t)x^2\\right]}," }, { "math_id": 37, "text": "\\frac{1}{\\ln(p)} = \\frac{1}{p-1}+\\int_0^{\\infty}\\frac{1}{(x+p)(\\ln^2(x)+\\pi^2)} dx \\qquad \\qquad \\forall p > 1" }, { "math_id": 38, "text": "\\gamma = \\int_0^{\\infty}\\frac{\\ln(1+\\frac{1}{x})}{\\ln^2(x)+\\pi^2} dx" }, { "math_id": 39, "text": "\\gamma = \\frac{1}{2}+\\int_0^{\\infty} \\frac{\\overline {(x+1)\\cos(\\pi x)}}{x+1} dx" }, { "math_id": 40, "text": "x\\mapsto \\overline {(x+1)\\cos(\\pi x)}" }, { "math_id": 41, "text": "x\\mapsto (x+1) \\cos(\\pi x)" }, { "math_id": 42, "text": "\\gamma = \\frac{1}{2} + \\sum_{k=1}^{n} \\frac{\\beta_{2k}}{2k} - \\frac{\\beta_{2n}}{\\zeta(2n)}\\int_1^{\\infty} \\lfloor t \\rfloor \\cos(2\\pi t) t^{-2n-1} dt" }, { "math_id": 43, "text": "\\beta_k = \\frac{(-1)^kk!}{\\pi} \\text{Im} \\left(\\int_{-\\infty}^{\\infty} \\frac{e^x}{(1+e^x)(x-i\\pi)^k} dx \\right)" }, { "math_id": 44, "text": "\\int_0^1\\ln^{2n}\\left(\\frac{x}{1-x}\\right)\\,dx = (-1)^{n+1}(2^{2n}-2)\\beta_{2n}\\pi^{2n}" }, { "math_id": 45, "text": "\\int_0^1\\cdots \\int_0^1 \\left(\\sum_{k=1}^{2n} \\frac{\\ln(t_k)}{\\prod_{i \\neq k}(t_k-t_i)}\\right)\\, dt_1 \\cdots dt_{2n}=\\tfrac{1}{2}(-1)^{n+1}(2\\pi)^{2n}\\beta_{2n}" }, { "math_id": 46, "text": "\\int_0^{\\infty}\\frac{e^{-\\alpha x}}{\\Gamma(x+1)}dx = e^{e^{-\\alpha}}-1+\\int_0^{\\infty} \\frac{1-e^{-x}}{(\\ln(x)+\\alpha)^2+\\pi^2}\\frac{dx}{x} \\qquad \\qquad \\forall \\alpha \\in \\mathbf{R}" }, { "math_id": 47, "text": "\\sum_{n=1}^{\\infty} \\left(\\frac{1}{n}\\sum_{k=0}^{n-1} \\frac{1}{\\binom{n-1}{k}}\\right)^2=\\tfrac{4}{9}\\pi^2=\\int_0^{\\infty}4 \\left (\\mathrm {Ei} (1,-x)+i\\pi \\right )^2 e^{-3x} \\, dx." }, { "math_id": 48, "text": "\\frac{23}{15}-\\ln(2) = \\sum_{n=0}^{\\infty} \\frac{1575}{2(n+1)(2n+1)(4n-3)(4n-1)(4n+1)(4n+5)(4n+7)(4n+9)}" }, { "math_id": 49, "text": "G= \\sum_{k=0}^{\\infty} \\frac{(-1)^k}{4^{k+1}} \\left(\\frac{1}{(4k+3)^2}+\\frac{2}{(4k+2)^2}+\\frac{2}{(4k+1)^2}\\right)+\\frac{\\pi}{8}\\ln(2)" }, { "math_id": 50, "text": "G= \\frac{\\pi}{8}\\ln(2)+\\sum_{n=0}^{\\infty}(-1)^n\\frac{H_{2n+1}}{2n+1}." }, { "math_id": 51, "text": "\\int_I\\varphi^2(x)\\rho(x) \\, dx = \\frac{4\\pi^2}{3}\\int_I\\rho^3(x) \\, dx." }, { "math_id": 52, "text": "f(x)=\\int_I\\frac{g(t)-g(x)}{t-x}\\rho(t)dt \\Leftrightarrow g(x)=(x-c_1)f(x)-T_{\\mu}(f(x))=\\frac{\\varphi(x)\\mu(x)}{\\rho(x)}f(x)-T_{\\rho} \\left(\\frac{\\mu(x)}{\\rho(x)}f(x)\\right)" }, { "math_id": 53, "text": "g(x)\\mapsto \\int_I\\frac{g(t)-g(x)}{t-x}\\rho(t)\\,dt." } ]
https://en.wikipedia.org/wiki?curid=11464110
1146433
Cartan matrix
Matrices named after Élie Cartan In mathematics, the term Cartan matrix has three meanings. All of these are named after the French mathematician Élie Cartan. Amusingly, the Cartan matrices in the context of Lie algebras were first investigated by Wilhelm Killing, whereas the Killing form is due to Cartan. Lie algebras. A (symmetrizable) generalized Cartan matrix is a square matrix formula_0 with integer entries such that: (1) for diagonal entries, formula_1; (2) for non-diagonal entries, formula_2; (3) formula_3 if and only if formula_4; (4) formula_5 can be written as formula_6, where formula_7 is a diagonal matrix and formula_8 is a symmetric matrix. For example, the Cartan matrix for "G"2 can be decomposed as such: formula_9 The third condition is not independent but is really a consequence of the first and fourth conditions. We can always choose a "D" with positive diagonal entries. In that case, if "S" in the above decomposition is positive definite, then "A" is said to be a Cartan matrix. The Cartan matrix of a simple Lie algebra is the matrix whose elements are the scalar products formula_10 (sometimes called the Cartan integers), where "ri" are the simple roots of the algebra. The entries are integral from one of the properties of roots. The first condition follows from the definition, the second from the fact that for formula_11 is a root which is a linear combination of the simple roots "ri" and "rj" with a positive coefficient for "rj", and so the coefficient for "ri" has to be nonnegative. The third is true because orthogonality is a symmetric relation. And lastly, let formula_12 and formula_13. Because the simple roots span a Euclidean space, S is positive definite. Conversely, given a generalized Cartan matrix, one can recover its corresponding Lie algebra. (See Kac–Moody algebra for more details). Classification. An formula_14 matrix "A" is decomposable if there exists a nonempty proper subset formula_15 such that formula_3 whenever formula_16 and formula_17. "A" is indecomposable if it is not decomposable. Let "A" be an indecomposable generalized Cartan matrix. We say that "A" is of finite type if all of its principal minors are positive, that "A" is of affine type if its proper principal minors are positive and "A" has determinant 0, and that "A" is of indefinite type otherwise. Finite type indecomposable matrices classify the finite dimensional simple Lie algebras (of types formula_18), while affine type indecomposable matrices classify the affine Lie algebras (say over some algebraically closed field of characteristic 0). Determinants of the Cartan matrices of the simple Lie algebras. The determinants of the Cartan matrices of the simple Lie algebras are det("A""n") = "n" + 1, det("B""n") = det("C""n") = 2, det("D""n") = 4, det("E""n") = 9 − "n" (that is, 3, 2 and 1 for "E"6, "E"7 and "E"8), det("F"4) = 1 and det("G"2) = 1, consistent with the low-rank coincidences A1=B1=C1, B2=C2, D3=A3, D2=A1A1, E5=D5, E4=A4, and E3=A2A1. Another property of this determinant is that it is equal to the index of the associated root system, i.e. it is equal to formula_19 where P, Q denote the weight lattice and root lattice, respectively. Representations of finite-dimensional algebras. In modular representation theory, and more generally in the theory of representations of finite-dimensional associative algebras "A" that are "not" semisimple, a Cartan matrix is defined by considering a (finite) set of principal indecomposable modules and writing composition series for them in terms of irreducible modules, yielding a matrix of integers counting the number of occurrences of an irreducible module. Cartan matrices in M-theory. In M-theory, one may consider a geometry with two-cycles which intersect each other at a finite number of points, in the limit where the area of the two-cycles goes to zero. At this limit, there appears a local symmetry group. 
The matrix of intersection numbers of a basis of the two-cycles is conjectured to be the Cartan matrix of the Lie algebra of this local symmetry group. This can be explained as follows. In M-theory one has solitons which are two-dimensional surfaces called "membranes" or "2-branes". A 2-brane has a tension and thus tends to shrink, but it may wrap around a two-cycle, which prevents it from shrinking to zero. One may compactify one dimension which is shared by all two-cycles and their intersection points, and then take the limit where this dimension shrinks to zero, thus getting a dimensional reduction over this dimension. Then one gets type IIA string theory as a limit of M-theory, with 2-branes wrapping two-cycles now described by open strings stretched between D-branes. There is a U(1) local symmetry group for each D-brane, resembling the degree of freedom of moving it without changing its orientation. The limit where the two-cycles have zero area is the limit where these D-branes are on top of each other, so that one gets an enhanced local symmetry group. Now, an open string stretched between two D-branes represents a Lie algebra generator, and the commutator of two such generators is a third one, represented by an open string which one gets by gluing together the edges of two open strings. The latter relation between different open strings is dependent on the way 2-branes may intersect in the original M-theory, i.e. on the intersection numbers of two-cycles. Thus the Lie algebra depends entirely on these intersection numbers. The precise relation to the Cartan matrix arises because the latter describes the commutators of the simple roots, which are related to the two-cycles in the basis that is chosen. Generators in the Cartan subalgebra are represented by open strings which are stretched between a D-brane and itself. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
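The "G"2 example above can be checked directly; the following Python sketch (using NumPy, with the matrices copied from the decomposition quoted earlier) verifies the decomposition "A" = "DS", the positive definiteness of "S", and the determinant value listed for "G"2.

```python
import numpy as np

A = np.array([[2.0, -3.0],
              [-1.0, 2.0]])            # Cartan matrix of G2

D = np.diag([3.0, 1.0])                # diagonal factor with positive entries
S = np.array([[2.0 / 3.0, -1.0],
              [-1.0, 2.0]])            # symmetric factor

assert np.allclose(A, D @ S)                   # A = DS
assert np.all(np.linalg.eigvalsh(S) > 0)       # S is positive definite, so A is of finite type
print(round(np.linalg.det(A)))                 # 1, the determinant of the G2 Cartan matrix
```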
[ { "math_id": 0, "text": "A = (a_{ij})" }, { "math_id": 1, "text": "a_{ii} = 2 " }, { "math_id": 2, "text": "a_{ij} \\leq 0 " }, { "math_id": 3, "text": "a_{ij} = 0" }, { "math_id": 4, "text": "a_{ji} = 0" }, { "math_id": 5, "text": "A" }, { "math_id": 6, "text": "DS" }, { "math_id": 7, "text": "D" }, { "math_id": 8, "text": "S" }, { "math_id": 9, "text": "\n \\begin{bmatrix}\n 2 & -3 \\\\\n -1 & 2\n \\end{bmatrix} =\n \\begin{bmatrix}\n 3&0\\\\\n 0&1\n \\end{bmatrix}\\begin{bmatrix}\n \\frac{2}{3} & -1 \\\\\n -1 & 2\n \\end{bmatrix}.\n" }, { "math_id": 10, "text": "a_{ji}=2 {(r_i,r_j)\\over (r_j,r_j)}" }, { "math_id": 11, "text": "i\\neq j, r_j-{2(r_i,r_j)\\over (r_i,r_i)}r_i" }, { "math_id": 12, "text": "D_{ij}={\\delta_{ij}\\over (r_i,r_i)}" }, { "math_id": 13, "text": "S_{ij}=2(r_i,r_j)" }, { "math_id": 14, "text": "n \\times n" }, { "math_id": 15, "text": "I \\subset \\{1,\\dots,n\\}" }, { "math_id": 16, "text": "i \\in I" }, { "math_id": 17, "text": "j \\notin I" }, { "math_id": 18, "text": "A_n, B_n, C_n, D_n, E_6, E_7, E_8, F_4, G_2 " }, { "math_id": 19, "text": "|P/Q| " } ]
https://en.wikipedia.org/wiki?curid=1146433
11465576
Minimax eversion
In geometry, minimax eversions are a class of sphere eversions, constructed by using half-way models. The construction is variational: it consists of special homotopies (shortest paths with respect to Willmore energy); contrast with Thurston's corrugations, which are generic. The original method of half-way models was not optimal: the regular homotopies passed through the midway models, but the path from the round sphere to the midway model was constructed by hand, and was not gradient ascent/descent. Eversions via half-way models are called "tobacco-pouch eversions" by Francis and Morin. Half-way models. A half-way model is an immersion of the sphere formula_0 in formula_1, which is so-called because it is the half-way point of a sphere eversion. This class of eversions has time symmetry: the first half of the regular homotopy goes from the standard round sphere to the half-way model, and the second half (which goes from the half-way model to the inside-out sphere) is the same process in reverse. Explanation. Rob Kusner proposed optimal eversions using the Willmore energy on the space of all immersions of the sphere formula_0 in formula_2. The round sphere and the inside-out round sphere are the unique global minima for Willmore energy, and a minimax eversion is a path connecting these by passing over a saddle point (like traveling between two valleys via a mountain pass). Kusner's half-way models are saddle points for Willmore energy, arising (according to a theorem of Bryant) from certain complete minimal surfaces in 3-space; the minimax eversions consist of gradient ascent from the round sphere to the half-way model, then gradient descent down (gradient descent for Willmore energy is called Willmore flow). More symmetrically, start at the half-way model; push in one direction and follow Willmore flow down to a round sphere; push in the opposite direction and follow Willmore flow down to the inside-out round sphere. There are two families of half-way models (this observation is due to Francis and Morin): one modelled on Boy's surface and its generalizations, the other on the Morin surface and its generalizations. History. The first explicit sphere eversion was by Shapiro and Phillips in the early 1960s, using Boy's surface as a half-way model. Later Morin discovered the Morin surface and used it to construct other sphere eversions. Kusner conceived the minimax eversions in the early 1980s. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "S^2" }, { "math_id": 1, "text": "\\R^3" }, { "math_id": 2, "text": "\\mathbf{R}^3" } ]
https://en.wikipedia.org/wiki?curid=11465576
11465932
Lifting-line theory
Mathematical model to quantify lift The Lanchester–Prandtl lifting-line theory is a mathematical model in aerodynamics that predicts lift distribution over a three-dimensional wing from the wing's geometry. The theory was expressed independently by Frederick W. Lanchester in 1907, and by Ludwig Prandtl in 1918–1919 after working with Albert Betz and Max Munk. In this model, the vortex bound to the wing develops along the whole wingspan because it is shed as a vortex sheet from the trailing edge, rather than just as a single vortex from the wing-tips. Introduction. It is difficult to predict analytically the overall amount of lift that a wing of given geometry will generate. When analyzing a three-dimensional finite wing, a traditional approach slices the wing into cross-sections and analyzes each cross-section independently as a wing in a two-dimensional world. Each of these slices is called an airfoil, and it is easier to understand an airfoil than a complete three-dimensional wing. One might expect that understanding the full wing simply involves adding up the independently calculated forces from each airfoil segment. However, this approximation is grossly incorrect: on a real wing, the lift from each infinitesimal wing section is strongly affected by the airflow over neighboring wing sections. Lifting-line theory corrects some of the errors in the naive two-dimensional approach by including some interactions between the wing slices. Principle and derivation. Lifting-line theory supposes wings that are long and thin with negligible fuselage, akin to a thin bar (the eponymous "lifting line") of span 2"s" driven through the fluid. From the Kutta–Joukowski theorem, the lift "L"("y") on a 2-dimensional segment of the wing at distance y from the fuselage is proportional to the circulation Γ("y") about the bar at y. When the aircraft is stationary on the ground, these circulations are all equal, but when the craft is in motion, they vary with y. By Helmholtz's theorems, the generation of spatially-varying circulation must correspond to shedding an equal-strength vortex filament downstream from the wing. In the lifting-line theory, the resulting vortex line is presumed to remain bound to the wing, so that it changes the effective vertical angle of the incoming freestream air. The vertical motion induced by a vortex line of strength γ on air a distance r away is &lt;templatestyles src="Fraction/styles.css" /&gt;γ⁄4π"r", so that the entire vortex system induces a freestream vertical motion at position y of formula_0 where the integral is understood in the sense of a Cauchy principal value. This flow changes the effective angle of attack at y; if the circulation response of the airfoils comprising the wing is understood over a range of attack angles, then one can develop an integral equation to determine Γ("y"). Formally, there is some angle of orientation such that the airfoil at position y develops no lift. For airstreams of velocity V oriented at an angle α relative to the liftless angle, the airfoil will develop some circulation "V"⋅"C"("y",α); for small α, Taylor expansion approximates that circulation as "V"⋅&lt;templatestyles src="Fraction/styles.css" /&gt;∂"C"⁄∂α("y",0)⋅α. If the airfoil is ideal and has chord "c"("y"), then theory predicts that formula_1 but real airfoils may be less efficient. Suppose the freestream flow attacks the airfoil at position y at angle α("y") (relative to the liftless angle for the airfoil at position y — thus a uniform flow across a wing may still have varying α("y")). 
By the small-angle approximation, the effective angle of attack at y of the combined freestream and vortex system is α("y")+&lt;templatestyles src="Fraction/styles.css" /&gt;"w"("y")⁄"V". Combining the above formulae gives the integral equation Γ("y") = ∂α"C"("y",0)⋅("V"⋅α("y") + "w"("y")), referred to below as equation (1). All the quantities in this equation except V and Γ are geometric properties of the wing, and so an engineer can (in principle) solve for Γ("y") given a fixed V. As in the derivation of thin-airfoil theory, a common approach is to expand Γ as a Fourier series along the wing, and then keep only the first few terms. Once the velocity V, circulation Γ, and fluid density ρ are known, the lift generated by the wing is assumed to be the net lift produced by each airfoil with the prescribed circulation: formula_2 The drag is likewise the total across airfoils: formula_3 From these quantities and the aspect ratio AR, the span efficiency factor formula_4 may be computed. Effects of control inputs. Control surface deflection changes the shape of each airfoil slice, which can produce a different angle-of-no-lift for that airfoil, as well as a different angle-of-attack response. These do not require substantial modification to the theory, only changing ∂α"C"("y",0) and α("y") in (1). However, a body with rapidly moving wings, such as a rolling aircraft or flapping bird, experiences a vertical flow across the wing due to the wing's change in orientation, which appears as a missing term in the theory. Rolling wings. When the aircraft is rolling at rate p about the fuselage, an airfoil at (signed) position y experiences a vertical airflow at rate "py", which correspondingly adds to the effective angle of attack. Thus (1) becomes: formula_5 which correspondingly modifies both the lift and the induced drag. This "drag force" comprises the main production of thrust for flapping wings. Elliptical wings. The efficiency e is theoretically optimized in an elliptical wing with no twist, in which formula_6 where θ is an alternate parameterization of station along the wing. For such a wing, formula_7 which yields the equation for the elliptic induced drag coefficient: formula_8 According to lifting-line theory, any wing planform can achieve the same efficiency through twist (a position-varying increase in pitch) relative to the fuselage. Useful approximations. A useful approximation for the 3D lift coefficient for elliptical circulation distribution is formula_9 Note that this equation becomes the thin airfoil equation if "AR" goes to infinity. Limitations. The lifting-line theory does not take into account compressibility of the air, viscous flow within the fuselage's boundary layer, or wing shapes other than long, straight and thin ones, such as swept or low–aspect-ratio wings. The theory also presupposes that flow around the wings is in equilibrium, and does not address bodies that are quickly accelerated relative to the freestream air.
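As a numerical illustration of the elliptic-wing formulas above, the following Python sketch evaluates the approximate 3D lift coefficient and the elliptic induced drag coefficient; the aspect ratio, angle of attack and the ideal thin-airfoil lift slope 2π used here are arbitrary illustrative choices rather than data from any particular wing.

```python
import numpy as np

AR = 8.0                        # aspect ratio (illustrative)
a0 = 2.0 * np.pi                # ideal 2D lift-curve slope per radian, from C = 2*pi*c*sin(alpha)
alpha = np.radians(5.0)         # angle of attack measured from the zero-lift angle

CL = a0 * alpha / (1.0 + 2.0 / AR)      # approximate 3D lift coefficient for elliptic loading
CDi = CL ** 2 / (np.pi * AR)            # induced drag coefficient with span efficiency e = 1

print(f"CL  = {CL:.3f}")                # about 0.439
print(f"CDi = {CDi:.4f}")               # about 0.0077
```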
[ { "math_id": 0, "text": " w(y)=\\int_{-s}^s{\\frac{d\\Gamma(\\tilde{y})}{4\\pi(y-\\tilde{y})}}," }, { "math_id": 1, "text": " C(y,\\alpha)=2\\pi c(y)\\sin(\\alpha)\\text{,}" }, { "math_id": 2, "text": " L=\\rho V\\int_{-s}^s{\\Gamma(y)\\,dy} " }, { "math_id": 3, "text": "D=\\rho V\\int_{-s}^s{\\Gamma(y)\\alpha(y)\\,dy}\\text{.}" }, { "math_id": 4, "text": " e=\\frac{1}{\\pi\\mathrm{A\\!R}}\\cdot\\frac{L^2}{\\rho VD}" }, { "math_id": 5, "text": "\\Gamma(y)=\\partial_\\alpha C(y,0)\\left(V\\alpha(y)+py+\\frac{1}{4\\pi}\\int_{-s}^s{\\frac{\\Gamma'(\\tilde{y})\\,dy}{y-\\tilde{y}}}\\right)\\text{,}" }, { "math_id": 6, "text": "\\begin{align}\ny&=s\\cos\\theta \\\\\nc(y)&=\\mathrm{A\\!R}\\,s\\sin\\theta\\text{,} \n\\end{align}" }, { "math_id": 7, "text": "e = 1\\text{,}" }, { "math_id": 8, "text": " C_{D_{induced}} = \\frac{C_L^2}{\\pi A\\!R} " }, { "math_id": 9, "text": " \\ C_{L} = \\partial_\\alpha C(0,0)\\left(1+\\frac{2}{\\mathrm{A\\!R}}\\right)^{-1}\\alpha " } ]
https://en.wikipedia.org/wiki?curid=11465932
1146707
Whitney immersion theorem
On immersions of smooth m-dimensional manifolds in 2m-space and (2m-1)-space In differential topology, the Whitney immersion theorem (named after Hassler Whitney) states that for formula_0, any smooth formula_1-dimensional manifold (required also to be Hausdorff and second-countable) has a one-to-one immersion in Euclidean formula_2-space, and a (not necessarily one-to-one) immersion in formula_3-space. Similarly, every smooth formula_1-dimensional manifold can be immersed in the formula_4-dimensional sphere (this removes the formula_0 constraint). The weak version, for formula_5, is due to transversality (general position, dimension counting): two "m"-dimensional manifolds in formula_6 intersect generically in a 0-dimensional space. Further results. William S. Massey went on to prove that every "n"-dimensional manifold is cobordant to a manifold that immerses in formula_7, where formula_8 is the number of 1's that appear in the binary expansion of formula_9. In the same paper, Massey proved that for every "n" there is a manifold (which happens to be a product of real projective spaces) that does not immerse in formula_10. The conjecture that every "n"-manifold immerses in formula_7 became known as the immersion conjecture. This conjecture was eventually solved in the affirmative by Ralph Cohen (1985).
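Massey's bound is easy to tabulate; the following Python sketch (the function name is chosen here only for illustration) computes 2"n" − "a"("n") for small "n", where "a"("n") counts the 1's in the binary expansion of "n".

```python
def massey_dimension(n: int) -> int:
    """Return 2n - a(n), where a(n) is the number of 1's in the binary expansion of n."""
    a = bin(n).count("1")
    return 2 * n - a

print([massey_dimension(n) for n in range(1, 9)])   # [1, 3, 4, 7, 8, 10, 11, 15]
```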
[ { "math_id": 0, "text": "m>1" }, { "math_id": 1, "text": "m" }, { "math_id": 2, "text": "2m" }, { "math_id": 3, "text": "(2m-1)" }, { "math_id": 4, "text": "2m-1" }, { "math_id": 5, "text": "2m+1" }, { "math_id": 6, "text": "\\mathbf{R}^{2m}" }, { "math_id": 7, "text": "S^{2n-a(n)}" }, { "math_id": 8, "text": "a(n)" }, { "math_id": 9, "text": "n" }, { "math_id": 10, "text": "S^{2n-1-a(n)}" } ]
https://en.wikipedia.org/wiki?curid=1146707
1147024
Buddhabrot
Probability distribution over the trajectories of points that escape the Mandelbrot fractal The Buddhabrot is the probability distribution over the trajectories of points that escape the Mandelbrot fractal. Its name reflects its pareidolic resemblance to classical depictions of Gautama Buddha, seated in a meditation pose with a forehead mark ("tikka"), a traditional oval crown ("ushnisha"), and a ringlet of hair. Discovery. The "Buddhabrot" rendering technique was discovered by Melinda Green, who later described it in a 1993 Usenet post to sci.fractals. Previous researchers had come very close to finding the precise Buddhabrot technique. In 1988, Linas Vepstas relayed similar images to Cliff Pickover for inclusion in Pickover's then-forthcoming book "Computers, Pattern, Chaos, and Beauty". This led directly to the discovery of Pickover stalks. Noel Griffin also implemented this idea in the 1993 "Mandelcloud" option in the Fractint renderer. However, these researchers did not filter out the non-escaping trajectories, which is required to produce the ghostly forms reminiscent of Hindu art. The inverse "Anti-Buddhabrot" filter, which keeps only the non-escaping trajectories, produces images similar to applying no filter at all. Green first named this pattern Ganesh, since an Indian co-worker "instantly recognized it as the god 'Ganesha' which is the one with the head of an elephant." The name "Buddhabrot" was coined later by Lori Gardi. Rendering method. Mathematically, the Mandelbrot set consists of the set of points formula_0 in the complex plane for which the iteratively defined sequence formula_1 does not tend to infinity as formula_2 goes to infinity for formula_3. The "Buddhabrot" image can be constructed by first creating a 2-dimensional array of boxes, each corresponding to a final pixel in the image. Each box formula_4 for formula_5 and formula_6 has size in complex coordinates of formula_7 and formula_8, where formula_9 and formula_10 for an image of width formula_11 and height formula_12. For each box, a corresponding counter is initialized to zero. Next, a random sampling of formula_0 points is iterated through the Mandelbrot function. For points which do escape within a chosen maximum number of iterations, and therefore are "not" in the Mandelbrot set, the counter for each box entered during the escape to infinity is incremented by 1. In other words, for each sequence corresponding to formula_0 that escapes, for each point formula_13 during the escape, the box that formula_14 lies within is incremented by 1. Points which do not escape within the maximum number of iterations (and are considered to be in the Mandelbrot set) are discarded. After a large number of formula_0 values have been iterated, grayscale shades are then chosen based on the distribution of values recorded in the array. The result is a density plot highlighting regions where formula_13 values spend the most time on their way to infinity. Nuances. Rendering "Buddhabrot" images is typically more computationally intensive than standard Mandelbrot rendering techniques. This is partly due to requiring more random points to be iterated than pixels in the image in order to build up a sharp image. Rendering highly zoomed areas requires even more computation than for standard Mandelbrot images in which a given pixel can be computed directly regardless of zoom level. Conversely, a pixel in a zoomed region of a Buddhabrot image can be affected by initial points from regions far outside the one being rendered. 
Without resorting to more complex probabilistic techniques, rendering zoomed portions of "Buddhabrot" consists of merely cropping a large full-sized rendering. The maximum number of iterations chosen affects the image – higher values give a sparser, more detailed appearance, as a few of the points pass through a large number of pixels before they escape, resulting in their paths being more prominent. If a lower maximum were used, these points would not escape in time and would be regarded as not escaping at all. The number of samples chosen also affects the image: not only do higher sample counts reduce the noise of the image, but they can also reduce the visibility of slowly moving points and small attractors, which can show up as visible streaks in a rendering with a lower sample count. Some of these streaks are visible in renderings with very high iteration counts, such as a 1,000,000-iteration image. Green later realized that this provided a natural way to create color Buddhabrot images by taking three such grayscale images, differing only by the maximum number of iterations used, and combining them into a single color image using the same method used by astronomers to create false color images of nebulae and other celestial objects. For example, one could assign a 2,000 max iteration image to the red channel, a 200 max iteration image to the green channel, and a 20 max iteration image to the blue channel of an image in an RGB color space. Some have labelled Buddhabrot images using this technique "Nebulabrots". 
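The rendering method described above can be sketched in a few lines of Python; the grid resolution, sample count, iteration cap and the plotted region [−2, 2] × [−2, 2] below are arbitrary illustrative choices, and the pure-Python loop is unoptimized compared with a production renderer.

```python
import numpy as np

W = H = 200                    # output resolution
MAX_ITER = 500                 # escape cutoff; larger values give a sparser, more detailed image
SAMPLES = 100_000              # number of random c values to try
rng = np.random.default_rng(0)

counts = np.zeros((H, W), dtype=np.uint32)

def to_pixel(z):
    # map the region [-2, 2] x [-2, 2] of the complex plane onto the grid
    col = int((z.real + 2.0) / 4.0 * W)
    row = int((z.imag + 2.0) / 4.0 * H)
    return (row, col) if 0 <= row < H and 0 <= col < W else None

for _ in range(SAMPLES):
    c = complex(rng.uniform(-2.0, 2.0), rng.uniform(-2.0, 2.0))
    z, orbit = 0j, []
    for _ in range(MAX_ITER):
        z = z * z + c
        orbit.append(z)
        if abs(z) > 2.0:               # c escapes, so it is not in the Mandelbrot set:
            for p in orbit:            # credit every box its trajectory visited
                hit = to_pixel(p)
                if hit is not None:
                    counts[hit] += 1
            break                      # non-escaping points are simply discarded

print(counts.max(), counts.sum())      # raw density data, to be mapped to grayscale shades
```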
We emphasize this by showing briefly, at 90° rotation, only the projected plane formula_30, not 'disturbed' by the projections of the planes with non-zero formula_31. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
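The quadratic transformation between the Mandelbrot iteration and the logistic map quoted above can be checked numerically; the following Python sketch (the parameter λ and the starting value are arbitrary illustrative choices) iterates the logistic map and the real Mandelbrot iteration side by side and confirms that the iterates stay related by z = −λ(2x − 1)/2 with c = λ(2 − λ)/4.

```python
import numpy as np

lam = 3.5                                  # logistic parameter (illustrative)
c = lam * (2.0 - lam) / 4.0                # c_r given by the transformation above
x = 0.3                                    # arbitrary starting point in (0, 1)
z = -lam * (2.0 * x - 1.0) / 2.0           # matched starting point for z^2 + c

for _ in range(25):
    x = lam * x * (1.0 - x)                # logistic map
    z = z * z + c                          # Mandelbrot iteration restricted to the real line
    assert np.isclose(z, -lam * (2.0 * x - 1.0) / 2.0)

print("iterates remain related by z = -lam*(2x - 1)/2")
```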
[ { "math_id": 0, "text": "c" }, { "math_id": 1, "text": "z_{n+1} = z_{n}^2 + c" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "z_0 = 0" }, { "math_id": 4, "text": "(i,j)" }, { "math_id": 5, "text": "i =1,\\ldots,m" }, { "math_id": 6, "text": "j = 1,\\ldots,n" }, { "math_id": 7, "text": "\\Delta x" }, { "math_id": 8, "text": "\\Delta y" }, { "math_id": 9, "text": "\\Delta x = w/m" }, { "math_id": 10, "text": "\\Delta y = h/n" }, { "math_id": 11, "text": "w" }, { "math_id": 12, "text": "h" }, { "math_id": 13, "text": "z_n" }, { "math_id": 14, "text": "(\\text{Re}(z_n), \\text{Im}(z_n))" }, { "math_id": 15, "text": "z^2+c" }, { "math_id": 16, "text": "\\lambda x(1-x)" }, { "math_id": 17, "text": "\\begin{align}\nc_r&=\\frac{\\lambda(2-\\lambda)}{4}\\\\\nc_i&=0\\\\\nz_r&=-\\frac{\\lambda(2x-1)}{2}\\\\\nz_i&=0\n\\end{align}" }, { "math_id": 18, "text": "c_r" }, { "math_id": 19, "text": "\\lambda" }, { "math_id": 20, "text": "c=(\\text{random},0)" }, { "math_id": 21, "text": "z_0=(0,0)" }, { "math_id": 22, "text": "\\{c_r,z_r\\}" }, { "math_id": 23, "text": "z_0=(\\text{random},0)" }, { "math_id": 24, "text": "z_{r0}" }, { "math_id": 25, "text": "\\{c_r,c_i,z_r\\}" }, { "math_id": 26, "text": "c=(\\text{random},\\text{random})" }, { "math_id": 27, "text": "\\{c_r,c_i\\}" }, { "math_id": 28, "text": "\\{c_i,z_r\\}" }, { "math_id": 29, "text": "\\{c_r,-c_i\\}" }, { "math_id": 30, "text": "c_i=0" }, { "math_id": 31, "text": "c_i" } ]
https://en.wikipedia.org/wiki?curid=1147024
1147822
Current yield
Ratio of interest to price for a bond The current yield, interest yield, income yield, flat yield, market yield, mark to market yield or running yield is a financial term used in reference to bonds and other fixed-interest securities such as gilts. It is the ratio of the annual interest (coupon) payment to the bond's price: formula_0 Example. The current yield of a bond with a face value (F) of $100 and a coupon rate (r) of 5.00% that is selling at $95.00 (clean; not including accrued interest) (P) is calculated as follows. formula_1 Shortcomings of current yield. The current yield refers only to the yield of the bond at the current moment. It does not reflect the total return over the life of the bond, or the factors affecting total return, such as the reinvestment of coupon payments and any capital gain or loss realised when the bond matures or is sold. Relationship between yield to maturity and coupon rate. The concept of current yield is closely related to other bond concepts, including yield to maturity (YTM) and coupon yield. When a coupon-bearing bond sells at a discount, its YTM is higher than its current yield, which in turn is higher than its coupon yield; when it sells at par, all three are equal; and when it sells at a premium, the ordering is reversed, with the coupon yield highest and the YTM lowest. For zero-coupon bonds selling at a discount, the coupon yield and current yield are zero, and the YTM is positive. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
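The worked example above translates directly into code; the following Python sketch (the function name is chosen only for illustration) reproduces the calculation for the $100 face value, 5.00% coupon bond priced at $95.00.

```python
def current_yield(face_value: float, coupon_rate: float, clean_price: float) -> float:
    """Annual coupon payment divided by the bond's current clean price."""
    return face_value * coupon_rate / clean_price

# The worked example: $100 face value, 5.00% coupon, clean price $95.00
print(f"{current_yield(100.0, 0.05, 95.0):.4%}")   # 5.2632%, i.e. about 5.26%
```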
[ { "math_id": 0, "text": "\n\\text{Current yield} = \\frac{\\text{Annual interest payment} }{\\text{Current price} }.\n" }, { "math_id": 1, "text": "\\text{Current Yield} = \\frac{F \\times r}{P} = \\frac{\\$100 \\times 5.00\\%}{\\$95.00} = \\frac{\\$5.00}{\\$95.00} = 5.2631\\%" } ]
https://en.wikipedia.org/wiki?curid=1147822