11278902
Positively separated sets
In mathematics, two non-empty subsets "A" and "B" of a given metric space ("X", "d") are said to be positively separated if the distance between them is bounded away from zero, that is, if formula_0 For example, on the real line with the usual distance, the open intervals (0, 2) and (3, 4) are positively separated, while (3, 4) and (4, 5) are not. In two dimensions, the graph of "y" = 1/"x" for "x" > 0 and the "x"-axis are not positively separated.
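A worked check of the two interval examples above, using nothing beyond the definition:

```latex
% (0, 2) and (3, 4): every a < 2 and every b > 3, so d(a, b) = b - a > 1.
\inf_{a \in (0,2),\, b \in (3,4)} |a - b| = 3 - 2 = 1 > 0
% (3, 4) and (4, 5): letting a -> 4 from below and b -> 4 from above makes
% d(a, b) arbitrarily small, so the infimum is 0 and the sets are not
% positively separated, even though they are disjoint.
\inf_{a \in (3,4),\, b \in (4,5)} |a - b| = 0
```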
[ { "math_id": 0, "text": "\\inf_{a \\in A, b \\in B} d(a, b) > 0." } ]
https://en.wikipedia.org/wiki?curid=11278902
11279549
Metric outer measure
In mathematics, a metric outer measure is an outer measure "μ" defined on the subsets of a given metric space ("X", "d") such that formula_0 for every pair of positively separated subsets "A" and "B" of "X". Construction of metric outer measures. Let "τ" : Σ → [0, +∞] be a set function defined on a class Σ of subsets of "X" containing the empty set ∅, such that "τ"(∅) = 0. One can show that the set function "μ" defined by formula_1 where formula_2 is not only an outer measure, but in fact a metric outer measure as well. (Some authors prefer to take a supremum over "δ" > 0 rather than a limit as "δ" → 0; the two give the same result, since "μ""δ"("E") increases as "δ" decreases.) For the function "τ" one can use formula_3 where "s" is a positive constant; this "τ" is defined on the power set of all subsets of "X". By Carathéodory's extension theorem, the outer measure can be promoted to a full measure; the associated measure "μ" is the "s"-dimensional Hausdorff measure. More generally, one could use any so-called dimension function. This construction is very important in fractal geometry, since this is how the Hausdorff measure is obtained. The packing measure is superficially similar, but is obtained in a different manner, by packing balls inside a set, rather than covering the set. Properties of metric outer measures. Let "μ" be a metric outer measure on a metric space ("X", "d"). For any increasing sequence of subsets "A""n" of "X" with formula_4 and such that "A""n" and "A" \ "A""n"+1 are positively separated for every "n", it follows that formula_5 Moreover, every "d"-closed subset "C" of "X" is "μ"-measurable, in the sense that for all sets "A" ⊆ "C" and "B" ⊆ "X" \ "C", formula_6 Consequently, all Borel subsets of "X" are "μ"-measurable.
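As an illustration of the covering construction above, here is a minimal numerical sketch in Python. It only evaluates the sum Σ τ("C""i") = Σ diam("C""i")^"s" over a handful of candidate covers supplied by the caller, so it yields an upper bound on "μ""δ" rather than the true infimum over all countable covers; the function name and the example covers are illustrative, not part of any library.

```python
import math

def pre_measure_upper_bound(covers, s, delta):
    """Upper bound on mu_delta(E) for tau(C) = diam(C)**s: the smallest value of
    sum(diam(C_i)**s) over the supplied candidate covers of E, keeping only the
    covers whose sets all have diameter <= delta.  The true mu_delta is the
    infimum over *all* countable covers, which this sketch cannot enumerate."""
    admissible = [cover for cover in covers if all(d <= delta for d in cover)]
    if not admissible:
        return math.inf  # no admissible cover among the candidates
    return min(sum(d ** s for d in cover) for cover in admissible)

# Covering the unit interval by n equal subintervals of length 1/n: the sum is
# n * (1/n)**s, which stays at 1 for s = 1 and blows up for s < 1 as delta -> 0,
# mirroring how the limit singles out the 1-dimensional Hausdorff measure.
candidate_covers = [[1.0 / n] * n for n in (2, 4, 8, 16)]
print(pre_measure_upper_bound(candidate_covers, s=1.0, delta=0.3))  # -> 1.0
```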
[ { "math_id": 0, "text": "\\mu (A \\cup B) = \\mu (A) + \\mu (B)" }, { "math_id": 1, "text": "\\mu (E) = \\lim_{\\delta \\to 0} \\mu_{\\delta} (E)," }, { "math_id": 2, "text": "\\mu_{\\delta} (E) = \\inf \\left\\{ \\left. \\sum_{i = 1}^{\\infty} \\tau (C_{i}) \\right| C_{i} \\in \\Sigma, \\operatorname{diam} (C_{i}) \\leq \\delta, \\bigcup_{i = 1}^{\\infty} C_{i} \\supseteq E \\right\\}," }, { "math_id": 3, "text": "\\tau(C) = \\operatorname{diam} (C)^s,\\," }, { "math_id": 4, "text": "A_{1} \\subseteq A_{2} \\subseteq \\dots \\subseteq A = \\bigcup_{n = 1}^{\\infty} A_{n}," }, { "math_id": 5, "text": "\\mu (A) = \\sup_{n \\in \\mathbb{N}} \\mu (A_{n})." }, { "math_id": 6, "text": "\\mu (A \\cup B) = \\mu (A) + \\mu (B)." } ]
https://en.wikipedia.org/wiki?curid=11279549
11281160
CIELUV
Color space. In colorimetry, the CIE 1976 "L"*, "u"*, "v"* color space, commonly known by its abbreviation CIELUV, is a color space adopted by the International Commission on Illumination (CIE) in 1976, as a simple-to-compute transformation of the 1931 CIE XYZ color space, but which attempted perceptual uniformity. It is extensively used for applications such as computer graphics which deal with colored lights. Although additive mixtures of different colored lights will fall on a line in CIELUV's uniform chromaticity diagram (called the "CIE 1976 UCS"), such additive mixtures will not, contrary to popular belief, fall along a line in the CIELUV color space unless the mixtures are constant in lightness. Historical background. CIELUV is an Adams chromatic valence color space and is an update of the CIE 1964 ("U"*, "V"*, "W"*) color space (CIEUVW). The differences include a slightly modified lightness scale and a modified uniform chromaticity scale, in which one of the coordinates, "v"′, is 1.5 times as large as "v" in its 1960 predecessor. CIELUV and CIELAB were adopted simultaneously by the CIE when no clear consensus could be formed behind only one or the other of these two color spaces. CIELUV uses Judd-type (translational) white point adaptation (in contrast with CIELAB, which uses a von Kries transform). This can produce useful results when working with a single illuminant, but can predict imaginary colors (i.e., outside the spectral locus) when attempting to use it as a chromatic adaptation transform. The translational adaptation transform used in CIELUV has also been shown to perform poorly in predicting corresponding colors. XYZ → CIELUV and CIELUV → XYZ conversions. By definition, 0 ≤ "L"* ≤ 100. The forward transformation. CIELUV is based on CIEUVW and is another attempt to define an encoding with uniformity in the perceptibility of color differences. The non-linear relations for "L"*, "u"*, and "v"* are given below: formula_0 The quantities "u"′"n" and "v"′"n" are the ("u"′, "v"′) chromaticity coordinates of a "specified white object" – which may be termed the white point – and "Y""n" is its luminance. In reflection mode, this is often (but not always) taken as the ("u"′, "v"′) of the perfect reflecting diffuser under that illuminant. (For example, for the 2° observer and standard illuminant C, "u"′"n" = 0.2009, "v"′"n" = 0.4610.) Equations for "u"′ and "v"′ are given below: formula_1 The reverse transformation. The transformation from ("u"′, "v"′) to ("x", "y") is: formula_2 The transformation from CIELUV to XYZ is performed as follows: formula_3 Cylindrical representation (CIELCh). CIELChuv, or the HCL color space (hue–chroma–luminance), is increasingly seen in the information visualization community as a way to help with presenting data without the bias implicit in using varying saturation. The cylindrical version of CIELUV is known as CIELChuv (also written CIELCh(uv) or CIEHLCuv), where "C"*"uv" is the chroma and "h""uv" is the hue: formula_4 formula_5 where the atan2 function, a "two-argument arctangent", computes the polar angle from a Cartesian coordinate pair. Furthermore, the saturation correlate can be defined as formula_6 Similar correlates of chroma and hue, but not saturation, exist for CIELAB. See Colorfulness for more discussion on saturation. Color and hue difference. The color difference can be calculated as the Euclidean distance between ("L"*, "u"*, "v"*) coordinates.
It follows that a chromaticity distance of formula_7 corresponds to the same Δ"E"*"uv" as a lightness difference of Δ"L"* = 1, in direct analogy to CIEUVW. The Euclidean metric can also be used in CIELCh, with the component of Δ"E"*"uv" attributable to the difference in hue given by Δ"H"* = 2√("C"*1 "C"*2) sin(Δ"h"/2), where Δ"h" = "h"2 − "h"1.
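The forward transformation and the cylindrical representation above translate directly into code. The sketch below follows the formulas as given; the default white point (D65, 2° observer) is an assumption chosen only for illustration, so substitute the ("X""n", "Y""n", "Z""n") of whatever reference white you actually use (the numeric example in the text uses illuminant C instead).

```python
import math

def xyz_to_luv(X, Y, Z, Xn=95.047, Yn=100.0, Zn=108.883):
    """XYZ -> CIELUV following the forward transformation above.
    The default (Xn, Yn, Zn) is the D65 2-degree white point, used here
    only as an illustrative assumption."""
    def u_v_prime(x, y, z):
        d = x + 15.0 * y + 3.0 * z
        return 4.0 * x / d, 9.0 * y / d

    u_p, v_p = u_v_prime(X, Y, Z)
    un_p, vn_p = u_v_prime(Xn, Yn, Zn)

    t = Y / Yn
    if t <= (6.0 / 29.0) ** 3:
        L = (29.0 / 3.0) ** 3 * t               # linear segment near black
    else:
        L = 116.0 * t ** (1.0 / 3.0) - 16.0

    return L, 13.0 * L * (u_p - un_p), 13.0 * L * (v_p - vn_p)

def luv_to_lch(L, u, v):
    """Cylindrical CIELCh(uv) coordinates: chroma and hue angle in degrees."""
    return L, math.hypot(u, v), math.degrees(math.atan2(v, u)) % 360.0

# Example: a mid-grey stimulus with a slight excess of X relative to the white point.
print(luv_to_lch(*xyz_to_luv(20.0, 18.0, 17.0)))
```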
[ { "math_id": 0, "text": "\n\\begin{align}\nL^* &= \\begin{cases}\n \\left(\\frac{29}{3}\\right)^3 Y / Y_n,& Y / Y_n \\le \\left(\\frac{6}{29}\\right)^3 \\\\\n 116 \\left( Y / Y_n \\right)^{1/3} - 16,& Y / Y_n > \\left(\\frac{6}{29}\\right)^3 \n\\end{cases}\\\\\nu^* &= 13 L^*\\cdot (u^\\prime - u_n^\\prime) \\\\\nv^* &= 13 L^*\\cdot (v^\\prime - v_n^\\prime)\n\\end{align}" }, { "math_id": 1, "text": "\\begin{align}\nu^\\prime &= \\frac{4 X}{X + 15 Y + 3 Z} &= \\frac{4 x}{-2 x + 12 y + 3} \\\\\nv^\\prime &= \\frac{9 Y}{X + 15 Y + 3 Z} &= \\frac{9 y}{-2 x + 12 y + 3}\n\\end{align}" }, { "math_id": 2, "text": "\\begin{align}\n x &= \\frac{9u^\\prime}{6u^\\prime - 16v^\\prime + 12}\\\\\n y &= \\frac{4v^\\prime}{6u^\\prime - 16v^\\prime + 12}\n\\end{align}" }, { "math_id": 3, "text": "\\begin{align}\n u^\\prime&= \\frac{u^*}{13L^*} + u^\\prime_n \\\\\n v^\\prime&= \\frac{v^*}{13L^*} + v^\\prime_n \\\\\n Y &= \\begin{cases}\n Y_n \\cdot L^* \\cdot \\left(\\frac{3}{29}\\right)^3,& L^* \\le 8 \\\\\n Y_n \\cdot \\left(\\frac{L^* + 16}{116}\\right)^3,& L^* > 8\n \\end{cases}\\\\\n X &= Y \\cdot \\frac{9u^\\prime}{4v^\\prime} \\\\\n Z &= Y \\cdot \\frac{12 - 3u^\\prime - 20v^\\prime}{4v^\\prime} \\\\\n\\end{align}" }, { "math_id": 4, "text": "C_{uv}^* = \\operatorname{hypot}(u^*, v^*) = \\sqrt{(u^*)^2 + (v^*)^2}," }, { "math_id": 5, "text": "h_{uv} = \\operatorname{atan2}(v^*, u^*)," }, { "math_id": 6, "text": "s_{uv} = \\frac{C^*}{L^*} = 13 \\sqrt{(u' - u'_n)^2 + (v' - v'_n)^2}." }, { "math_id": 7, "text": "\\sqrt{(\\Delta u')^2 + (\\Delta v')^2} = 1/13" } ]
https://en.wikipedia.org/wiki?curid=11281160
11283
Falsifiability
Property of a statement that can be logically contradicted. Falsifiability (or refutability) is a deductive standard of evaluation of scientific theories and hypotheses, introduced by the philosopher of science Karl Popper in his book "The Logic of Scientific Discovery" (1934). A theory or hypothesis is falsifiable (or refutable) if it can be "logically" contradicted by an empirical test. Popper emphasized the asymmetry created by the relation of a universal law with basic observation statements and contrasted falsifiability to the intuitively similar concept of verifiability that was then current in logical positivism. He argued that the only way to verify a claim such as "All swans are white" would be if one could theoretically observe all swans, which is not possible. On the other hand, the observation of a single anomalous instance, such as a single black swan, is theoretically reasonable and sufficient to logically falsify the claim. Popper proposed falsifiability as the cornerstone solution to both the problem of induction and the problem of demarcation. He insisted that, as a logical criterion, his falsifiability is distinct from the related concept "capacity to be proven wrong" discussed in Lakatos's falsificationism. Even though it is a logical criterion, its purpose is to make the theory predictive and testable, and thus useful in practice. By contrast, the Duhem–Quine thesis says that definitive experimental falsifications are impossible and that no scientific hypothesis is by itself capable of making predictions, because an empirical test of the hypothesis requires one or more background assumptions. Popper's response is that falsifiability does not have the Duhem problem because it is a logical criterion. Experimental research has the Duhem problem and other problems, such as the problem of induction, but, according to Popper, statistical tests, which are only possible when a theory is falsifiable, can still be useful within a critical discussion. As a key notion in the separation of science from non-science and pseudoscience, falsifiability has featured prominently in many scientific controversies and applications, even being used as legal precedent. The problem of induction and demarcation. One of the questions in the scientific method is: how does one move from observations to scientific laws? This is the problem of induction. Suppose we want to put the hypothesis that all swans are white to the test. We come across a white swan. We cannot validly argue (or "induce") from "here is a white swan" to "all swans are white"; doing so would require a logical fallacy such as, for example, affirming the consequent. Popper's idea to solve this problem is that while it is impossible to verify that every swan is white, finding a single black swan shows that "not" every swan is white. Such falsification uses the valid inference "modus tollens": if from a law formula_0 we logically deduce formula_1, but what is observed is formula_2, we infer that the law formula_0 is false. For example, given the statement formula_3 "all swans are white", we can deduce formula_4 "the specific swan here is white", but if what is observed is formula_5 "the specific swan here is not white" (say black), then "all swans are white" is false. More accurately, the statement formula_1 that can be deduced is broken into an initial condition and a prediction as in formula_6 in which formula_7 "the thing here is a swan" and formula_8 "the thing here is a white swan".
If what is observed is C being true while P is false (formally, formula_9), we can infer that the law is false. For Popper, induction is actually never needed in science. Instead, in Popper's view, laws are conjectured in a non-logical manner on the basis of expectations and predispositions. This has led David Miller, a student and collaborator of Popper, to write "the mission is to classify truths, not to certify them". In contrast, the logical empiricism movement, which included such philosophers as Moritz Schlick, Rudolf Carnap, Otto Neurath, and A.J. Ayer wanted to formalize the idea that, for a law to be scientific, it must be possible to argue on the basis of observations either in favor of its truth or its falsity. There was no consensus among these philosophers about how to achieve that, but the thought expressed by Mach's dictum that "where neither confirmation nor refutation is possible, science is not concerned" was accepted as a basic precept of critical reflection about science. Popper said that a demarcation criterion was possible, but we have to use the "logical possibility" of falsifications, which is falsifiability. He cited his encounter with psychoanalysis in the 1910s. It did not matter what observation was presented, psychoanalysis could explain it. Unfortunately, the reason it could explain everything is that it did not exclude anything also. For Popper, this was a failure, because it meant that it could not make any prediction. From a logical standpoint, if one finds an observation that does not contradict a law, it does not mean that the law is true. A verification has no value in itself. But, if the law makes risky predictions and these are corroborated, Popper says, there is a reason to prefer this law over another law that makes less risky predictions or no predictions at all. In the definition of falsifiability, contradictions with observations are not used to support eventual falsifications, but for "logical" "falsifications" that show that the law makes risky predictions, which is completely different. On the basic philosophical side of this issue, Popper said that some philosophers of the Vienna Circle had mixed two different problems, that of meaning and that of demarcation, and had proposed in verificationism a single solution to both: a statement that could not be verified was considered meaningless. In opposition to this view, Popper said that there are meaningful theories that are not scientific, and that, accordingly, a criterion of meaningfulness does not coincide with a criterion of demarcation. From Hume's problem to non problematic induction. The problem of induction is often called Hume's problem. David Hume studied how human beings obtain new knowledge that goes beyond known laws and observations, including how we can discover new laws. He understood that deductive logic could not explain this learning process and argued in favour of a mental or psychological process of learning that would not require deductive logic. He even argued that this learning process cannot be justified by any general rules, deductive or not. Popper accepted Hume's argument and therefore viewed progress in science as the result of quasi-induction, which does the same as induction, but has no inference rules to justify it. Philip N. Johnson-Laird, professor of psychology, also accepted Hume's conclusion that induction has no justification. For him induction does not require justification and therefore can exist in the same manner as Popper's quasi-induction does. 
When Johnson-Laird says that no justification is needed, he does not refer to a general inductive method of justification that, to avoid circular reasoning, would not itself require any justification. On the contrary, in agreement with Hume, he means that there is no general method of justification for induction, and that this is acceptable, because the individual induction steps do not require justification. Instead, these steps use patterns of induction, which are not expected to have a general justification: they may or may not be applicable depending on the background knowledge. Johnson-Laird wrote: "[P]hilosophers have worried about which properties of objects warrant inductive inferences. The answer rests on knowledge: we don't infer that all the passengers on a plane are male because the first ten off the plane are men. We know that this observation doesn't rule out the possibility of a woman passenger." The reasoning pattern that was not applied here is enumerative induction. Popper was interested in the overall learning process in science, that is, in quasi-induction, which he also called the "path of science". However, Popper did not show much interest in these reasoning patterns, which he globally referred to as psychologism. He did not deny the possibility of some kind of psychological explanation for the learning process, especially when psychology is seen as an extension of biology, but he felt that these biological explanations were not within the scope of epistemology. Popper proposed an evolutionary mechanism to explain the success of science, which is much in line with Johnson-Laird's view that "induction is just something that animals, including human beings, do to make life possible", but Popper did not consider it a part of his epistemology. He wrote that his interest was mainly in the "logic" of science and that epistemology should be concerned with logical aspects only. Instead of asking why science succeeds, he considered the pragmatic problem of induction. This problem is not how to justify a theory, nor what the global mechanism behind the success of science is, but only which methodology we use to pick one theory among theories that are already conjectured. His methodological answer to the latter question is that we pick the theory that is the most tested with the available technology: "the one, which in the light of our "critical discussion", appears to be the best so far". By his own account, because only a negative approach was supported by logic, Popper adopted a negative methodology. The purpose of his methodology is to prevent "the policy of immunizing our theories against refutation". It also supports some "dogmatic attitude" in defending theories against criticism, because this allows the process to be more complete. This negative view of science was much criticized, and not only by Johnson-Laird. In practice, some steps based on observations can be justified under assumptions, which can be very natural. For example, Bayesian inductive logic is justified by theorems that make explicit assumptions. These theorems are obtained with deductive logic, not inductive logic. They are sometimes presented as steps of induction, because they refer to laws of probability, even though they do not go beyond deductive logic. This is yet a third notion of induction, which overlaps with deductive logic in the sense that it is supported by it. These deductive steps are not really inductive, but the overall process that includes the creation of assumptions is inductive in the usual sense.
In a fallibilist perspective, a perspective that is widely accepted by philosophers, including Popper, every logical step of learning only creates an assumption or reinstates one that was doubted—that is all that science logically does. The elusive distinction between the logic of science and its applied methodology. Popper distinguished between the logic of science and its applied "methodology". For example, the falsifiability of Newton's law of gravitation, as defined by Popper, depends purely on the logical relation it has with a statement such as "The brick fell upwards when released". A brick that falls upwards would not alone falsify Newton's law of gravitation. The capacity to verify the absence of conditions such as a hidden string attached to the brick is also needed for this state of affairs to eventually falsify Newton's law of gravitation. However, these applied methodological considerations are irrelevant in falsifiability, because it is a logical criterion. The empirical requirement on the potential falsifier, also called the "material requirement", is only that it is observable inter-subjectively with existing technologies. There is no requirement that the potential falsifier can actually show the law to be false. The purely logical contradiction, together with the material requirement, are sufficient. The logical part consists of theories, statements, and their purely logical relationship together with this material requirement, which is needed for a connection with the methodological part. The methodological part consists, in Popper's view, of informal rules, which are used to guess theories, accept observation statements as factual, etc. These include statistical tests: Popper is aware that observation statements are accepted with the help of statistical methods and that these involve methodological decisions. When this distinction is applied to the term "falsifiability", it corresponds to a distinction between two completely different meanings of the term. The same is true for the term "falsifiable". Popper said that he only uses "falsifiability" or "falsifiable" in reference to the logical side and that, when he refers to the methodological side, he speaks instead of "falsification" and its problems. Popper said that methodological problems require proposing methodological rules. For example, one such rule is that, if one refuses to go along with falsifications, then one has retired oneself from the game of science. The logical side does not have such methodological problems, in particular with regard to the falsifiability of a theory, because basic statements are not required to be possible. Methodological rules are only needed in the context of actual falsifications. So observations have two purposes in Popper's view. On the methodological side, observations can be used to show that a law is false, which Popper calls falsification. On the logical side, observations, which are purely logical constructions, do not show a law to be false, but contradict a law to show its falsifiability. Unlike falsifications and "free from the problems of falsification", these contradictions establish the value of the law, which may eventually be corroborated. Popper wrote that an entire literature exists because this distinction between the logical aspect and the methodological aspect was not observed. This is still seen in a more recent literature. 
For example, in their 2019 article "Evidence based medicine as science", Vere and Gibson wrote "[falsifiability has] been considered problematic because theories are not simply tested through falsification but in conjunction with auxiliary assumptions and background knowledge." Although Popper insisted that he was aware that falsifications are impossible and added that this is not an issue for his falsifiability criterion because it has nothing to do with the possibility or impossibility of falsifications, Stove and others, often referring to Lakatos's original criticism, continue to maintain that the problems of falsification are a failure of falsifiability. Basic statements and the definition of falsifiability. Basic statements. In Popper's view of science, statements of observation can be analyzed within a logical structure independently of any factual observations. The set of all purely logical observations that are considered constitutes the empirical basis. Popper calls them the "basic statements" or "test statements". They are the statements that can be used to show the falsifiability of a theory. Popper says that basic statements do not have to be possible in practice. It is sufficient that they are accepted by convention as belonging to the empirical language, a language that allows intersubjective verifiability: "they must be testable by intersubjective observation (the material requirement)". See the examples given below. In more than twelve pages of "The Logic of Scientific Discovery", Popper discusses informally which statements among those that are considered in the logical structure are basic statements. A logical structure uses universal classes to define laws. For example, in the law "all swans are white" the concept of swans is a universal class. It corresponds to a set of properties that every swan must have. It is not restricted to the swans that exist, have existed or will exist. Informally, a basic statement is simply a statement that concerns only a finite number of specific instances in universal classes. In particular, an existential statement such as "there exists a black swan" is not a basic statement, because it is not specific about the instance. On the other hand, "this swan here is black" is a basic statement. Popper says that it is a singular existential statement or simply a singular statement. So, basic statements are singular (existential) statements. The definition of falsifiability. Thornton says that basic statements are statements that correspond to particular "observation-reports". He then gives Popper's definition of falsifiability: "A theory is scientific if and only if it divides the class of basic statements into the following two non-empty sub-classes: (a) the class of all those basic statements with which it is inconsistent, or which it prohibits—this is the class of its potential falsifiers (i.e., those statements which, if true, falsify the whole theory), and (b) the class of those basic statements with which it is consistent, or which it permits (i.e., those statements which, if true, corroborate it, or bear it out)." As in the case of actual falsifiers, decisions must be taken by scientists to accept a logical structure and its associated empirical basis, but these are usually part of the background knowledge that scientists have in common and, often, no discussion is even necessary.
The first decision described by Lakatos is implicit in this agreement, but the other decisions are not needed. This agreement, if one can speak of agreement when there is not even a discussion, exists only in principle. This is where the distinction between the logical and methodological sides of science becomes important. When an actual falsifier is proposed, the technology used is considered in detail and, as described in section , an actual agreement is needed. This may require using a deeper empirical basis, hidden within the current empirical basis, to make sure that the properties or values used in the falsifier were obtained correctly ( gives some examples). Popper says that despite the fact that the empirical basis can be shaky, more comparable to a swamp than to solid ground, the definition that is given above is simply the formalization of a natural requirement on scientific theories, without which the whole logical process of science would not be possible. Initial condition and prediction in falsifiers of laws. In his analysis of the scientific nature of universal laws, Popper arrived at the conclusion that laws must "allow us to deduce, roughly speaking, more "empirical" singular statements than we can deduce from the initial conditions alone." A singular statement that has one part only cannot contradict a universal law. A falsifier of a law has always two parts: the initial condition and the singular statement that contradicts the prediction. However, there is no need to require that falsifiers have two parts in the definition itself. This removes the requirement that a falsifiable statement must make prediction. In this way, the definition is more general and allows the basic statements themselves to be falsifiable. Criteria that require that "a law" must be predictive, just as is required by falsifiability (when applied to laws), Popper wrote, "have been put forward as criteria of the meaningfulness of sentences (rather than as criteria of demarcation applicable to theoretical systems) again and again after the publication of my book, even by critics who pooh-poohed my criterion of falsifiability." Falsifiability in model theory. Scientists such as the Nobel laureate Herbert A. Simon have studied the semantic aspects of the logical side of falsifiability. These studies were done in the perspective that a logic is a relation between formal sentences in languages and a collection of mathematical structures. The relation, usually denoted formula_10, says the formal sentence formula_11 is true when interpreted in the structure formula_12—it provides the semantic of the languages. According to Rynasiewicz, in this semantic perspective, falsifiability as defined by Popper means that in some observation structure (in the collection) there exists a set of observations which refutes the theory. An even stronger notion of falsifiability was considered, which requires, not only that there exists one structure with a contradicting set of observations, but also that all structures in the collection that cannot be expanded to a structure that satisfies formula_11 contain such a contradicting set of observations. Examples of demarcation and applications. Newton's theory. In response to Lakatos who suggested that Newton's theory was as hard to show falsifiable as Freud's psychoanalytic theory, Popper gave the example of an apple that moves from the ground up to a branch and then starts to dance from one branch to another. 
Popper thought that it was a basic statement that was a potential falsifier for Newton's theory, because the position of the apple at different times can be measured. Popper's claims on this point are controversial, since Newtonian physics does not deny that there could be forces acting on the apple that are stronger than Earth's gravity. Einstein's equivalence principle. Another example of a basic statement is "The inert mass of this object is ten times larger than its gravitational mass." This is a basic statement because the inert mass and the gravitational mass can both be measured separately, even though it never happens that they are different. It is, as described by Popper, a valid falsifier for Einstein's equivalence principle. Evolution. Industrial melanism. In a discussion of the theory of evolution, Popper mentioned industrial melanism as an example of a falsifiable law. A corresponding basic statement that acts as a potential falsifier is "In this industrial area, the relative fitness of the white-bodied peppered moth is high." Here "fitness" means "reproductive success over the next generation". It is a basic statement, because it is possible to separately determine the kind of environment, industrial vs natural, and the relative fitness of the white-bodied form (relative to the black-bodied form) in an area, even though it never happens that the white-bodied form has a high relative fitness in an industrial area. Precambrian rabbit. A famous example of a basic statement from J. B. S. Haldane is "[These are] fossil rabbits in the Precambrian era." This is a basic statement because it is possible to find a fossil rabbit and to determine that the date of a fossil is in the Precambrian era, even though it never happens that the date of a rabbit fossil is in the Precambrian era. Despite opinions to the contrary, sometimes wrongly attributed to Popper, this shows the scientific character of paleontology or the history of the evolution of life on Earth, because it contradicts the hypothesis in paleontology that all mammals existed in a much more recent era. Richard Dawkins adds that any other modern animal, such as a hippo, would suffice. Simple examples of unfalsifiable statements. Even if it is accepted that angels exist, "All angels have large wings" is not falsifiable, because no technology exists to identify and observe angels. A simple example of a non-basic statement is "This angel does not have large wings." It is not a basic statement, because though the absence of large wings can be observed, no technology (independent of the presence of wings) exists to identify angels. Even if it is accepted that angels exist, the sentence "All angels have large wings" is not falsifiable. Another example from Popper of a non-basic statement is "This human action is altruistic." It is not a basic statement, because no accepted technology allows us to determine whether or not an action is motivated by self-interest. Because no basic statement falsifies it, the statement that "All human actions are egotistic, motivated by self-interest" is thus not falsifiable. Omphalos hypothesis. Some adherents of young-Earth creationism make an argument (called the Omphalos hypothesis after the Greek word for navel) that the world was created with the appearance of age; e.g., the sudden appearance of a mature chicken capable of laying eggs. 
This ad hoc hypothesis introduced into young-Earth creationism is unfalsifiable because it says that the time of creation (of a species) measured by the accepted technology is illusory and no accepted technology is proposed to measure the claimed "actual" time of creation. Moreover, if the ad hoc hypothesis says that the world was created as we observe it today without stating further laws, by definition it cannot be contradicted by observations and thus is not falsifiable. This is discussed by Dienes in the case of a variation on the Omphalos hypothesis, which, in addition, specifies that God made the creation in this way to test our faith. Useful metaphysical statements. Grover Maxwell discussed statements such as "All men are mortal." This is not falsifiable, because it does not matter how old a man is, maybe he will die next year. Maxwell said that this statement is nevertheless useful, because it is often corroborated. He coined the term "corroboration without demarcation". Popper's view is that it is indeed useful, because Popper considers that metaphysical statements can be useful, but also because it is indirectly corroborated by the corroboration of the falsifiable law "All men die before the age of 150." For Popper, if no such falsifiable law exists, then the metaphysical law is less useful, because it is not indirectly corroborated. This kind of non-falsifiable statements in science was noticed by Carnap as early as 1937. Maxwell also used the example "All solids have a melting point." This is not falsifiable, because maybe the melting point will be reached at a higher temperature. The law is falsifiable and more useful if we specify an upper bound on melting points or a way to calculate this upper bound. Another example from Maxwell is "All beta decays are accompanied with a neutrino emission from the same nucleus." This is also not falsifiable, because maybe the neutrino can be detected in a different manner. The law is falsifiable and much more useful from a scientific point of view, if the method to detect the neutrino is specified. Maxwell said that most scientific laws are metaphysical statements of this kind, which, Popper said, need to be made more precise before they can be indirectly corroborated. In other words, specific technologies must be provided to make the statements inter-subjectively-verifiable, i.e., so that scientists know what the falsification or its failure actually means. In his critique of the falsifiability criterion, Maxwell considered the requirement for decisions in the falsification of, both, the emission of neutrinos (see ) and the existence of the melting point. For example, he pointed out that had no neutrino been detected, it could have been because some conservation law is false. Popper did not argue against the problems of falsification per se. He always acknowledged these problems. Popper's response was at the logical level. For example, he pointed out that, if a specific way is given to trap the neutrino, then, at the level of the language, the statement is falsifiable, because "no neutrino was detected after using this specific way" formally contradicts it (and it is inter-subjectively-verifiable—people can repeat the experiment). Natural selection. In the 5th and 6th editions of "On the Origin of Species", following a suggestion of Alfred Russel Wallace, Darwin used "Survival of the fittest", an expression first coined by Herbert Spencer, as a synonym for "Natural Selection". 
Popper and others said that, if one uses the most widely accepted definition of "fitness" in modern biology (see subsection ), namely reproductive success itself, the expression "survival of the fittest" is a tautology. Darwinist Ronald Fisher worked out mathematical theorems to help answer questions regarding natural selection. But, for Popper and others, there is no (falsifiable) law of Natural Selection in this, because these tools only apply to some rare traits. Instead, for Popper, the work of Fisher and others on Natural Selection is part of an important and successful metaphysical research program. Mathematics. Popper said that not all unfalsifiable statements are useless in science. Mathematical statements are good examples. Like all formal sciences, mathematics is not concerned with the validity of theories based on observations in the empirical world, but rather, mathematics is occupied with the theoretical, abstract study of such topics as quantity, structure, space and change. Methods of the mathematical sciences are, however, applied in constructing and testing scientific models dealing with observable reality. Albert Einstein wrote, "One reason why mathematics enjoys special esteem, above all other sciences, is that its laws are absolutely certain and indisputable, while those of other sciences are to some extent debatable and in constant danger of being overthrown by newly discovered facts." Historicism. Popper made a clear distinction between the original theory of Marx and what came to be known as Marxism later on. For Popper, the original theory of Marx contained genuine scientific laws. Though they could not make preordained predictions, these laws constrained how changes can occur in society. One of them was that changes in society cannot "be achieved by the use of legal or political means". In Popper's view, this was both testable and subsequently falsified. "Yet instead of accepting the refutations", Popper wrote, "the followers of Marx re-interpreted both the theory and the evidence in order to make them agree. ... They thus gave a 'conventionalist twist' to the theory; and by this stratagem they destroyed its much advertised claim to scientific status." Popper's attacks were not directed toward Marxism, or Marx's theories, which were falsifiable, but toward Marxists who he considered to have ignored the falsifications which had happened. Popper more fundamentally criticized 'historicism' in the sense of any preordained prediction of history, given what he saw as our right, ability and responsibility to control our own destiny. Use in courts of law. Falsifiability has been used in the "McLean v. Arkansas" case (in 1982), the "Daubert" case (in 1993) and other cases. A survey of 303 federal judges conducted in 1998 found that "[P]roblems with the nonfalsifiable nature of an expert's underlying theory and difficulties with an unknown or too-large error rate were cited in less than 2% of cases." "McLean v. Arkansas" case. In the ruling of the "McLean v. Arkansas" case, Judge William Overton used falsifiability as one of the criteria to determine that "creation science" was not scientific and should not be taught in Arkansas public schools as such (it can be taught as religion). 
In his testimony, philosopher Michael Ruse defined the characteristics which constitute science. In his conclusion related to this criterion, Judge Overton stated that: While anybody is free to approach a scientific inquiry in any fashion they choose, they cannot properly describe the methodology as scientific, if they start with the conclusion and refuse to change it regardless of the evidence developed during the course of the investigation. Daubert standard. In several cases of the United States Supreme Court, the court described scientific methodology using the five Daubert factors, which include falsifiability. The Daubert result cited Popper and other philosophers of science: Ordinarily, a key question to be answered in determining whether a theory or technique is scientific knowledge that will assist the trier of fact will be whether it can be (and has been) tested. "Scientific methodology today is based on generating hypotheses and testing them to see if they can be falsified; indeed, this methodology is what distinguishes science from other fields of human inquiry." Green 645. See also C. Hempel, Philosophy of Natural Science 49 (1966) ("[T]he statements constituting a scientific explanation must be capable of empirical test"); K. Popper, Conjectures and Refutations: The Growth of Scientific Knowledge 37 (5th ed. 1989) ("[T]he criterion of the scientific status of a theory is its falsifiability, or refutability, or testability") (emphasis deleted). David H. Kaye said that references to the Daubert majority opinion confused falsifiability and falsification and that "inquiring into the existence of meaningful attempts at falsification is an appropriate and crucial consideration in admissibility determinations." Connections between statistical theories and falsifiability. Considering the specific detection procedure that was used in the neutrino experiment, without mentioning its probabilistic aspect, Popper wrote "it provided a test of the much more significant "falsifiable" theory that such emitted neutrinos could be trapped in a certain way". In this manner, in his discussion of the neutrino experiment, Popper did not raise the probabilistic aspect of the experiment at all. Together with Maxwell, who raised the problems of falsification in the experiment, he was aware that some convention must be adopted to fix what it means to detect or not detect a neutrino in this probabilistic context. This is the third kind of decision mentioned by Lakatos. For Popper and most philosophers, observations are theory-impregnated. In this example, the theory that impregnates observations (and justifies that we conventionally accept the potential falsifier "no neutrino was detected") is statistical. In statistical language, the potential falsifier that can be statistically accepted (or, more correctly, not rejected) is typically the null hypothesis, as understood even in popular accounts on falsifiability. Different ways are used by statisticians to draw conclusions about hypotheses on the basis of available evidence. Fisher, Neyman and Pearson proposed approaches that require no prior probabilities on the hypotheses that are being studied. In contrast, Bayesian inference emphasizes the importance of prior probabilities.
But, as far as falsification as a yes/no procedure in Popper's methodology is concerned, any approach that provides a way to accept or reject a potential falsifier can be used, including approaches that use Bayes' theorem and estimations of prior probabilities that are made using critical discussions and reasonable assumptions taken from the background knowledge. There is no general rule that considers a hypothesis with a small Bayesian revised probability as falsified, because, as pointed out by Mayo and argued before by Popper, individual outcomes described in detail will easily have very small probabilities under the available evidence without being genuine anomalies. Nevertheless, Mayo adds, "they can indirectly falsify hypotheses by adding a methodological falsification rule". In general, Bayesian statistics can play a role in critical rationalism in the context of inductive logic, which is said to be inductive because implications are generalized to conditional probabilities. According to Popper and other philosophers such as Colin Howson, Hume's argument precludes inductive logic, but only when the logic makes no use "of additional assumptions: in particular, about what is to be assigned positive prior probability". Inductive logic itself is not precluded, especially not when it is a deductively valid application of Bayes' theorem that is used to evaluate the probabilities of the hypotheses using the observed data and what is assumed about the priors. Gelman and Shalizi mentioned that Bayesian statisticians do not have to disagree with the non-inductivists. Because statisticians often associate statistical inference with induction, Popper's philosophy is often said to have a hidden form of induction. For example, Mayo wrote "The falsifying hypotheses ... necessitate an evidence-transcending (inductive) statistical inference. This is hugely problematic for Popper". Yet, also according to Mayo, Popper [as a non-inductivist] acknowledged the useful role of statistical inference in the falsification problems: she mentioned that Popper wrote her (in the context of falsification based on evidence) "I regret not studying statistics" and that her thought was then "not as much as I do". Lakatos's falsificationism. Imre Lakatos divided the problems of falsification into two categories. The first category corresponds to decisions that must be agreed upon by scientists before they can falsify a theory. The other category emerges when one tries to use falsifications and corroborations to explain progress in science. Lakatos described four kinds of falsificationism in view of how they address these problems. Dogmatic falsificationism ignores both types of problems. Methodological falsificationism addresses the first type of problems by accepting that decisions must be taken by scientists. Naive methodological falsificationism or naive falsificationism does nothing to address the second type of problems. Lakatos used dogmatic and naive falsificationism to explain how Popper's philosophy changed over time and viewed sophisticated falsificationism as his own improvement on Popper's philosophy, but also said that Popper sometimes appears as a sophisticated falsificationist. Popper responded that Lakatos misrepresented his intellectual history with these terminological distinctions. Dogmatic falsificationism. A dogmatic falsificationist ignores that every observation is theory-impregnated. Being theory-impregnated means that it goes beyond direct experience.
For example, the statement "Here is a glass of water" goes beyond experience, because the concepts of glass and water "denote physical bodies which exhibit a certain law-like behaviour" (Popper). This leads to the critique that it is unclear which theory is falsified. Is it the one that is being studied or the one behind the observation? This is sometimes called the 'Duhem–Quine problem'. An example is Galileo's refutation of the theory that celestial bodies are faultless crystal balls. Many considered that it was the optical theory of the telescope that was false, not the theory of celestial bodies. Another example is the theory that neutrinos are emitted in beta decays. Had they not been observed in the Cowan–Reines neutrino experiment, many would have considered that the strength of the beta-inverse reaction used to detect the neutrinos was not sufficiently high. At the time, Grover Maxwell wrote, the possibility that this strength was sufficiently high was a "pious hope". A dogmatic falsificationist ignores the role of auxiliary hypotheses. The assumptions or auxiliary hypotheses of a particular test are all the hypotheses that are assumed to be accurate in order for the test to work as planned. The predicted observation that is contradicted depends on the theory and these auxiliary hypotheses. Again, this leads to the critique that one cannot tell whether it is the theory or one of the required auxiliary hypotheses that is false. Lakatos gives the example of the path of a planet. If the path contradicts Newton's law, we will not know if it is Newton's law that is false or the assumption that no other body influenced the path. Lakatos says that Popper's solution to these criticisms requires that one relax the assumption that an observation can show a theory to be false: If a theory is falsified [in the usual sense], it is proven false; if it is 'falsified' [in the technical sense], it may still be true. Methodological falsificationism replaces the contradicting observation in a falsification with a "contradicting observation" accepted by convention among scientists, a convention that implies four kinds of decisions that have these respective goals: the selection of all "basic statements" (statements that correspond to logically possible observations), selection of the "accepted basic statements" among the basic statements, making statistical laws falsifiable and applying the refutation to the specific theory (instead of an auxiliary hypothesis). The experimental falsifiers and falsifications thus depend on decisions made by scientists in view of the currently accepted technology and its associated theory. Naive falsificationism. According to Lakatos, naive falsificationism is the claim that methodological falsifications can by themselves explain how scientific knowledge progresses. Very often a theory is still useful and used even after it is found in contradiction with some observations. Also, when scientists deal with two or more competing theories which are both corroborated, considering only falsifications, it is not clear why one theory is chosen above the other, even when one is corroborated more often than the other. In fact, a stronger version of the Quine-Duhem thesis says that it is not always possible to rationally pick one theory over the other using falsifications. Considering only falsifications, it is not clear why a corroborating experiment is often seen as a sign of progress.
Popper's critical rationalism uses both falsifications and corroborations to explain progress in science. How corroborations and falsifications can explain progress in science was a subject of disagreement between many philosophers, especially between Lakatos and Popper. Popper distinguished between the creative and informal process from which theories and accepted basic statements emerge and the logical and formal process where theories are falsified or corroborated. The main issue is whether the decision to select a theory among competing theories in the light of falsifications and corroborations could be justified using some kind of formal logic. It is a delicate question, because this logic would be inductive: it justifies a universal law in view of instances. Also, falsifications, because they are based on methodological decisions, are useless in a strict justification perspective. The answer of Lakatos and many others to that question is that it should. In contradistinction, for Popper, the creative and informal part is guided by methodological rules, which naturally say to favour theories that are corroborated over those that are falsified, but this methodology can hardly be made rigorous. Popper's way to analyze progress in science was through the concept of verisimilitude, a way to define how close a theory is to the truth, which he did not consider very significant, except (as an attempt) to describe a concept already clear in practice. Later, it was shown that the specific definition proposed by Popper cannot distinguish between two theories that are false, which is the case for all theories in the history of science. Today, there is still on going research on the general concept of verisimilitude. From the problem of induction to falsificationism. Hume explained induction with a theory of the mind that was in part inspired by Newton's theory of gravitation. Popper rejected Hume's explanation of induction and proposed his own mechanism: science progresses by trial and error within an evolutionary epistemology. Hume believed that his psychological induction process follows laws of nature, but, for him, this does not imply the existence of a method of justification based on logical rules. In fact, he argued that any induction mechanism, including the mechanism described by his theory, could not be justified logically. Similarly, Popper adopted an evolutionary epistemology, which implies that some laws explain progress in science, but yet insists that the process of trial and error is hardly rigorous and that there is always an element of irrationality in the creative process of science. The absence of a method of justification is a built-in aspect of Popper's trial and error explanation. As rational as they can be, these explanations that refer to laws, but cannot be turned into methods of justification (and thus do not contradict Hume's argument or its premises), were not sufficient for some philosophers. In particular, Russell once expressed the view that if Hume's problem cannot be solved, “there is no intellectual difference between sanity and insanity” and actually proposed a method of justification. He rejected Hume's premise that there is a need to justify any principle that is itself used to justify induction. It might seem that this premise is hard to reject, but to avoid circular reasoning we do reject it in the case of deductive logic. It makes sense to also reject this premise in the case of principles to justify induction. 
Lakatos's proposal of sophisticated falsificationism was very natural in that context. Therefore, Lakatos urged Popper to find an inductive principle behind the trial and error learning process and sophisticated falsificationism was his own approach to address this challenge. Kuhn, Feyerabend, Musgrave and others mentioned and Lakatos himself acknowledged that, as a method of justification, this attempt failed, because there was no normative methodology to justify—Lakatos's methodology was anarchy in disguise. Falsificationism in Popper's philosophy. Popper's philosophy is sometimes said to fail to recognize the Quine-Duhem thesis, which would make it a form of dogmatic falsificationism. For example, Watkins wrote "apparently forgetting that he had once said 'Duhem is right [...]', Popper set out to devise potential falsifiers just for Newton's fundamental assumptions". But, Popper's philosophy is not always qualified of falsificationism in the pejorative manner associated with dogmatic or naive falsificationism. The problems of falsification are acknowledged by the falsificationists. For example, Chalmers points out that falsificationists freely admit that observation is theory impregnated. Thornton, referring to Popper's methodology, says that the predictions inferred from conjectures are not directly compared with the facts simply because all observation-statements are theory-laden. For the critical rationalists, the problems of falsification are not an issue, because they do not try to make experimental falsifications logical or to logically justify them, nor to use them to logically explain progress in science. Instead, their faith rests on critical discussions around these experimental falsifications. Lakatos made a distinction between a "falsification" (with quotation marks) in Popper's philosophy and a falsification (without quotation marks) that can be used in a systematic methodology where rejections are justified. He knew that Popper's philosophy is not and has never been about this kind of justification, but he felt that it should have been. Sometimes, Popper and other falsificationists say that when a theory is falsified it is rejected, which appears as dogmatic falsificationism, but the general context is always critical rationalism in which all decisions are open to critical discussions and can be revised. Controversies. Methodless creativity versus inductive methodology. As described in section , Lakatos and Popper agreed that universal laws cannot be logically deduced (except from laws that say even more). But unlike Popper, Lakatos felt that if the explanation for new laws cannot be deductive, it must be inductive. He urged Popper explicitly to adopt some inductive principle and sets himself the task to find an inductive methodology. However, the methodology that he found did not offer any exact inductive rules. In a response to Kuhn, Feyerabend and Musgrave, Lakatos acknowledged that the methodology depends on the good judgment of the scientists. Feyerabend wrote in "Against Method" that Lakatos's methodology of scientific research programmes is epistemological anarchism in disguise and Musgrave made a similar comment. In more recent work, Feyerabend says that Lakatos uses rules, but whether or not to follow any of these rules is left to the judgment of the scientists. This is also discussed elsewhere. 
Popper also offered a methodology with rules, but these rules are also not-inductive rules, because they are not by themselves used to accept laws or establish their validity. They do that through the creativity or "good judgment" of the scientists only. For Popper, the required non deductive component of science never had to be an inductive methodology. He always viewed this component as a creative process beyond the explanatory reach of any rational methodology, but yet used to decide which theories should be studied and applied, find good problems and guess useful conjectures. Quoting Einstein to support his view, Popper said that this renders obsolete the need for an inductive methodology or logical path to the laws. For Popper, no inductive methodology was ever proposed to satisfactorily explain science. Ahistorical versus historiographical. Section says that both Lakatos's and Popper's methodology are not inductive. Yet Lakatos's methodology extended importantly Popper's methodology: it added a historiographical component to it. This allowed Lakatos to find corroborations for his methodology in the history of science. The basic units in his methodology, which can be abandoned or pursued, are research programmes. Research programmes can be degenerative or progressive and only degenerative research programmes must be abandoned at some point. For Lakatos, this is mostly corroborated by facts in history. In contradistinction, Popper did not propose his methodology as a tool to reconstruct the history of science. Yet, some times, he did refer to history to corroborate his methodology. For example, he remarked that theories that were considered great successes were also the most likely to be falsified. Zahar's view was that, with regard to corroborations found in the history of science, there was only a difference of emphasis between Popper and Lakatos. As an anecdotal example, in one of his articles Lakatos challenged Popper to show that his theory was falsifiable: he asked "Under what conditions would you give up your demarcation criterion?". Popper replied "I shall give up my theory if Professor Lakatos succeeds in showing that Newton's theory is no more falsifiable by 'observable states of affairs' than is Freud's." According to David Stove, Lakatos succeeded, since Lakatos showed there is no such thing as a "non-Newtonian" behaviour of an observable object. Stove argued that Popper's counterexamples to Lakatos were either instances of begging the question, such as Popper's example of missiles moving in a "non-Newtonian track", or consistent with Newtonian physics, such as objects not falling to the ground without "obvious" countervailing forces against Earth's gravity. Normal science versus revolutionary science. Thomas Kuhn analyzed what he calls periods of normal science as well as revolutions from one period of normal science to another, whereas Popper's view is that only revolutions are relevant. For Popper, the role of science, mathematics and metaphysics, actually the role of any knowledge, is to solve puzzles. In the same line of thought, Kuhn observes that in periods of normal science the scientific theories, which represent some paradigm, are used to routinely solve puzzles and the validity of the paradigm is hardly in question. It is only when important new puzzles emerge that cannot be solved by accepted theories that a revolution might occur. This can be seen as a viewpoint on the distinction made by Popper between the informal and formal process in science (see section ). 
In the big picture presented by Kuhn, the routinely solved puzzles are corroborations. Falsifications or otherwise unexplained observations are unsolved puzzles. All of these are used in the informal process that generates a new kind of theory. Kuhn says that Popper emphasizes formal or logical falsifications and fails to explain how the social and informal process works. Unfalsifiability versus falsity of astrology. Popper often uses astrology as an example of a pseudoscience. He says that it is not falsifiable because both the theory itself and its predictions are too imprecise. Kuhn, as an historian of science, remarked that many predictions made by astrologers in the past were quite precise and they were very often falsified. He also said that astrologers themselves acknowledged these falsifications. Epistemological anarchism vs the scientific method. Paul Feyerabend rejected any prescriptive methodology at all. He rejected Lakatos's argument for "ad hoc" hypothesis, arguing that science would not have progressed without making use of any and all available methods to support new theories. He rejected any reliance on a scientific method, along with any special authority for science that might derive from such a method. He said that if one is keen to have a universally valid methodological rule, epistemological anarchism or "anything goes" would be the only candidate. For Feyerabend, any special status that science might have, derives from the social and physical value of the results of science rather than its method. Sokal and Bricmont. In their book "Fashionable Nonsense" (from 1997, published in the UK as "Intellectual Impostures") the physicists Alan Sokal and Jean Bricmont criticised falsifiability. They include this critique in the "Intermezzo" chapter, where they expose their own views on truth in contrast to the extreme epistemological relativism of postmodernism. Even though Popper is clearly not a relativist, Sokal and Bricmont discuss falsifiability because they see postmodernist epistemological relativism as a reaction to Popper's description of falsifiability, and more generally, to his theory of science. See also. <templatestyles src="Div col/styles.css"/> Notes. <templatestyles src="Reflist/styles.css" /> Abbreviated references. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Refbegin/styles.css" /> Further reading. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "L" }, { "math_id": 1, "text": "Q" }, { "math_id": 2, "text": "\\neg Q" }, { "math_id": 3, "text": "L =" }, { "math_id": 4, "text": "Q =" }, { "math_id": 5, "text": "\\neg Q =" }, { "math_id": 6, "text": "C \\Rightarrow P" }, { "math_id": 7, "text": "C =" }, { "math_id": 8, "text": "P =" }, { "math_id": 9, "text": " C \\wedge \\neg P" }, { "math_id": 10, "text": "{\\mathfrak A} \\models \\phi" }, { "math_id": 11, "text": "\\phi" }, { "math_id": 12, "text": "{\\mathfrak A}" } ]
https://en.wikipedia.org/wiki?curid=11283
1128455
Square pyramid
Pyramid with a square base In geometry, a square pyramid is a pyramid with a square base, having a total of five faces. If the apex of the pyramid is directly above the center of the square, it is a "right square pyramid" with four isosceles triangles; otherwise, it is an "oblique square pyramid". When all of the pyramid's edges are equal in length, its triangles are all equilateral. It is called an "equilateral square pyramid", an example of a Johnson solid. Square pyramids have appeared throughout the history of architecture, with examples being Egyptian pyramids and many other similar buildings. They also occur in chemistry in square pyramidal molecular structures. Square pyramids are often used in the construction of other polyhedra. Many mathematicians in ancient times discovered the formula for the volume of a square pyramid with different approaches. Special cases. Right square pyramid. A square pyramid has five vertices, eight edges, and five faces. One face, called the "base" of the pyramid, is a square; the four other faces are triangles. Four of the edges make up the square by connecting its four vertices. The other four edges are known as the lateral edges of the pyramid; they meet at the fifth vertex, called the apex. If the pyramid's apex lies on a line erected perpendicularly from the center of the square, it is called a "right square pyramid", and the four triangular faces are isosceles triangles. Otherwise, the pyramid has two or more non-isosceles triangular faces and is called an "oblique square pyramid". The "slant height" formula_0 of a right square pyramid is defined as the height of one of its isosceles triangles. It can be obtained via the Pythagorean theorem: formula_1 where formula_2 is the length of the triangle's base, also one of the square's edges, and formula_3 is the length of the triangle's legs, which are lateral edges of the pyramid. The height formula_4 of a right square pyramid can be obtained similarly, by substituting the slant height formula: formula_5 A polyhedron's surface area is the sum of the areas of its faces. The surface area formula_6 of a right square pyramid can be expressed as formula_7, where formula_8 and formula_9 are the areas of one of its triangles and its base, respectively. The area of each triangle is half of the product of its base and its height, which here is the slant height of the pyramid, and the area of the square is its side length squared. This gives the expression: formula_10 In general, the volume formula_11 of a pyramid is equal to one-third of the area of its base multiplied by its height. Expressed in a formula for a square pyramid, this is: formula_12 Mathematicians discovered the formula for calculating the volume of a square pyramid in ancient times. In the Moscow Mathematical Papyrus, Egyptian mathematicians demonstrated knowledge of the formula for calculating the volume of a truncated square pyramid, suggesting that they were also acquainted with the volume of a square pyramid, but it is unknown how the formula was derived. Beyond the discovery of the volume of a square pyramid, the problem of finding the slope and height of a square pyramid can be found in the Rhind Mathematical Papyrus. The Babylonian mathematicians also considered the volume of a frustum, but gave an incorrect formula for it. The Chinese mathematician Liu Hui also derived the volume by dissecting a rectangular solid into pieces. Equilateral square pyramid. 
If all of its edges are equal in length, the four triangles are equilateral and the pyramid's faces are all regular polygons; it is then an "equilateral square pyramid". The dihedral angle between adjacent triangular faces is formula_13, and that between the base and each triangular face is half of that, formula_14. A convex polyhedron in which all of the faces are regular polygons is called a Johnson solid. The equilateral square pyramid is among them, enumerated as the first Johnson solid formula_15. Because its edges are all equal in length (that is, formula_16), its slant height, height, surface area, and volume can be derived by substituting into the formulas of a right square pyramid: formula_17 Like other right pyramids with a regular polygon as a base, a right square pyramid has pyramidal symmetry. For the square pyramid, this is the symmetry of cyclic group formula_18: the pyramid is left invariant by rotations of one-, two-, and three-quarters of a full turn around its axis of symmetry, the line connecting the apex to the center of the base; and it is also mirror-symmetric relative to any perpendicular plane passing through a bisector of the base. Its skeleton can be represented as the wheel graph formula_19; more generally, a wheel graph formula_20 is the representation of the skeleton of a formula_21-sided pyramid. It is self-dual, meaning its dual polyhedron is the square pyramid itself. An equilateral square pyramid is an elementary polyhedron. This means it cannot be separated by a plane to create two smaller convex polyhedra with regular faces. Applications. In architecture, the pyramids built in ancient Egypt are examples of buildings shaped like square pyramids. Pyramidologists have put forward various suggestions for the design of the Great Pyramid of Giza, including a theory based on the Kepler triangle and the golden ratio. However, modern scholars favor descriptions using integer ratios, as being more consistent with the knowledge of Egyptian mathematics and proportion. The Mesoamerican pyramids are also ancient pyramidal buildings similar to the Egyptian ones; they differ in having flat tops and stairs ascending their faces. Modern buildings whose designs imitate the Egyptian pyramids include the Louvre Pyramid and the casino hotel Luxor Las Vegas. In stereochemistry, an atom cluster can have a square pyramidal geometry. A square pyramidal molecule has a main-group element with one active lone pair, which can be described by a model, known as VSEPR theory, that predicts the geometry of molecules. Examples of molecules with this structure include chlorine pentafluoride, bromine pentafluoride, and iodine pentafluoride. The base of a square pyramid can be attached to a square face of another polyhedron to construct new polyhedra, an example of augmentation. For example, a tetrakis hexahedron can be constructed by attaching the base of an equilateral square pyramid onto each face of a cube. Attaching prisms or antiprisms to pyramids is known as elongation or gyroelongation, respectively. 
Some of the other Johnson solids can be constructed by either augmenting square pyramids or augmenting other shapes with square pyramids: elongated square pyramid formula_22, gyroelongated square pyramid formula_23, elongated square bipyramid formula_24, gyroelongated square bipyramid formula_25, augmented triangular prism formula_26, biaugmented triangular prism formula_27, triaugmented triangular prism formula_28, augmented pentagonal prism formula_29, biaugmented pentagonal prism formula_30, augmented hexagonal prism formula_31, parabiaugmented hexagonal prism formula_32, metabiaugmented hexagonal prism formula_33, triaugmented hexagonal prism formula_34, and augmented sphenocorona formula_35. References. Notes. <templatestyles src="Reflist/styles.css" /> Works cited. <templatestyles src="Refbegin/styles.css" />
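As a numerical check of the slant height, height, surface area, and volume formulas given above, here is a short Python sketch (the function name is illustrative); setting the lateral edge equal to the base edge reproduces the approximate values quoted for the equilateral square pyramid:

<syntaxhighlight lang="python">
from math import sqrt

def right_square_pyramid(l, b):
    """Slant height, height, surface area, and volume of a right square pyramid
    with base edge l and lateral edge b, using the formulas above."""
    s = sqrt(b**2 - l**2 / 4)        # slant height
    h = sqrt(s**2 - l**2 / 4)        # height
    area = 2 * l * s + l**2          # four triangles plus the square base
    volume = l**2 * h / 3
    return s, h, area, volume

# Equilateral case (b = l = 1): expect roughly (0.866, 0.707, 2.732, 0.236).
print(right_square_pyramid(1.0, 1.0))
</syntaxhighlight>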
[ { "math_id": 0, "text": "s" }, { "math_id": 1, "text": "s = \\sqrt{b^2 - \\frac{l^2}{4}}," }, { "math_id": 2, "text": "l" }, { "math_id": 3, "text": "b" }, { "math_id": 4, "text": "h" }, { "math_id": 5, "text": "h = \\sqrt{s^2 - \\frac{l^2}{4}} = \\sqrt{b^2 - \\frac{l^2}{2}}." }, { "math_id": 6, "text": "A" }, { "math_id": 7, "text": "A = 4T + S" }, { "math_id": 8, "text": "T" }, { "math_id": 9, "text": "S" }, { "math_id": 10, "text": " A = 4\\left(\\frac{1}{2}ls\\right) + l^2 = 2ls + l^2." }, { "math_id": 11, "text": "V" }, { "math_id": 12, "text": " V = \\frac{1}{3}l^2h." }, { "math_id": 13, "text": "\\arccos \\left(-1/3 \\right) \\approx 109.47^\\circ " }, { "math_id": 14, "text": "\\arctan \\left(\\sqrt{2}\\right) \\approx 54.74^\\circ " }, { "math_id": 15, "text": "J_1" }, { "math_id": 16, "text": " b = l " }, { "math_id": 17, "text": "\n\\begin{align}\n s = \\frac{\\sqrt{3}}{2}l \\approx 0.866l, &\\qquad h = \\frac{1}{\\sqrt{2}}l \\approx 0.707l,\\\\\n A = (1 + \\sqrt{3})l^2 \\approx 2.732l^2, &\\qquad V = \\frac{\\sqrt{2}}{6}l^3 \\approx 0.236l^3.\n\\end{align}\n" }, { "math_id": 18, "text": "C_{4\\mathrm{v}}" }, { "math_id": 19, "text": " W_4 " }, { "math_id": 20, "text": " W_n " }, { "math_id": 21, "text": "n" }, { "math_id": 22, "text": " J_8 " }, { "math_id": 23, "text": " J_{10} " }, { "math_id": 24, "text": " J_{15} " }, { "math_id": 25, "text": " J_{17} " }, { "math_id": 26, "text": " J_{49} " }, { "math_id": 27, "text": " J_{50}" }, { "math_id": 28, "text": " J_{51} " }, { "math_id": 29, "text": " J_{52} " }, { "math_id": 30, "text": " J_{53} " }, { "math_id": 31, "text": " J_{54} " }, { "math_id": 32, "text": " J_{55} " }, { "math_id": 33, "text": " J_{56} " }, { "math_id": 34, "text": " J_{57} " }, { "math_id": 35, "text": " J_{87} " } ]
https://en.wikipedia.org/wiki?curid=1128455
1128461
Pentagonal pyramid
Pyramid with a pentagon base In geometry, a pentagonal pyramid is a pyramid with a pentagonal base and five triangular faces, having a total of six faces. It is categorized as a Johnson solid if all of the edges are equal in length, forming equilateral triangular faces and a regular pentagonal base. The pentagonal pyramid can be found in many polyhedra, particularly in their construction. It also occurs in stereochemistry in pentagonal pyramidal molecular geometry. Properties. A pentagonal pyramid has six vertices, ten edges, and six faces. One of its faces is a pentagon, the "base" of the pyramid; the five others are triangles. Five of the edges make up the pentagon by connecting its five vertices, and the other five edges are known as the lateral edges of the pyramid, meeting at the sixth vertex called the apex. A pentagonal pyramid is said to be "regular" if its base is a regular pentagon, and it is said to be "right" if its altitude is erected perpendicular to the base at its center. Like other right pyramids with a regular polygon as a base, this pyramid has pyramidal symmetry of cyclic group formula_0: the pyramid is left invariant by rotations of one, two, three, and four fifths of a full turn around its axis of symmetry, the line connecting the apex to the center of the base. It is also mirror-symmetric relative to any perpendicular plane passing through a bisector of the base. Its skeleton can be represented as the wheel graph formula_1; more generally, a wheel graph formula_2 is the representation of the skeleton of a formula_3-sided pyramid. It is self-dual, meaning its dual polyhedron is the pentagonal pyramid itself. When all edges are equal in length, the five triangular faces are equilateral and the base is a regular pentagon. Because this pyramid remains convex and all of its faces are regular polygons, it is classified as the second Johnson solid formula_4. The dihedral angle between two adjacent triangular faces is approximately 138.19° and that between a triangular face and the base is 37.37°. It is an elementary polyhedron, meaning it cannot be separated by a plane to create two smaller convex polyhedra with regular faces. Let formula_5 be the length of all edges of the pentagonal pyramid. A polyhedron's surface area is the sum of the areas of its faces. Therefore, the surface area of a pentagonal pyramid is the sum of the areas of its five triangles and its pentagonal base. The volume of every pyramid equals one-third of the area of its base multiplied by its height; that is, the volume of a pentagonal pyramid is one-third of the product of its height and the area of its pentagonal base. In the case of the Johnson solid with edge length formula_5, its surface area formula_6 and volume formula_7 are: formula_8 Applications. In polyhedron. Pentagonal pyramids can be found as components of many polyhedra. Attaching its base to the pentagonal face of another polyhedron is an example of the construction process known as augmentation, and attaching it to prisms or antiprisms is known as elongation or gyroelongation, respectively. Examples of such polyhedra include the pentakis dodecahedron, constructed from the dodecahedron by attaching the base of a pentagonal pyramid onto each pentagonal face; the small stellated dodecahedron, constructed from a regular dodecahedron stellated by pentagonal pyramids; and the regular icosahedron, constructed from a pentagonal antiprism by attaching two pentagonal pyramids onto its pentagonal bases. 
Some Johnson solids are constructed by either augmenting pentagonal pyramids or augmenting other shapes with pentagonal pyramids: elongated pentagonal pyramid formula_9, gyroelongated pentagonal pyramid formula_10, pentagonal bipyramid formula_11, elongated pentagonal bipyramid formula_12, augmented dodecahedron formula_13, parabiaugmented dodecahedron formula_14, metabiaugmented dodecahedron formula_15, and triaugmented dodecahedron formula_16. Relatedly, the removal of a pentagonal pyramid from a polyhedron is an example of a process known as diminishment; the metabidiminished icosahedron formula_17 and the tridiminished icosahedron formula_18 are examples whose constructions begin by removing pentagonal pyramids from a regular icosahedron. Stereochemistry. In stereochemistry, an atom cluster can have a pentagonal pyramidal geometry. Such a molecule has a main-group element with one active lone pair, which can be described by a model, known as VSEPR theory, that predicts the geometry of molecules. An example of a molecule with this structure is the nido-cage carborane CB5H9. References. Notes. <templatestyles src="Reflist/styles.css" /> Works cited. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "C_{5\\mathrm{v}}" }, { "math_id": 1, "text": " W_5 " }, { "math_id": 2, "text": " W_n " }, { "math_id": 3, "text": "n" }, { "math_id": 4, "text": " J_2 " }, { "math_id": 5, "text": " a " }, { "math_id": 6, "text": " A " }, { "math_id": 7, "text": "V" }, { "math_id": 8, "text": " \\begin{align}\n A &= \\frac{a^2}{2}\\sqrt{\\frac{5}{2}\\left(10+\\sqrt{5}+\\sqrt{75+30\\sqrt{5}}\\right)} \\approx 3.88554a^2, \\\\\n V &= \\frac{5 + \\sqrt{5}}{24} a^3 \\approx 0.30150a^3.\n\\end{align} " }, { "math_id": 9, "text": " J_9 " }, { "math_id": 10, "text": " J_{11} " }, { "math_id": 11, "text": " J_{13} " }, { "math_id": 12, "text": " J_{16} " }, { "math_id": 13, "text": " J_{58} " }, { "math_id": 14, "text": " J_{59} " }, { "math_id": 15, "text": " J_{60} " }, { "math_id": 16, "text": " J_{61} " }, { "math_id": 17, "text": " J_{62} " }, { "math_id": 18, "text": " J_{63} " } ]
https://en.wikipedia.org/wiki?curid=1128461
1128465
Triangular cupola
Cupola with hexagonal base In geometry, the triangular cupola is the cupola with a hexagon as its base and a triangle as its top. If the edges are equal in length, the triangular cupola is a Johnson solid. It can be seen as half a cuboctahedron. The triangular cupola can be used to construct many polyhedra. Properties. The triangular cupola has 4 triangles, 3 squares, and 1 hexagon as its faces; the hexagon is the base and one of the four triangles is the top. If all of the edges are equal in length, the triangles and the hexagon become regular. The dihedral angle between each triangle and the hexagon is approximately 70.5°, that between each square and the hexagon is 54.7°, and that between square and triangle is 125.3°. A convex polyhedron in which all of the faces are regular is a Johnson solid, and the triangular cupola is among them, enumerated as the third Johnson solid formula_0. Given that formula_1 is the edge length of a triangular cupola, its surface area formula_2 can be calculated by adding the areas of four equilateral triangles, three squares, and one hexagon: formula_3 Its height formula_4 and volume formula_5 are: formula_6 It has an axis of symmetry passing through the centers of both its top and its base, around which it is symmetric under rotations of one- and two-thirds of a full turn. It is also mirror-symmetric relative to any perpendicular plane passing through a bisector of the hexagonal base. Therefore, it has pyramidal symmetry, the cyclic group formula_7 of order 6. Related polyhedra. The triangular cupola can be found in the construction of many polyhedra. An example is the cuboctahedron, of which the triangular cupola may be considered one half. A construction that involves the attachment of its base to another polyhedron is known as augmentation; attaching it to prisms or antiprisms is known as elongation or gyroelongation. Some of the other Johnson solids constructed in such a way are the elongated triangular cupola formula_8, gyroelongated triangular cupola formula_9, triangular orthobicupola formula_10, elongated triangular orthobicupola formula_11, elongated triangular gyrobicupola formula_12, gyroelongated triangular bicupola formula_13, and augmented truncated tetrahedron formula_14. The triangular cupola may also be used to construct the truncated tetrahedron, although this leaves some hollows and a regular tetrahedron as its interior. Such a polyhedron is built in a way similar to the rhombic dodecahedron, which is constructed by attaching six square pyramids outward onto a cube, each congruent to one of the six pyramids into which a cube is dissected by joining its center to its faces. In this construction, the truncated tetrahedron is built by attaching four such triangular cupolas rectangle-by-rectangle; these cupolas have alternating right isosceles triangles and rectangles as their lateral faces, with edges in the ratio formula_15. The truncated octahedron can be constructed by attaching eight of those same triangular cupolas triangle-by-triangle. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " J_{3} " }, { "math_id": 1, "text": " a " }, { "math_id": 2, "text": " A " }, { "math_id": 3, "text": " A = \\left(3+\\frac{5\\sqrt{3}}{2} \\right) a^2 \\approx 7.33a^2. " }, { "math_id": 4, "text": " h " }, { "math_id": 5, "text": " V " }, { "math_id": 6, "text": " \\begin{align}\n h &= \\frac{\\sqrt{6}}{3} a\\approx 0.82a, \\\\\n V &= \\left(\\frac{5}{3\\sqrt{2}}\\right)a^3 \\approx 1.18a^3.\n\\end{align}\n" }, { "math_id": 7, "text": " C_{3\\mathrm{v}} " }, { "math_id": 8, "text": " J_{18} " }, { "math_id": 9, "text": " J_{22} " }, { "math_id": 10, "text": " J_{27} " }, { "math_id": 11, "text": " J_{35} " }, { "math_id": 12, "text": " J_{36} " }, { "math_id": 13, "text": " J_{44} " }, { "math_id": 14, "text": " J_{65} " }, { "math_id": 15, "text": " 1 : \\frac{1}{2}\\sqrt{2} " } ]
https://en.wikipedia.org/wiki?curid=1128465
1128479
Square cupola
Cupola with octagonal base In geometry, the square cupola (sometimes called lesser dome) is the cupola with an octagonal base. When its edges are equal in length, it is a Johnson solid, a convex polyhedron whose faces are regular. It can be used to construct many polyhedra, particularly other Johnson solids. Properties. The square cupola has 4 triangles, 5 squares, and 1 octagon as its faces; the octagon is the base, and one of the squares is the top. If the edges are equal in length, the triangles and octagon become regular, and the edge length of the octagon is equal to the edge length of both triangles and squares. The dihedral angle between a square and a triangle is approximately formula_1, that between a triangle and the octagon is formula_2, that between a square and the octagon is precisely formula_3, and that between two adjacent squares is formula_4. A convex polyhedron in which all the faces are regular is a Johnson solid, and the square cupola is enumerated as formula_5, the fourth Johnson solid. Given an edge length formula_6, the surface area formula_7 of a square cupola can be calculated by adding the areas of all faces: formula_8 Its height formula_9, circumradius formula_10, and volume formula_11 are: formula_12 It has an axis of symmetry passing through the centers of both its top and its base, around which it is symmetric under rotations of one-, two-, and three-quarters of a full turn. It is also mirror-symmetric relative to any perpendicular plane passing through a bisector of the base. Therefore, it has pyramidal symmetry, the cyclic group formula_0 of order 8. Related polyhedra and honeycombs. The square cupola can be found in many constructions of polyhedra. An example is the rhombicuboctahedron, which can be seen as eight overlapping cupolae. A construction that involves the attachment of its base to another polyhedron is known as augmentation; attaching it to prisms or antiprisms is known as elongation or gyroelongation. Some of the other Johnson solids are the elongated square cupola formula_13, gyroelongated square cupola formula_14, square orthobicupola formula_15, square gyrobicupola formula_16, elongated square gyrobicupola formula_17, and gyroelongated square bicupola formula_18. The crossed square cupola is one of the nonconvex Johnson solid isomorphs, being topologically identical to the convex square cupola. It can be obtained as a slice of the nonconvex great rhombicuboctahedron or quasirhombicuboctahedron, analogously to how the square cupola may be obtained as a slice of the rhombicuboctahedron. As in all cupolae, the base polygon has twice as many edges and vertices as the top; in this case the base polygon is an octagram. It may be seen as a cupola with a retrograde square base, so that the squares and triangles connect across the bases in the opposite way to the square cupola, hence intersecting each other. The square cupola is a component of several nonuniform space-filling lattices: References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " C_{4v} " }, { "math_id": 1, "text": " 144.7^\\circ " }, { "math_id": 2, "text": " 54.7^\\circ " }, { "math_id": 3, "text": " 45^\\circ " }, { "math_id": 4, "text": " 135^\\circ " }, { "math_id": 5, "text": " J_{4} " }, { "math_id": 6, "text": " a " }, { "math_id": 7, "text": " A " }, { "math_id": 8, "text": " A = \\left(7+2\\sqrt{2}+\\sqrt{3}\\right)a^2 \\approx 11.560a^2. " }, { "math_id": 9, "text": " h " }, { "math_id": 10, "text": " C " }, { "math_id": 11, "text": " V " }, { "math_id": 12, "text": " \\begin{align}\n h &= \\frac{\\sqrt{2}}{2}a \\approx 0.707a, \\\\\n C &= \\left(\\frac{1}{2}\\sqrt{5+2\\sqrt{2}}\\right)a \\approx 1.399a, \\\\\n V &= \\left(1+\\frac{2\\sqrt{2}}{3}\\right)a^3 \\approx 1.943a^3.\n\\end{align} " }, { "math_id": 13, "text": " J_{19} " }, { "math_id": 14, "text": " J_{23} " }, { "math_id": 15, "text": " J_{28} " }, { "math_id": 16, "text": " J_{29} " }, { "math_id": 17, "text": " J_{37} " }, { "math_id": 18, "text": " J_{45} " } ]
https://en.wikipedia.org/wiki?curid=1128479
1128491
Pentagonal rotunda
6th Johnson solid (17 faces) In geometry, the pentagonal rotunda is one of the Johnson solids ("J"6). It can be seen as half of an icosidodecahedron, or as half of a pentagonal orthobirotunda. It has a total of 17 faces. A Johnson solid is one of 92 strictly convex polyhedra that is composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966. Formulae. The following formulae for volume, surface area, circumradius, and height are valid if all faces are regular, with edge length "a": formula_0 formula_1 formula_2 formula_3 References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "V=\\left(\\frac{1}{12}\\left(45+17\\sqrt{5}\\right)\\right)a^3\\approx6.91776...a^3" }, { "math_id": 1, "text": "\\begin{align}\nA&=\\left(\\frac{1}{2}\\sqrt{5\\left(145+58\\sqrt{5}+2\\sqrt{30\\left(65+29\\sqrt{5}\\right)}\\right)}\\right)a^2 \\\\\n&=\\left(\\frac{1}{2}\\left(5\\sqrt{3}+\\sqrt{10\\left(65+29\\sqrt{5}\\right)}\\right)\\right)a^2\\approx22.3472...a^2\n\\end{align}" }, { "math_id": 2, "text": "R=\\left(\\frac{1}{2}\\left(1+\\sqrt{5}\\right)\\right)a\\approx1.61803...a" }, { "math_id": 3, "text": "H=\\left(\\sqrt{1+\\frac{2}{\\sqrt{5}}}\\right)a\\approx1.37638...a" } ]
https://en.wikipedia.org/wiki?curid=1128491
1128506
Elongated square cupola
19th Johnson solid In geometry, the elongated square cupola is a polyhedron constructed from an octagonal prism by attaching a square cupola onto one of its bases. It is an example of a Johnson solid. Construction. The elongated square cupola is constructed from an octagonal prism by attaching a square cupola onto one of its bases, a process known as elongation. This cupola covers one of the octagonal faces, so that the resulting polyhedron has four equilateral triangles, thirteen squares, and one regular octagon. A convex polyhedron in which all of the faces are regular polygons is a Johnson solid. The elongated square cupola is one of them, enumerated as the nineteenth Johnson solid formula_0. A Johnson solid is one of 92 strictly convex polyhedra that is composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966. Properties. The surface area formula_1 of an elongated square cupola is the sum of the areas of all of its polygonal faces. Its volume formula_2 can be ascertained by dissecting it into a square cupola and an octagonal prism, and then adding their volumes. Given an elongated square cupola with edge length formula_3, its surface area and volume are: formula_4 The dual polyhedron of an elongated square cupola has 20 faces: 8 isosceles triangles, 4 kites, 8 quadrilaterals. Related polyhedra and honeycombs. The elongated square cupola forms space-filling honeycombs with tetrahedra and cubes; with cubes and cuboctahedra; and with tetrahedra, elongated square pyramids, and elongated square bipyramids. (The latter two units can be decomposed into cubes and square pyramids.)<br> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " J_{19} " }, { "math_id": 1, "text": " A " }, { "math_id": 2, "text": " V " }, { "math_id": 3, "text": " a " }, { "math_id": 4, "text": " \\begin{align}\n A &= \\left(15+2\\sqrt{2}+\\sqrt{3}\\right)a^2 \\approx 19.561a^2, \\\\\n V &= \\left(3+\\frac{8\\sqrt{2}}{3}\\right)a^3 \\approx 6.771a^3.\n\\end{align} " } ]
https://en.wikipedia.org/wiki?curid=1128506
1128551
Elongated square gyrobicupola
37th Johnson solid In geometry, the elongated square gyrobicupola is a polyhedron constructed by attaching two square cupolas onto the bases of an octagonal prism, with one of them rotated. It was once mistakenly considered a rhombicuboctahedron by many mathematicians. It is not considered to be an Archimedean solid because it lacks a set of global symmetries that map every vertex to every other vertex, unlike the 13 Archimedean solids. It is also a canonical polyhedron. For this reason, it is also known as the pseudo-rhombicuboctahedron, Miller solid, or Miller–Askinuze solid. Construction. The elongated square gyrobicupola can be constructed similarly to the rhombicuboctahedron, by attaching two regular square cupolas onto the bases of an octagonal prism, a process known as elongation. The difference between these two polyhedra is that one of the two square cupolas of the elongated square gyrobicupola is twisted by 45 degrees, a process known as "gyration", making the triangular faces staggered vertically. The resulting polyhedron has 8 equilateral triangles and 18 squares. A convex polyhedron in which all of the faces are regular polygons is a Johnson solid, and the elongated square gyrobicupola is among them, enumerated as the 37th Johnson solid formula_1. The elongated square gyrobicupola may have been discovered by Johannes Kepler in his enumeration of the Archimedean solids, but its first clear appearance in print appears to be the work of Duncan Sommerville in 1905. It was independently rediscovered by J. C. P. Miller in 1930 by mistake while attempting to construct a model of the rhombicuboctahedron. This solid was discovered again by V. G. Ashkinuse in 1957. Properties. An elongated square gyrobicupola with edge length formula_2 has a surface area: formula_3 obtained by adding the areas of 8 equilateral triangles and 18 squares. Its volume can be calculated by slicing it into two square cupolas and one octagonal prism: formula_4 The elongated square gyrobicupola possesses the three-dimensional symmetry group formula_0 of order 16. It is locally vertex-regular – the arrangement of the four faces incident on any vertex is the same for all vertices; this is unique among the Johnson solids. However, the manner in which it is "twisted" gives it a distinct "equator" and two distinct "poles", which in turn divides its vertices into 8 "polar" vertices (4 per pole) and 16 "equatorial" vertices. It is therefore not vertex-transitive, and consequently not usually considered to be the 14th Archimedean solid. The dihedral angles of an elongated square gyrobicupola can be ascertained in a similar way as for the rhombicuboctahedron, by adding the dihedral angles of a square cupola and an octagonal prism: Related polyhedra and honeycombs. The elongated square gyrobicupola can form a space-filling honeycomb with the regular tetrahedron, cube, and cuboctahedron. It can also form another honeycomb with the tetrahedron, square pyramid and various combinations of cubes, elongated square pyramids, and elongated square bipyramids.<br> The pseudo great rhombicuboctahedron is a nonconvex analog of the pseudo-rhombicuboctahedron, constructed in a similar way from the nonconvex great rhombicuboctahedron. In chemistry. The polyvanadate ion [V18O42]12− has a pseudo-rhombicuboctahedral structure, where each square face acts as the base of a VO5 pyramid. References. <templatestyles src="Reflist/styles.css" />
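The slicing just described can be checked numerically. The following short Python sketch (the function name is illustrative) adds the volumes of two unit-edge square cupolas and one unit-edge octagonal prism and compares the sum with the closed-form expression:

<syntaxhighlight lang="python">
from math import sqrt

def elongated_square_gyrobicupola_volume(a):
    """Volume obtained by slicing the solid into two square cupolas and an octagonal prism."""
    square_cupola = (1 + 2 * sqrt(2) / 3) * a**3     # volume of one square cupola
    octagonal_prism = 2 * (1 + sqrt(2)) * a**3       # regular octagon area times height a
    return 2 * square_cupola + octagonal_prism

a = 1.0
print(elongated_square_gyrobicupola_volume(a))       # ~8.714
print((12 + 10 * sqrt(2)) / 3 * a**3)                # closed form above, same value
</syntaxhighlight>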
[ { "math_id": 0, "text": " D_{4\\mathrm{d}} " }, { "math_id": 1, "text": " J_{37} " }, { "math_id": 2, "text": " a " }, { "math_id": 3, "text": " \\left(18+2\\sqrt{3}\\right)a^2 \\approx 21.464a^2, " }, { "math_id": 4, "text": " \\frac{12+10\\sqrt{2}}{3}a^3 \\approx 8.714a^3. " } ]
https://en.wikipedia.org/wiki?curid=1128551
11285518
Problem of points
Problem in probability theory The problem of points, also called the problem of division of the stakes, is a classical problem in probability theory. One of the famous problems that motivated the beginnings of modern probability theory in the 17th century, it led Blaise Pascal to the first explicit reasoning about what today is known as an expected value. The problem concerns a game of chance with two players who have equal chances of winning each round. The players contribute equally to a prize pot, and agree in advance that the first player to have won a certain number of rounds will collect the entire prize. Now suppose that the game is interrupted by external circumstances before either player has achieved victory. How does one then divide the pot fairly? It is tacitly understood that the division should depend somehow on the number of rounds won by each player, such that a player who is close to winning will get a larger part of the pot. But the problem is not merely one of calculation; it also involves deciding what a "fair" division actually is. Early solutions. Luca Pacioli considered such a problem in his 1494 textbook "Summa de arithmetica, geometrica, proportioni et proportionalità". His method was to divide the stakes in proportion to the number of rounds won by each player, and the number of rounds needed to win did not enter his calculations at all. In the mid-16th century Niccolò Tartaglia noticed that Pacioli's method leads to counterintuitive results if the game is interrupted when only one round has been played. In that case, Pacioli's rule would award the entire pot to the winner of that single round, though a one-round lead early in a long game is far from decisive. Tartaglia constructed a method that avoids that particular problem by basing the division on the ratio between the size of the lead and the length of the game. This solution is still not without problems, however; in a game to 100 it divides the stakes in the same way for a 65–55 lead as for a 99–89 lead, even though the former is still a relatively open game whereas in the latter situation victory for the leading player is almost certain. Tartaglia himself was unsure whether the problem was solvable at all in a way that would convince both players of its fairness: "in whatever way the division is made there will be cause for litigation". Pascal and Fermat. The problem arose again around 1654 when Chevalier de Méré posed it to Blaise Pascal. Pascal discussed the problem in his ongoing correspondence with Pierre de Fermat. Through this discussion, Pascal and Fermat not only provided a convincing, self-consistent solution to this problem, but also developed concepts that are still fundamental to probability theory. The starting insight for Pascal and Fermat was that the division should not depend so much on the history of the part of the interrupted game that actually took place, as on the possible ways the game might have continued, were it not interrupted. It is intuitively clear that a player with a 7–5 lead in a game to 10 has the same chance of eventually winning as a player with a 17–15 lead in a game to 20, and Pascal and Fermat therefore thought that interruption in either of the two situations ought to lead to the same division of the stakes. In other words, what is important is not the number of rounds each player has won so far, but the number of rounds each player still needs to win in order to achieve overall victory. 
Fermat now reasoned thus: If one player needs formula_0 more rounds to win and the other needs formula_1, the game will surely have been won by someone after formula_2 additional rounds. Therefore, imagine that the players were to play formula_2 more rounds; in total these rounds have formula_3 different possible outcomes. In some of these possible futures the game will actually have been decided in fewer than formula_2 rounds, but it does no harm to imagine the players continuing to play with no purpose. Considering only equally long futures has the advantage that one easily convinces oneself that each of the formula_3 possibilities is equally likely. Fermat was thus able to compute the odds for each player to win, simply by writing down a table of all formula_3 possible continuations and counting how many of them would lead to each player winning. Fermat now considered it obviously fair to divide the stakes in proportion to those odds. Fermat's solution, certainly "correct" by today's standards, was improved by Pascal in two ways. First, Pascal produced a more elaborate argument why the resulting division should be considered fair. Second, he showed how to calculate the correct division more efficiently than Fermat's tabular method, which becomes completely impractical (without modern computers) if formula_2 is more than about 10. Instead of just considering the probability of winning the "entire" remaining game, Pascal devised a principle of smaller steps: Suppose that the players had been able to play just "one" more round before being interrupted, and that we already had decided how to fairly divide the stakes after that one more round (possibly because that round lets one of the players win). The imagined extra round may lead to one of two possible futures with different fair divisions of the stakes, but since the two players have even chances of winning the next round, they should split the difference between the two future divisions evenly. In this way knowledge of the fair solutions in games with fewer rounds remaining can be used to calculate fair solutions for games with more rounds remaining. It is easier to convince oneself that this principle is fair than it is for Fermat's table of possible futures, which are doubly hypothetical because one must imagine that the game sometimes continues after having been won. Pascal's analysis here is one of the earliest examples of using expected values instead of odds when reasoning about probability. Shortly after, this idea would become a basis for the first systematic treatise on probability by Christiaan Huygens. Later the modern concept of probability grew out of the use of expectation values by Pascal and Huygens. The direct application of Pascal's step-by-step rule is significantly quicker than Fermat's method when many rounds remain. However, Pascal was able to use it as a starting point for developing more advanced computational methods. Through clever manipulation of identities involving what is today known as Pascal's triangle (including several of the first explicit proofs by induction) Pascal finally showed that in a game where player "a" needs "r" points to win and player "b" needs "s" points to win, the correct division of the stakes between player "a" (left side) and "b" (right side) is (using modern notation): formula_4 where the formula_5 term represents the combination operator. The problem of dividing the stakes became a major motivating example for Pascal in his "Treatise on the arithmetic triangle". 
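As an illustration, both approaches can be compared directly in a few lines of Python (a minimal sketch; the function names are illustrative). Fermat's method enumerates all formula_3 equally likely continuations, while Pascal's closed form evaluates the binomial sums above; for a player needing 2 rounds against a player needing 3, both give the classical 11 : 5 split.

<syntaxhighlight lang="python">
from itertools import product
from math import comb

def fermat_shares(r, s):
    """Fermat's method: enumerate all 2**(r+s-1) equally likely continuations."""
    n = r + s - 1
    a_wins = sum(1 for rounds in product("ab", repeat=n) if rounds.count("a") >= r)
    return a_wins, 2**n - a_wins

def pascal_shares(r, s):
    """Pascal's closed form: binomial sums over the same r+s-1 imagined rounds."""
    n = r + s - 1
    a_share = sum(comb(n, k) for k in range(s))          # k counts rounds won by player b
    b_share = sum(comb(n, k) for k in range(s, n + 1))
    return a_share, b_share

# Player a needs 2 more rounds, player b needs 3 more rounds.
print(fermat_shares(2, 3))   # (11, 5)
print(pascal_shares(2, 3))   # (11, 5)
</syntaxhighlight>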
Though Pascal's derivation of this result was independent of Fermat's tabular method, it is clear that it also describes exactly the counting of different outcomes of formula_2 additional rounds that Fermat suggested. Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "r" }, { "math_id": 1, "text": "s" }, { "math_id": 2, "text": "r+s-1" }, { "math_id": 3, "text": "2^{r+s-1}" }, { "math_id": 4, "text": "\\sum_{k=0}^{s-1} \\binom{r+s-1}{k} \\mbox{ : } \\sum_{k=s}^{r+s-1} \\binom{r+s-1}{k}" }, { "math_id": 5, "text": "\\binom{r+s-1}{k}" } ]
https://en.wikipedia.org/wiki?curid=11285518
11285905
Quartile coefficient of dispersion
In statistics, the quartile coefficient of dispersion is a descriptive statistic which measures dispersion and is used to make comparisons within and between data sets. Since it is based on quantile information, it is less sensitive to outliers than measures such as the coefficient of variation. As such, it is one of several robust measures of scale. The statistic is easily computed using the first ("Q"1) and third ("Q"3) quartiles for each data set. The quartile coefficient of dispersion is: formula_0 Example. Consider the following two data sets: "A" = {2, 4, 6, 8, 10, 12, 14} "n" = 7, range = 12, mean = 8, median = 8, "Q"1 = 4, "Q"3 = 12, quartile coefficient of dispersion = 0.5 "B" = {1.8, 2, 2.1, 2.4, 2.6, 2.9, 3} "n" = 7, range = 1.2, mean = 2.4, median = 2.4, "Q"1 = 2, "Q"3 = 2.9, quartile coefficient of dispersion = 0.18 The quartile coefficient of dispersion of data set "A" is 2.7 times as great (0.5 / 0.18) as that of data set "B". References. <templatestyles src="Reflist/styles.css" />
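A short Python sketch reproducing the worked example above (an illustrative implementation; the quartiles here are taken as the medians of the lower and upper halves of the sorted data, the convention that yields the quoted values of "Q"1 and "Q"3, while other quartile conventions give slightly different numbers):

<syntaxhighlight lang="python">
from statistics import median

def quartile_coefficient_of_dispersion(data):
    """Compute (Q3 - Q1) / (Q3 + Q1), with Q1 and Q3 taken as the medians of the
    lower and upper halves of the sorted data (middle value excluded when n is odd)."""
    xs = sorted(data)
    half = len(xs) // 2
    q1 = median(xs[:half])
    q3 = median(xs[-half:])
    return (q3 - q1) / (q3 + q1)

a = [2, 4, 6, 8, 10, 12, 14]
b = [1.8, 2, 2.1, 2.4, 2.6, 2.9, 3]
print(quartile_coefficient_of_dispersion(a))   # 0.5
print(quartile_coefficient_of_dispersion(b))   # ~0.1837
</syntaxhighlight>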
[ { "math_id": 0, "text": "{Q_3 - Q_1 \\over Q_3 + Q_1}." } ]
https://en.wikipedia.org/wiki?curid=11285905
1128592
Elongated pentagonal rotunda
In geometry, the elongated pentagonal rotunda is one of the Johnson solids ("J"21). As the name suggests, it can be constructed by elongating a pentagonal rotunda ("J"6) by attaching a decagonal prism to its base. It can also be seen as an elongated pentagonal orthobirotunda ("J"42) with one pentagonal rotunda removed. A Johnson solid is one of 92 strictly convex polyhedra that is composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966. Formulae. The following formulae for volume and surface area can be used if all faces are regular, with edge length "a": formula_0 formula_1 Dual polyhedron. The dual of the elongated pentagonal rotunda has 30 faces: 10 isosceles triangles, 10 rhombi, and 10 quadrilaterals. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "V=\\frac{1}{12}\\left(45+17\\sqrt{5}+30\\sqrt{5+2\\sqrt{5}}\\right)a^3\\approx14.612...a^3" }, { "math_id": 1, "text": "A=\\frac{1}{2}\\left(20+\\sqrt{5\\left(145+58\\sqrt{5}+2\\sqrt{30\\left(65+29\\sqrt{5}\\right)}\\right)}\\right)a^2\\approx32.3472...a^2" } ]
https://en.wikipedia.org/wiki?curid=1128592
1128599
Gyroelongated square cupola
In geometry, the gyroelongated square cupola is one of the Johnson solids ("J"23). As the name suggests, it can be constructed by gyroelongating a square cupola ("J"4) by attaching an octagonal antiprism to its base. It can also be seen as a gyroelongated square bicupola ("J"45) with one square cupola removed. A Johnson solid is one of 92 strictly convex polyhedra that is composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966. Area and Volume. The surface area is formula_0 The volume is the sum of the volume of a square cupola and the volume of an octagonal antiprism, formula_1 Dual polyhedron. The dual of the gyroelongated square cupola has 20 faces: 8 kites, 4 rhombi, and 8 pentagons.
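This decomposition can be checked numerically. The sketch below is illustrative (in particular, computing the antiprism volume via the prismatoid formula is an assumption of the sketch, not part of the source): it adds the volume of a unit-edge square cupola to that of a unit-edge octagonal antiprism and compares the sum with the closed-form expression above.

<syntaxhighlight lang="python">
from math import cos, pi, sin, sqrt

def octagonal_antiprism_volume(a=1.0):
    """Volume of a uniform octagonal antiprism with edge length a, computed with the
    prismatoid formula (all of its vertices lie in two parallel planes)."""
    n = 8
    R = a / (2 * sin(pi / n))                              # circumradius of each octagonal base
    h = a * sqrt(1 - 1 / (4 * cos(pi / (2 * n)) ** 2))     # distance between the two bases
    base = 0.5 * n * R**2 * sin(2 * pi / n)                # area of a regular octagon
    mid = n * R**2 * cos(pi / (2 * n)) ** 2 * sin(pi / n)  # mid-height cross-section, a regular 16-gon
    return h / 6 * (2 * base + 4 * mid)

a = 1.0
square_cupola = (1 + 2 * sqrt(2) / 3) * a**3
closed_form = (1 + 2 * sqrt(2) / 3
               + 2 / 3 * sqrt(4 + 2 * sqrt(2) + 2 * sqrt(146 + 103 * sqrt(2)))) * a**3
print(square_cupola + octagonal_antiprism_volume(a))   # ~6.2108
print(closed_form)                                     # ~6.2108
</syntaxhighlight>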
[ { "math_id": 0, "text": "A=\\left(7+2\\sqrt{2}+5\\sqrt{3}\\right)a^2\\approx 18.4886811...a^2." }, { "math_id": 1, "text": "V=\\left(1+\\frac{2}{3}\\sqrt{2} + \\frac{2}{3}\\sqrt{4+2\\sqrt{2}+2\\sqrt{146+103\\sqrt{2}}}\\right)a^3\\approx6.2107658...a^3." } ]
https://en.wikipedia.org/wiki?curid=1128599
1128605
Gyroelongated pentagonal rotunda
In geometry, the gyroelongated pentagonal rotunda is one of the Johnson solids ("J"25). As the name suggests, it can be constructed by gyroelongating a pentagonal rotunda ("J"6) by attaching a decagonal antiprism to its base. It can also be seen as a gyroelongated pentagonal birotunda ("J"48) with one pentagonal rotunda removed. A Johnson solid is one of 92 strictly convex polyhedra that is composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966. Area and Volume. With edge length a, the surface area is formula_0 and the volume is formula_1 Dual polyhedron. The dual of the gyroelongated pentagonal rotunda has 30 faces: 10 pentagons, 10 rhombi, and 10 quadrilaterals.
[ { "math_id": 0, "text": "A=\\frac{1}{2}\\left( 15\\sqrt{3}+\\left(5+3\\sqrt{5}\\right)\\sqrt{5+2\\sqrt{5}}\\right)a^2\\approx31.007454303...a^2," }, { "math_id": 1, "text": "V=\\left(\\frac{45}{12}+\\frac{17}{12}\\sqrt{5} + \\frac{5}{6}\\sqrt{2\\sqrt{650+290\\sqrt{5}}-2\\sqrt{5}-2}\\right) a^3\\approx13.667050844...a^3." } ]
https://en.wikipedia.org/wiki?curid=1128605
1128615
Square gyrobicupola
29th Johnson solid; 2 square cupolae joined base-to-base In geometry, the square gyrobicupola is one of the Johnson solids ("J"29). Like the square orthobicupola ("J"28), it can be obtained by joining two square cupolae ("J"4) along their bases. The difference is that in this solid, the two halves are rotated 45 degrees with respect to one another. A Johnson solid is one of 92 strictly convex polyhedra that is composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966. The "square gyrobicupola" is the second in an infinite set of gyrobicupolae. Related to the square gyrobicupola is the elongated square gyrobicupola. This polyhedron is created when an octagonal prism is inserted between the two halves of the square gyrobicupola. Formulae. The following formulae for volume and surface area can be used if all faces are regular, with edge length "a": formula_0 formula_1 Related polyhedra and honeycombs. The square gyrobicupola forms space-filling honeycombs with tetrahedra, cubes and cuboctahedra; and with tetrahedra, square pyramids, and elongated square bipyramids. (The latter unit can be decomposed into elongated square pyramids, cubes, and/or square pyramids).<br> References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "V=\\left(2+\\frac{4\\sqrt{2}}{3}\\right)a^3\\approx3.88562...a^3" }, { "math_id": 1, "text": "A=2\\left(5+\\sqrt{3}\\right)a^2\\approx13.4641...a^2" } ]
https://en.wikipedia.org/wiki?curid=1128615
1128619
Pentagonal orthobirotunda
34th Johnson solid; 2 pentagonal rotundae joined base-to-base In geometry, the pentagonal orthobirotunda is a polyhedron constructed by attaching two pentagonal rotundae along their decagonal faces, matching like faces. It is an example of a Johnson solid. Construction. The pentagonal orthobirotunda is constructed by attaching two pentagonal rotundas along their decagonal bases, covering the decagonal faces. The resulting polyhedron has 32 faces, 30 vertices, and 60 edges. This construction is similar to the icosidodecahedron (or pentagonal gyrobirotunda), an Archimedean solid: the difference is that one of its rotundas is twisted by 36°, so that the pentagonal faces connect to the triangular ones, a process known as gyration. A convex polyhedron in which all of the faces are regular polygons is a Johnson solid. The pentagonal orthobirotunda is one of them, enumerated as the 34th Johnson solid formula_0. Properties. The surface area formula_1 of a pentagonal orthobirotunda can be determined by calculating the areas of all of its faces. Its volume formula_2 can be determined by slicing it into two pentagonal rotundas and summing their volumes. Therefore, its surface area and volume can be formulated as: formula_3 References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " J_{34} " }, { "math_id": 1, "text": " A " }, { "math_id": 2, "text": " V " }, { "math_id": 3, "text": "\\begin{align}\nA &= \\left(5\\sqrt{3}+3\\sqrt{25+10\\sqrt{5}}\\right) a^2 &\\approx 29.306a^2 \\\\\nV &= \\frac{45+17\\sqrt{5}}{6}a^3 &\\approx 13.836a^3.\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=1128619
1128634
Elongated pentagonal gyrobirotunda
43rd Johnson solid In geometry, the elongated pentagonal gyrobirotunda or elongated icosidodecahedron is one of the Johnson solids ("J"43). As the name suggests, it can be constructed by elongating a "pentagonal gyrobirotunda," or icosidodecahedron (one of the Archimedean solids), by inserting a decagonal prism between its congruent halves. Rotating one of the pentagonal rotundae ("J"6) through 36 degrees before inserting the prism yields an elongated pentagonal orthobirotunda ("J"42). A Johnson solid is one of 92 strictly convex polyhedra that is composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966. Formulae. The following formulae for volume and surface area can be used if all faces are regular, with edge length "a": formula_0 formula_1 References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "V=\\frac{1}{6}\\left(45+17\\sqrt{5}+15\\sqrt{5+2\\sqrt{5}}\\right)a^3 \\approx 21.5297 a^3" }, { "math_id": 1, "text": "A=\\left(10+\\sqrt{30\\left(10+3\\sqrt{5}+\\sqrt{75+30\\sqrt{5}}\\right)}\\right)a^2 \\approx 39.306 a^2" } ]
https://en.wikipedia.org/wiki?curid=1128634
1128635
Elongated pentagonal orthobirotunda
42nd Johnson solid In geometry, the elongated pentagonal orthobirotunda is one of the Johnson solids ("J"42). Its Conway polyhedron notation is at5jP5. As the name suggests, it can be constructed by elongating a pentagonal orthobirotunda ("J"34) by inserting a decagonal prism between its congruent halves. Rotating one of the pentagonal rotundae ("J"6) through 36 degrees before inserting the prism yields the elongated pentagonal gyrobirotunda ("J"43). A Johnson solid is one of 92 strictly convex polyhedra that is composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966. Formulae. The following formulae for volume and surface area can be used if all faces are regular, with edge length "a": formula_0 formula_1 References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "V=\\frac{1}{6}\\left(45+17\\sqrt{5}+15\\sqrt{5+2\\sqrt{5}}\\right)a^3\\approx21.5297...a^3" }, { "math_id": 1, "text": "A=\\left(10+\\sqrt{30\\left(10+3\\sqrt{5}+\\sqrt{75+30\\sqrt{5}}\\right)}\\right)a^2\\approx39.306...a^2" } ]
https://en.wikipedia.org/wiki?curid=1128635
1128638
Gyroelongated pentagonal birotunda
48th Johnson solid In geometry, the gyroelongated pentagonal birotunda is one of the Johnson solids ("J"48). As the name suggests, it can be constructed by gyroelongating a pentagonal birotunda (either "J"34 or the icosidodecahedron) by inserting a decagonal antiprism between its two halves. A Johnson solid is one of 92 strictly convex polyhedra that is composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966. The gyroelongated pentagonal birotunda is one of five Johnson solids which are chiral, meaning that they have a "left-handed" and a "right-handed" form. In the illustration to the right, each pentagonal face on the bottom half of the figure is connected by a path of two triangular faces to a pentagonal face above it and to the left. In the figure of opposite chirality (the mirror image of the illustrated figure), each bottom pentagon would be connected to a pentagonal face above it and to the right. The two chiral forms of "J"48 are not considered different Johnson solids. Area and Volume. With edge length a, the surface area is formula_0 and the volume is formula_1
[ { "math_id": 0, "text": "A=\\left(10\\sqrt{3} + 3\\sqrt{25+10\\sqrt{5}}\\right) a^2\\approx37.966236883...a^2," }, { "math_id": 1, "text": "V=\\left(\\frac{45}{6}+\\frac{17}{6}\\sqrt{5} + \\frac{5}{6}\\sqrt{2\\sqrt{650+290\\sqrt{5}}-2\\sqrt{5}-2}\\right) a^3\\approx20.584813812...a^3." } ]
https://en.wikipedia.org/wiki?curid=1128638
1128650
Gyroelongated square bicupola
45th Johnson solid In geometry, the gyroelongated square bicupola is the Johnson solid constructed by attaching two square cupolae to the bases of an octagonal antiprism. It has the property of chirality. Construction. The gyroelongated square bicupola is constructed by attaching two square cupolae to the bases of an octagonal antiprism, a process known as gyroelongation. This construction involves the removal of the two octagonal faces and their replacement with cupolae. As a result, this polyhedron has twenty-four triangular and ten square faces. A Johnson solid is a convex polyhedron all of whose faces are regular, and the gyroelongated square bicupola is one of them, enumerated as formula_1. Properties. Given an edge length formula_2, the surface area is: formula_3 the total area of twenty-four equilateral triangles and ten squares. Its volume is: formula_4 the total volume of two square cupolae and an octagonal antiprism. Its dihedral angles can be calculated by adding the components of the cupolae and the antiprism. The dihedral angle of the antiprism between two adjacent triangles is approximately formula_5. The dihedral angle of each cupola between two squares is formula_6, and that between a triangle and a square is formula_7. At the junction between each cupola and the antiprism, the dihedral angle between two adjacent triangles is formula_8 and that between a triangle and a square is formula_9. The gyroelongated square bicupola is one of five Johnson solids that are chiral, meaning that they have a "left-handed" and a "right-handed" form. In the following illustration, each square face on the left half of the figure is connected by a path of two triangular faces to a square face below it and on the left. In the figure of opposite chirality (the mirror image of the illustrated figure), each square on the left would be connected to a square face above it and on the right. These two chiral forms are not considered different Johnson solids. It has the symmetry of the dihedral group formula_0. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": " D_4 " }, { "math_id": 1, "text": " J_{45} " }, { "math_id": 2, "text": " a " }, { "math_id": 3, "text": " \\left(10+6\\sqrt{3}\\right) a^2 \\approx 20.392a^2," }, { "math_id": 4, "text": " \\left(2+\\frac{4}{3}\\sqrt{2} + \\frac{2}{3}\\sqrt{4+2\\sqrt{2}+2\\sqrt{146+103\\sqrt{2}}}\\right) a^3 \\approx 8.154a^3, " }, { "math_id": 5, "text": " 153.9^\\circ " }, { "math_id": 6, "text": " 135^\\circ " }, { "math_id": 7, "text": " 144.7^\\circ " }, { "math_id": 8, "text": " 151.3^\\circ " }, { "math_id": 9, "text": " 141.6^\\circ " } ]
https://en.wikipedia.org/wiki?curid=1128650
11286809
Implicit curve
Plane curve defined by an implicit equation In mathematics, an implicit curve is a plane curve defined by an implicit equation relating two coordinate variables, commonly "x" and "y". For example, the unit circle is defined by the implicit equation formula_1. In general, every implicit curve is defined by an equation of the form formula_2 for some function "F" of two variables. Hence an implicit curve can be considered as the set of zeros of a function of two variables. "Implicit" means that the equation is not expressed as a solution for either "x" in terms of "y" or vice versa. If formula_3 is a polynomial in two variables, the corresponding curve is called an "algebraic curve", and specific methods are available for studying it. Plane curves can be represented in Cartesian coordinates ("x", "y" coordinates) by any of three methods, one of which is the implicit equation given above. The graph of a function is usually described by an equation formula_4 in which the functional form is explicitly stated; this is called an "explicit" representation. The third essential description of a curve is the "parametric" one, where the "x"- and "y"-coordinates of curve points are represented by two functions "x"("t"), "y"("t") both of whose functional forms are explicitly stated, and which are dependent on a common parameter formula_5 Examples of implicit curves include: The first four examples are algebraic curves, but the last one is not algebraic. The first three examples possess simple parametric representations, which is not true for the fourth and fifth examples. The fifth example shows the possibly complicated geometric structure of an implicit curve. The implicit function theorem describes conditions under which an equation formula_2 can be "solved implicitly" for "x" and/or "y" – that is, under which one can validly write formula_10 or formula_4. This theorem is the key for the computation of essential geometric features of the curve: tangents, normals, and curvature. In practice implicit curves have an essential drawback: their visualization is difficult. But there are computer programs enabling one to display an implicit curve. Special properties of implicit curves make them essential tools in geometry and computer graphics. An implicit curve with an equation formula_2 can be considered as the level curve of level 0 of the surface formula_11 (see third diagram). Slope and curvature. In general, implicit curves fail the vertical line test (meaning that some values of "x" are associated with more than one value of "y") and so are not necessarily graphs of functions. However, the implicit function theorem gives conditions under which an implicit curve "locally" is given by the graph of a function (so in particular it has no self-intersections). If the defining relations are sufficiently smooth then, in such regions, implicit curves have well defined slopes, tangent lines, normal vectors, and curvature. There are several possible ways to compute these quantities for a given implicit curve. One method is to use implicit differentiation to compute the derivatives of "y" with respect to "x". Alternatively, for a curve defined by the implicit equation formula_2, one can express these formulas directly in terms of the partial derivatives of formula_12. 
In what follows, the partial derivatives are denoted formula_13 (for the derivative with respect to "x"), formula_14, formula_15 (for the second partial with respect to "x"), formula_16 (for the mixed second partial), formula_17 Tangent and normal vector. A curve point formula_18 is "regular" if the first partial derivatives formula_19 and formula_20 are not both equal to 0. The equation of the tangent line at a regular point formula_21 is formula_22 so the slope of the tangent line, and hence the slope of the curve at that point, is formula_23 If formula_24 at formula_25 the curve is vertical at that point, while if both formula_26 and formula_27 at that point then the curve is not differentiable there, but instead is a singular point – either a cusp or a point where the curve intersects itself. A normal vector to the curve at the point is given by formula_28 (here written as a row vector). Curvature. For readability of the formulas, the arguments formula_21 are omitted. The curvature formula_29 at a regular point is given by the formula formula_30. Derivation of the formulas. The implicit function theorem guarantees within a neighborhood of a point formula_21 the existence of a function formula_31 such that formula_32. By the chain rule, the derivatives of function formula_31 are formula_33 and formula_34 (where the arguments formula_35 on the right side of the second formula are omitted for ease of reading). Inserting the derivatives of function formula_31 into the formulas for a tangent and curvature of the graph of the explicit equation formula_36 yields formula_37 (tangent) formula_38 (curvature). Advantage and disadvantage of implicit curves. Disadvantage. The essential disadvantage of an implicit curve is the lack of an easy possibility to calculate single points which is necessary for visualization of an implicit curve (see next section). Applications of implicit curves. Within mathematics implicit curves play a prominent role as algebraic curves. In addition, implicit curves are used for designing curves of desired geometrical shapes. Here are two examples. Smooth approximations. Convex polygons. A smooth approximation of a convex polygon can be achieved in the following way: Let formula_41 be the equations of the lines containing the edges of the polygon such that for an inner point of the polygon formula_42 is positive. Then a subset of the implicit curve formula_43 with suitable small parameter formula_44 is a smooth (differentiable) approximation of the polygon. For example, the curves formula_45 for formula_46 contain smooth approximations of a polygon with 5 edges (see diagram). Pairs of lines. In case of two lines formula_47 one gets a pencil of "parallel lines", if the given lines are parallel or the pencil of hyperbolas, which have the given lines as asymptotes. For example, the product of the coordinate axes variables yields the pencil of hyperbolas formula_48, which have the coordinate axes as asymptotes. Others. If one starts with simple implicit curves other than lines (circles, parabolas...) one gets a wide range of interesting new curves. For example, formula_49 (product of a circle and the x-axis) yields smooth approximations of one half of a circle (see picture), and formula_50 (product of two circles) yields smooth approximations of the intersection of two circles (see diagram). Blending curves. In CAD one uses implicit curves for the generation of blending curves, which are special curves establishing a smooth transition between two given curves. 
For example, formula_51 generates blending curves between the two circles formula_52 formula_53 The method guarantees the continuity of the tangents and curvatures at the points of contact (see diagram). The two lines formula_54 determine the points of contact at the circles. Parameter formula_55 is a design parameter. In the diagram, formula_56. Equipotential curves of two point charges. Equipotential curves of two equal point charges at the points formula_57 can be represented by the equation formula_58 formula_59 The curves are similar to Cassini ovals, but they are not such curves. Visualization of an implicit curve. To visualize an implicit curve one usually determines a polygon on the curve and displays the polygon. For a parametric curve this is an easy task: One just computes the points of a sequence of parametric values. For an implicit curve one has to solve two subproblems: In both cases it is reasonable to assume formula_60. In practice this assumption is violated at single isolated points only. Point algorithm. For the solution of both tasks mentioned above it is essential to have a computer program (which we will call formula_61), which, when given a point formula_62 near an implicit curve, finds a point formula_63 that is exactly on the curve, up to the accuracy of computation: (P1) for the start point is formula_64 (P2) repeat formula_65 (Newton step for function formula_66) (P3) until the distance between the points formula_67 is small enough. (P4) formula_68 is the curve point near the start point formula_69. Tracing algorithm. In order to generate a nearly equally spaced polygon on the implicit curve one chooses a step length formula_70 and (T1) chooses a suitable starting point in the vicinity of the curve (T2) determines a first curve point formula_71 using program formula_61 (T3) determines the tangent (see above), chooses a starting point on the tangent using step length formula_72 (see diagram) and determines a second curve point formula_73 using program formula_61 . formula_74 Because the algorithm traces the implicit curve it is called a "tracing algorithm". The algorithm traces only connected parts of the curve. If the implicit curve consists of several parts it has to be started several times with suitable starting points. Raster algorithm. If the implicit curve consists of several or even unknown parts, it may be better to use a rasterisation algorithm. Instead of exactly following the curve, a raster algorithm covers the entire curve in so many points that they blend together and look like the curve. (R1) Generate a net of points (raster) on the area of interest of the x-y-plane. (R2) For every point formula_63 in the raster, run the point algorithm formula_61 starting from P, then mark its output. If the net is dense enough, the result approximates the connected parts of the implicit curve. If for further applications polygons on the curves are needed one can trace parts of interest by the tracing algorithm. Implicit space curves. Any space curve which is defined by two equations formula_75 is called an "implicit space curve". A curve point formula_76 is called "regular "if the cross product of the gradients formula_12 and formula_77 is not formula_78 at this point: formula_79 otherwise it is called "singular". Vector formula_80 is a "tangent vector" of the curve at point formula_81 "Examples:" formula_82 is a line. formula_83 is a plane section of a sphere, hence a circle. formula_84 is an ellipse (plane section of a cylinder). 
formula_85 is the intersection curve between a sphere and a cylinder. For the computation of curve points and the visualization of an implicit space curve see Intersection. References. <templatestyles src="Reflist/styles.css" />
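The point algorithm (P1)-(P4) and the slope formula given in the section on tangents above translate directly into a few lines of code. The following Python sketch is illustrative and not part of the original article; the function names and the choice of the unit circle as the test curve are assumptions of the example.

```python
import math

# Illustrative sketch only (not part of the article): the "point algorithm"
# (P1)-(P4) above, applied to the unit circle F(x, y) = x^2 + y^2 - 1 = 0.
# The function names curve_point and tangent_slope, and the choice of test
# curve and seed point, are assumptions of this sketch.

def F(x, y):
    return x**2 + y**2 - 1.0      # implicit equation of the unit circle

def F_x(x, y):
    return 2.0 * x                # partial derivative with respect to x

def F_y(x, y):
    return 2.0 * y                # partial derivative with respect to y

def curve_point(x, y, tol=1e-12, max_iter=50):
    """Pull a seed point (x, y) onto the curve F = 0 by repeated Newton-type steps."""
    for _ in range(max_iter):                          # (P2) repeat ...
        f, fx, fy = F(x, y), F_x(x, y), F_y(x, y)
        t = f / (fx * fx + fy * fy)                    # Newton step along the gradient
        x_new, y_new = x - t * fx, y - t * fy
        if math.hypot(x_new - x, y_new - y) < tol:     # (P3) ... until the step is small enough
            return x_new, y_new                        # (P4) curve point near the seed point
        x, y = x_new, y_new
    return x, y

def tangent_slope(x, y):
    """Slope -F_x/F_y of the implicit curve at a regular point (assumes F_y != 0)."""
    return -F_x(x, y) / F_y(x, y)

px, py = curve_point(0.9, 0.7)         # seed point near the circle
print(px, py, F(px, py))               # F(px, py) is ~0, so the point lies on the curve
print(tangent_slope(px, py))           # equals -x/y for the circle
```

Each Newton-type step moves the point along the gradient of F, which is why the method needs a nonzero gradient, matching the assumption stated in the visualization section above.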
[ { "math_id": 0, "text": "\\sin(x+y)-\\cos(xy)+1=0" }, { "math_id": 1, "text": "x^2+y^2=1" }, { "math_id": 2, "text": "F(x,y)=0" }, { "math_id": 3, "text": "F(x,y)" }, { "math_id": 4, "text": "y=f(x)" }, { "math_id": 5, "text": "t." }, { "math_id": 6, "text": "x+2y-3=0 ," }, { "math_id": 7, "text": "x^2+y^2-4=0 ," }, { "math_id": 8, "text": "x^3-y^2=0 ," }, { "math_id": 9, "text": "(x^2+y^2)^2-2c^2(x^2-y^2)-(a^4-c^4)=0 " }, { "math_id": 10, "text": "x=g(y)" }, { "math_id": 11, "text": "z=F(x,y)" }, { "math_id": 12, "text": "F" }, { "math_id": 13, "text": "F_x" }, { "math_id": 14, "text": "F_y" }, { "math_id": 15, "text": "F_{xx}" }, { "math_id": 16, "text": "F_{xy}" }, { "math_id": 17, "text": "F_{yy}." }, { "math_id": 18, "text": "(x_0, y_0)" }, { "math_id": 19, "text": "F_x(x_0,y_0)" }, { "math_id": 20, "text": "F_y(x_0,y_0)" }, { "math_id": 21, "text": "(x_0,y_0)" }, { "math_id": 22, "text": "F_x(x_0,y_0)(x-x_0)+F_y(x_0,y_0)(y-y_0)=0," }, { "math_id": 23, "text": "\\text{slope} =-\\frac{F_x(x_0,y_0)}{F_y(x_0,y_0)}." }, { "math_id": 24, "text": "F_y(x,y)=0 \\ne F_x(x,y)" }, { "math_id": 25, "text": "(x_0,y_0)," }, { "math_id": 26, "text": "F_y(x,y)=0" }, { "math_id": 27, "text": "F_x(x,y)=0" }, { "math_id": 28, "text": " \\mathbf{n}(x_0,y_0) = (F_x(x_0,y_0), F_y(x_0,y_0))" }, { "math_id": 29, "text": "\\kappa" }, { "math_id": 30, "text": "\\kappa = \\frac{-F_y^2F_{xx}+2F_xF_yF_{xy}-F_x^2F_{yy}}{(F_x^2+F_y^2)^{3/2}}" }, { "math_id": 31, "text": "f" }, { "math_id": 32, "text": "F(x,f(x))=0" }, { "math_id": 33, "text": " f'(x)=-\\frac{F_x(x,f(x))}{F_y(x,f(x))}" }, { "math_id": 34, "text": " f''(x)=\\frac{-F_y^2F_{xx}+2F_xF_yF_{xy}-F_x^2F_{yy}}{F_y^3}" }, { "math_id": 35, "text": "(x, f(x))" }, { "math_id": 36, "text": " y = f(x) " }, { "math_id": 37, "text": " y=f(x_0)+f'(x_0)(x-x_0)" }, { "math_id": 38, "text": " \\kappa(x_0)=\\frac{f''(x_0)}{(1+f'(x_0)^2)^{3/2}}" }, { "math_id": 39, "text": "F(x,y)=0," }, { "math_id": 40, "text": "F(x,y)-c=0" }, { "math_id": 41, "text": "g_i(x,y)=a_ix+b_iy+c_i=0, \\ i=1,\\dotsc,n" }, { "math_id": 42, "text": "g_i" }, { "math_id": 43, "text": "F(x,y)=g_1(x,y)\\cdots g_n(x,y)-c=0" }, { "math_id": 44, "text": "c" }, { "math_id": 45, "text": "F(x,y)=(x+1)(-x+1)y(-x-y+2)(x-y+2)-c=0 " }, { "math_id": 46, "text": "c= 0.03, \\dotsc, 0.6" }, { "math_id": 47, "text": "F(x,y)=g_1(x,y)g_2(x,y)-c=0" }, { "math_id": 48, "text": "xy-c=0, \\ c\\ne 0" }, { "math_id": 49, "text": "F(x,y)=y(-x^2-y^2+1)-c=0" }, { "math_id": 50, "text": "F(x,y)=(-x^2-(y+1)^2+4)(-x^2-(y-1)^2+4)-c=0" }, { "math_id": 51, "text": "F(x,y)=(1-\\mu)f_1f_2-\\mu (g_1g_2)^3 =0 " }, { "math_id": 52, "text": "f_1(x,y)=(x-x_1)^2+y^2-r_1^2=0 ," }, { "math_id": 53, "text": "f_2(x,y)=(x-x_2)^2+y^2-r_2^2=0 ." }, { "math_id": 54, "text": "g_1(x,y)=x-x_1=0 , \\ g_2(x,y)=x-x_2=0" }, { "math_id": 55, "text": "\\mu" }, { "math_id": 56, "text": "\\mu= 0.05, \\dotsc, 0.2 " }, { "math_id": 57, "text": "P_1=(1,0), \\; P_2=(-1,0)" }, { "math_id": 58, "text": "f(x,y)=\\frac{1}{|PP_1|}+\\frac{1}{|PP_2|}-c" }, { "math_id": 59, "text": " =\\frac{1}{\\sqrt{(x-1)^2+y^2}}+\\frac{1}{\\sqrt{(x+1)^2+y^2}}-c=0 ." }, { "math_id": 60, "text": "\\operatorname{grad} F \\ne (0,0) " }, { "math_id": 61, "text": "\\mathsf{CPoint}" }, { "math_id": 62, "text": "Q_0=(x_0,y_0)" }, { "math_id": 63, "text": "P" }, { "math_id": 64, "text": "j=0" }, { "math_id": 65, "text": "(x_{j+1},y_{j+1})= (x_j,y_j)- \\frac{F(x_j,y_j)}{F_x(x_j,y_j)^2+F_y(x_j,y_j)^2}\\, \\left( F_x(x_j,y_j),F_y(x_j,y_j)\\right). 
" }, { "math_id": 66, "text": "g(t)=F\\left(x_j+tF_x(x_j,y_j),y_j+tF_y(x_j,y_j)\\right) \\ . " }, { "math_id": 67, "text": "(x_{j+1},y_{j+1}),\\, (x_j,y_j)" }, { "math_id": 68, "text": "P=(x_{j+1},y_{j+1})" }, { "math_id": 69, "text": "Q_0" }, { "math_id": 70, "text": " s" }, { "math_id": 71, "text": "P_1" }, { "math_id": 72, "text": "s" }, { "math_id": 73, "text": "P_2" }, { "math_id": 74, "text": " \\cdots " }, { "math_id": 75, "text": "\\begin{matrix}\nF(x,y,z)=0, \\\\\nG(x,y,z)=0 \\end{matrix} " }, { "math_id": 76, "text": "(x_0,y_0,z_0)" }, { "math_id": 77, "text": "G" }, { "math_id": 78, "text": "(0,0,0)" }, { "math_id": 79, "text": "\\mathbf t(x_0,y_0,z_0)=\\operatorname{grad}F(x_0,y_0,z_0)\\times \\operatorname{grad}G(x_0,y_0,z_0)\\ne (0,0,0);" }, { "math_id": 80, "text": "\\mathbf t(x_0,y_0,z_0)" }, { "math_id": 81, "text": "(x_0,y_0,z_0)." }, { "math_id": 82, "text": " (1)\\quad x+y+z-1=0 \\ ,\\ x-y+z-2=0 " }, { "math_id": 83, "text": "(2)\\quad x^2+y^2+z^2-4=0 \\ , \\ x+y+z-1=0 " }, { "math_id": 84, "text": " (3)\\quad x^2+y^2-1=0 \\ , \\ x+y+z-1=0 " }, { "math_id": 85, "text": "(4)\\quad x^2+y^2+z^2-16=0 \\ , \\ (y-y_0)^2+z^2-9=0 " } ]
https://en.wikipedia.org/wiki?curid=11286809
1128719
Temperature coefficient
Differential equation parameter in thermal physics A temperature coefficient describes the relative change of a physical property that is associated with a given change in temperature. For a property "R" that changes when the temperature changes by "dT", the temperature coefficient α is defined by the following equation: formula_0 Here α has the dimension of an inverse temperature and can be expressed e.g. in 1/K or K−1. If the temperature coefficient itself does not vary too much with temperature and formula_1, a linear approximation will be useful in estimating the value "R" of a property at a temperature "T", given its value "R"0 at a reference temperature "T"0: formula_2 where Δ"T" is the difference between "T" and "T"0. For strongly temperature-dependent α, this approximation is only useful for small temperature differences Δ"T". Temperature coefficients are specified for various applications, including electric and magnetic properties of materials as well as reactivity. The temperature coefficient of most of the reactions lies between 2 and 3. Negative temperature coefficient. Most ceramics exhibit negative temperature dependence of resistance behaviour. This effect is governed by an Arrhenius equation over a wide range of temperatures: formula_3 where "R" is resistance, "A" and "B" are constants, and "T" is absolute temperature (K). The constant "B" is related to the energies required to form and move the charge carriers responsible for electrical conduction – hence, as the value of "B" increases, the material becomes insulating. Practical and commercial NTC resistors aim to combine modest resistance with a value of "B" that provides good sensitivity to temperature. Such is the importance of the "B" constant value, that it is possible to characterize NTC thermistors using the B parameter equation: formula_4 where formula_5 is resistance at temperature formula_6. Therefore, many materials that produce acceptable values of formula_5 include materials that have been alloyed or possess variable negative temperature coefficient (NTC), which occurs when a physical property (such as thermal conductivity or electrical resistivity) of a material lowers with increasing temperature, typically in a defined temperature range. For most materials, electrical resistivity will decrease with increasing temperature. Materials with a negative temperature coefficient have been used in floor heating since 1971. The negative temperature coefficient avoids excessive local heating beneath carpets, bean bag chairs, mattresses, etc., which can damage wooden floors, and may infrequently cause fires. Reversible temperature coefficient. Residual magnetic flux density or Br changes with temperature and it is one of the important characteristics of magnet performance. Some applications, such as inertial gyroscopes and traveling-wave tubes (TWTs), need to have constant field over a wide temperature range. The reversible temperature coefficient (RTC) of Br is defined as: formula_7 To address these requirements, temperature compensated magnets were developed in the late 1970s. For conventional SmCo magnets, Br decreases as temperature increases. Conversely, for GdCo magnets, Br increases as temperature increases within certain temperature ranges. By combining samarium and gadolinium in the alloy, the temperature coefficient can be reduced to nearly zero. Electrical resistance. 
The temperature dependence of electrical resistance and thus of electronic devices (wires, resistors) has to be taken into account when constructing devices and circuits. The temperature dependence of conductors is to a great degree linear and can be described by the approximation below. formula_8 where formula_9 and formula_10 is the specific resistance at the specified reference temperature (normally "T" = 0 °C). That of a semiconductor is however exponential: formula_11 where formula_12 is defined as the cross sectional area and formula_13 and formula_14 are coefficients determining the shape of the function and the value of resistivity at a given temperature. For both, formula_13 is referred to as the "temperature coefficient of resistance" (TCR). This property is used in devices such as thermistors. Positive temperature coefficient of resistance. A positive temperature coefficient (PTC) refers to materials that experience an increase in electrical resistance when their temperature is raised. Materials which have useful engineering applications usually show a relatively rapid increase with temperature, i.e. a higher coefficient. The higher the coefficient, the greater an increase in electrical resistance for a given temperature increase. A PTC material can be designed to reach a maximum temperature for a given input voltage, since at some point any further increase in temperature would be met with greater electrical resistance. Unlike linear resistance heating or NTC materials, PTC materials are inherently self-limiting. On the other hand, NTC material may also be inherently self-limiting if a constant-current power source is used. Some materials even have an exponentially increasing temperature coefficient; an example of such a material is PTC rubber. Negative temperature coefficient of resistance. A negative temperature coefficient (NTC) refers to materials that experience a decrease in electrical resistance when their temperature is raised. Materials which have useful engineering applications usually show a relatively rapid decrease with temperature, i.e. a lower coefficient. The lower the coefficient, the greater a decrease in electrical resistance for a given temperature increase. NTC materials are used to create inrush current limiters (because they present higher initial resistance until the current limiter reaches quiescent temperature), temperature sensors and thermistors. Negative temperature coefficient of resistance of a semiconductor. An increase in the temperature of a semiconducting material results in an increase in charge-carrier concentration. This results in a higher number of charge carriers available for conduction, increasing the conductivity of the semiconductor. The increasing conductivity causes the resistivity of the semiconductor material to decrease with the rise in temperature, resulting in a negative temperature coefficient of resistance. Temperature coefficient of elasticity. The elastic modulus of elastic materials varies with temperature, typically decreasing with higher temperature. Temperature coefficient of reactivity. In nuclear engineering, the temperature coefficient of reactivity is a measure of the change in reactivity (resulting in a change in power), brought about by a change in temperature of the reactor components or the reactor coolant. This may be defined as formula_15 where formula_16 is reactivity and "T" is temperature. 
The relationship shows that formula_17 is the value of the partial differential of reactivity with respect to temperature and is referred to as the "temperature coefficient of reactivity". As a result, the temperature feedback provided by formula_17 has an intuitive application to passive nuclear safety. A negative formula_17 is broadly cited as important for reactor safety, but wide temperature variations across real reactors (as opposed to a theoretical homogeneous reactor) limit the usability of a single metric as a marker of reactor safety. In water moderated nuclear reactors, the bulk of reactivity changes with respect to temperature are brought about by changes in the temperature of the water. However, each element of the core has a specific temperature coefficient of reactivity (e.g. the fuel or cladding). The mechanisms which drive fuel temperature coefficients of reactivity are different from water temperature coefficients. While water expands as temperature increases, causing longer neutron travel times during moderation, fuel material will not expand appreciably. Changes in reactivity in fuel due to temperature stem from a phenomenon known as Doppler broadening, where resonance absorption of fast neutrons in fuel filler material prevents those neutrons from thermalizing (slowing down). Mathematical derivation of temperature coefficient approximation. In its more general form, the temperature coefficient differential law is: formula_18 where formula_19 is defined as the reference value and formula_13 is taken to be independent of formula_20. Integrating the temperature coefficient differential law: formula_21 Applying the Taylor series approximation at the first order, in the proximity of formula_22, leads to: formula_23 Units. The thermal coefficient of electrical circuit parts is sometimes specified as ppm/°C, or ppm/K. This specifies the fraction (expressed in parts per million) by which its electrical characteristics will deviate when taken to a temperature above or below the operating temperature. References. <templatestyles src="Reflist/styles.css" />
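As a numerical illustration of the linear approximation and of the NTC B-parameter equation discussed above, the following Python sketch evaluates both. It is not part of the original article; the particular numbers (a copper-like coefficient of about 0.0039 per kelvin and a 10 kΩ thermistor with B = 3950 K) are typical values chosen only for the example.

```python
import math

# Illustrative sketch (not from the article): the linear approximation
# R(T) = R0 * (1 + alpha * (T - T0)) for a conductor, and the NTC thermistor
# B-parameter equation R(T) = R0 * exp(B * (1/T - 1/T0)).  The numbers below
# (a copper-like alpha, a 10 kOhm / B = 3950 K thermistor) are typical values
# used only as an example.

def linear_resistance(r0, alpha, t, t0):
    """Linear approximation, valid when alpha * (t - t0) is much less than 1."""
    return r0 * (1.0 + alpha * (t - t0))

def ntc_resistance(r0, b, t_kelvin, t0_kelvin):
    """B-parameter equation for an NTC thermistor (temperatures in kelvin)."""
    return r0 * math.exp(b * (1.0 / t_kelvin - 1.0 / t0_kelvin))

# Copper-like wire, 100 ohm at 20 degC, alpha about 3.9e-3 per kelvin:
print(linear_resistance(100.0, 3.9e-3, 60.0, 20.0))    # about 115.6 ohm at 60 degC

# 10 kOhm NTC thermistor (B = 3950 K, referenced at 25 degC = 298.15 K):
print(ntc_resistance(10e3, 3950.0, 323.15, 298.15))     # resistance drops sharply at 50 degC
```

Because the B-parameter model is exponential, the same temperature rise changes the thermistor's resistance by roughly a factor of three in this example, while the wire's resistance changes by only about 16 percent.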
[ { "math_id": 0, "text": "\\frac{dR}{R} = \\alpha\\,dT" }, { "math_id": 1, "text": "\\alpha\\Delta T \\ll 1" }, { "math_id": 2, "text": "R(T) = R(T_0)(1 + \\alpha\\Delta T)," }, { "math_id": 3, "text": "R = Ae^{\\frac{B}{T}}" }, { "math_id": 4, "text": "R = r^{\\infty}e^{\\frac{B}{T}} = R_{0}e^{-\\frac{B}{T_{0}}}e^{\\frac{B}{T}}" }, { "math_id": 5, "text": "R_{0}" }, { "math_id": 6, "text": "T_{0}" }, { "math_id": 7, "text": "\\text{RTC} = \\frac{|\\Delta\\mathbf{B}_r|}{|\\mathbf{B}_r|\\Delta T} \\times 100\\%" }, { "math_id": 8, "text": "\\operatorname{\\rho}(T) = \\rho_{0}\\left[1 + \\alpha_{0}\\left(T - T_{0}\\right)\\right]" }, { "math_id": 9, "text": "\\alpha_{0} = \\frac{1}{\\rho_{0}}\\left[ \\frac{\\delta \\rho}{\\delta T} \\right]_{T=T_{0}}" }, { "math_id": 10, "text": "\\rho_{0}" }, { "math_id": 11, "text": "\\operatorname{\\rho}(T) = S \\alpha^{\\frac{B}{T}}" }, { "math_id": 12, "text": "S" }, { "math_id": 13, "text": "\\alpha" }, { "math_id": 14, "text": "B" }, { "math_id": 15, "text": "\\alpha_{T} = \\frac{\\partial \\rho}{\\partial T}" }, { "math_id": 16, "text": "\\rho" }, { "math_id": 17, "text": "\\alpha_{T}" }, { "math_id": 18, "text": "\\frac{dR}{dT} = \\alpha\\,R" }, { "math_id": 19, "text": "R_0 = R(T_0)" }, { "math_id": 20, "text": "T" }, { "math_id": 21, "text": "\n \\int_{R_0}^{R(T)}\\frac{dR}{R} = \\int_{T_0}^{T} \\alpha\\,dT ~\\Rightarrow~\n \\ln(R)\\Bigg\\vert_{R_0}^{R(T)} = \\alpha(T - T_0) ~\\Rightarrow~\n \\ln\\left( \\frac{R(T)}{R_0} \\right) = \\alpha(T - T_0) ~\\Rightarrow~\n R(T) = R_0 e^{\\alpha(T-T_0)}\n" }, { "math_id": 22, "text": "T_0" }, { "math_id": 23, "text": "R(T) = R_0(1 + \\alpha(T - T_0))" } ]
https://en.wikipedia.org/wiki?curid=1128719
11288691
Heparosan-N-sulfate-glucuronate 5-epimerase
Heparosan-N-sulfate-glucuronate 5-epimerase (EC 5.1.3.17, "heparosan epimerase", "heparosan-N-sulfate-D-glucuronosyl 5-epimerase", "C-5 uronosyl epimerase", "polyglucuronate epimerase", "D-glucuronyl C-5 epimerase", "poly[(1,4)-beta-D-glucuronosyl-(1,4)-N-sulfo-alpha-D-glucosaminyl] glucurono-5-epimerase") is an enzyme with systematic name "poly((1->4)-beta-D-glucuronosyl-(1->4)-N-sulfo-alpha-D-glucosaminyl) glucurono-5-epimerase". This enzyme catalyses the following chemical reaction heparosan-N-sulfate D-glucuronate formula_0 heparosan-N-sulfate L-iduronate This enzyme acts on D-glucuronosyl residues adjacent to sulfated D-glucosamine units in the heparin precursor. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=11288691
11288874
GMP synthase
Guanosine monophosphate synthetase (EC 6.3.5.2), also known as GMPS, is an enzyme that converts xanthosine monophosphate to guanosine monophosphate. In the de novo synthesis of purine nucleotides, IMP is the branch point metabolite at which point the pathway diverges to the synthesis of either guanine or adenine nucleotides. In the guanine nucleotide pathway, there are 2 enzymes involved in converting IMP to GMP, namely IMP dehydrogenase (IMPD1), which catalyzes the oxidation of IMP to XMP, and GMP synthetase, which catalyzes the amination of XMP to GMP. Enzymology. In enzymology, a GMP synthetase (glutamine-hydrolysing) (EC 6.3.5.2) is an enzyme that catalyzes the chemical reaction ATP + xanthosine 5'-phosphate + L-glutamine + H2O formula_0 AMP + diphosphate + GMP + L-glutamate The 4 substrates of this enzyme are ATP, xanthosine 5'-phosphate, L-glutamine, and H2O, whereas its 4 products are AMP, diphosphate, GMP, and L-glutamate. This enzyme belongs to the family of ligases, specifically the carbon-nitrogen ligases with glutamine as amido-N-donor. The systematic name of this enzyme class is xanthosine-5'-phosphate:L-glutamine amido-ligase (AMP-forming). This enzyme participates in purine metabolism and glutamate metabolism. At least one compound, psicofuranine, is known to inhibit this enzyme. Structural studies. As of late 2007, 5 structures have been solved for this class of enzymes, with PDB accession codes 1GPM, 1WL8, 2A9V, 2D7J, and 2DPL. Role in metabolism. Purine metabolism. GMP synthase catalyzes the second step in the generation of GMP from IMP; the first step occurs when IMP dehydrogenase generates XMP, and then GMP synthetase is able to react with glutamine and ATP to generate GMP. IMP may also be converted into AMP by adenylosuccinate synthetase and then adenylosuccinate lyase. Amino acid metabolism. GMP synthase is also involved in amino acid metabolism because it generates L-glutamate from L-glutamine. Organismal involvement. This enzyme is widely distributed and a number of crystal structures have been solved, including in "Escherichia coli", "Pyrococcus horikoshii", "Thermoplasma acidophilum", "Homo sapiens", "Thermus thermophilus" and "Mycobacterium tuberculosis". The most extensive structural studies have been done in E. coli. Structure and function. GMP synthase forms a tetramer in an open box shape, which is a dimer of dimers. The R interfaces are held together with a hydrophobic core and a beta sheet, while the P dimer interfaces do not have a hydrophobic core and are more variable than the R interfaces. This enzyme also binds several ligands, including phosphate, pyrophosphate, AMP, citrate and magnesium. Class I Amidotransferase Domain. The amidotransferase domain is responsible for removal of the amide nitrogen from the glutamine substrate. The class I amidotransferase domain is made up of the N-terminal 206 residues of the enzyme, and consists of 12 beta strands and 5 alpha helices; the core of this domain is an open 7-stranded mixed beta sheet. Its catalytic triad includes Cys86, His181 and Glu183. His181 acts as a base and Glu183 is a hydrogen bond acceptor from the histidine imidazole ring. Cys86 is the catalytic residue and is conserved. It falls into a nucleophile elbow, where it is at the end of a beta strand and the beginning of an alpha helix, and has little flexibility in its phi and psi angles; thus, Gly84 and Gly88 are conserved and allow for the tight packing of amino acids surrounding the catalytic residue. 
Synthetase Domain: ATP Pyrophosphatase domain. The synthetase domain is responsible for the addition of the abstracted Nitrogen to the acceptor substrate. The ATP Pyrophosphatase domain consists of a beta sheet containing 5 parallel strands with several alpha helices on each side. The P loop is the nucleotide binding motif; residues 235-241 make up the P loop which specifically binds to pyrophosphate. The structure of this domain is what creates the specificity of this enzyme for ATP. The binding pocket forms hydrophobic interactions with the adenine ring, and the backbone of Val260 forms H bonds with multiple Nitrogens in the ring of AMP, which excludes substituents on the C2 purine ring. This creates extreme specificity for adenine and ATP binding. References. <templatestyles src="Reflist/styles.css" /> Further reading. <templatestyles src="Refbegin/styles.css" />
[ { "math_id": 0, "text": "\\rightleftharpoons" } ]
https://en.wikipedia.org/wiki?curid=11288874
11288998
Alternating decision tree
An alternating decision tree (ADTree) is a machine learning method for classification. It generalizes decision trees and has connections to boosting. An ADTree consists of an alternation of decision nodes, which specify a predicate condition, and prediction nodes, which contain a single number. An instance is classified by an ADTree by following all paths for which all decision nodes are true, and summing any prediction nodes that are traversed. History. ADTrees were introduced by Yoav Freund and Llew Mason. However, the algorithm as presented had several typographical errors. Clarifications and optimizations were later presented by Bernhard Pfahringer, Geoffrey Holmes and Richard Kirkby. Implementations are available in Weka and JBoost. Motivation. Original boosting algorithms typically used either decision stumps or decision trees as weak hypotheses. As an example, boosting decision stumps creates a set of formula_0 weighted decision stumps (where formula_0 is the number of boosting iterations), which then vote on the final classification according to their weights. Individual decision stumps are weighted according to their ability to classify the data. Boosting a simple learner results in an unstructured set of formula_0 hypotheses, making it difficult to infer correlations between attributes. Alternating decision trees introduce structure to the set of hypotheses by requiring that they build off a hypothesis that was produced in an earlier iteration. The resulting set of hypotheses can be visualized in a tree based on the relationship between a hypothesis and its "parent." Another important feature of boosted algorithms is that the data is given a different distribution at each iteration. Instances that are misclassified are given a larger weight while accurately classified instances are given reduced weight. Alternating decision tree structure. An alternating decision tree consists of decision nodes and prediction nodes. Decision nodes specify a predicate condition. Prediction nodes contain a single number. ADTrees always have prediction nodes as both root and leaves. An instance is classified by an ADTree by following all paths for which all decision nodes are true and summing any prediction nodes that are traversed. This is different from binary classification trees such as CART (Classification and regression tree) or C4.5 in which an instance follows only one path through the tree. Example. The following tree was constructed using JBoost on the spambase dataset (available from the UCI Machine Learning Repository). In this example, spam is coded as and regular email is coded as . The following table contains part of the information for a single instance. The instance is scored by summing all of the prediction nodes through which it passes. In the case of the instance above, the score is calculated as The final score of is positive, so the instance is classified as spam. The magnitude of the value is a measure of confidence in the prediction. The original authors list three potential levels of interpretation for the set of attributes identified by an ADTree: Care must be taken when interpreting individual nodes as the scores reflect a re weighting of the data in each iteration. Description of the algorithm. The inputs to the alternating decision tree algorithm are: The fundamental element of the ADTree algorithm is the rule. A single rule consists of a precondition, a condition, and two scores. A condition is a predicate of the form "attribute <comparison> value." 
A precondition is simply a logical conjunction of conditions. Evaluation of a rule involves a pair of nested if statements: 1 if (precondition) 2 if (condition) 3 return score_one 4 else 5 return score_two 6 end if 7 else 8 return 0 9 end if Several auxiliary functions are also required by the algorithm: The algorithm is as follows: 1 function ad_tree 2 input Set of m training instances 3 4 "wi" = 1/"m" for all i 5 formula_9 6 "R"0 = a rule with scores a and 0, precondition "true" and condition "true." 7 formula_10 8 formula_11 the set of all possible conditions 9 for formula_12 10 formula_13 get values that minimize formula_14 11 formula_15 12 formula_16 13 formula_17 14 "Rj" = new rule with precondition p, condition c, and weights "a"1 and "a"2 15 formula_18 16 end for 17 return set of "Rj" The set formula_19 grows by two preconditions in each iteration, and it is possible to derive the tree structure of a set of rules by making note of the precondition that is used in each successive rule. Empirical results. Figure 6 in the original paper demonstrates that ADTrees are typically as robust as boosted decision trees and boosted decision stumps. Typically, equivalent accuracy can be achieved with a much simpler tree structure than recursive partitioning algorithms. References. <templatestyles src="Reflist/styles.css" />
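The rule evaluation and scoring procedure described above can be made concrete with a short sketch. The following Python code is illustrative only; the Rule class, the toy attribute names and the numerical thresholds and scores are assumptions of the example, not the spambase model discussed above.

```python
# Minimal sketch (not from the original paper or JBoost): scoring an instance
# with a set of ADTree rules.  Each rule holds a precondition, a condition and
# two scores, mirroring the nested-if evaluation described above.  The toy
# rules, attribute names and score values are made-up illustrations.

class Rule:
    def __init__(self, precondition, condition, score_one, score_two):
        self.precondition = precondition   # predicate: instance -> bool
        self.condition = condition         # predicate: instance -> bool
        self.score_one = score_one         # returned when precondition and condition hold
        self.score_two = score_two         # returned when only the precondition holds

    def evaluate(self, instance):
        if self.precondition(instance):
            return self.score_one if self.condition(instance) else self.score_two
        return 0.0                         # rule does not apply to this instance

def adtree_score(rules, instance):
    """Sum of all prediction nodes traversed; the sign gives the class label."""
    return sum(rule.evaluate(instance) for rule in rules)

# Toy rule set: the first rule plays the role of the root prediction R0
# (precondition true, condition true, scores a and 0).
rules = [
    Rule(lambda x: True, lambda x: True, -0.1, 0.0),
    Rule(lambda x: True, lambda x: x["dollar_freq"] > 0.05, 0.7, -0.4),
    Rule(lambda x: x["dollar_freq"] > 0.05, lambda x: x["capital_run"] > 100, 0.9, -0.2),
]

instance = {"dollar_freq": 0.18, "capital_run": 230}
score = adtree_score(rules, instance)
print(score, "spam" if score > 0 else "not spam")
```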
[ { "math_id": 0, "text": "T" }, { "math_id": 1, "text": "(x_1,y_1),\\ldots,(x_m,y_m)" }, { "math_id": 2, "text": "x_i" }, { "math_id": 3, "text": "y_i" }, { "math_id": 4, "text": "w_i" }, { "math_id": 5, "text": "W_+(c)" }, { "math_id": 6, "text": "c" }, { "math_id": 7, "text": "W_-(c)" }, { "math_id": 8, "text": "W(c) = W_+(c) + W_-(c)" }, { "math_id": 9, "text": "a = \\frac 1 2 \\textrm{ln}\\frac{W_+(true)}{W_-(true)}" }, { "math_id": 10, "text": "\\mathcal{P} = \\{true\\}" }, { "math_id": 11, "text": "\\mathcal{C} = " }, { "math_id": 12, "text": "j = 1 \\dots T" }, { "math_id": 13, "text": "p \\in \\mathcal{P}, c \\in \\mathcal{C} " }, { "math_id": 14, "text": " z = 2 \\left( \\sqrt{W_+(p \\wedge c) W_-(p \\wedge c)} + \\sqrt{W_+(p \\wedge \\neg c) W_-(p \\wedge \\neg c)} \\right) +W(\\neg p) " }, { "math_id": 15, "text": "\\mathcal{P} += p \\wedge c + p \\wedge \\neg c" }, { "math_id": 16, "text": "a_1=\\frac{1}{2}\\textrm{ln}\\frac{W_+(p\\wedge c)+1}{W_-(p \\wedge c)+1}" }, { "math_id": 17, "text": "a_2=\\frac{1}{2}\\textrm{ln}\\frac{W_+(p\\wedge \\neg c)+1}{W_-(p \\wedge \\neg c)+1}" }, { "math_id": 18, "text": "w_i = w_i e^{ -y_i R_j(x_i) }" }, { "math_id": 19, "text": "\\mathcal{P}" } ]
https://en.wikipedia.org/wiki?curid=11288998
1129074
Grand canonical ensemble
Statistical ensemble of particles in thermodynamic equilibrium In statistical mechanics, the grand canonical ensemble (also known as the macrocanonical ensemble) is the statistical ensemble that is used to represent the possible states of a mechanical system of particles that are in thermodynamic equilibrium (thermal and chemical) with a reservoir. The system is said to be open in the sense that the system can exchange energy and particles with a reservoir, so that various possible states of the system can differ in both their total energy and total number of particles. The system's volume, shape, and other external coordinates are kept the same in all possible states of the system. The thermodynamic variables of the grand canonical ensemble are chemical potential (symbol: "µ") and absolute temperature (symbol: "T"). The ensemble is also dependent on mechanical variables such as volume (symbol: "V"), which influence the nature of the system's internal states. This ensemble is therefore sometimes called the "µVT" ensemble, as each of these three quantities are constants of the ensemble. Basics. In simple terms, the grand canonical ensemble assigns a probability "P" to each distinct microstate given by the following exponential: formula_0 where "N" is the number of particles in the microstate and "E" is the total energy of the microstate. "k" is the Boltzmann constant. The number Ω is known as the grand potential and is constant for the ensemble. However, the probabilities and Ω will vary if different "µ", "V", "T" are selected. The grand potential Ω serves two roles: to provide a normalization factor for the probability distribution (the probabilities, over the complete set of microstates, must add up to one); and, many important ensemble averages can be directly calculated from the function Ω("µ", "V", "T"). In the case where more than one kind of particle is allowed to vary in number, the probability expression generalizes to formula_1 where "µ"1 is the chemical potential for the first kind of particles, "N"1 is the number of that kind of particle in the microstate, "µ"2 is the chemical potential for the second kind of particles and so on ("s" is the number of distinct kinds of particles). However, these particle numbers should be defined carefully (see the note on particle number conservation below). The distribution of the grand canonical ensemble is called generalized Boltzmann distribution by some authors. Grand ensembles are apt for use when describing systems such as the electrons in a conductor, or the photons in a cavity, where the shape is fixed but the energy and number of particles can easily fluctuate due to contact with a reservoir (e.g., an electrical ground or a dark surface, in these cases). The grand canonical ensemble provides a natural setting for an exact derivation of the Fermi–Dirac statistics or Bose–Einstein statistics for a system of non-interacting quantum particles (see examples below). An alternative formulation for the same concept writes the probability as formula_2, using the grand partition function formula_3 rather than the grand potential. The equations in this article (in terms of grand potential) may be restated in terms of the grand partition function by simple mathematical manipulations. Applicability. 
The grand canonical ensemble is the ensemble that describes the possible states of an isolated system that is in thermal and chemical equilibrium with a reservoir (the derivation proceeds along lines analogous to the heat bath derivation of the normal canonical ensemble, and can be found in Reif). The grand canonical ensemble applies to systems of any size, small or large; it is only necessary to assume that the reservoir with which it is in contact is much larger (i.e., to take the macroscopic limit). The condition that the system is isolated is necessary in order to ensure it has well-defined thermodynamic quantities and evolution. In practice, however, it is desirable to apply the grand canonical ensemble to describe systems that are in direct contact with the reservoir, since it is that contact that ensures the equilibrium. The use of the grand canonical ensemble in these cases is usually justified either 1) by assuming that the contact is weak, or 2) by incorporating a part of the reservoir connection into the system under analysis, so that the connection's influence on the region of interest is correctly modeled. Alternatively, theoretical approaches can be used to model the influence of the connection, yielding an open statistical ensemble. Another case in which the grand canonical ensemble appears is when considering a system that is large and thermodynamic (a system that is "in equilibrium with itself"). Even if the exact conditions of the system do not actually allow for variations in energy or particle number, the grand canonical ensemble can be used to simplify calculations of some thermodynamic properties. The reason for this is that various thermodynamic ensembles (microcanonical, canonical) become equivalent in some aspects to the grand canonical ensemble, once the system is very large. Of course, for small systems, the different ensembles are no longer equivalent even in the mean. As a result, the grand canonical ensemble can be highly inaccurate when applied to small systems of fixed particle number, such as atomic nuclei. Grand potential, ensemble averages, and exact differentials. The partial derivatives of the function Ω("µ"1, …, "µ""s", "V", "T") give important grand canonical ensemble average quantities: "Exact differential": From the above expressions, it can be seen that the function Ω has the exact differential formula_4 "First law of thermodynamics": Substituting the above relationship for ⟨"E"⟩ into the exact differential of Ω, an equation similar to the first law of thermodynamics is found, except with average signs on some of the quantities: formula_5 "Thermodynamic fluctuations": The variances in energy and particle numbers are formula_6 formula_7 "Correlations in fluctuations": The covariances of particle numbers and energy are formula_8 formula_9 Example ensembles. The usefulness of the grand canonical ensemble is illustrated in the examples below. In each case the grand potential is calculated on the basis of the relationship formula_10 which is required for the microstates' probabilities to add up to 1. Statistics of noninteracting particles. Bosons and fermions (quantum). In the special case of a quantum system of many "non-interacting" particles, the thermodynamics are simple to compute. Since the particles are non-interacting, one can compute a series of single-particle stationary states, each of which represent a separable part that can be included into the total quantum state of the system. 
For now let us refer to these single-particle stationary states as "orbitals" (to avoid confusing these "states" with the total many-body state), with the provision that each possible internal particle property (spin or polarization) counts as a separate orbital. Each orbital may be occupied by a particle (or particles), or may be empty. Since the particles are non-interacting, we may take the viewpoint that "each orbital forms a separate thermodynamic system". Thus each orbital is a grand canonical ensemble unto itself, one so simple that its statistics can be immediately derived here. Focusing on just one orbital labelled "i", the total energy for a microstate of "N" particles in this orbital will be "Nϵ""i", where "ϵ""i" is the characteristic energy level of that orbital. The grand potential for the orbital is given by one of two forms, depending on whether the orbital is bosonic or fermionic: In each case the value formula_11 gives the thermodynamic average number of particles on the orbital: the Fermi–Dirac distribution for fermions, and the Bose–Einstein distribution for bosons. Considering again the entire system, the total grand potential is found by adding up the Ω"i" for all orbitals. Indistinguishable classical particles. In classical mechanics it is also possible to consider indistinguishable particles (in fact, indistinguishability is a prerequisite for defining a chemical potential in a consistent manner; all particles of a given kind must be interchangeable). We again consider placing multiple particles of the same kind into the same microstate of single-particle phase space, which we again call an "orbital". However, compared to quantum mechanics, the classical case is complicated by the fact that a microstate in classical mechanics does not refer to a single point in phase space but rather to an extended region in phase space: one microstate contains an infinite number of states, all distinct but of similar character. As a result, when multiple particles are placed into the same orbital, the overall collection of the particles (in the system phase space) does not count as one whole microstate but rather only a "fraction" of a microstate, because identical states (formed by permutation of identical particles) should not be overcounted. The overcounting correction factor is the factorial of the number of particles. The statistics in this case take the form of an exponential power series formula_12 the value formula_13 corresponding to Maxwell–Boltzmann statistics. Ionization of an isolated atom. The grand canonical ensemble can be used to predict whether an atom prefers to be in a neutral state or ionized state. An atom is able to exist in ionized states with more or fewer electrons compared to neutral. As shown below, ionized states may be thermodynamically preferred depending on the environment. Consider a simplified model where the atom can be in a neutral state or in one of two ionized states (a detailed calculation also includes the degeneracy factors of the states): Here Δ"E"I and Δ"E"A are the atom's ionization energy and electron affinity, respectively; "ϕ" is the local electrostatic potential in the vacuum nearby the atom, and −"q" is the electron charge. The grand potential in this case is thus determined by formula_14 The quantity −"qϕ" − "µ" is critical in this case, for determining the balance between the various states. This value is determined by the environment around the atom. 
If one of these atoms is placed into a vacuum box, then −"qϕ" − "µ" = "W", the work function of the box lining material. Comparing the tables of work function for various solid materials with the tables of electron affinity and ionization energy for atomic species, it is clear that many combinations would result in a neutral atom, however some specific combinations would result in the atom preferring an ionized state: e.g., a halogen atom in a ytterbium box, or a cesium atom in a tungsten box. At room temperature this situation is not stable since the atom tends to adsorb to the exposed lining of the box instead of floating freely. At high temperatures, however, the atoms are evaporated from the surface in ionic form; this spontaneous surface ionization effect has been used as a cesium ion source. At room temperature, this example finds application in semiconductors, where the ionization of a dopant atom is well described by this ensemble. In the semiconductor, the conduction band edge "ϵ"C plays the role of the vacuum energy level (replacing −"qϕ"), and "µ" is known as the Fermi level. Of course, the ionization energy and electron affinity of the dopant atom are strongly modified relative to their vacuum values. A typical donor dopant in silicon, phosphorus, has Δ"E"I =; the value of "ϵ"C − "µ" in the intrinsic silicon is initially about , guaranteeing the ionization of the dopant. The value of "ϵ"C − "µ" depends strongly on electrostatics, however, so under some circumstances it is possible to de-ionize the dopant. Meaning of chemical potential, generalized "particle number". In order for a particle number to have an associated chemical potential, it must be conserved during the internal dynamics of the system, and only able to change when the system exchanges particles with an external reservoir. If the particles can be created out of energy during the dynamics of the system, then an associated "µN" term must not appear in the probability expression for the grand canonical ensemble. In effect, this is the same as requiring that "µ" = 0 for that kind of particle. Such is the case for photons in a black cavity, whose number regularly changes due to absorption and emission on the cavity walls. (On the other hand, photons in a highly reflective cavity can be conserved and caused to have a nonzero "µ".) In some cases the number of particles is not conserved and the "N" represents a more abstract conserved quantity: (particle number - antiparticle number) is conserved. As particle energies increase, there are more possibilities to convert between particle types, and so there are fewer numbers that are truly conserved. At the very highest energies, the only conserved numbers are electric charge, weak isospin, and baryon–lepton number difference. On the other hand, in some cases a single kind of particle may have multiple conserved numbers: Precise expressions for the ensemble. The precise mathematical expression for statistical ensembles has a distinct form depending on the type of mechanics under consideration (quantum or classical), as the notion of a "microstate" is considerably different. In quantum mechanics, the grand canonical ensemble affords a simple description since diagonalization provides a set of distinct microstates of a system, each with well-defined energy and particle number. The classical mechanical case is more complex as it involves not stationary states but instead an integral over canonical phase space. Quantum mechanical. 
A statistical ensemble in quantum mechanics is represented by a density matrix, denoted by formula_15. The grand canonical ensemble is the density matrix formula_16 where "Ĥ" is the system's total energy operator (Hamiltonian), "N̂"1 is the system's total particle number operator for particles of type 1, "N̂"2 is the total particle number operator for particles of type 2, and so on. exp is the matrix exponential operator. The grand potential Ω is determined by the probability normalization condition that the density matrix has a trace of one, formula_17: formula_18 Note that for the grand ensemble, the basis states of the operators "Ĥ", "N̂"1, etc. are all states with "multiple particles" in Fock space, and the density matrix is defined on the same basis. Since the energy and particle numbers are all separately conserved, these operators are mutually commuting. The grand canonical ensemble can alternatively be written in a simple form using bra–ket notation, since it is possible (given the mutually commuting nature of the energy and particle number operators) to find a complete basis of simultaneous eigenstates |"ψ""i"⟩, indexed by "i", where "Ĥ"|"ψ""i"⟩ = "E""i"|"ψ""i"⟩, "N̂"1|"ψ""i"⟩ = "N"1,"i"|"ψ""i"⟩, and so on. Given such an eigenbasis, the grand canonical ensemble is simply formula_19 formula_20 where the sum is over the complete set of states with state "i" having "E""i" total energy, "N"1,"i" particles of type 1, "N"2,"i" particles of type 2, and so on. Classical mechanical. In classical mechanics, a grand ensemble is instead represented by a joint probability density function defined over multiple phase spaces of varying dimensions, "ρ"("N"1, … "N""s", "p"1, … "p""n", "q"1, … "q""n"), where the "p"1, … "p""n" and "q"1, … "q""n" are the canonical coordinates (generalized momenta and generalized coordinates) of the system's internal degrees of freedom. The expression for the grand canonical ensemble is somewhat more delicate than the canonical ensemble since: In a system of particles, the number of degrees of freedom "n" depends on the number of particles in a way that depends on the physical situation. For example, in a three-dimensional gas of monoatoms "n" = 3"N", however in molecular gases there will also be rotational and vibrational degrees of freedom. The probability density function for the grand canonical ensemble is: formula_21 where, again, the value of Ω is determined by demanding that "ρ" is a normalized probability density function: formula_22 This integral is taken over the entire available phase space for the given numbers of particles. Overcounting correction. A well-known problem in the statistical mechanics of fluids (gases, liquids, plasmas) is how to treat particles that are similar or identical in nature: should they be regarded as distinguishable or not? In the system's equation of motion each particle is forever tracked as a distinguishable entity, and yet there are also valid states of the system where the positions of each particle have simply been swapped: these states are represented at different places in phase space, yet would seem to be equivalent. If the permutations of similar particles are regarded to count as distinct states, then the factor "C" above is simply "C" = 1. From this point of view, ensembles include every permuted state as a separate microstate. Although appearing benign at first, this leads to a problem of severely non-extensive entropy in the canonical ensemble, known today as the Gibbs paradox. 
In the grand canonical ensemble a further logical inconsistency occurs: the number of distinguishable permutations depends not only on how many particles are in the system, but also on how many particles are in the reservoir (since the system may exchange particles with a reservoir). In this case the entropy and chemical potential are non-extensive but also badly defined, depending on a parameter (reservoir size) that should be irrelevant. To solve these issues it is necessary that the exchange of two similar particles (within the system, or between the system and reservoir) must not be regarded as giving a distinct state of the system. In order to incorporate this fact, integrals are still carried over full phase space but the result is divided by formula_23 which is the number of different permutations possible. The division by "C" neatly corrects the overcounting that occurs in the integral over all phase space. It is of course possible to include distinguishable "types" of particles in the grand canonical ensemble—each distinguishable type formula_24 is tracked by a separate particle counter formula_25 and chemical potential formula_26. As a result, the only consistent way to include "fully distinguishable" particles in the grand canonical ensemble is to consider every possible distinguishable type of those particles, and to track each and every possible type with a separate particle counter and separate chemical potential. Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
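The single-orbital results referred to above (the Fermi–Dirac and Bose–Einstein distributions, and the classical Maxwell–Boltzmann limit) can be evaluated numerically from the mean occupation ⟨N⟩ = −∂Ω/∂µ of one orbital. The following Python sketch is illustrative and not part of the article; the function names and the numerical check are assumptions of the example.

```python
import math

# Illustrative sketch (not from the article): mean occupation <N> = -dOmega/dmu
# of a single orbital with energy eps, in contact with a reservoir at
# temperature T and chemical potential mu.  The three closed forms below are
# the standard Fermi-Dirac, Bose-Einstein and Maxwell-Boltzmann results that
# the section on noninteracting particles above refers to; the function names
# and the example numbers are assumptions of this sketch.

k = 8.617333262e-5            # Boltzmann constant in eV/K

def fermi_dirac(eps, mu, T):
    """Mean fermion occupation of one orbital (between 0 and 1)."""
    return 1.0 / (math.exp((eps - mu) / (k * T)) + 1.0)

def bose_einstein(eps, mu, T):
    """Mean boson occupation of one orbital (requires eps > mu)."""
    return 1.0 / (math.exp((eps - mu) / (k * T)) - 1.0)

def maxwell_boltzmann(eps, mu, T):
    """Classical indistinguishable-particle limit <N> = exp((mu - eps)/kT)."""
    return math.exp((mu - eps) / (k * T))

# Far above the chemical potential the occupation is small and all three
# statistics agree, as expected in the dilute (classical) limit:
eps, mu, T = 0.5, 0.0, 300.0   # energies in eV, temperature in K
print(fermi_dirac(eps, mu, T), bose_einstein(eps, mu, T), maxwell_boltzmann(eps, mu, T))
```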
[ { "math_id": 0, "text": "P = e^{{(\\Omega + \\mu N - E)}/{(k T)}}," }, { "math_id": 1, "text": "P = e^{{(\\Omega + \\mu_1 N_1 + \\mu_2 N_2 + \\ldots + \\mu_s N_s - E)}/{(k T)}}," }, { "math_id": 2, "text": "\\textstyle P = \\frac{1}{\\mathcal Z} e^{(\\mu N-E)/(k T)}" }, { "math_id": 3, "text": "\\textstyle \\mathcal Z = e^{-\\Omega/(k T)}" }, { "math_id": 4, "text": " d\\Omega = - S dT - \\langle N_1 \\rangle d\\mu_1 \\ldots - \\langle N_s \\rangle d\\mu_s - \\langle p\\rangle dV ." }, { "math_id": 5, "text": " d\\langle E \\rangle = T dS + \\mu_1 d\\langle N_1 \\rangle \\ldots + \\mu_s d\\langle N_s \\rangle - \\langle p\\rangle dV ." }, { "math_id": 6, "text": " \\langle E^2 \\rangle - \\langle E \\rangle^2 = k T^2 \\frac{\\partial \\langle E \\rangle} {\\partial T} + k T \\mu_1 \\frac{\\partial \\langle E \\rangle} {\\partial \\mu_1} + k T \\mu_2 \\frac{\\partial \\langle E \\rangle} {\\partial \\mu_2} + \\ldots ," }, { "math_id": 7, "text": " \\langle N_1^2 \\rangle - \\langle N_1 \\rangle^2 = k T \\frac{\\partial \\langle N_1 \\rangle} {\\partial \\mu_1}." }, { "math_id": 8, "text": " \\langle N_1 N_2 \\rangle - \\langle N_1 \\rangle\\langle N_2\\rangle = k T \\frac{\\partial \\langle N_2 \\rangle} {\\partial \\mu_1} = k T \\frac{\\partial \\langle N_1 \\rangle} {\\partial \\mu_2}." }, { "math_id": 9, "text": " \\langle N_1 E \\rangle - \\langle N_1 \\rangle\\langle E\\rangle = k T \\frac{\\partial \\langle E \\rangle} {\\partial \\mu_1}," }, { "math_id": 10, "text": "\\Omega = -kT \\ln \\Big(\\sum_\\text{microstates} e^{{(\\mu N - E)}/{(kT)}} \\Big)" }, { "math_id": 11, "text": "\\textstyle\\langle N_i\\rangle = -\\tfrac{\\partial \\Omega_i}{\\partial \\mu}" }, { "math_id": 12, "text": "\\begin{align}\n\\Omega_{\\rm orb} & = -kT \\ln \\Big( \\sum_{N=0}^{\\infty} \\frac{1}{N!} e^{{(N\\mu - N\\epsilon_{\\rm orb})}/{(k T)}}\\Big) \\\\\n & = -kT \\ln \\Big( e^{e^{{(\\mu - \\epsilon_{\\rm orb})}/{(k T)}}}\\Big) \\\\\n & = - kT e^{\\frac{\\mu - \\epsilon_{\\rm orb}}{k T}},\n\\end{align}" }, { "math_id": 13, "text": "\\scriptstyle\\langle N_{\\rm orb}\\rangle = -\\tfrac{\\partial \\Omega_{\\rm orb}}{\\partial \\mu}" }, { "math_id": 14, "text": "\n\\begin{align}\n\\Omega\n& = -kT \\ln \\Big(e^{{(\\mu N_0 - E_0)}/{(k T)}} + e^{{(\\mu N_0 - \\mu - E_0 - \\Delta E_{\\rm I} - q\\phi)}/{(k T)}} + e^{{(\\mu N_0 + \\mu - E_0 + \\Delta E_{\\rm A} + q\\phi)}/{(k T)}}\\Big). \\\\\n& = E_0 - \\mu N_0 -kT \\ln \\Big( 1 + e^{{(-\\mu - \\Delta E_{\\rm I} - q\\phi)}/{(k T)}} + e^{{(\\mu + \\Delta E_{\\rm A} + q\\phi)}/{(k T)}}\\Big). \\\\\n\\end{align}\n" }, { "math_id": 15, "text": "\\hat \\rho" }, { "math_id": 16, "text": "\\hat \\rho = \\exp\\big(\\tfrac{1}{kT}(\\Omega + \\mu_1 \\hat N_1 + \\ldots + \\mu_s \\hat N_s - \\hat H)\\big)," }, { "math_id": 17, "text": " Tr \\hat \\rho = 1" }, { "math_id": 18, "text": "e^{-\\frac{\\Omega}{k T}} = \\operatorname{Tr} \\exp\\big(\\tfrac{1}{kT}(\\mu_1 \\hat N_1 + \\ldots + \\mu_s \\hat N_s - \\hat H)\\big)." }, { "math_id": 19, "text": "\\hat \\rho = \\sum_i e^{\\frac{\\Omega + \\mu_1 N_{1,i} + \\ldots + \\mu_s N_{s,i} - E_i}{k T}} |\\psi_i\\rangle \\langle \\psi_i | " }, { "math_id": 20, "text": "e^{-\\frac{\\Omega}{k T}} = \\sum_i e^{\\frac{\\mu_1 N_{1,i} + \\ldots + \\mu_s N_{s,i} - E_i}{k T}}." 
}, { "math_id": 21, "text": "\\rho = \\frac{1}{h^n C} e^{\\frac{\\Omega + \\mu_1 N_1 + \\ldots + \\mu_s N_s - E}{k T}}," }, { "math_id": 22, "text": "e^{-\\frac{\\Omega}{k T}} = \\sum_{N_1 = 0}^{\\infty} \\ldots \\sum_{N_s = 0}^{\\infty} \\int \\ldots \\int \\frac{1}{h^n C} e^{\\frac{\\mu_1 N_1 + \\ldots + \\mu_s N_s - E}{k T}} \\, dp_1 \\ldots dq_n " }, { "math_id": 23, "text": "C = N_1! N_2! \\ldots N_s!," }, { "math_id": 24, "text": "i" }, { "math_id": 25, "text": "N_i" }, { "math_id": 26, "text": "\\mu_i" } ]
https://en.wikipedia.org/wiki?curid=1129074
11291250
Monotonically normal space
Property of topological spaces stronger than normality In mathematics, specifically in the field of topology, a monotonically normal space is a particular kind of normal space, defined in terms of a monotone normality operator. It satisfies some interesting properties; for example metric spaces and linearly ordered spaces are monotonically normal, and every monotonically normal space is hereditarily normal. Definition. A topological space formula_0 is called monotonically normal if it satisfies any of the following equivalent definitions: Definition 1. The space formula_0 is T1 and there is a function formula_1 that assigns to each ordered pair formula_2 of disjoint closed sets in formula_0 an open set formula_3 such that: (i) formula_4; (ii) formula_5 whenever formula_6 and formula_7. Condition (i) says formula_0 is a normal space, as witnessed by the function formula_1. Condition (ii) says that formula_3 varies in a monotone fashion, hence the terminology "monotonically normal". The operator formula_1 is called a monotone normality operator. One can always choose formula_1 to satisfy the property formula_8, by replacing each formula_3 by formula_9. Definition 2. The space formula_0 is T1 and there is a function formula_1 that assigns to each ordered pair formula_2 of separated sets in formula_0 (that is, such that formula_10) an open set formula_3 satisfying the same conditions (i) and (ii) of Definition 1. Definition 3. The space formula_0 is T1 and there is a function formula_11 that assigns to each pair formula_12 with formula_13 open in formula_0 and formula_14 an open set formula_15 such that: (i) formula_16; (ii) if formula_17, then formula_18 or formula_19. Such a function formula_11 automatically satisfies formula_20. Definition 4. Let formula_28 be a base for the topology of formula_0. The space formula_0 is T1 and there is a function formula_11 that assigns to each pair formula_12 with formula_29 and formula_14 an open set formula_15 satisfying the same conditions (i) and (ii) of Definition 3. Definition 5. The space formula_0 is T1 and there is a function formula_11 that assigns to each pair formula_12 with formula_13 open in formula_0 and formula_14 an open set formula_15 such that: (i) formula_16; (ii) if formula_13 and formula_22 are open and formula_30, then formula_31; (iii) if formula_32 and formula_23 are distinct points, then formula_33. Such a function formula_11 automatically satisfies all conditions of Definition 3. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "(A,B)" }, { "math_id": 3, "text": "G(A,B)" }, { "math_id": 4, "text": "A\\subseteq G(A,B)\\subseteq \\overline{G(A,B)}\\subseteq X\\setminus B" }, { "math_id": 5, "text": "G(A,B)\\subseteq G(A',B')" }, { "math_id": 6, "text": "A\\subseteq A'" }, { "math_id": 7, "text": "B'\\subseteq B" }, { "math_id": 8, "text": "G(A,B)\\cap G(B,A)=\\emptyset" }, { "math_id": 9, "text": "G(A,B)\\setminus\\overline{G(B,A)}" }, { "math_id": 10, "text": "A\\cap\\overline{B}=B\\cap\\overline{A}=\\emptyset" }, { "math_id": 11, "text": "\\mu" }, { "math_id": 12, "text": "(x,U)" }, { "math_id": 13, "text": "U" }, { "math_id": 14, "text": "x\\in U" }, { "math_id": 15, "text": "\\mu(x,U)" }, { "math_id": 16, "text": "x\\in\\mu(x,U)" }, { "math_id": 17, "text": "\\mu(x,U)\\cap\\mu(y,V)\\ne\\emptyset" }, { "math_id": 18, "text": "x\\in V" }, { "math_id": 19, "text": "y\\in U" }, { "math_id": 20, "text": "x\\in\\mu(x,U)\\subseteq\\overline{\\mu(x,U)}\\subseteq U" }, { "math_id": 21, "text": "y\\in X\\setminus U" }, { "math_id": 22, "text": "V" }, { "math_id": 23, "text": "y" }, { "math_id": 24, "text": "x\\notin V" }, { "math_id": 25, "text": "\\mu(x,U)\\cap\\mu(y,V)=\\emptyset" }, { "math_id": 26, "text": "\\mu(y,V)" }, { "math_id": 27, "text": "y\\notin\\overline{\\mu(x,U)}" }, { "math_id": 28, "text": "\\mathcal{B}" }, { "math_id": 29, "text": "U\\in\\mathcal{B}" }, { "math_id": 30, "text": "x\\in U\\subseteq V" }, { "math_id": 31, "text": "\\mu(x,U)\\subseteq\\mu(x,V)" }, { "math_id": 32, "text": "x" }, { "math_id": 33, "text": "\\mu(x,X\\setminus\\{y\\})\\cap\\mu(y,X\\setminus\\{x\\})=\\emptyset" }, { "math_id": 34, "text": "[a,b)" }, { "math_id": 35, "text": "x\\in[a,b)" }, { "math_id": 36, "text": "\\mu(x,[a,b))=[x,b)" } ]
https://en.wikipedia.org/wiki?curid=11291250
11291348
Intel 8255
Programmable Peripheral Interface chip The Intel 8255 (or i8255) Programmable Peripheral Interface (PPI) chip was developed and manufactured by Intel in the first half of the 1970s for the Intel 8080 microprocessor. The 8255 provides 24 parallel input/output lines with a variety of programmable operating modes. The 8255 is a member of the MCS-85 family of chips, designed by Intel for use with their 8085 and 8086 microprocessors and their descendants. It was first available in a 40-pin DIP package and later in a 44-pin PLCC package. It found wide applicability in digital processing systems and was later cloned by other manufacturers. The 82C55 is a CMOS version for higher speed and lower current consumption. The functionality of the 8255 is now mostly embedded in larger VLSI processing chips as a sub-function. A CMOS version of the 8255 is still being made by Renesas but is mostly used to expand the I/O of microcontrollers. Similar chips. The 8255 has a similar function to the Motorola 6820 PIA (Peripheral Interface Adapter) from the Motorola 6800 family, also originally packaged as a 40-pin DIL. The 8255 provides 24 I/O pins with four programmable direction bits: one for Port A(7:0) (i.e., all pins in the port), one for Port B(7:0), one for Port C(3:0) and one for Port C(7:4). By contrast, the Motorola and MOS chips provide only 16 I/O pins plus 4 control pins, but the Motorola/MOS chips allow the direction (input or output) of all I/O pins to be individually programmed. Both have configurations that will do a certain amount of automatic handshaking and interrupt generation. Other comparable microprocessor I/O chips are the 2655 Programmable Peripheral Interface from the Signetics 2650 family, the Z80 PIO, the Western Design Center WDC 65C21 (equivalent to the Motorola 6820/6821), and the MOS Technology 6522 VIA and 6526 CIA, which had considerable additional functionality such as timers and shift registers. Variants. The industrial-grade Intel ID8255A was available for US$17.55 in quantities of 100 and up, and the Intel 8255A-5 was available for US$6.55 in quantities of 100 or more. Production of the 82C55A CMOS version was outsourced to Oki Electronic Industry Co., Ltd. The Intel-branded 82C55 in a 44-pin PLCC package began sampling in the fourth quarter of 1985. In Eastern Europe, equivalent circuits were manufactured as the KR580VV55A in the Soviet Union and as the MHB8255A by Tesla in Czechoslovakia. Applications. The 8255 was widely used in many microcomputer/microcontroller systems and home computers such as the SV-328 and all MSX models. The 8255 was used in the original IBM-PC, PC/XT, PC/jr and clones, along with numerous homebuilt computers such as the N8VEM. Function. The 8255 gives a CPU or digital system access to programmable parallel I/O. The 8255 has 24 input/output pins. These are divided into three 8-bit ports (A, B, C). Port A and port B can be used as 8-bit input/output ports. Port C can be used as an 8-bit input/output port or as two 4-bit input/output ports or to produce handshake signals for ports A and B. The three ports are further grouped into Group A (port A and the upper half of port C) and Group B (port B and the lower half of port C). Eight data lines (D0–D7) are available (with an 8-bit data buffer) to read/write data into the ports or control register under the control of the formula_0RD (pin 5) and formula_0WR (pin 36) signals, which are active-low signals for read and write operations respectively.
Address lines A1 and A0 allow access to a data register for each port or to the control register: A1 A0 = 00 selects port A, 01 selects port B, 10 selects port C, and 11 selects the control register. The control signal chip select formula_0CS (pin 6) is used to enable the 8255 chip. It is an active-low signal, i.e., when formula_0CS = 0, the 8255 is enabled. The RESET input (pin 35) is connected to the RESET line of systems like the 8085, 8086, etc., so that when the system is reset, all the ports are initialized as input lines. This is done to prevent the 8255 and/or any peripheral connected to it from being damaged by a mismatch of port direction settings. As an example, consider an input device connected to the 8255 at port A. If, from a previous operation, port A is initialized as an output port and the 8255 is not reset before using the current configuration, then there is a possibility of damage to the connected input device, the 8255, or both, since both the 8255 and the device connected will be sending out data. The control register (or the control logic, or the command word register) is an 8-bit register used to select the modes of operation and input/output designation of the ports. Operational modes of 8255. There are two basic operational modes of the 8255: Bit Set/Reset (BSR) mode and Input/Output (I/O) mode. The two modes are selected on the basis of the value present at the D7 bit of the control word register. When D7 = 1, 8255 operates in I/O mode, and when D7 = 0, it operates in the BSR mode. Bit Set/Reset (BSR) mode. The Bit Set/Reset (BSR) mode is available on port C only. Each line of port C (PC7 - PC0) can be set or reset by writing a suitable value to the control word register. BSR mode and I/O mode are independent and selection of BSR mode does not affect the operation of other ports in I/O mode. The port C pin is selected by bits D3–D1 of the control word, and bit D0 specifies whether the pin is set (1) or reset (0). As an example, if it is needed that PC5 be set, then D3–D1 = 101 and D0 = 1. Thus, as per the above values, 0B (Hex) will be loaded into the Control Word Register (CWR). Input/Output mode. This mode is selected when the D7 bit of the Control Word Register is 1. There are three I/O modes: mode 0 (simple I/O), mode 1 (strobed I/O), and mode 2 (strobed bidirectional I/O). Control Word format. In I/O mode the control word bits are assigned as follows: D7 = 1 (I/O mode), D6–D5 = group A mode select, D4 = port A direction, D3 = upper port C direction, D2 = group B mode select, D1 = port B direction, and D0 = lower port C direction, where a direction bit of 1 means input and 0 means output. For example, if port B and upper port C have to be initialized as input ports and lower port C and port A as output ports (all in mode 0), then, for the desired operation, the control word register will have to be loaded with "10001010" = 8A (hex). Mode 0 - simple I/O. In this mode, the ports can be used for simple I/O operations without handshaking signals. Ports A and B provide simple I/O operation. The two halves of port C can be either used together as an additional 8-bit port, or they can be used as individual 4-bit ports. Since the two halves of port C are independent, they may be used such that one half is initialized as an input port while the other half is initialized as an output port. The input/output features in mode 0 are as follows: outputs are latched, inputs are not latched, and the ports do not provide handshaking or interrupt capability. 'Latched' means the bits are put into a storage register (array of flip-flops) which holds its output constant even if the inputs change after being latched. The 8255's outputs are latched to hold the last data written to them. This is required because the data only stays on the bus for one cycle. So, without latching, the outputs would become invalid as soon as the write cycle finishes. The inputs are not latched because the CPU only has to read their current values, then store the data in a CPU register or memory if it needs to be referenced at a later time. If an input changes while the port is being read then the result may be indeterminate. Mode 1 - Strobed Input/output mode.
When we wish to use port A or port B for handshake (strobed) input or output operation, we initialise that port in mode 1 (port A and port B can be initialised to operate in different modes, e.g., port A can operate in mode 0 and port B in mode 1). Some of the pins of port C function as handshake lines. For port B in this mode (irrespective of whether it is acting as an input or output port), the PC0, PC1 and PC2 pins function as handshake lines. If port A is initialised as a mode 1 input port, then PC3, PC4 and PC5 function as handshake signals. Pins PC6 and PC7 are available for use as input/output lines. Mode 1, which supports handshaking, has the following features: Input Handshaking signals 1. IBF (Input Buffer Full) - It is an output indicating that the input latch contains information. 2. STB (Strobed Input) - The strobe input loads data into the port latch, which holds the information until it is input to the microprocessor via the IN instruction. 3. INTR (Interrupt request) - It is an output that requests an interrupt. The INTR pin becomes a logic 1 when the STB input returns to a logic 1, and is cleared when the data are input from the port by the microprocessor. 4. INTE (Interrupt enable) - It is neither an input nor an output; it is an internal bit programmed via the PC4 (port A) or PC2 (port B) bit position. Output Handshaking signals 1. OBF (Output Buffer Full) - It is an output that goes low whenever data are output (OUT) to the port A or port B latch. This signal is set to a logic 1 whenever the ACK pulse returns from the external device. 2. ACK (Acknowledge) - It causes the OBF pin to return to a logic 1 level. The ACK signal is a response from an external device, indicating that it has received the data from the 82C55A port. 3. INTR (Interrupt request) - It is an output that interrupts the microprocessor once the external device has received the data, as signalled by ACK. This pin is qualified by the internal INTE (interrupt enable) bit. 4. INTE (Interrupt enable) - It is neither an input nor an output; it is an internal bit programmed to enable or disable the INTR pin. The INTE A bit is programmed using the PC6 bit and INTE B is programmed using the PC2 bit. Mode 2 - Strobed Bidirectional Input/Output mode. Only port A can be initialized in this mode. Port A can be used for "bidirectional handshake" data transfer. This means that data can be input or output on the same eight lines (PA0 - PA7). Pins PC3 - PC7 are used as handshake lines for port A. The remaining pins of port C (PC0 - PC2) can be used as input/output lines if group B is initialized in mode 0 or as handshaking for port B if group B is initialized in mode 1. In this mode, the 8255 may be used to extend the system bus to a slave microprocessor or to transfer data bytes to and from a floppy disk controller. Acknowledgement and handshaking signals are provided to maintain proper data flow and synchronisation between the data transmitter and receiver. References. <templatestyles src="Reflist/styles.css" />
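The two control-word examples given above (0B hex for setting PC5 in BSR mode, and 8A hex for the mode 0 port configuration) can be reproduced with a short calculation. The following Python sketch is illustrative only; the bit assignments follow the standard 8255 control-word layout described above, and the function names are invented for this example.

```python
def bsr_control_word(pin: int, set_bit: bool) -> int:
    """Build a BSR-mode control word: D7 = 0, D3-D1 = port C pin number, D0 = set/reset."""
    assert 0 <= pin <= 7
    return (pin << 1) | (1 if set_bit else 0)


def io_control_word(port_a_in, port_b_in, upper_c_in, lower_c_in,
                    group_a_mode=0, group_b_mode=0) -> int:
    """Build an I/O-mode control word: D7 = 1, D6-D5 = group A mode,
    D4 = port A direction, D3 = upper port C direction,
    D2 = group B mode, D1 = port B direction, D0 = lower port C direction
    (direction bit: 1 = input, 0 = output)."""
    return (0x80
            | (group_a_mode & 0b11) << 5
            | int(port_a_in) << 4
            | int(upper_c_in) << 3
            | (group_b_mode & 0b1) << 2
            | int(port_b_in) << 1
            | int(lower_c_in))


# Set PC5 in BSR mode -> 0x0B, as in the example above.
print(hex(bsr_control_word(5, True)))        # 0xb
# Port A out, port B in, upper C in, lower C out, all mode 0 -> 0x8A.
print(hex(io_control_word(port_a_in=False, port_b_in=True,
                          upper_c_in=True, lower_c_in=False)))  # 0x8a
```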
[ { "math_id": 0, "text": "{\\neg}" } ]
https://en.wikipedia.org/wiki?curid=11291348
1129156
Twelfth root of two
Algebraic irrational number The twelfth root of two or formula_0 (or equivalently formula_1) is an algebraic irrational number, approximately equal to 1.0594631. It is most important in Western music theory, where it represents the frequency ratio (musical interval) of a semitone in twelve-tone equal temperament. This number was proposed for the first time in relationship to musical tuning in the sixteenth and seventeenth centuries. It allows measurement and comparison of different intervals (frequency ratios) as consisting of different numbers of a single interval, the equal tempered semitone (for example, a minor third is 3 semitones, a major third is 4 semitones, and a perfect fifth is 7 semitones). A semitone itself is divided into 100 cents (1 cent = formula_2). Numerical value. The twelfth root of two to 20 significant figures is 1.0594630943592952646. Fraction approximations in increasing order of accuracy include 18/17, 89/84, 196/185, 1657/1564, and 18904/17843. The equal-tempered chromatic scale. A musical interval is a ratio of frequencies and the equal-tempered chromatic scale divides the octave (which has a ratio of 2:1) into twelve equal parts. Each note has a frequency that is formula_1 (approximately 1.0595) times that of the one below it. Applying this value successively to the tones of a chromatic scale, starting from A above middle C (known as A4) with a frequency of 440 Hz, produces the following sequence of pitches: The final A (A5: 880 Hz) is exactly twice the frequency of the lower A (A4: 440 Hz), that is, one octave higher. Other tuning scales. Other tuning scales use slightly different interval ratios. Pitch adjustment. Since the frequency ratio of a semitone is close to 106% (formula_3), increasing or decreasing the playback speed of a recording by 6% will shift the pitch up or down by about one semitone, or "half-step". Upscale reel-to-reel magnetic tape recorders typically have pitch adjustments of up to ±6%, generally used to match the playback or recording pitch to other music sources having slightly different tunings (or possibly recorded on equipment that was not running at quite the right speed). Modern recording studios utilize digital pitch shifting to achieve similar results, ranging from a few cents up to several half-steps. Reel-to-reel adjustments also affect the tempo of the recorded sound, while digital shifting does not. History. Historically this number was proposed for the first time in relationship to musical tuning in 1580 (drafted, rewritten 1610) by Simon Stevin. In 1581, the Italian musician Vincenzo Galilei may have been the first European to suggest twelve-tone equal temperament. The twelfth root of two was first calculated in 1584 by the Chinese mathematician and musician Zhu Zaiyu, who used an abacus to reach twenty-four decimal places accurately; it was later calculated circa 1605 by the Flemish mathematician Simon Stevin, in 1636 by the French mathematician Marin Mersenne, and in 1691 by the German musician Andreas Werckmeister. Notes. <templatestyles src="Reflist/styles.css" /> References. <templatestyles src="Reflist/styles.css" />
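As an illustration of the chromatic-scale construction described above, the following Python sketch computes the twelfth root of two and the twelve equal-tempered semitone steps from A4 = 440 Hz up to A5 = 880 Hz. The note names used are the conventional ones, and the script itself is only an illustrative aid, not part of the source.

```python
ratio = 2 ** (1 / 12)          # the twelfth root of two, ~1.0594631
print(f"{ratio:.20f}")

names = ["A4", "A#4/Bb4", "B4", "C5", "C#5/Db5", "D5", "D#5/Eb5",
         "E5", "F5", "F#5/Gb5", "G5", "G#5/Ab5", "A5"]
freq = 440.0                    # A above middle C
for name in names:
    print(f"{name:8s} {freq:8.2f} Hz")
    freq *= ratio               # each note is one equal-tempered semitone higher
```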
[ { "math_id": 0, "text": "\\sqrt[12]{2}" }, { "math_id": 1, "text": "2^{1/12}" }, { "math_id": 2, "text": "\\sqrt[1200]{2}=2^{1/1200}" }, { "math_id": 3, "text": "1.05946\\times100=105.946" } ]
https://en.wikipedia.org/wiki?curid=1129156
1129334
Equivalent airspeed
Airspeed corrected for the compressibility of air at high speeds In aviation, equivalent airspeed (EAS) is calibrated airspeed (CAS) corrected for the compressibility of air at a non-trivial Mach number. It is also the airspeed at sea level in the International Standard Atmosphere at which the dynamic pressure is the same as the dynamic pressure at the true airspeed (TAS) and altitude at which the aircraft is flying. In low-speed flight, it is the speed which would be shown by an airspeed indicator with zero error. It is useful for predicting aircraft handling, aerodynamic loads, stalling, etc. formula_0 where ρ is the actual air density and "ρ"0 is the standard sea level density (1.225 kg/m3 or 0.00237 slug/ft3). EAS is a function of dynamic pressure: formula_1 where q is the dynamic pressure formula_2 EAS can also be obtained from the aircraft Mach number and static pressure: formula_3 where "a"0 is the standard speed of sound at 15 °C (340.3 m/s), M is the Mach number, P is static pressure, and "P"0 is standard sea level pressure (1013.25 hPa). Combining the above with the expression for Mach number gives EAS as a function of impact pressure and static pressure (valid for subsonic flow): formula_4 where qc is impact pressure. At standard sea level, EAS is the same as calibrated airspeed (CAS) and true airspeed (TAS). At any other altitude, EAS may be obtained from CAS by correcting for compressibility error. The following simplified formula allows calculation of CAS from EAS: formula_5 where the pressure ratio is formula_6 and CAS and EAS are airspeeds that can be measured in knots, km/h, mph or any other appropriate unit. The above formula is accurate within 1% up to Mach 1.2 and useful with acceptable error up to Mach 1.5. The 4th order Mach term can be neglected for speeds below Mach 0.85. See also. <templatestyles src="Div col/styles.css"/> References. <templatestyles src="Reflist/styles.css" />
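A minimal numerical sketch of the density-ratio relation above; the function name and the sample values are illustrative assumptions, not taken from the source.

```python
import math

RHO_0 = 1.225  # standard sea level density, kg/m^3

def eas_from_tas(tas: float, rho: float) -> float:
    """EAS = TAS * sqrt(rho / rho_0)."""
    return tas * math.sqrt(rho / RHO_0)

# Example: 200 m/s TAS at an altitude where the density is 0.736 kg/m^3
# (roughly the ISA value near 5,000 m) gives an EAS of about 155 m/s.
print(round(eas_from_tas(200.0, 0.736), 1))
```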
[ { "math_id": 0, "text": "\\mathrm{EAS} = \\mathrm{TAS} \\times \\sqrt{\\frac{\\rho}{\\rho_0}}" }, { "math_id": 1, "text": "\\mathrm{EAS} = \\sqrt{\\frac{2q}{\\rho_0}}" }, { "math_id": 2, "text": "q = \\tfrac12\\, \\rho\\, v^{2}." }, { "math_id": 3, "text": "\\mathrm{EAS} ={a_0} M \\sqrt{P\\over P_0}" }, { "math_id": 4, "text": "\\mathrm{EAS} = a_0 \\sqrt{ \\frac{5P}{P_0} \\left[\\left( \\frac{q_c}P + 1 \\right)^\\frac{2}{7} - 1 \\right]}" }, { "math_id": 5, "text": "\\mathrm{CAS} = {\\mathrm{EAS} \\times \\left[ 1 + \\frac{1}{8}(1 - \\delta)M^2 + \\frac{3}{640}(1 - 10\\delta + 9\\delta^2)M^4 \\right]}" }, { "math_id": 6, "text": "\\delta = \\tfrac{P}{P_0}, " } ]
https://en.wikipedia.org/wiki?curid=1129334
1129436
Calibrated airspeed
Airspeed corrected for instrument and position error In aviation, calibrated airspeed (CAS) is indicated airspeed corrected for instrument and position error. When flying at sea level under International Standard Atmosphere conditions (15 °C, 1013 hPa, 0% humidity) calibrated airspeed is the same as equivalent airspeed (EAS) and true airspeed (TAS). If there is no wind it is also the same as ground speed (GS). Under any other conditions, CAS may differ from the aircraft's TAS and GS. Calibrated airspeed in knots is usually abbreviated as "KCAS", while indicated airspeed is abbreviated as "KIAS". In some applications, notably British usage, the expression "rectified airspeed" is used instead of calibrated airspeed. Practical applications of CAS. CAS has two primary applications in aviation: for navigation, where it is an intermediate step between indicated airspeed and true airspeed, and for defining aircraft performance reference speeds (V speeds), since it reflects the dynamic pressure acting on the aircraft. With the widespread use of GPS and other advanced navigation systems in cockpits, the first application is rapidly decreasing in importance – pilots are able to read groundspeed (and often true airspeed) directly, without calculating calibrated airspeed as an intermediate step. The second application remains critical, however – for example, at the same weight, an aircraft will rotate and climb at approximately the same calibrated airspeed at any elevation, even though the true airspeed and groundspeed may differ significantly. These V speeds are usually given as IAS rather than CAS, so that a pilot can read them directly from the airspeed indicator. Calculation from impact pressure. Since the airspeed indicator capsule responds to impact pressure, CAS is defined as a function of impact pressure alone. Static pressure and temperature appear as fixed coefficients defined by convention as standard sea level values. It so happens that the speed of sound is a direct function of temperature, so instead of a standard temperature, we can define a standard speed of sound. For subsonic speeds, CAS is calculated as: formula_0 where: formula_1 is the impact pressure, formula_2 is the standard sea level pressure, and formula_3 is the standard sea level speed of sound. For supersonic airspeeds, where a normal shock forms in front of the pitot probe, the Rayleigh formula applies: formula_4 The supersonic formula must be solved iteratively, by assuming an initial value for formula_5 equal to formula_6. These formulae work in any units provided the appropriate values for formula_2 and formula_6 are selected. For example, formula_2 = 1013.25 hPa, formula_6 = 661.48 knots (340.3 m/s). The ratio of specific heats for air is assumed to be 1.4. These formulae can then be used to calibrate an airspeed indicator when impact pressure (formula_1) is measured using a water manometer or accurate pressure gauge. If using a water manometer to measure millimeters of water the reference pressure (formula_2) may be entered as 10333 mm formula_7. At higher altitudes CAS can be corrected for compressibility error to give equivalent airspeed (EAS). In practice compressibility error is negligible below about 3,000 m (10,000 ft) and 370 km/h (200 kn). References. <templatestyles src="Reflist/styles.css" /> Bibliography. <templatestyles src="Refbegin/styles.css" />
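The subsonic formula and the iterative supersonic procedure described above can be sketched in a few lines of Python. This is an illustrative sketch only: the constants are the standard sea level values quoted in the text, the function names are invented, and the fixed-point loop simply follows the "start from CAS = a0" suggestion above.

```python
import math

A0 = 661.48       # standard sea level speed of sound, knots
P0 = 1013.25      # standard sea level pressure, hPa

def cas_subsonic(qc: float) -> float:
    """Subsonic CAS (knots) from impact pressure qc (hPa)."""
    return A0 * math.sqrt(5.0 * ((qc / P0 + 1.0) ** (2.0 / 7.0) - 1.0))

def cas_supersonic(qc: float, iterations: int = 50) -> float:
    """Supersonic CAS (knots) via fixed-point iteration of the Rayleigh formula,
    starting from CAS = a0 as suggested above."""
    cas = A0
    for _ in range(iterations):
        cas = A0 * ((qc / P0 + 1.0)
                    * (7.0 * (cas / A0) ** 2 - 1.0) ** 2.5
                    / (6.0 ** 2.5 * 1.2 ** 3.5)) ** (1.0 / 7.0)
    return cas

# Example: an impact pressure of 200 hPa corresponds to roughly 340 knots CAS.
print(round(cas_subsonic(200.0), 1))
```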
[ { "math_id": 0, "text": "CAS=a_{0}\\sqrt{5\\left[\\left(\\frac{q_c}{P_{0}}+1\\right)^\\frac{2}{7}-1\\right]}" }, { "math_id": 1, "text": "q_c" }, { "math_id": 2, "text": "P_{0}" }, { "math_id": 3, "text": "{a_{0}}" }, { "math_id": 4, "text": "CAS=a_{0}\\left[\\left(\\frac{q_c}{P_{0}}+1\\right)\\times\\left(7\\left(\\frac{CAS}{a_{0}}\\right)^2-1\\right)^{2.5} / \\left(6^{2.5} \\times 1.2^{3.5} \\right) \\right]^{(1/7)} " }, { "math_id": 5, "text": "CAS" }, { "math_id": 6, "text": "a_{0}" }, { "math_id": 7, "text": "H_2O" } ]
https://en.wikipedia.org/wiki?curid=1129436
11295114
Bond order potential
Bond order potential is a class of empirical (analytical) interatomic potentials which is used in molecular dynamics and molecular statics simulations. Examples include the Tersoff potential, the EDIP potential, the Brenner potential, the Finnis–Sinclair potentials, ReaxFF, and the second-moment tight-binding potentials. They have the advantage over conventional molecular mechanics force fields in that they can, with the same parameters, describe several different bonding states of an atom, and thus to some extent may be able to describe chemical reactions correctly. The potentials were developed partly independently of each other, but share the common idea that the strength of a chemical bond depends on the bonding environment, including the number of bonds and possibly also angles and bond lengths. The approach is based on the Linus Pauling bond order concept, and the potential can be written in the form formula_0 This means that the potential is written as a simple pair potential depending on the distance between two atoms formula_1, but the strength of this bond is modified by the environment of the atom formula_2 via the bond order formula_3. formula_3 is a function that in Tersoff-type potentials depends inversely on the number of bonds to the atom formula_2, the bond angles between sets of three atoms formula_4, and optionally on the relative bond lengths formula_1, formula_5. In the case of only one atomic bond (as in a diatomic molecule), formula_6 which corresponds to the strongest and shortest possible bond. In the other limiting case, for an increasingly large number of bonds within some interaction range, formula_7 and the potential turns completely repulsive (as illustrated in the figure to the right). Alternatively, the potential energy can be written in the embedded atom model form formula_8 where formula_9 is the electron density at the location of atom formula_2. These two forms for the energy can be shown to be equivalent (in the special case that the bond-order function formula_3 contains no angular dependence). A more detailed summary of how the bond order concept can be motivated by the second-moment approximation of tight binding, and how both of these functional forms can be derived from it, can be found in the literature. The original bond order potential concept has been developed further to include distinct bond orders for sigma bonds and pi bonds in the so-called BOP potentials. Extending the analytical expression for the bond order of the sigma bonds to include fourth moments of the exact tight binding bond order reveals contributions from both sigma- and pi-bond integrals between neighboring atoms. These pi-bond contributions to the sigma bond order are responsible for stabilizing the asymmetric rather than the symmetric (2x1) dimerized reconstruction of the Si(100) surface. The ReaxFF potential can also be considered a bond order potential, although the motivation of its bond order terms is different from that described here.
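The following Python sketch illustrates the generic functional form discussed above, with a Morse-like pair term and a bond order that decays with coordination. The specific functions and parameter values are invented for illustration and do not correspond to any published parametrization (Tersoff, Brenner, or otherwise); angular terms are omitted.

```python
import math

def v_repulsive(r, A=2.0, lam=2.5):
    return A * math.exp(-lam * r)

def v_attractive(r, B=-1.5, mu=1.5):
    return B * math.exp(-mu * r)

def bond_order(num_neighbors, delta=0.5):
    """Toy bond order: 1 for a lone bond, decreasing toward 0 as the
    coordination of atom i grows (no angular terms in this sketch)."""
    return float(num_neighbors) ** (-delta)

def pair_energy(r, num_neighbors):
    """V_ij = V_repulsive(r_ij) + b_ijk * V_attractive(r_ij)."""
    return v_repulsive(r) + bond_order(num_neighbors) * v_attractive(r)

# A bond weakens (its energy rises) as the central atom acquires more
# neighbors, eventually becoming purely repulsive, as described above.
for z in (1, 2, 4, 8):
    print(z, round(pair_energy(1.0, z), 3))
```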
[ { "math_id": 0, "text": "\nV_{ij}(r_{ij}) = V_\\mathrm{repulsive}(r_{ij}) + b_{ijk} V_\\mathrm{attractive}(r_{ij}) \n" }, { "math_id": 1, "text": "r_{ij}" }, { "math_id": 2, "text": "i" }, { "math_id": 3, "text": "b_{ijk}" }, { "math_id": 4, "text": "ijk" }, { "math_id": 5, "text": "r_{ik}" }, { "math_id": 6, "text": "b_{ijk} = 1" }, { "math_id": 7, "text": "b_{ijk} \\to 0" }, { "math_id": 8, "text": "\nV_{ij}(r_{ij}) = V_\\mathrm{pair}(r_{ij}) - D \\sqrt{\\rho_i}\n" }, { "math_id": 9, "text": "\\rho_i" } ]
https://en.wikipedia.org/wiki?curid=11295114
1129919
Metallicity
Relative abundance of heavy elements in a star or other astronomical object In astronomy, metallicity is the abundance of elements present in an object that are heavier than hydrogen and helium. Most of the normal currently detectable (i.e. non-dark) matter in the universe is either hydrogen or helium, and astronomers use the word "metals" as convenient shorthand for "all elements except hydrogen and helium". This usage is distinct from the conventional chemical or physical definition of a metal as an electrically conducting solid. Stars and nebulae with relatively high abundances of heavier elements are called "metal-rich" when discussing metallicity, even though many of those elements are called nonmetals in chemistry. Metals in early spectroscopy. In 1802, William Hyde Wollaston noted the appearance of a number of dark features in the solar spectrum. In 1814, Joseph von Fraunhofer independently rediscovered the lines and began to systematically study and measure their wavelengths, and they are now called Fraunhofer lines. He mapped over 570 lines, designating the most prominent with the letters A through K and weaker lines with other letters. About 45 years later, Gustav Kirchhoff and Robert Bunsen noticed that several Fraunhofer lines coincide with characteristic emission lines identified in the spectra of heated chemical elements. They inferred that dark lines in the solar spectrum are caused by absorption by chemical elements in the solar atmosphere. Their observations were in the visible range, where the strongest lines come from metals such as Na, K, and Fe. In the early work on the chemical composition of the Sun, the only elements that were detected in spectra were hydrogen and various metals, with the term "metallic" frequently used when describing them. In contemporary usage all the extra elements beyond just hydrogen and helium are termed metallic. Origin of metallic elements. The presence of heavier elements results from stellar nucleosynthesis, where the majority of elements heavier than hydrogen and helium in the Universe ("metals", hereafter) are formed in the cores of stars as they evolve. Over time, stellar winds and supernovae deposit the metals into the surrounding environment, enriching the interstellar medium and providing recycling materials for the birth of new stars. It follows that older generations of stars, which formed in the metal-poor early Universe, generally have lower metallicities than those of younger generations, which formed in a more metal-rich Universe. Stellar populations. Observed changes in the chemical abundances of different types of stars, based on the spectral peculiarities that were later attributed to metallicity, led astronomer Walter Baade in 1944 to propose the existence of two different populations of stars. These became commonly known as population I (metal-rich) and population II (metal-poor) stars. A third, earliest stellar population was hypothesized in 1978, known as population III stars. These "extremely metal-poor" (XMP) stars are theorized to have been the "first-born" stars created in the Universe. Common methods of calculation. Astronomers use several different methods to describe and approximate metal abundances, depending on the available tools and the object of interest.
Some methods include determining the fraction of mass that is attributed to gas versus metals, or measuring the ratios of the number of atoms of two different elements as compared to the ratios found in the Sun. Mass fraction. Stellar composition is often simply defined by the parameters X, Y, and Z. Here X represents the mass fraction of hydrogen, Y is the mass fraction of helium, and Z is the mass fraction of all the remaining chemical elements. Thus formula_0 In most stars, nebulae, HII regions, and other astronomical sources, hydrogen and helium are the two dominant elements. The hydrogen mass fraction is generally expressed as formula_1 where M is the total mass of the system, and formula_2 is the mass of the hydrogen it contains. Similarly, the helium mass fraction is denoted as formula_3 The remainder of the elements are collectively referred to as "metals", and the metallicity – the mass fraction of elements heavier than helium – is calculated as formula_4 For the surface of the Sun (symbol formula_5), these parameters are measured to have approximately the following values: X = 0.7381, Y = 0.2485, and Z = 0.0134. Due to the effects of stellar evolution, neither the initial composition nor the present day bulk composition of the Sun is the same as its present-day surface composition. Chemical abundance ratios. The overall stellar metallicity is conventionally defined using the total hydrogen content, since its abundance is considered to be relatively constant in the Universe, or the iron content of the star, which has an abundance that is generally linearly increasing in time in the Universe. Hence, iron can be used as a chronological indicator of nucleosynthesis. Iron is relatively easy to measure with spectral observations in the star's spectrum given the large number of iron lines in the star's spectra (even though oxygen is the most abundant heavy element – see metallicities in HII regions below). The abundance ratio is the common logarithm of the ratio of a star's iron abundance compared to that of the Sun and is calculated thus: formula_6 where formula_7 and formula_8 are the number of iron and hydrogen atoms per unit of volume respectively, formula_5 is the standard symbol for the Sun, and formula_9 for a star (often omitted below). The unit often used for metallicity is the dex, a contraction of "decimal exponent". By this formulation, stars with a higher metallicity than the Sun have a positive common logarithm, whereas those more dominated by hydrogen have a corresponding negative value. For example, stars with a formula_10 value of +1 have 10 times the metallicity of the Sun (10^+1); conversely, those with a formula_10 value of −1 have one-tenth the metallicity of the Sun (10^−1), while those with a formula_10 value of 0 have the same metallicity as the Sun, and so on. Young population I stars have significantly higher iron-to-hydrogen ratios than older population II stars. Primordial population III stars are estimated to have a metallicity of less than −6, a millionth of the abundance of iron in the Sun. The same notation is used to express variations in abundances between other individual elements as compared to solar proportions. For example, the notation formula_11 represents the difference in the logarithm of the star's oxygen abundance versus its iron content compared to that of the Sun. In general, a given stellar nucleosynthetic process alters the proportions of only a few elements or isotopes, so a star or gas sample with certain formula_12 values may well be indicative of an associated, studied nuclear process.
Astronomers can estimate metallicities through measured and calibrated systems that correlate photometric measurements and spectroscopic measurements (see also Spectrophotometry). For example, the Johnson UBV filters can be used to detect an ultraviolet (UV) excess in stars, where a smaller UV excess indicates a larger presence of metals that absorb the UV radiation, thereby making the star appear "redder". The UV excess, δ(U−B), is defined as the difference between a star's U and B band magnitudes, compared to the difference between U and B band magnitudes of metal-rich stars in the Hyades cluster. Unfortunately, δ(U−B) is sensitive to both metallicity and temperature: If two stars are equally metal-rich, but one is cooler than the other, they will likely have different δ(U−B) values (see also Blanketing effect). To help mitigate this degeneracy, a star's B−V color index can be used as an indicator for temperature. Furthermore, the UV excess and B−V index can be corrected to relate the δ(U−B) value to iron abundances. Other photometric systems that can be used to determine metallicities of certain astrophysical objects include the Strömgren system, the Geneva system, the Washington system, and the DDO system. Metallicities in various astrophysical objects. Stars. At a given mass and age, a metal-poor star will be slightly warmer. Population II stars' metallicities are roughly one-thousandth to one-tenth of the Sun's formula_13 but the group appears cooler than population I overall, as heavy population II stars have long since died. Above 40 solar masses, metallicity influences how a star will die: Outside the pair-instability window, lower metallicity stars will collapse directly to a black hole, while higher metallicity stars undergo a type Ib/c supernova and may leave a neutron star. Relationship between stellar metallicity and planets. A star's metallicity measurement is one parameter that helps determine whether a star may have a giant planet, as there is a direct correlation between metallicity and the presence of a giant planet. Measurements have demonstrated the connection between a star's metallicity and gas giant planets, like Jupiter and Saturn. The more metals in a star and thus its planetary system and protoplanetary disk, the more likely the system is to have gas giant planets. Current models show that the metallicity, along with the correct planetary system temperature and distance from the star, is key to planet and planetesimal formation. For two stars that have equal age and mass but different metallicity, the less metallic star is bluer. Among stars of the same color, less metallic stars emit more ultraviolet radiation. The Sun, with eight planets and nine consensus dwarf planets, is used as the reference, with a formula_14 of 0.00. HII regions. Young, massive and hot stars (typically of spectral types O and B) in HII regions emit UV photons that ionize ground-state hydrogen atoms, knocking electrons and protons free; this process is known as photoionization. The free electrons can strike other atoms nearby, exciting bound metallic electrons into a metastable state, which eventually decay back into a ground state, emitting photons with energies that correspond to forbidden lines. Through these transitions, astronomers have developed several observational methods to estimate metal abundances in HII regions, where the stronger the forbidden lines in spectroscopic observations, the higher the metallicity.
These methods are dependent on one or more of the following: the variety of asymmetrical densities inside HII regions, the varied temperatures of the embedded stars, and/or the electron density within the ionized region. Theoretically, to determine the total abundance of a single element in an HII region, all transition lines should be observed and summed. However, this can be observationally difficult due to variation in line strength. Some of the most common forbidden lines used to determine metal abundances in HII regions are from oxygen (e.g. [OII] λ = (3727, 7318, 7324) Å, and [OIII] λ = (4363, 4959, 5007) Å), nitrogen (e.g. [NII] λ = (5755, 6548, 6584) Å), and sulfur (e.g. [SII] λ = (6717, 6731) Å and [SIII] λ = (6312, 9069, 9531) Å) in the optical spectrum, and the [OIII] λ = (52, 88) μm and [NIII] λ = 57 μm lines in the infrared spectrum. Oxygen has some of the stronger, more abundant lines in HII regions, making it a main target for metallicity estimates within these objects. To calculate metal abundances in HII regions using oxygen flux measurements, astronomers often use the R23 method, in which formula_15 where formula_16 is the sum of the fluxes from oxygen emission lines measured at the rest frame λ = (3727, 4959 and 5007) Å wavelengths, divided by the flux from the Balmer series Hβ emission line at the rest frame λ = 4861 Å wavelength. This ratio is well defined through models and observational studies, but caution should be taken, as the ratio is often degenerate, providing both a low and a high metallicity solution; the degeneracy can be broken with additional line measurements. Similarly, other strong forbidden line ratios can be used, e.g. for sulfur, where formula_17 Metal abundances within HII regions are typically less than 1%, with the percentage decreasing on average with distance from the Galactic Center. References. <templatestyles src="Reflist/styles.css" /> <templatestyles src="Refbegin/styles.css" />
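A minimal sketch of the abundance-ratio and R23 calculations defined above; the example number densities are made-up values used only to illustrate the formulas.

```python
import math

def fe_h(n_fe_star, n_h_star, n_fe_sun, n_h_sun):
    """[Fe/H] = log10(N_Fe/N_H)_star - log10(N_Fe/N_H)_sun, in dex."""
    return math.log10(n_fe_star / n_h_star) - math.log10(n_fe_sun / n_h_sun)

# A star with ten times the solar iron-to-hydrogen ratio has [Fe/H] = +1.
print(fe_h(n_fe_star=10.0, n_h_star=1.0e6, n_fe_sun=1.0, n_h_sun=1.0e6))  # 1.0

def r23(f_oii_3727, f_oiii_4959, f_oiii_5007, f_hbeta):
    """R23 line-ratio diagnostic for HII regions, as defined above."""
    return (f_oii_3727 + f_oiii_4959 + f_oiii_5007) / f_hbeta
```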
[ { "math_id": 0, "text": " X + Y + Z = 1 " }, { "math_id": 1, "text": "\\ X \\equiv \\tfrac{m_\\mathsf{H}}{M}\\ ," }, { "math_id": 2, "text": "\\ m_\\mathsf{H}\\ " }, { "math_id": 3, "text": "\\ Y \\equiv \\tfrac{m_\\mathsf{He}}{M} ~." }, { "math_id": 4, "text": " Z = \\sum_{e > \\mathsf{He}} \\tfrac{m_e}{M} = 1 - X - Y ~." }, { "math_id": 5, "text": "\\odot" }, { "math_id": 6, "text": " \\left[ \\frac{ \\mathsf{Fe} }{ \\mathsf{H} } \\right] ~=~ \\log_{10}{\\left( \\frac{N_{\\mathsf{Fe}}}{N_{\\mathsf{H}} } \\right)_\\star } -~ \\log_{10}{\\left(\\frac{N_{ \\mathsf{Fe}} }{ N_{\\mathsf{H}} } \\right)_\\odot}\\ ," }, { "math_id": 7, "text": "\\ N_{\\mathsf{Fe}}\\ " }, { "math_id": 8, "text": "\\ N_{\\mathsf{H}}\\ " }, { "math_id": 9, "text": "\\star" }, { "math_id": 10, "text": "\\ \\bigl[\\tfrac{ \\mathsf{Fe} }{ \\mathsf{H} } \\bigr]_\\star\\ " }, { "math_id": 11, "text": "\\ \\bigl[\\tfrac{ \\mathsf{O} }{ \\mathsf{Fe} } \\bigr]\\ " }, { "math_id": 12, "text": "\\ \\bigl[\\tfrac{ \\mathsf{?} }{ \\mathsf{Fe} } \\bigr]_\\star\\ " }, { "math_id": 13, "text": "\\left(\\ \\bigl[ \\tfrac{ \\mathsf{Fe} }{ \\mathsf{H} } \\bigr]\\ = {-3.0}\\ ...\\ {-1.0}\\ \\right)\\ ," }, { "math_id": 14, "text": "\\ \\bigl[\\tfrac{ \\mathsf{Fe} }{ \\mathsf{H} } \\bigr]\\ " }, { "math_id": 15, "text": "R_{23} = \\frac{\\ \\left[\\ \\mathsf{O}^\\mathsf{II} \\right]_{3727~\\AA} + \\left[\\ \\mathsf{O}^\\mathsf{III} \\right]_{4959~\\AA + 5007~\\AA}\\ }{\\Bigl[\\ \\mathsf{ H}_\\mathsf{\\beta} \\Bigr]_{4861 ~\\AA} }\\ ," }, { "math_id": 16, "text": "\\ \\left[\\ \\mathsf{O}^\\mathsf{II} \\right]_{3727~\\AA} + \\left[\\ \\mathsf{O}^\\mathsf{III} \\right]_{4959~\\AA + 5007~\\AA}\\ " }, { "math_id": 17, "text": "S_{23} = \\frac{\\ \\left[\\ \\mathsf{S}^\\mathsf{II} \\right]_{6716~\\AA + 6731~\\AA} + \\left[\\ \\mathsf{S}^\\mathsf{III} \\right]_{9069~\\AA + 9532~\\AA}\\ }{\\Bigl[\\ \\mathsf{H}_\\mathsf{\\beta} \\Bigr]_{4861 ~\\AA} } ~." } ]
https://en.wikipedia.org/wiki?curid=1129919
11301011
Chinese hypothesis
False conjecture of a test for prime numbers In number theory, the Chinese hypothesis is a disproven conjecture stating that an integer "n" is prime if and only if it satisfies the condition that formula_0 is divisible by "n"—in other words, that an integer "n" is prime if and only if formula_1. It is true that if "n" is prime, then formula_1 (this is a special case of Fermat's little theorem); however, the converse (if formula_1 then "n" is prime) is false, and therefore the hypothesis as a whole is false. The smallest counterexample is "n" = 341 = 11×31. Composite numbers "n" for which formula_0 is divisible by "n" are called Poulet numbers. They are a special class of Fermat pseudoprimes. History. Once, and sometimes still, mistakenly thought to be of ancient Chinese origin, the Chinese hypothesis actually originates in the mid-19th century from the work of Qing dynasty mathematician Li Shanlan (1811–1882). He was later made aware that his statement was incorrect and removed it from his subsequent work, but this was not enough to prevent the false proposition from appearing elsewhere under his name; a later mistranslation in the 1898 work of Jeans dated the conjecture to Confucian times and gave birth to the ancient origin myth. References. <templatestyles src="Reflist/styles.css" />
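The failure of the converse can be checked directly. The short Python sketch below (illustrative only) confirms that the composite number 341 = 11×31 nevertheless satisfies 2^n ≡ 2 (mod n):

```python
def passes_test(n: int) -> bool:
    """True when n divides 2^n - 2, i.e. 2^n is congruent to 2 modulo n."""
    return (pow(2, n, n) - 2) % n == 0

def is_prime(n: int) -> bool:
    """Naive trial division, adequate for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

# Every prime passes (Fermat's little theorem) ...
print(all(passes_test(p) for p in [2, 3, 5, 7, 11, 13]))   # True
# ... but so does the composite 341 = 11 * 31, the smallest Poulet number.
print(passes_test(341), is_prime(341))                      # True False
```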
[ { "math_id": 0, "text": "2^n-2" }, { "math_id": 1, "text": "2^n \\equiv 2 \\bmod{n}" } ]
https://en.wikipedia.org/wiki?curid=11301011
113014
Autodesk Maya
3D computer graphics software Autodesk Maya, commonly shortened to just Maya, is a 3D computer graphics application that runs on Windows, macOS, and Linux, originally developed by Alias and currently owned and developed by Autodesk. It is used to create assets for interactive 3D applications (including video games), animated films, TV series, and visual effects. History. Maya was originally an animation product based on code from The Advanced Visualizer by Wavefront Technologies, Thomson Digital Image (TDI) Explore, PowerAnimator by Alias, and "Alias Sketch!". The IRIX-based projects were combined and animation features were added; the project codename was Maya. Walt Disney Feature Animation collaborated closely with Maya's development during its production of "Dinosaur". Disney requested that the user interface of the application be customizable to allow for a personalized workflow. This was a particular influence in the open architecture of Maya, and partly responsible for its popularity in the animation industry. After Silicon Graphics Inc. acquired both Alias and Wavefront Technologies, Inc. in 1995, Wavefront's technology (then under development) was merged into Maya. SGI's acquisition was a response to Microsoft Corporation acquiring Softimage 3D in 1994. The new wholly owned subsidiary was named "Aliasformula_0Wavefront". In the early days of development, Maya started with Tcl as the scripting language, in order to leverage its similarity to a Unix shell script language, but after the merger with Wavefront it was replaced with Maya Embedded Language (MEL). Sophia, the scripting language in Wavefront's Dynamation, was chosen as the basis of MEL. Maya 1.0 was released in February 1998. Following a series of acquisitions, Maya was bought by Autodesk in October 2005. Under the name of the new parent company, Maya was renamed Autodesk Maya. However, the name "Maya" continues to be the dominant name used for the product. Overview. Maya is an application used to generate 3D assets for use in film, television, games, and commercials. The software was initially released for the IRIX operating system. However, this support was discontinued in August 2006 after the release of version 6.5. Maya was available in both "Complete" and "Unlimited" editions until August 2008, when it was turned into a single suite. Users define a virtual workspace ("scene") to implement and edit media of a particular project. Scenes can be saved in a variety of formats, the default being ".mb" (Maya Binary). Maya exposes a node graph architecture. Scene elements are node-based, each node having its own attributes and customization. As a result, the visual representation of a scene is based entirely on a network of interconnecting nodes, depending on each other's information. For the convenience of viewing these networks, there is a dependency graph and a directed acyclic graph. Nowadays, 3D models can be imported into game engines such as Unreal Engine and Unity. Industry usage. The widespread use of Maya in the film industry is usually associated with its development on the film "Dinosaur", released by Disney and The Secret Lab on May 19, 2000. In 2003, when the company received an Academy Award for Technical Achievement, it was noted to be used in films such as "Spider-Man" (2002), "Ice Age", and others. By 2015, VentureBeat Magazine stated that all ten films in consideration for the Best Visual Effects Academy Award had used Autodesk Maya and that it had been "used on every winning film since 1997."
The film studio Illumination Studios Paris uses Autodesk Maya for its animated films. Awards. On March 1, 2003, Alias was given an Academy Award for Technical Achievement by the Academy of Motion Picture Arts and Sciences for the scientific and technical achievement of developing the Maya software. In 2005, while working for Alias|Wavefront, Jos Stam shared an Academy Award for Technical Achievement with Edwin Catmull and Tony DeRose for their invention and application of subdivision surfaces. On February 8, 2008, Duncan Brinsmead, Jos Stam, Julia Pakalns and Martin Werner received an Academy Award for Technical Achievement for the design and implementation of the Maya Fluid Effects system. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "|" } ]
https://en.wikipedia.org/wiki?curid=113014
11301521
Brocard's problem
In mathematics, when is n!+1 a square <templatestyles src="Unsolved/styles.css" /> Unsolved problem in mathematics: Does formula_0 have integer solutions other than formula_1? Brocard's problem is a problem in mathematics that seeks integer values of formula_2 such that formula_3 is a perfect square, where formula_4 is the factorial. Only three values of formula_2 are known — 4, 5, 7 — and it is not known whether there are any more. More formally, it seeks pairs of integers formula_2 and formula_5 such that formula_6 The problem was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. Brown numbers. Pairs of the numbers formula_7 that solve Brocard's problem were named Brown numbers by Clifford A. Pickover in his 1995 book "Keys to Infinity", after learning of the problem from Kevin S. Brown. As of October 2022, there are only three known pairs of Brown numbers: (4,5), (5,11), and (7,71), based on the equalities 4! + 1 = 5^2 = 25, 5! + 1 = 11^2 = 121, and 7! + 1 = 71^2 = 5041. Paul Erdős conjectured that no other solutions exist. Computational searches up to one quadrillion have found no further solutions. Connection to the abc conjecture. It would follow from the abc conjecture that there are only finitely many Brown numbers. More generally, it would also follow from the abc conjecture that formula_8 has only finitely many solutions, for any given integer formula_9, and that formula_10 has only finitely many integer solutions, for any given polynomial formula_11 of degree at least 2 with integer coefficients. References. <templatestyles src="Reflist/styles.css" />
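A brute-force search for small solutions is straightforward. The sketch below is illustrative only, and far slower than the specialized searches mentioned above, but it recovers the three known pairs:

```python
import math

def brown_pairs(n_max: int):
    """Yield (n, m) with n! + 1 = m^2 for n up to n_max."""
    factorial = 1
    for n in range(1, n_max + 1):
        factorial *= n
        m = math.isqrt(factorial + 1)
        if m * m == factorial + 1:
            yield n, m

print(list(brown_pairs(1000)))   # [(4, 5), (5, 11), (7, 71)]
```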
[ { "math_id": 0, "text": "n!+1=m^2" }, { "math_id": 1, "text": "n=4,5,7" }, { "math_id": 2, "text": "n" }, { "math_id": 3, "text": "n!+1" }, { "math_id": 4, "text": "n!" }, { "math_id": 5, "text": "m" }, { "math_id": 6, "text": "n!+1 = m^2." }, { "math_id": 7, "text": "(n,m)" }, { "math_id": 8, "text": "n!+A = k^2" }, { "math_id": 9, "text": "A" }, { "math_id": 10, "text": "n! = P(x)" }, { "math_id": 11, "text": "P(x)" } ]
https://en.wikipedia.org/wiki?curid=11301521
11302552
Complex gain
In electronics, complex gain is the effect that circuitry has on the amplitude and phase of a sine wave signal. The term "complex" is used because mathematically this effect can be expressed as a complex number. LTI systems. Consider the general LTI system formula_0 where formula_1 is the input, formula_2 are given polynomial operators, and it is assumed that formula_3. In the case that formula_4, a particular solution to the given equation is formula_5 Consider the following concepts, used mainly in physics and signal processing. formula_6 The amplitude of the input is formula_7. This has the same units as the input quantity. formula_6 The angular frequency of the input is formula_8. It has units of radian/time. Often we will be casual and refer to it as frequency, even though technically frequency should have units of cycles/time. formula_6 The amplitude of the response is formula_9. This has the same units as the response quantity. formula_6 The gain is formula_10. The gain is the factor that the input amplitude is multiplied by to get the amplitude of the response. It has the units needed to convert input units to output units. formula_6 The phase lag is formula_11. The phase lag has units of radians, i.e. it is dimensionless. formula_6 The time lag is formula_12. This has units of time. It is the time by which the peak of the output lags behind that of the input. formula_6 The complex gain is formula_13. This is the factor that the complex input is multiplied by to get the complex output. Example. Suppose a circuit has an input voltage described by the equation formula_14 where ω equals 2π×100 Hz, i.e., the input signal is a 100 Hz sine wave with an amplitude of 1 volt. If the circuit is such that for this frequency it doubles the signal's amplitude and causes a 90 degree forward phase shift, then its output signal can be described by formula_15 In complex notation, these signals can be described as, for this frequency, "j"·1 V and 2 V, respectively. The complex gain "G" of this circuit is then computed by dividing output by input: formula_16 This (unitless) complex number incorporates both the magnitude of the change in amplitude (as the absolute value) and the phase change (as the argument). References. <templatestyles src="Reflist/styles.css" />
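The complex gain Q(iω)/P(iω) from the LTI discussion above can be evaluated numerically. The sketch below uses an illustrative first-order low-pass system with P(D) = D + 1 and Q(D) = 1, which is an assumed example rather than one taken from the source, and prints the gain and phase lag at a chosen angular frequency.

```python
import cmath

def complex_gain(p_coeffs, q_coeffs, omega):
    """Evaluate Q(i*omega) / P(i*omega) for polynomials given by coefficient
    lists ordered from the highest power down to the constant term."""
    s = 1j * omega
    p = sum(c * s ** k for k, c in enumerate(reversed(p_coeffs)))
    q = sum(c * s ** k for k, c in enumerate(reversed(q_coeffs)))
    return q / p

# First-order low-pass example: P(D) = D + 1, Q(D) = 1, at omega = 1 rad/s.
g = complex_gain([1.0, 1.0], [1.0], omega=1.0)
gain = abs(g)                       # ~0.707
phase_lag = -cmath.phase(g)         # ~0.785 rad (45 degrees), phi = -Arg(Q/P)
print(round(gain, 3), round(phase_lag, 3))
```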
[ { "math_id": 0, "text": " P(D)x = Q(D)f(t)" }, { "math_id": 1, "text": "f(t)" }, { "math_id": 2, "text": "P(D), Q(D)" }, { "math_id": 3, "text": "P(s)\\neq 0" }, { "math_id": 4, "text": "f(r) = F_0\\cos(\\omega t)" }, { "math_id": 5, "text": "x_p(t) = \\operatorname{Re}\\Big ( F_0 \\frac{Q(i\\omega)}{P(i\\omega)}e^{i\\omega t}\\Big)." }, { "math_id": 6, "text": "\\bullet" }, { "math_id": 7, "text": "F_0" }, { "math_id": 8, "text": "\\omega" }, { "math_id": 9, "text": "A = F_0|Q(i\\omega)/P(i\\omega)|" }, { "math_id": 10, "text": "g(\\omega) = |Q(i\\omega)/P (i\\omega)|" }, { "math_id": 11, "text": "\\phi = -\\operatorname{Arg}(Q(i\\omega)/P(i\\omega))" }, { "math_id": 12, "text": " \\phi / \\omega" }, { "math_id": 13, "text": "Q(i\\omega)/P(i\\omega)" }, { "math_id": 14, "text": "V_{i}(t) = 1\\ V \\cdot \\sin (\\omega \\cdot t)" }, { "math_id": 15, "text": "V_{o}(t) = 2\\ V \\cdot \\cos (\\omega \\cdot t)" }, { "math_id": 16, "text": "G = \\frac {2\\ V}{j\\cdot1\\ V} = -2j." } ]
https://en.wikipedia.org/wiki?curid=11302552
11304514
Physical organic chemistry
Discipline of organic chemistry Physical organic chemistry, a term coined by Louis Hammett in 1940, refers to a discipline of organic chemistry that focuses on the relationship between chemical structures and reactivity, in particular, applying experimental tools of physical chemistry to the study of organic molecules. Specific focal points of study include the rates of organic reactions, the relative chemical stabilities of the starting materials, reactive intermediates, transition states, and products of chemical reactions, and non-covalent aspects of solvation and molecular interactions that influence chemical reactivity. Such studies provide theoretical and practical frameworks to understand how changes in structure in solution or solid-state contexts impact reaction mechanism and rate for each organic reaction of interest. Application. Physical organic chemists use theoretical and experimental approaches to understand these foundational problems in organic chemistry, including classical and statistical thermodynamic calculations, quantum mechanical theory and computational chemistry, as well as experimental spectroscopy (e.g., NMR), spectrometry (e.g., MS), and crystallography approaches. The field therefore has applications to a wide variety of more specialized fields, including electro- and photochemistry, polymer and supramolecular chemistry, and bioorganic chemistry, enzymology, and chemical biology, as well as to commercial enterprises involving process chemistry, chemical engineering, materials science and nanotechnology, and pharmacology in drug discovery by design. Scope. Physical organic chemistry is the study of the relationship between structure and reactivity of organic molecules. More specifically, physical organic chemistry applies the experimental tools of physical chemistry to the study of the structure of organic molecules and provides a theoretical framework that interprets how structure influences both mechanisms and rates of organic reactions. It can be thought of as a subfield that bridges organic chemistry with physical chemistry. Physical organic chemists use both experimental and theoretical disciplines such as spectroscopy, spectrometry, crystallography, computational chemistry, and quantum theory to study both the rates of organic reactions and the relative chemical stability of the starting materials, transition states, and products. Chemists in this field work to understand the physical underpinnings of modern organic chemistry, and therefore physical organic chemistry has applications in specialized areas including polymer chemistry, supramolecular chemistry, electrochemistry, and photochemistry. History. The term "physical organic chemistry" was itself coined by Louis Hammett in 1940 when he used the phrase as a title for his textbook. Chemical structure and thermodynamics. Thermochemistry. Organic chemists use the tools of thermodynamics to study the bonding, stability, and energetics of chemical systems. This includes experiments to measure or determine the enthalpy (Δ"H"), entropy (Δ"S"), and Gibbs' free energy (Δ"G") of a reaction, transformation, or isomerization. Chemists may use various chemical and mathematical analyses, such as a Van 't Hoff plot, to calculate these values.
For complex molecules, a Δf"H"° value may not be available but can be estimated using molecular fragments with known heats of formation. This type of analysis is often referred to as Benson group increment theory, after chemist Sidney Benson who spent a career developing the concept. The thermochemistry of reactive intermediates—carbocations, carbanions, and radicals—is also of interest to physical organic chemists. Group increment data are available for radical systems. Carbocation and carbanion stabilities can be assessed using hydride ion affinities and pKa values, respectively. Conformational analysis. One of the primary methods for evaluating chemical stability and energetics is conformational analysis. Physical organic chemists use conformational analysis to evaluate the various types of strain present in a molecule to predict reaction products. Strain can be found in both acyclic and cyclic molecules, manifesting itself in diverse systems as torsional strain, allylic strain, ring strain, and "syn"-pentane strain. A-values provide a quantitative basis for predicting the conformation of a substituted cyclohexane, an important class of cyclic organic compounds whose reactivity is strongly guided by conformational effects. The A-value is the difference in the Gibbs' free energy between the axial and equatorial forms of substituted cyclohexane, and by adding together the A-values of various substituents it is possible to quantitatively predict the preferred conformation of a cyclohexane derivative. In addition to molecular stability, conformational analysis is used to predict reaction products. One commonly cited example of the use of conformational analysis is a bi-molecular elimination reaction (E2). This reaction proceeds most readily when the nucleophile attacks the hydrogen that is antiperiplanar to the leaving group. A molecular orbital analysis of this phenomenon suggests that this conformation provides the best overlap between the electrons in the R-H σ bonding orbital that is undergoing nucleophilic attack and the empty σ* antibonding orbital of the R-X bond that is being broken. By exploiting this effect, conformational analysis can be used to design molecules that possess enhanced reactivity. The physical processes which give rise to bond rotation barriers are complex, and these barriers have been extensively studied through experimental and theoretical methods. A number of recent articles have investigated the predominance of the steric, electrostatic, and hyperconjugative contributions to rotational barriers in ethane, butane, and more substituted molecules. Non-covalent interactions. Chemists use the study of intramolecular and intermolecular non-covalent bonding/interactions in molecules to evaluate reactivity. Such interactions include, but are not limited to, hydrogen bonding, electrostatic interactions between charged molecules, dipole-dipole interactions, polar-π and cation-π interactions, π-stacking, donor-acceptor chemistry, and halogen bonding. In addition, the hydrophobic effect—the association of organic compounds in water—is an electrostatic, non-covalent interaction of interest to chemists. The precise physical origin of the hydrophobic effect lies in many complex interactions, but it is believed to be the most important component of biomolecular recognition in water. For example, researchers elucidated the structural basis for folic acid recognition by folate receptor proteins. 
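As a worked illustration of the A-value additivity described above for conformational analysis, the short Python sketch below converts a free-energy preference for the equatorial conformer into an equilibrium population at room temperature. The A-value assumed for a methyl group (about 1.7 kcal/mol) is a commonly quoted approximate figure, used here only as an input.

import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.15     # room temperature, K

def equatorial_fraction(a_value_kcal):
    # K = [equatorial]/[axial] = exp(A/RT), since the equatorial form is lower in free energy
    K = math.exp(a_value_kcal / (R * T))
    return K / (1.0 + K)

# Methylcyclohexane, assuming an A-value of about 1.7 kcal/mol for the methyl group:
print(round(equatorial_fraction(1.7), 3))  # roughly 0.95, i.e. about 95% equatorial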
The strong interaction between folic acid and folate receptor was attributed to both hydrogen bonds and hydrophobic interactions. The study of non-covalent interactions is also used to study binding and cooperativity in supramolecular assemblies and macrocyclic compounds such as crown ethers and cryptands, which can act as hosts to guest molecules. Acid–base chemistry. The properties of acids and bases are relevant to physical organic chemistry. Organic chemists are primarily concerned with Brønsted–Lowry acids/bases as proton donors/acceptors and Lewis acids/bases as electron acceptors/donors in organic reactions. Chemists use a series of factors developed from physical chemistry (electronegativity/induction, bond strengths, resonance, hybridization, aromaticity, and solvation) to predict relative acidities and basicities. The hard/soft acid/base principle is utilized to predict molecular interactions and reaction direction. In general, interactions between molecules of the same type are preferred. That is, hard acids will associate with hard bases, and soft acids with soft bases. The concept of hard acids and bases is often exploited in the synthesis of inorganic coordination complexes. Kinetics. Physical organic chemists use the mathematical foundation of chemical kinetics to study the rates of reactions and reaction mechanisms. Unlike thermodynamics, which is concerned with the relative stabilities of the products and reactants (Δ"G"°) and their equilibrium concentrations, the study of kinetics focuses on the free energy of activation (Δ"G"‡) of a reaction, the difference in free energy between the reactant structure and the transition state structure, and therefore allows a chemist to study the process of equilibration. Mathematically derived formalisms such as the Hammond Postulate, the Curtin-Hammett principle, and the theory of microscopic reversibility are often applied to organic chemistry. Chemists have also used the principle of thermodynamic versus kinetic control to influence reaction products. Rate laws. The study of chemical kinetics is used to determine the rate law for a reaction. The rate law provides a quantitative relationship between the rate of a chemical reaction and the concentrations or pressures of the chemical species present. Rate laws must be determined by experimental measurement and generally cannot be elucidated from the chemical equation. The experimentally determined rate law refers to the stoichiometry of the transition state structure relative to the ground state structure. Determination of the rate law was historically accomplished by monitoring the concentration of a reactant during a reaction through gravimetric analysis, but today it is almost exclusively done through fast and unambiguous spectroscopic techniques. In most cases, the determination of rate equations is simplified by adding a large excess ("flooding") of all but one of the reactants. Catalysis. The study of catalysis and catalytic reactions is very important to the field of physical organic chemistry. A catalyst participates in the chemical reaction but is not consumed in the process. A catalyst lowers the activation energy barrier (Δ"G"‡), increasing the rate of a reaction by either stabilizing the transition state structure or destabilizing a key reaction intermediate, and as only a small amount of catalyst is required it can provide economic access to otherwise expensive or difficult to synthesize organic molecules. 
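The link between a lowered activation barrier and a faster reaction can be made quantitative with the Eyring equation. The Python sketch below estimates the rate acceleration produced by an assumed reduction of Δ"G"‡ from 100 kJ/mol to 80 kJ/mol; both barrier heights are arbitrary illustrative numbers, not data for any particular catalyst.

import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
h   = 6.62607015e-34  # Planck constant, J*s
R   = 8.314           # gas constant, J/(mol*K)
T   = 298.15          # temperature, K

def eyring_rate_constant(dG_activation_kJ):
    # Eyring equation, k = (k_B*T/h) * exp(-dG_act/RT), transmission coefficient taken as 1
    return (k_B * T / h) * math.exp(-dG_activation_kJ * 1000 / (R * T))

uncatalysed = eyring_rate_constant(100.0)  # assumed uncatalysed barrier
catalysed   = eyring_rate_constant(80.0)   # assumed catalysed barrier
print(catalysed / uncatalysed)             # roughly a 3000-fold acceleration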
Catalysts may also influence a reaction rate by changing the mechanism of the reaction. Kinetic isotope effect. Although a rate law provides the stoichiometry of the transition state structure, it does not provide any information about breaking or forming bonds. The substitution of an isotope near a reactive position often leads to a change in the rate of a reaction. Isotopic substitution changes the potential energy of reaction intermediates and transition states because heavier isotopes form stronger bonds with other atoms. Atomic mass affects the zero-point vibrational state of the associated molecules, leading to shorter, stronger bonds in molecules with heavier isotopes and longer, weaker bonds in molecules with lighter isotopes. Because vibrational motions will often change during the course of a reaction, due to the making and breaking of bonds, the frequencies will be affected, and the substitution of an isotope can provide insight into the reaction mechanism and rate law. Substituent effects. The study of how substituents affect the reactivity of a molecule or the rate of reactions is of significant interest to chemists. Substituents can exert an effect through both steric and electronic interactions, the latter of which include resonance and inductive effects. The polarizability of a molecule can also be affected. Most substituent effects are analyzed through linear free energy relationships (LFERs). The most common of these is the Hammett Plot Analysis. This analysis compares the effect of various substituents on the ionization of benzoic acid with their impact on diverse chemical systems. The parameters of the Hammett plots are sigma (σ) and rho (ρ). The value of σ indicates the acidity of substituted benzoic acid relative to the unsubstituted form. A positive σ value indicates the compound is more acidic, while a negative value indicates that the substituted version is less acidic. The ρ value is a measure of the sensitivity of the reaction to the change in substituent. The standard σ scale, however, does not capture the direct resonance stabilization of charge at the reaction center. Therefore, two new scales were produced that evaluate the stabilization of localized charge through resonance. One is σ+, which concerns substituents that stabilize positive charges via resonance, and the other is σ− which is for groups that stabilize negative charges via resonance. Hammett analysis can be used to help elucidate the possible mechanisms of a reaction. For example, if it is predicted that the transition state structure has a build-up of negative charge relative to the ground state structure, then electron-withdrawing groups would be expected to increase the rate of the reaction. Other LFER scales have been developed. Steric and polar effects are analyzed through Taft Parameters. Changing the solvent instead of the reactant can provide insight into changes in charge during the reaction. The Grunwald-Winstein Plot provides quantitative insight into these effects. Solvent effects. Solvents can have a powerful effect on solubility, stability, and reaction rate. A change in solvent can also allow a chemist to influence the thermodynamic or kinetic control of the reaction. Reactions proceed at different rates in different solvents due to the change in charge distribution during a chemical transformation. Solvent effects may operate on the ground state and/or transition state structures. An example of the effect of solvent on organic reactions is seen in the comparison of SN1 and SN2 reactions. 
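As an illustration of the Hammett analysis described above, relative rate constants for a substituted reaction series can be fitted to log("k"/"k"0) = ρσ to extract the reaction constant ρ. In the Python sketch below the σ values are commonly quoted approximate para-substituent constants, while the relative rate constants are invented solely to demonstrate the fitting procedure.

import math

# substituent: (sigma_para, relative rate constant k/k0); the k values are invented
data = {
    "H":    (0.00, 1.00),
    "CH3":  (-0.17, 0.45),
    "OCH3": (-0.27, 0.28),
    "Cl":   (0.23, 3.0),
    "NO2":  (0.78, 40.0),
}

sigmas = [s for s, _ in data.values()]
logs   = [math.log10(k) for _, k in data.values()]

# Least-squares slope of log(k/k0) against sigma gives rho; a positive rho means
# electron-withdrawing groups accelerate the reaction (negative charge builds up
# in the transition state).
n = len(sigmas)
mean_s, mean_l = sum(sigmas) / n, sum(logs) / n
rho = (sum((s - mean_s) * (l - mean_l) for s, l in zip(sigmas, logs))
       / sum((s - mean_s) ** 2 for s in sigmas))
print(round(rho, 2))  # about 2 for this invented data set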
Solvent can also have a significant effect on the thermodynamic equilibrium of a system, for instance as in the case of keto-enol tautomerizations. In non-polar aprotic solvents, the enol form is strongly favored due to the formation of an intramolecular hydrogen-bond, while in polar aprotic solvents, such as methylene chloride, the enol form is less favored due to the interaction between the polar solvent and the polar diketone. In protic solvents, the equilibrium lies towards the keto form as the intramolecular hydrogen bond competes with hydrogen bonds originating from the solvent. A modern example of the study of solvent effects on chemical equilibrium can be seen in a study of the epimerization of chiral cyclopropylnitrile Grignard reagents. This study reports that the equilibrium preference for the "cis" form of the Grignard reagent is markedly enhanced in THF as a reaction solvent, relative to diethyl ether. However, the faster rate of "cis-trans isomerization" in THF results in a loss of stereochemical purity. This is a case where understanding the effect of solvent on the stability of the molecular configuration of a reagent is important with regard to the selectivity observed in an asymmetric synthesis. Quantum chemistry. Many aspects of the structure-reactivity relationship in organic chemistry can be rationalized through resonance, electron pushing, induction, the eight-electron rule, and s-p hybridization, but these are only helpful formalisms and do not represent physical reality. Due to these limitations, a true understanding of physical organic chemistry requires a more rigorous approach grounded in particle physics. Quantum chemistry provides a rigorous theoretical framework capable of predicting the properties of molecules through calculation of a molecule's electronic structure, and it has become a readily available tool for physical organic chemists in the form of popular software packages. The power of quantum chemistry is built on the wave model of the atom, in which the nucleus is a very small, positively charged sphere surrounded by a diffuse electron cloud. Particles are described by their associated wavefunction, an equation which contains all of the information associated with that particle; this information is extracted from the wavefunction through the use of mathematical operators. Time-independent Schrödinger equation ("general") formula_0 The energy associated with a particular wavefunction, perhaps the most important information contained in a wavefunction, can be extracted by solving the Schrödinger equation (above, Ψ is the wavefunction, E is the energy, and Ĥ is the Hamiltonian operator) in which an appropriate Hamiltonian operator is applied. In the various forms of the Schrödinger equation, the overall size of a particle's probability distribution increases with decreasing particle mass. For this reason, nuclei are of negligible size in relation to much lighter electrons and are treated as point charges in practical applications of quantum chemistry. Due to complex interactions which arise from electron-electron repulsion, exact solutions of the Schrödinger equation are only possible for systems with one electron such as the hydrogen atom, H2+, H32+, etc.; however, from these simple models arise all the familiar atomic (s,p,d,f) and bonding (σ,π) orbitals. 
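The kind of information an exact one-electron solution provides can be seen from the hydrogen atom, whose energy levels depend only on the principal quantum number (−13.6 eV divided by the square of that number). The short Python sketch below evaluates the energy gap and wavelength of the n = 3 → 2 emission line; the Rydberg energy and the product hc are standard constants, rounded here.

# Hydrogen-atom energy levels from the exact one-electron solution
RYDBERG_EV = 13.6057          # Rydberg energy in eV (rounded)
H_C_EV_NM  = 1239.84          # h*c in eV*nm (rounded)

def energy_level(n):
    return -RYDBERG_EV / n**2

delta_E = energy_level(3) - energy_level(2)   # positive: level 3 lies above level 2
print(round(H_C_EV_NM / delta_E, 1))          # about 656 nm, the red Balmer line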
In systems with multiple electrons, an overall multielectron wavefunction describes all of their properties at once. Such wavefunctions are generated through the linear addition of single electron wavefunctions to generate an initial guess, which is repeatedly modified until its associated energy is minimized. Thousands of iterations are often required until a satisfactory solution is found, so such calculations are performed by powerful computers. Importantly, the solutions for atoms with multiple electrons give properties such as diameter and electronegativity which closely mirror experimental data and the patterns found in the periodic table. The solutions for molecules, such as methane, provide accurate representations of their electronic structure which are unobtainable by experimental methods. Instead of four discrete σ-bonds from carbon to each hydrogen atom, theory predicts a set of four bonding molecular orbitals which are delocalized across the entire molecule. Similarly, the true electronic structure of 1,3-butadiene shows delocalized π-bonding molecular orbitals stretching through the entire molecule rather than two isolated double bonds as predicted by a simple Lewis structure. A complete electronic structure offers great predictive power for organic transformations and dynamics, especially in cases concerning aromatic molecules, extended π systems, bonds between metal ions and organic molecules, molecules containing nonstandard heteroatoms like selenium and boron, and the conformational dynamics of large molecules such as proteins wherein the many approximations in chemical formalisms make structure and reactivity prediction impossible. An example of how electronic structure determination is a useful tool for the physical organic chemist is the metal-catalyzed dearomatization of benzene. Chromium tricarbonyl is highly electrophilic due to the withdrawal of electron density from filled chromium d-orbitals into antibonding CO orbitals, and is able to covalently bond to the face of a benzene molecule through delocalized molecular orbitals. The CO ligands inductively draw electron density from benzene through the chromium atom, and dramatically activate benzene to nucleophilic attack. Nucleophiles are then able to react to make cyclohexadienes, which can be used in further transformations such as Diels-Alder cycloadditions. Quantum chemistry can also provide insight into the mechanism of an organic transformation without the collection of any experimental data. Because wavefunctions provide the total energy of a given molecular state, guessed molecular geometries can be optimized to give relaxed molecular structures very similar to those found through experimental methods. Reaction coordinates can then be simulated, and transition state structures solved. Solving a complete energy surface for a given reaction is therefore possible, and such calculations have been applied to many problems in organic chemistry where kinetic data is unavailable or difficult to acquire. Spectroscopy, spectrometry, and crystallography. Physical organic chemistry often entails the identification of molecular structure, dynamics, and the concentration of reactants in the course of a reaction. 
The interaction of molecules with light can afford a wealth of data about such properties through nondestructive spectroscopic experiments, with light absorbed when the energy of a photon matches the difference in energy between two states in a molecule and emitted when an excited state in a molecule collapses to a lower energy state. Spectroscopic techniques are broadly classified by the type of excitation being probed, such as vibrational, rotational, electronic, nuclear magnetic resonance (NMR), and electron paramagnetic resonance spectroscopy. In addition to spectroscopic data, structure determination is often aided by complementary data collected from X-Ray diffraction and mass spectrometric experiments. NMR and EPR spectroscopy. One of the most powerful tools in physical organic chemistry is NMR spectroscopy. An external magnetic field applied to a nucleus with nonzero spin generates two discrete states, with positive and negative spin values diverging in energy; the difference in energy can then be probed by determining the frequency of light needed to excite a change in spin state for a given magnetic field. Nuclei that are not chemically equivalent in a given molecule absorb at different frequencies, and the integrated peak area in an NMR spectrum is proportional to the number of nuclei responding to that frequency. It is possible to quantify the relative concentration of different organic molecules simply by integrating peaks in the spectrum, and many kinetic experiments can be easily and quickly performed by following the progress of a reaction within one NMR sample. Proton NMR is often used by the synthetic organic chemist because protons associated with certain functional groups give characteristic absorption energies, but NMR spectroscopy can also be performed on isotopes of nitrogen, carbon, fluorine, phosphorus, boron, and a host of other elements. In addition to simple absorption experiments, it is also possible to determine the rate of fast atom exchange reactions through suppression exchange measurements, interatomic distances through multidimensional nuclear Overhauser effect experiments, and through-bond spin-spin coupling through homonuclear correlation spectroscopy. In addition to the spin excitation properties of nuclei, it is also possible to study the properties of organic radicals through the same fundamental technique. Unpaired electrons also have a net spin, and an external magnetic field allows for the extraction of similar information through electron paramagnetic resonance (EPR) spectroscopy. Vibrational spectroscopy. Vibrational spectroscopy, or infrared (IR) spectroscopy, allows for the identification of functional groups and, due to its low expense and robustness, is often used in teaching labs and the real-time monitoring of reaction progress in difficult-to-reach environments (high pressure, high temperature, gas phase, phase boundaries). Molecular vibrations are quantized in an analogous manner to electronic wavefunctions, with integer increases in the vibrational quantum number leading to higher energy states. The difference in energy between vibrational states is nearly constant, often falling in the energy range corresponding to infrared photons, because at normal temperatures molecular vibrations closely resemble harmonic oscillators. It allows for the crude identification of functional groups in organic molecules, but spectra are complicated by vibrational coupling between nearby functional groups in complex molecules. 
Therefore, its utility in structure determination is usually limited to simple molecules. Further complicating matters is that some vibrations do not induce a change in the molecular dipole moment and will not be observable with standard IR absorption spectroscopy. These can instead be probed through Raman spectroscopy, but this technique requires a more elaborate apparatus and is less commonly performed. However, as Raman spectroscopy relies on light scattering it can be performed on microscopic samples such as the surface of a heterogeneous catalyst, a phase boundary, or on a one microliter (μL) subsample within a larger liquid volume. Vibrational spectroscopy is also used by astronomers to study the composition of molecular gas clouds, extrasolar planetary atmospheres, and planetary surfaces. Electronic excitation spectroscopy. Electronic excitation spectroscopy, or ultraviolet-visible (UV-vis) spectroscopy, is performed in the visible and ultraviolet regions of the electromagnetic spectrum and is useful for probing the difference in energy between the highest energy occupied (HOMO) and lowest energy unoccupied (LUMO) molecular orbitals. This information is useful to physical organic chemists in the design of organic photochemical systems and dyes, as absorption of different wavelengths of visible light gives organic molecules color. A detailed understanding of an electronic structure is therefore helpful in explaining electronic excitations, and through careful control of molecular structure it is possible to tune the HOMO-LUMO gap to give desired colors and excited state properties. Mass spectrometry. Mass spectrometry is a technique which allows for the measurement of molecular mass and offers complementary data to spectroscopic techniques for structural identification. In a typical experiment a gas phase sample of an organic material is ionized and the resulting ionic species are accelerated by an applied electric field into a magnetic field. The deflection imparted by the magnetic field, often combined with the time it takes for the molecule to reach a detector, is then used to calculate the mass of the molecule. Often in the course of sample ionization large molecules break apart, and the resulting data show a parent mass and a number of smaller fragment masses; such fragmentation can give rich insight into the sequence of proteins and nucleic acid polymers. In addition to the mass of a molecule and its fragments, the distribution of isotopic variant masses can also be determined and the qualitative presence of certain elements identified due to their characteristic natural isotope distribution. The ratio of fragment mass population to the parent ion population can be compared against a library of empirical fragmentation data and matched to a known molecular structure. Combined gas chromatography and mass spectrometry is used to qualitatively identify molecules and quantitatively measure concentration with great precision and accuracy, and is widely used to test for small quantities of biomolecules and illicit narcotics in blood samples. For synthetic organic chemists it is a useful tool for the characterization of new compounds and reaction products. Crystallography. Unlike spectroscopic methods, X-ray crystallography can provide unambiguous structure determination and gives precise bond angles and lengths unavailable through spectroscopy. 
It is often used in physical organic chemistry to provide an absolute molecular configuration and is an important tool in improving the synthesis of a pure enantiomeric substance. It is also the only way to identify the position and bonding of elements that lack an NMR active nucleus such as oxygen. Indeed, before x-ray structural determination methods were made available in the early 20th century all organic structures were entirely conjectural: tetrahedral carbon, for example, was only confirmed by the crystal structure of diamond, and the delocalized structure of benzene was confirmed by the crystal structure of hexamethylbenzene. While crystallography provides organic chemists with highly satisfying data, it is not an everyday technique in organic chemistry because a perfect single crystal of a target compound must be grown. Only complex molecules, for which NMR data cannot be unambiguously interpreted, require this technique. In the example below, the structure of the host–guest complex would have been quite difficult to solve without a single crystal structure: there are no protons on the fullerene, and with no covalent bonds between the two halves of the organic complex spectroscopy alone was unable to prove the hypothesized structure. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "E\\Psi=\\hat H \\Psi" } ]
https://en.wikipedia.org/wiki?curid=11304514
11304983
Gliding flight
Mode of flight Gliding flight is heavier-than-air flight without the use of thrust; the term volplaning also refers to this mode of flight in animals. It is employed by gliding animals and by aircraft such as gliders. This mode of flight involves flying a significant distance horizontally compared to its descent and therefore can be distinguished from a mostly straight downward descent like that of a round parachute. Although the human application of gliding flight usually refers to aircraft designed for this purpose, most powered aircraft are capable of gliding without engine power. As with sustained flight, gliding generally requires the application of an airfoil, such as the wings on aircraft or birds, or the gliding membrane of a gliding possum. However, gliding can be achieved with a flat (uncambered) wing, as with a simple paper plane, or even with card-throwing. Furthermore, some aircraft with lifting bodies and animals such as the flying snake can achieve gliding flight without any wings by creating a flattened surface underneath. Aircraft ("gliders"). Most winged aircraft can glide to some extent, but there are several types of aircraft designed specifically to glide. The main human application is currently recreational, though during the Second World War military gliders were used for carrying troops and equipment into battle. The types of aircraft that are used for sport and recreation are classified as gliders (sailplanes), hang gliders and paragliders. The latter two types are often foot-launched. The design of all three types enables them to repeatedly climb using rising air and then to glide before finding the next source of lift. When done in gliders (sailplanes), the sport is known as gliding and sometimes as soaring. For foot-launched aircraft, it is known as hang gliding and paragliding. Radio-controlled gliders with fixed wings are also soared by enthusiasts. In addition to motor gliders, some powered aircraft are designed for routine glides during part of their flight, usually when landing after a period of powered flight. Aircraft which are not designed for gliding may be forced to perform gliding flight in an emergency, such as a total engine failure or fuel exhaustion. See list of airline flights that required gliding flight. Gliding in a helicopter is called autorotation. Gliding animals. A number of animals have separately evolved gliding many times, without any single ancestor. Birds. Birds in particular use gliding flight to minimise their use of energy. Large birds are notably adept at gliding. Like recreational aircraft, birds can alternate periods of gliding with periods of soaring in rising air, and so spend a considerable time airborne with a minimal expenditure of energy. The great frigatebird in particular is capable of continuous flights up to several weeks. Mammals. To assist gliding, some mammals have evolved a structure called the patagium. This is a membranous structure found stretched between a range of body parts. It is most highly developed in bats. For similar reasons to birds, bats can glide efficiently. In bats, the skin forming the surface of the wing is an extension of the skin of the abdomen that runs to the tip of each digit, uniting the forelimb with the body. The patagium of a bat has four distinct parts. Other mammals such as gliding possums and flying squirrels also glide using a patagium, but with much poorer efficiency than bats. They cannot gain height. 
The animal launches itself from a tree, spreading its limbs to expose the gliding membranes, usually to get from tree to tree in rainforests as an efficient means of both locating food and evading predators. This form of arboreal locomotion is common in tropical regions such as Borneo and Australia, where the trees are tall and widely spaced. In flying squirrels, the patagium stretches from the fore- to the hind-limbs along the length of each side of the torso. In the sugar glider, the patagia extend from the fifth finger of each hand to the first toe of each foot. This creates an aerofoil enabling them to glide 50 metres or more. This gliding flight is regulated by changing the curvature of the membrane or moving the legs and tail. Fish, reptiles, amphibians and other gliding animals. In addition to mammals and birds, other animals, notably flying fish, flying snakes, flying frogs and flying squid, also glide. The flights of flying fish are typically around 50 meters (160 ft), though they can use updrafts at the leading edge of waves to cover considerably longer distances. To glide upward out of the water, a flying fish moves its tail up to 70 times per second. It then spreads its pectoral fins and tilts them slightly upward to provide lift. At the end of a glide, it folds its pectoral fins to re-enter the sea, or drops its tail into the water to push against the water to lift itself for another glide, possibly changing direction. The curved profile of the "wing" is comparable to the aerodynamic shape of a bird wing. The fish is able to increase its time in the air by flying straight into or at an angle to the direction of updrafts created by a combination of air and ocean currents. Snakes of the genus "Chrysopelea" are also known by the common name "flying snake". Before launching from a branch, the snake makes a J-shape bend. After thrusting its body up and away from the tree, it sucks in its abdomen and flares out its ribs to turn its body into a "pseudo concave wing", all the while making a continual serpentine motion of lateral undulation parallel to the ground to stabilise its direction in mid-air in order to land safely. Flying snakes are able to glide better than flying squirrels and other gliding animals, despite the lack of limbs, wings, or any other wing-like projections, gliding through the forest and jungle they inhabit over distances as great as 100 m. Their destination is mostly predicted by ballistics; however, they can exercise some in-flight attitude control by "slithering" in the air. Flying lizards of the genus Draco are capable of gliding flight via membranes that may be extended to create wings (patagia), formed by an enlarged set of ribs. Gliding flight has evolved independently among 3,400 species of frogs from both New World (Hylidae) and Old World (Rhacophoridae) families. This parallel evolution is seen as an adaptation to their life in trees, high above the ground. Characteristics of the Old World species include "enlarged hands and feet, full webbing between all fingers and toes, lateral skin flaps on the arms and legs". Forces. Three principal forces act on aircraft and animals when gliding: weight, lift and drag. As the aircraft or animal descends, the air moving over the wings generates lift. The lift force acts slightly forward of vertical because it is created at right angles to the airflow, which comes from slightly below as the glider descends (see angle of attack). This horizontal component of lift is enough to overcome drag and allows the glider to accelerate forward. 
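In a steady glide these forces balance: resolving the weight along and perpendicular to the flight path gives tan γ = drag/lift, so the glide angle follows directly from the lift-to-drag ratio. The Python sketch below evaluates this relationship for a few assumed lift-to-drag ratios, which are illustrative values rather than data for any particular aircraft or animal.

import math

def glide_angle_deg(lift_to_drag):
    # Steady, unaccelerated glide: tan(gamma) = D/L, so gamma = atan(1/(L/D))
    return math.degrees(math.atan(1.0 / lift_to_drag))

for ld in (10, 30, 60):   # assumed lift-to-drag ratios
    print(ld, round(glide_angle_deg(ld), 2))
# An L/D of 60 corresponds to a glide angle of roughly one degree below the horizontal.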
Even though the weight causes the aircraft to descend, if the air is rising faster than the sink rate, there will be a gain of altitude. Lift to drag ratio. The lift-to-drag ratio, or "L/D ratio", is the amount of lift generated by a wing or vehicle, divided by the drag it creates by moving through the air. A higher or more favourable L/D ratio is typically one of the major goals in aircraft design; since a particular aircraft's needed lift is set by its weight, delivering that lift with lower drag leads directly to better fuel economy and climb performance. The effect of airspeed on the rate of descent can be depicted by a polar curve. These curves show the airspeed where minimum sink can be achieved and the airspeed with the best L/D ratio. The curve is an inverted U-shape. As speed reduces, the amount of lift falls rapidly around the stalling speed. The peak of the 'U' is at minimum drag. As lift and drag are both proportional to the coefficient of lift and drag respectively multiplied by the same factor (1/2 ρv²S, where ρ is the air density, v the airspeed and S the wing area), the L/D ratio can be simplified to the coefficient of lift divided by the coefficient of drag, or Cl/Cd, and since this common factor cancels, the ratio L/D or Cl/Cd is typically plotted against angle of attack. Drag. Induced drag is caused by the generation of lift by the wing. Lift generated by a wing is perpendicular to the relative wind, but since wings typically fly at some small angle of attack, this means that a component of the force is directed to the rear. The rearward component of this force (parallel with the relative wind) is seen as drag. At low speeds an aircraft has to generate lift with a higher angle of attack, thereby leading to greater induced drag. This term dominates the low-speed side of the drag graph, the left side of the U. Profile drag is caused by air hitting the wing and other parts of the aircraft. This form of drag, also known as wind resistance, varies with the square of speed (see drag equation). For this reason profile drag is more pronounced at higher speeds, forming the right side of the drag graph's U shape. Profile drag is lowered primarily by reducing cross section and streamlining. Lift increases steadily up to the critical angle of attack, and it is normally at the point where the combined drag is lowest that the wing or aircraft performs at its best L/D. Designers will typically select a wing design which produces an L/D peak at the chosen cruising speed for a powered fixed-wing aircraft, thereby maximizing economy. Like all things in aeronautical engineering, the lift-to-drag ratio is not the only consideration for wing design. Performance at high angle of attack and a gentle stall are also important. Minimising drag is of particular interest in the design and operation of high-performance gliders (sailplanes), the largest of which can have glide ratios approaching 60 to 1, though many others have a lower performance; 25:1 being considered adequate for training use. Glide ratio. When flown at a constant speed in still air a glider moves forwards a certain distance for a certain distance downwards. The ratio of the distance forwards to downwards is called the "glide ratio". The glide ratio (E) is numerically equal to the lift-to-drag ratio under these conditions, but is not necessarily equal during other manoeuvres, especially if speed is not constant. A glider's glide ratio varies with airspeed, but there is a maximum value which is frequently quoted. 
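The relationship between the polar curve, minimum sink and best glide can be shown numerically. In the Python sketch below the sink rate is modelled by a simple quadratic polar whose coefficients are invented for illustration and do not describe any real glider; the airspeeds for minimum sink and for the best glide ratio are then found by a brute-force search.

def sink_rate(v):
    # Toy polar: sink rate in m/s as a function of airspeed v in m/s (invented coefficients)
    return 0.001 * v**2 - 0.05 * v + 1.2

speeds = [v / 10.0 for v in range(180, 501)]   # 18.0 to 50.0 m/s

min_sink_speed   = min(speeds, key=sink_rate)
best_glide_speed = max(speeds, key=lambda v: v / sink_rate(v))

print(min_sink_speed, round(sink_rate(min_sink_speed), 2))                          # lowest sink
print(best_glide_speed, round(best_glide_speed / sink_rate(best_glide_speed), 1))   # best glide ratio
# The best-glide speed comes out higher than the minimum-sink speed, as the text describes.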
Glide ratio usually varies little with vehicle loading; a heavier vehicle glides faster, but nearly maintains its glide ratio. Glide ratio (or "finesse") is the cotangent of the downward angle, the "glide angle" (γ). Alternatively it is also the forward speed divided by sink speed (unpowered aircraft): formula_0 "Glide number" (ε) is the reciprocal of the glide ratio, though the two are sometimes confused. Importance of the glide ratio in gliding flight. Although the best glide ratio is important when measuring the performance of a gliding aircraft, its glide ratio at a range of speeds also determines its success (see article on gliding). Pilots sometimes fly at the aircraft's best L/D by precisely controlling airspeed and smoothly operating the controls to reduce drag. However, the likely strength of the next lift, the need to minimise time spent in strongly sinking air, and the strength of the wind also affect the optimal speed to fly. Pilots fly faster to get quickly through sinking air, and when heading into wind to optimise the glide angle relative to the ground. To achieve higher speed across country, gliders (sailplanes) are often loaded with water ballast to increase the airspeed and so reach the next area of lift sooner. This has little effect on the glide angle since the increases in the rate of sink and in the airspeed remain in proportion and thus the heavier aircraft achieves optimal L/D at a higher airspeed. If the areas of lift are strong on the day, the benefits of ballast outweigh the slower rate of climb. If the air is rising faster than the rate of sink, the aircraft will climb. At lower speeds an aircraft may have a worse glide ratio but it will also have a lower rate of sink. A low airspeed also improves its ability to turn tightly in the centre of the rising air where the rate of ascent is greatest. A sink rate of approximately 1.0 m/s is the most that a practical hang glider or paraglider could have before it would limit the occasions that a climb was possible to only when there was strongly rising air. Gliders (sailplanes) have minimum sink rates of between 0.4 and 0.6 m/s depending on the class. Aircraft such as airliners may have a better glide ratio than a hang glider, but would rarely be able to thermal because of their much higher forward speed and their much higher sink rate. (The Boeing 767 in the Gimli Glider incident achieved a glide ratio of only 12:1). The loss of height can be measured at several speeds and plotted on a "polar curve" to calculate the best speed to fly in various conditions, such as when flying into wind or when in sinking air. Other polar curves can be measured after loading the glider with water ballast. As mass increases, the best glide ratio is achieved at higher speeds (the glide ratio itself is not increased). Soaring. Soaring animals and aircraft may alternate glides with periods of soaring in rising air. Five principal types of lift are used: thermals, ridge lift, lee waves, convergences and dynamic soaring. Dynamic soaring is used predominantly by birds, and some model aircraft, though it has also been achieved on rare occasions by piloted aircraft. Soaring birds make use of several of these types of lift. For humans, soaring is the basis for three air sports: gliding, hang gliding and paragliding. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "{L \\over D}={{\\Delta s} \\over {\\Delta h}}={v_{\\text{forward}} \\over v_{\\text{down}}}" } ]
https://en.wikipedia.org/wiki?curid=11304983
11306
Frame problem
Issue in artificial intelligence and categorical algebra In artificial intelligence, with implications for cognitive science, the frame problem describes an issue with using first-order logic to express facts about a robot in the world. Representing the state of a robot with traditional first-order logic requires the use of many axioms that simply imply that things in the environment do not change arbitrarily. For example, Hayes describes a "block world" with rules about stacking blocks together. In a first-order logic system, additional axioms are required to make inferences about the environment (for example, that a block cannot change position unless it is physically moved). The frame problem is the problem of finding adequate collections of axioms for a viable description of a robot environment. John McCarthy and Patrick J. Hayes defined this problem in their 1969 article, "Some Philosophical Problems from the Standpoint of Artificial Intelligence". In this paper, and many that came after, the formal mathematical problem was a starting point for more general discussions of the difficulty of knowledge representation for artificial intelligence, such as how to provide rational default assumptions and what humans consider common sense in a virtual environment. In philosophy, the frame problem became more broadly construed in connection with the problem of limiting the beliefs that have to be updated in response to actions. In the logical context, actions are typically specified by what they change, with the implicit assumption that everything else (the frame) remains unchanged. Description. The frame problem occurs even in very simple domains. A scenario with a door, which can be open or closed, and a light, which can be on or off, is statically represented by two propositions formula_0 and formula_1. If these conditions can change, they are better represented by two predicates formula_2 and formula_3 that depend on time; such predicates are called fluents. A domain in which the door is closed and the light off at time 0, and the door opened at time 1, can be directly represented in logic by the following formulae: formula_4 formula_5 formula_6 The first two formulae represent the initial situation; the third formula represents the effect of executing the action of opening the door at time 1. If such an action had preconditions, such as the door being unlocked, it would have been represented by formula_7. In practice, one would have a predicate formula_8 for specifying when an action is executed and a rule formula_9 for specifying the effects of actions. The article on the situation calculus gives more details. While the three formulae above are a direct expression in logic of what is known, they do not suffice to correctly draw consequences. The expected situation, in which the door is open and the light is still off at time 1, is consistent with the three formulae above, but it is not the only one. Indeed, another set of conditions that is consistent with the three formulae above is one in which the door is open and the light is on at time 1: nothing in the formulae rules out the light spontaneously changing state. The frame problem is that specifying only which conditions are changed by the actions does not entail that all other conditions are not changed. This problem can be solved by adding the so-called “frame axioms”, which explicitly specify that all conditions not affected by actions are not changed while executing that action. 
For example, since the action executed at time 0 is that of opening the door, a frame axiom would state that the status of the light does not change from time 0 to time 1: formula_10 The frame problem is that one such frame axiom is necessary for every pair of action and condition such that the action does not affect the condition. In other words, the problem is that of formalizing a dynamical domain without explicitly specifying the frame axioms. The solution proposed by McCarthy to solve this problem involves assuming that a minimal amount of condition changes have occurred; this solution is formalized using the framework of circumscription. The Yale shooting problem, however, shows that this solution is not always correct. Alternative solutions were then proposed, involving predicate completion, fluent occlusion, successor state axioms, etc.; they are explained below. By the end of the 1980s, the frame problem as defined by McCarthy and Hayes was solved. Even after that, however, the term “frame problem” was still used, in part to refer to the same problem but under different settings (e.g., concurrent actions), and in part to refer to the general problem of representing and reasoning with dynamical domains. Solutions. The following solutions depict how the frame problem is solved in various formalisms. The formalisms themselves are not presented in full: what is presented are simplified versions that are sufficient to explain the full solution. Fluent occlusion solution. This solution was proposed by Erik Sandewall, who also defined a formal language for the specification of dynamical domains; therefore, such a domain can be first expressed in this language and then automatically translated into logic. In this article, only the expression in logic is shown, and only in the simplified language with no action names. The rationale of this solution is to represent not only the value of conditions over time, but also whether they can be affected by the last executed action. The latter is represented by another condition, called occlusion. A condition is said to be "occluded" in a given time point if an action has been just executed that makes the condition true or false as an effect. Occlusion can be viewed as “permission to change”: if a condition is occluded, it is relieved from obeying the constraint of inertia. In the simplified example of the door and the light, occlusion can be formalized by two predicates formula_11 and formula_12. The rationale is that a condition can change value only if the corresponding occlusion predicate is true at the next time point. In turn, the occlusion predicate is true only when an action affecting the condition is executed. formula_4 formula_5 formula_13 formula_14 formula_15 In general, every action making a condition true or false also makes the corresponding occlusion predicate true. In this case, formula_16 is true, making the antecedent of the fourth formula above false for formula_17; therefore, the constraint that formula_18 does not hold for formula_17. Therefore, formula_0 can change value, which is also what is enforced by the third formula. In order for this condition to work, occlusion predicates have to be true only when they are made true as an effect of an action. This can be achieved either by circumscription or by predicate completion. 
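The effect of such an inertia assumption can be mimicked procedurally. The following Python sketch is only an informal illustration of the door/light domain, not a faithful encoding of the occlusion formalism: a fluent is allowed to change between time points only when an action affecting it has just been executed, and otherwise its previous value is carried forward.

# Toy simulation of the door/light domain: a fluent keeps its value from one time
# point to the next unless the executed action occludes (may change) it.
effects = {
    "opendoor":  ("open", True),
    "closedoor": ("open", False),
    "switchon":  ("on", True),
}

def step(state, action):
    new_state = dict(state)          # inertia: every fluent is copied unchanged
    if action in effects:
        fluent, value = effects[action]
        new_state[fluent] = value    # only the occluded fluent may change
    return new_state

state = {"open": False, "on": False}  # time 0: door closed, light off
state = step(state, "opendoor")       # action executed at time 0
print(state)                          # {'open': True, 'on': False}: the light has not changed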
It is worth noticing that occlusion does not necessarily imply a change: for example, executing the action of opening the door when it was already open (in the formalization above) makes the predicate formula_19 true and makes formula_0 true; however, formula_0 has not changed value, as it was true already. Predicate completion solution. This encoding is similar to the fluent occlusion solution, but the additional predicates denote change, not permission to change. For example, formula_20 represents the fact that the predicate formula_0 will change from time formula_21 to formula_22. As a result, a predicate changes if and only if the corresponding change predicate is true. An action results in a change if and only if it makes true a condition that was previously false or vice versa. formula_4 formula_5 formula_23 formula_24 formula_25 The third formula is a different way of saying that opening the door causes the door to be opened. Precisely, it states that opening the door changes the state of the door if it had been previously closed. The last two conditions state that a condition changes value at time formula_21 if and only if the corresponding change predicate is true at time formula_21. To complete the solution, the time points in which the change predicates are true have to be as few as possible, and this can be done by applying predicate completion to the rules specifying the effects of actions. Successor state axioms solution. The value of a condition after the execution of an action can be determined by the fact that the condition is true if and only if: the action made it true, or the condition was already true and the action did not make it false. A successor state axiom is a formalization in logic of these two facts. For example, if formula_26 and formula_27 are two conditions used to denote that the action executed at time formula_21 was to open or close the door, respectively, the running example is encoded as follows. formula_4 formula_5 formula_28 formula_29 This solution is centered around the value of conditions, rather than the effects of actions. In other words, there is an axiom for every condition, rather than a formula for every action. Preconditions to actions (which are not present in this example) are formalized by other formulae. The successor state axioms are used in the variant of the situation calculus proposed by Ray Reiter. Fluent calculus solution. The fluent calculus is a variant of the situation calculus. It solves the frame problem by using first-order logic terms, rather than predicates, to represent the states. Converting predicates into terms in first-order logic is called reification; the fluent calculus can be seen as a logic in which predicates representing the state of conditions are reified. The difference between a predicate and a term in first-order logic is that a term is a representation of an object (possibly a complex object composed of other objects), while a predicate represents a condition that can be true or false when evaluated over a given set of terms. In the fluent calculus, each possible state is represented by a term obtained by composition of other terms, each one representing a condition that is true in that state. For example, the state in which the door is open and the light is on is represented by the term formula_30. It is important to notice that a term is not true or false by itself, as it is an object and not a condition. In other words, the term formula_30 represents a possible state, and does not by itself mean that this is the current state. 
A separate condition can be stated to specify that this is actually the state at a given time, e.g., formula_31 means that this is the state at time formula_32. The solution to the frame problem given in the fluent calculus is to specify the effects of actions by stating how a term representing the state changes when the action is executed. For example, the action of opening the door at time 0 is represented by the formula: formula_33 The action of closing the door, which makes a condition false instead of true, is represented in a slightly different way: formula_34 This formula works provided that suitable axioms are given about formula_35 and formula_36, e.g., a term containing the same condition twice is not a valid state (for example, formula_37 is always false for every formula_38 and formula_21). Event calculus solution. The event calculus uses terms for representing fluents, like the fluent calculus, but also has one or more axioms constraining the value of fluents, like the successor state axioms. There are many variants of the event calculus, but one of the simplest and most useful employs a single axiom to represent the law of inertia: formula_39 formula_40 formula_41 The axiom states that a fluent formula_42 holds at a time formula_43, if an event formula_44 happens and initiates formula_42 at an earlier time formula_45, and there is no event formula_46 that happens and terminates formula_42 after or at the same time as formula_45 and before formula_43. To apply the event calculus to a particular problem domain, it is necessary to define the formula_47 and formula_48 predicates for that domain. For example: formula_49 formula_50 formula_51 formula_52 To apply the event calculus to a particular problem in the domain, it is necessary to specify the events that happen in the context of the problem. For example: formula_53. formula_54. To solve a problem, such as "which fluents hold at time 5?", it is necessary to pose the problem as a goal, such as: formula_55 In this case, obtaining the unique solution: formula_56 The event calculus solves the frame problem, eliminating undesired solutions, by using a non-monotonic logic, such as first-order logic with circumscription or by treating the event calculus as a logic program using negation as failure. Default logic solution. The frame problem can be thought of as the problem of formalizing the principle that, by default, "everything is presumed to remain in the state in which it is" (Leibniz, "An Introduction to a Secret Encyclopædia", "c". 1679). This default, sometimes called the "commonsense law of inertia", was expressed by Raymond Reiter in default logic: formula_57 (if formula_58 is true in situation formula_38, and it can be assumed that formula_58 remains true after executing action formula_59, then we can conclude that formula_58 remains true). Steve Hanks and Drew McDermott argued, on the basis of their Yale shooting example, that this solution to the frame problem is unsatisfactory. Hudson Turner showed, however, that it works correctly in the presence of appropriate additional postulates. Answer set programming solution. The counterpart of the default logic solution in the language of answer set programming is a rule with strong negation: formula_60 (if formula_61 is true at time formula_62, and it can be assumed that formula_61 remains true at time formula_63, then we can conclude that formula_61 remains true). Separation logic solution. 
Separation logic is a formalism for reasoning about computer programs using pre/post specifications of the form formula_64. Separation logic is an extension of Hoare logic oriented to reasoning about mutable data structures in computer memory and other dynamic resources, and it has a special connective *, pronounced "and separately", to support independent reasoning about disjoint memory regions. Separation logic employs a "tight" interpretation of pre/post specs, which say that the code can "only" access memory locations guaranteed to exist by the precondition. This leads to the soundness of the most important inference rule of the logic, the "frame rule" formula_65 The frame rule allows descriptions of arbitrary memory outside the footprint (memory accessed) of the code to be added to a specification: this enables the initial specification to concentrate only on the footprint. For example, the inference formula_66 captures that code which sorts a list "x" does not unsort a separate list "y," and it does this without mentioning "y" at all in the initial spec above the line. Automation of the frame rule has led to significant increases in the scalability of automated reasoning techniques for code, eventually deployed industrially to codebases with tens of millions of lines. There appears to be some similarity between the separation logic solution to the frame problem and that of the fluent calculus mentioned above. Action description languages. Action description languages elude the frame problem rather than solving it. An action description language is a formal language with a syntax that is specific for describing situations and actions. For example, that the action formula_67 makes the door open if not locked is expressed by: formula_67 causes formula_0 if formula_68 The semantics of an action description language depends on what the language can express (concurrent actions, delayed effects, etc.) and is usually based on transition systems. Since domains are expressed in these languages rather than directly in logic, the frame problem only arises when a specification given in an action description logic is to be translated into logic. Typically, however, a translation is given from these languages to answer set programming rather than first-order logic. Notes. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "\\mathrm{open}" }, { "math_id": 1, "text": "\\mathrm{on}" }, { "math_id": 2, "text": "\\mathrm{open}(t)" }, { "math_id": 3, "text": "\\mathrm{on}(t)" }, { "math_id": 4, "text": "\\neg \\mathrm{open}(0)" }, { "math_id": 5, "text": "\\neg \\mathrm{on}(0)" }, { "math_id": 6, "text": "\\mathrm{open}(1)" }, { "math_id": 7, "text": "\\neg \\mathrm{locked}(0) \\implies \\mathrm{open}(1)" }, { "math_id": 8, "text": "\\mathrm{executeopen}(t)" }, { "math_id": 9, "text": "\\forall t . \\mathrm{executeopen}(t) \\implies \\mathrm{open}(t+1)" }, { "math_id": 10, "text": "\\mathrm{on}(0) \\iff \\mathrm{on}(1)" }, { "math_id": 11, "text": "\\mathrm{occludeopen}(t)" }, { "math_id": 12, "text": "\\mathrm{occludeon}(t)" }, { "math_id": 13, "text": "\\mathrm{open}(1) \\wedge \\mathrm{occludeopen}(1)" }, { "math_id": 14, "text": "\\forall t . \\neg \\mathrm{occludeopen}(t) \\implies (\\mathrm{open}(t-1) \\iff \\mathrm{open}(t))" }, { "math_id": 15, "text": "\\forall t . \\neg \\mathrm{occludeon}(t) \\implies (\\mathrm{on}(t-1) \\iff \\mathrm{on}(t))" }, { "math_id": 16, "text": "\\mathrm{occludeopen}(1)" }, { "math_id": 17, "text": "t=1" }, { "math_id": 18, "text": "\\mathrm{open}(t-1) \\iff \\mathrm{open}(t)" }, { "math_id": 19, "text": "\\mathrm{occludeopen}" }, { "math_id": 20, "text": "\\mathrm{changeopen}(t)" }, { "math_id": 21, "text": "t" }, { "math_id": 22, "text": "t+1" }, { "math_id": 23, "text": "\\neg \\mathrm{open}(0) \\implies \\mathrm{changeopen}(0)" }, { "math_id": 24, "text": "\\forall t. \\mathrm{changeopen}(t) \\iff (\\neg \\mathrm{open}(t) \\iff \\mathrm{open}(t+1))" }, { "math_id": 25, "text": "\\forall t. \\mathrm{changeon}(t) \\iff (\\neg \\mathrm{on}(t) \\iff \\mathrm{on}(t+1))" }, { "math_id": 26, "text": "\\mathrm{opendoor}(t)" }, { "math_id": 27, "text": "\\mathrm{closedoor}(t)" }, { "math_id": 28, "text": "\\mathrm{opendoor}(0)" }, { "math_id": 29, "text": "\\forall t . \\mathrm{open}(t+1) \\iff \\mathrm{opendoor}(t) \\vee (\\mathrm{open}(t) \\wedge \\neg \\mathrm{closedoor}(t))" }, { "math_id": 30, "text": "\\mathrm{open} \\circ \\mathrm{on}" }, { "math_id": 31, "text": "\\mathrm{state}(\\mathrm{open} \\circ \\mathrm{on}, 10)" }, { "math_id": 32, "text": "10" }, { "math_id": 33, "text": "\\mathrm{state}(s \\circ \\mathrm{open}, 1) \\iff \\mathrm{state}(s,0)" }, { "math_id": 34, "text": "\\mathrm{state}(s, 1) \\iff \\mathrm{state}(s \\circ \\mathrm{open}, 0)" }, { "math_id": 35, "text": "\\mathrm{state}" }, { "math_id": 36, "text": "\\circ" }, { "math_id": 37, "text": "\\mathrm{state}(\\mathrm{open} \\circ s \\circ \\mathrm{open}, t)" }, { "math_id": 38, "text": "s" }, { "math_id": 39, "text": "\\mathit{holdsAt}(F,T2) \\leftarrow" }, { "math_id": 40, "text": "[\\mathit{happensAt}(E1,T1) \\wedge \\mathit{initiates}(E1, F, T1) \\wedge (T1 < T2) \\wedge " }, { "math_id": 41, "text": "\\neg \\exists E2, T [\\mathit{happensAt}(E2, T) \\wedge \\mathit{terminates}(E2, F, T) \\wedge (T1 \\leq T < T2)] " }, { "math_id": 42, "text": "F" }, { "math_id": 43, "text": "T2" }, { "math_id": 44, "text": "E1" }, { "math_id": 45, "text": "T1" }, { "math_id": 46, "text": "E2" }, { "math_id": 47, "text": "initiates" }, { "math_id": 48, "text": "terminates" }, { "math_id": 49, "text": "\\mathit{initiates}(opendoor, open, T). " }, { "math_id": 50, "text": "\\mathit{terminates}(opendoor, closed, T). " }, { "math_id": 51, "text": "\\mathit{initiates}(closedoor, closed, T). " }, { "math_id": 52, "text": "\\mathit{terminates}(closeddoor, open, T). 
" }, { "math_id": 53, "text": "\\mathit{happensAt}(opendoor, 0)" }, { "math_id": 54, "text": "\\mathit{happensAt}(closedoor, 3)" }, { "math_id": 55, "text": " \\exists Fluent [\\mathit{holdsAt(Fluent, 5)}]. " }, { "math_id": 56, "text": "Fluent = closed. " }, { "math_id": 57, "text": "\\frac{R(x,s)\\; :\\ R(x,\\mathrm{do}(a,s))}{R(x,\\mathrm{do}(a,s))}" }, { "math_id": 58, "text": "R(x)" }, { "math_id": 59, "text": "a" }, { "math_id": 60, "text": "r(X,T+1) \\leftarrow r(X,T),\\ \\hbox{not }\\sim r(X,T+1)" }, { "math_id": 61, "text": "r(X)" }, { "math_id": 62, "text": "T" }, { "math_id": 63, "text": "T+1" }, { "math_id": 64, "text": "\\{\\mathrm{precondition}\\}\\ \\mathrm{code}\\ \\{\\mathrm{postcondition}\\}" }, { "math_id": 65, "text": "\\frac{ \\{\\mathrm{precondition}\\}\\ \\mathrm{code}\\ \\{\\mathrm{postcondition}\\} }{ \\{\\mathrm{precondition} \\ast \\mathrm{frame}\\}\\ \\mathrm{code}\\ \\{\\mathrm{postcondition} \\ast \\mathrm{frame}\\} }" }, { "math_id": 66, "text": "\\frac{ \\{\\operatorname{list}(x)\\}\\ \\mathrm{code}\\ \\{\\operatorname{sortedlist}(x)\\} }{ \\{\\operatorname{list}(x) \\ast \\operatorname{sortedlist}(y)\\}\\ \\mathrm{code}\\ \\{\\operatorname{sortedlist}(x) \\ast \\operatorname{sortedlist}(y)\\} }" }, { "math_id": 67, "text": "\\mathrm{opendoor}" }, { "math_id": 68, "text": "\\neg \\mathrm{locked}" } ]
https://en.wikipedia.org/wiki?curid=11306
11308417
Vertex (geometry)
Point where two or more curves, lines, or edges meet In geometry, a vertex (pl.: vertices or vertexes) is a point where two or more curves, lines, or edges meet or intersect. As a consequence of this definition, the point where two lines meet to form an angle and the corners of polygons and polyhedra are vertices. Definition. Of an angle. The "vertex" of an angle is the point where two rays begin or meet, where two line segments join or meet, where two lines intersect (cross), or any appropriate combination of rays, segments, and lines that result in two straight "sides" meeting at one place. Of a polytope. A vertex is a corner point of a polygon, polyhedron, or other higher-dimensional polytope, formed by the intersection of edges, faces or facets of the object. In a polygon, a vertex is called "convex" if the internal angle of the polygon (i.e., the angle formed by the two edges at the vertex with the polygon inside the angle) is less than π radians (180°, two right angles); otherwise, it is called "concave" or "reflex". More generally, a vertex of a polyhedron or polytope is convex, if the intersection of the polyhedron or polytope with a sufficiently small sphere centered at the vertex is convex, and is concave otherwise. Polytope vertices are related to vertices of graphs, in that the 1-skeleton of a polytope is a graph, the vertices of which correspond to the vertices of the polytope, and in that a graph can be viewed as a 1-dimensional simplicial complex the vertices of which are the graph's vertices. However, in graph theory, vertices may have fewer than two incident edges, which is usually not allowed for geometric vertices. There is also a connection between geometric vertices and the vertices of a curve, its points of extreme curvature: in some sense the vertices of a polygon are points of infinite curvature, and if a polygon is approximated by a smooth curve, there will be a point of extreme curvature near each polygon vertex. Of a plane tiling. A vertex of a plane tiling or tessellation is a point where three or more tiles meet; generally, but not always, the tiles of a tessellation are polygons and the vertices of the tessellation are also vertices of its tiles. More generally, a tessellation can be viewed as a kind of topological cell complex, as can the faces of a polyhedron or polytope; the vertices of other kinds of complexes such as simplicial complexes are its zero-dimensional faces. Principal vertex. A polygon vertex "x"i of a simple polygon P is a principal polygon vertex if the diagonal ["x"(i − 1), "x"(i + 1)] intersects the boundary of P only at "x"(i − 1) and "x"(i + 1). There are two types of principal vertices: "ears" and "mouths". Ears. A principal vertex "x"i of a simple polygon P is called an ear if the diagonal ["x"(i − 1), "x"(i + 1)] that bridges "x"i lies entirely in P. (see also convex polygon) According to the two ears theorem, every simple polygon has at least two ears. Mouths. A principal vertex "x"i of a simple polygon P is called a mouth if the diagonal ["x"(i − 1), "x"(i + 1)] lies outside the boundary of P. Number of vertices of a polyhedron. Any convex polyhedron's surface has Euler characteristic formula_0 where "V" is the number of vertices, "E" is the number of edges, and "F" is the number of faces. This equation is known as Euler's polyhedron formula. Thus the number of vertices is 2 more than the excess of the number of edges over the number of faces. 
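This relationship can be checked numerically; the short Python sketch below (illustrative only, with a hypothetical helper name) recovers the vertex counts of a few familiar solids from their well-known edge and face counts, and the cube case is also worked by hand in the example that follows.

def vertices_from_euler(edges, faces):
    # Rearranging V - E + F = 2 gives V = 2 + E - F.
    return 2 + edges - faces

# (edges, faces) for some convex polyhedra
solids = {"tetrahedron": (6, 4), "cube": (12, 6), "dodecahedron": (30, 12)}
for name, (e, f) in solids.items():
    print(name, vertices_from_euler(e, f))  # prints 4, 8 and 20 respectively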
For example, since a cube has 12 edges and 6 faces, the formula implies that it has eight vertices. Vertices in computer graphics. In computer graphics, objects are often represented as triangulated polyhedra in which the object vertices are associated not only with three spatial coordinates but also with other graphical information necessary to render the object correctly, such as colors, reflectance properties, textures, and surface normals. These properties are used in rendering by a vertex shader, part of the vertex pipeline. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "V - E + F = 2," } ]
https://en.wikipedia.org/wiki?curid=11308417
113087
System of linear equations
Several equations of degree 1 to be solved simultaneously In mathematics, a system of linear equations (or linear system) is a collection of two or more linear equations involving the same variables. For example, formula_0 is a system of three equations in the three variables "x", "y", "z". A "solution" to a linear system is an assignment of values to the variables such that all the equations are simultaneously satisfied. In the example above, a solution is given by the ordered triple formula_1 since it makes all three equations valid. Linear systems are a fundamental part of linear algebra, a subject used in most modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and play a prominent role in engineering, physics, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system. Very often, and in this article, the coefficients and solutions of the equations are constrained to be real or complex numbers, but the theory and algorithms apply to coefficients and solutions in any field. For other algebraic structures, other theories have been developed. For coefficients and solutions in an integral domain, such as the ring of integers, see Linear equation over a ring. For coefficients and solutions that are polynomials, see Gröbner basis. For finding the "best" integer solutions among many, see Integer linear programming. For an example of a more exotic structure to which linear algebra can be applied, see Tropical geometry. Elementary examples. Trivial example. The system of one equation in one unknown formula_2 has the solution formula_3 However, most interesting linear systems have at least two equations. Simple nontrivial example. The simplest kind of nontrivial linear system involves two equations and two variables: formula_4 One method for solving such a system is as follows. First, solve the top equation for formula_5 in terms of formula_6: formula_7 Now substitute this expression for "x" into the bottom equation: formula_8 This results in a single equation involving only the variable formula_6. Solving gives formula_9, and substituting this back into the equation for formula_5 yields formula_10. This method generalizes to systems with additional variables (see "elimination of variables" below, or the article on elementary algebra.) General form. A general system of "m" linear equations with "n" unknowns and coefficients can be written as formula_11 where formula_12 are the unknowns, formula_13 are the coefficients of the system, and formula_14 are the constant terms. Often the coefficients and unknowns are real or complex numbers, but integers and rational numbers are also seen, as are polynomials and elements of an abstract algebraic structure. Vector equation. One extremely helpful view is that each unknown is a weight for a column vector in a linear combination. formula_15 This allows all the language and theory of "vector spaces" (or more generally, "modules") to be brought to bear. For example, the collection of all possible linear combinations of the vectors on the left-hand side is called their "span", and the equations have a solution just when the right-hand vector is within that span. 
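To make the span criterion concrete, the following Python sketch (an illustration using NumPy, with the three-equation system and right-hand side from the introduction) tests whether the right-hand vector lies in the span of the column vectors by comparing matrix ranks, which is also the Rouché–Capelli criterion discussed under "Consistency" below.

import numpy as np

A = np.array([[3.0, 2.0, -1.0],
              [2.0, -2.0, 4.0],
              [-1.0, 0.5, -1.0]])  # columns are the left-hand vectors
b = np.array([1.0, -2.0, 0.0])     # right-hand vector

rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
print(rank_A == rank_Ab)  # True: b lies in the span, so a solution exists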
If every vector within that span has exactly one expression as a linear combination of the given left-hand vectors, then any solution is unique. In any event, the span has a "basis" of linearly independent vectors that do guarantee exactly one expression; and the number of vectors in that basis (its "dimension") cannot be larger than "m" or "n", but it can be smaller. This is important because if we have "m" independent vectors a solution is guaranteed regardless of the right-hand side, and otherwise not guaranteed. Matrix equation. The vector equation is equivalent to a matrix equation of the form formula_16 where "A" is an "m"×"n" matrix, x is a column vector with "n" entries, and b is a column vector with "m" entries. formula_17 The number of vectors in a basis for the span is now expressed as the "rank" of the matrix. Solution set. A "solution" of a linear system is an assignment of values to the variables "x"1, "x"2, ..., "xn" such that each of the equations is satisfied. The set of all possible solutions is called the "solution set". A linear system may behave in any one of three possible ways: it may have infinitely many solutions, it may have a single unique solution, or it may have no solution. Geometric interpretation. For a system involving two variables ("x" and "y"), each linear equation determines a line on the "xy"-plane. Because a solution to a linear system must satisfy all of the equations, the solution set is the intersection of these lines, and is hence either a line, a single point, or the empty set. For three variables, each linear equation determines a plane in three-dimensional space, and the solution set is the intersection of these planes. Thus the solution set may be a plane, a line, a single point, or the empty set. For example, as three parallel planes do not have a common point, the solution set of their equations is empty; the solution set of the equations of three planes intersecting at a point is a single point; if three planes pass through two points, their equations have at least two common solutions; in fact the solution set is infinite and consists of all the points of the line passing through these points. For "n" variables, each linear equation determines a hyperplane in "n"-dimensional space. The solution set is the intersection of these hyperplanes, and is a flat, which may have any dimension lower than "n". General behavior. In general, the behavior of a linear system is determined by the relationship between the number of equations and the number of unknowns. Here, "in general" means that a different behavior may occur for specific values of the coefficients of the equations. Usually, a system with fewer equations than unknowns has infinitely many solutions (or, if it is inconsistent, none); such a system is known as an underdetermined system. A system with the same number of equations and unknowns usually has a single unique solution, while a system with more equations than unknowns usually has no solution; the latter is known as an overdetermined system. In the first case, the dimension of the solution set is, in general, equal to "n" − "m", where "n" is the number of variables and "m" is the number of equations. The following pictures illustrate this trichotomy in the case of two variables: The first system has infinitely many solutions, namely all of the points on the blue line. The second system has a single unique solution, namely the intersection of the two lines. The third system has no solutions, since the three lines share no common point. It must be kept in mind that the pictures above show only the most common case (the general case). It is possible for a system of two equations and two unknowns to have no solution (if the two lines are parallel), or for a system of three equations and two unknowns to be solvable (if the three lines intersect at a single point).
A system of linear equations behaves differently from the general case if the equations are "linearly dependent", or if it is "inconsistent" and has no more equations than unknowns. Properties. Independence. The equations of a linear system are independent if none of the equations can be derived algebraically from the others. When the equations are independent, each equation contains new information about the variables, and removing any of the equations increases the size of the solution set. For linear equations, logical independence is the same as linear independence. For example, the equations formula_18 are not independent — they are the same equation when scaled by a factor of two, and they would produce identical graphs. This is an example of equivalence in a system of linear equations. For a more complicated example, the equations formula_19 are not independent, because the third equation is the sum of the other two. Indeed, any one of these equations can be derived from the other two, and any one of the equations can be removed without affecting the solution set. The graphs of these equations are three lines that intersect at a single point. Consistency. A linear system is inconsistent if it has no solution, and otherwise, it is said to be consistent. When the system is inconsistent, it is possible to derive a contradiction from the equations, one that may always be rewritten as the statement 0 = 1. For example, the equations formula_20 are inconsistent. In fact, by subtracting the first equation from the second one and multiplying both sides of the result by 1/6, we get 0 = 1. The graphs of these equations on the "xy"-plane are a pair of parallel lines. It is possible for three linear equations to be inconsistent, even though any two of them are consistent together. For example, the equations formula_21 are inconsistent. Adding the first two equations together gives 3"x" + 2"y" = 2, which can be subtracted from the third equation to yield 0 = 1. Any two of these equations have a common solution. The same phenomenon can occur for any number of equations. In general, inconsistencies occur if the left-hand sides of the equations in a system are linearly dependent, and the constant terms do not satisfy the dependence relation. A system of equations whose left-hand sides are linearly independent is always consistent. Putting it another way, according to the Rouché–Capelli theorem, any system of equations (overdetermined or otherwise) is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has "k" free parameters where "k" is the difference between the number of variables and the rank; hence in such a case there is an infinitude of solutions. The rank of a system of equations (that is, the rank of the augmented matrix) can never be higher than [the number of variables] + 1, which means that a system with any number of equations can always be reduced to a system that has a number of independent equations that is at most equal to [the number of variables] + 1. Equivalence. Two linear systems using the same set of variables are equivalent if each of the equations in the second system can be derived algebraically from the equations in the first system, and vice versa.
Two systems are equivalent if either both are inconsistent or each equation of each of them is a linear combination of the equations of the other one. It follows that two linear systems are equivalent if and only if they have the same solution set. Solving a linear system. There are several algorithms for solving a system of linear equations. Describing the solution. When the solution set is finite, it is reduced to a single element. In this case, the unique solution is described by a sequence of equations whose left-hand sides are the names of the unknowns and right-hand sides are the corresponding values, for example formula_22. When an order on the unknowns has been fixed, for example the alphabetical order, the solution may be described as a vector of values, like formula_23 for the previous example. To describe a set with an infinite number of solutions, typically some of the variables are designated as free (or independent, or as parameters), meaning that they are allowed to take any value, while the remaining variables are dependent on the values of the free variables. For example, consider the following system: formula_24 The solution set to this system can be described by the following equations: formula_25 Here "z" is the free variable, while "x" and "y" are dependent on "z". Any point in the solution set can be obtained by first choosing a value for "z", and then computing the corresponding values for "x" and "y". Each free variable gives the solution space one degree of freedom, the number of which is equal to the dimension of the solution set. For example, the solution set for the above equation is a line, since a point in the solution set can be chosen by specifying the value of the parameter "z". An infinite solution of higher order may describe a plane, or a higher-dimensional set. Different choices for the free variables may lead to different descriptions of the same solution set. For example, the solution to the above equations can alternatively be described as follows: formula_26 Here "x" is the free variable, and "y" and "z" are dependent. Elimination of variables. The simplest method for solving a system of linear equations is to repeatedly eliminate variables. This method can be described as follows: solve one of the equations for one of the variables, substitute that expression into the remaining equations to obtain a smaller system, repeat until the system is reduced to a single linear equation, solve that equation, and then back-substitute to determine the values of the remaining variables. For example, consider the following system: formula_27 Solving the first equation for "x" gives formula_28, and plugging this into the second and third equations yields formula_29 Since the left-hand sides of both of these equations equal "y", equating their right-hand sides gives: formula_30 Substituting "z" = 2 into the second or third equation gives "y" = 8, and substituting the values of "y" and "z" into the first equation yields "x" = −15. Therefore, the solution set is the ordered triple formula_31. Row reduction. In row reduction (also known as Gaussian elimination), the linear system is represented as an augmented matrix formula_32 This matrix is then modified using elementary row operations until it reaches reduced row echelon form. There are three types of elementary row operations: Type 1: Swap the positions of two rows. Type 2: Multiply a row by a nonzero scalar. Type 3: Add to one row a scalar multiple of another. Because these operations are reversible, the augmented matrix produced always represents a linear system that is equivalent to the original. There are several specific algorithms to row-reduce an augmented matrix, the simplest of which are Gaussian elimination and Gauss–Jordan elimination.
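For illustration, a minimal Python sketch of Gauss–Jordan elimination (no pivoting, so it is not a robust routine) applied to the augmented matrix of the system above reproduces the solution "x" = −15, "y" = 8, "z" = 2; the same reduction is carried out by hand in the computation that follows.

import numpy as np

def gauss_jordan(aug):
    # Naive Gauss-Jordan elimination on an augmented matrix (assumes nonzero pivots).
    aug = aug.astype(float)
    n = aug.shape[0]
    for i in range(n):
        aug[i] = aug[i] / aug[i, i]                    # Type 2: scale the pivot row
        for j in range(n):
            if j != i:
                aug[j] = aug[j] - aug[j, i] * aug[i]   # Type 3: clear column i in the other rows
    return aug[:, -1]

aug = np.array([[1, 3, -2, 5],
                [3, 5, 6, 7],
                [2, 4, 3, 8]])
print(gauss_jordan(aug))  # [-15.   8.   2.]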
The following computation shows Gauss–Jordan elimination applied to the matrix above: formula_33 The last matrix is in reduced row echelon form, and represents the system "x" = −15, "y" = 8, "z" = 2. A comparison with the example in the previous section on the algebraic elimination of variables shows that these two methods are in fact the same; the difference lies in how the computations are written down. Cramer's rule. Cramer's rule is an explicit formula for the solution of a system of linear equations, with each variable given by a quotient of two determinants. For example, the solution to the system formula_34 is given by formula_35 For each variable, the denominator is the determinant of the matrix of coefficients, while the numerator is the determinant of a matrix in which one column has been replaced by the vector of constant terms. Though Cramer's rule is important theoretically, it has little practical value for large matrices, since the computation of large determinants is somewhat cumbersome. (Indeed, large determinants are most easily computed using row reduction.) Further, Cramer's rule has very poor numerical properties, making it unsuitable for solving even small systems reliably, unless the operations are performed in rational arithmetic with unbounded precision. Matrix solution. If the equation system is expressed in the matrix form formula_36, the entire solution set can also be expressed in matrix form. If the matrix "A" is square (has "m" rows and "n"="m" columns) and has full rank (all "m" rows are independent), then the system has a unique solution given by formula_37 where formula_38 is the inverse of "A". More generally, regardless of whether "m"="n" or not and regardless of the rank of "A", all solutions (if any exist) are given using the Moore–Penrose inverse of "A", denoted formula_39, as follows: formula_40 where formula_41 is a vector of free parameters that ranges over all possible "n"×1 vectors. A necessary and sufficient condition for any solution(s) to exist is that the potential solution obtained using formula_42 satisfy formula_36 — that is, that formula_43 If this condition does not hold, the equation system is inconsistent and has no solution. If the condition holds, the system is consistent and at least one solution exists. For example, in the above-mentioned case in which "A" is square and of full rank, formula_39 simply equals formula_38 and the general solution equation simplifies to formula_44 as previously stated, where formula_41 has completely dropped out of the solution, leaving only a single solution. In other cases, though, formula_41 remains and hence an infinitude of potential values of the free parameter vector formula_41 give an infinitude of solutions of the equation. Other methods. While systems of three or four equations can be readily solved by hand (see Cracovian), computers are often used for larger systems. The standard algorithm for solving a system of linear equations is based on Gaussian elimination with some modifications. Firstly, it is essential to avoid division by small numbers, which may lead to inaccurate results. This can be done by reordering the equations if necessary, a process known as "pivoting". Secondly, the algorithm does not exactly do Gaussian elimination, but it computes the LU decomposition of the matrix "A". This is mostly an organizational tool, but it is much quicker if one has to solve several systems with the same matrix "A" but different vectors b.
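To make the point about reuse concrete, here is a brief sketch (assuming SciPy's scipy.linalg is available) that factors the coefficient matrix of the worked example once and then solves for two different right-hand sides; the second right-hand side is hypothetical and chosen only for illustration.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0, 6.0],
              [2.0, 4.0, 3.0]])
lu, piv = lu_factor(A)             # factor once (with partial pivoting)

b1 = np.array([5.0, 7.0, 8.0])     # right-hand side from the worked example
b2 = np.array([1.0, 0.0, 1.0])     # a second, hypothetical right-hand side
print(lu_solve((lu, piv), b1))     # [-15.   8.   2.]
print(lu_solve((lu, piv), b2))     # solved cheaply by reusing the same factors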
If the matrix "A" has some special structure, this can be exploited to obtain faster or more accurate algorithms. For instance, systems with a symmetric positive definite matrix can be solved twice as fast with the Cholesky decomposition. Levinson recursion is a fast method for Toeplitz matrices. Special methods exist also for matrices with many zero elements (so-called sparse matrices), which appear often in applications. A completely different approach is often taken for very large systems, which would otherwise take too much time or memory. The idea is to start with an initial approximation to the solution (which does not have to be accurate at all), and to change this approximation in several steps to bring it closer to the true solution. Once the approximation is sufficiently accurate, this is taken to be the solution to the system. This leads to the class of iterative methods. For some sparse matrices, the introduction of randomness improves the speed of the iterative methods. One example of an iterative method is the Jacobi method, where the matrix formula_45 is split into its diagonal component formula_46 and its non-diagonal component formula_47. An initial guess formula_48 is used at the start of the algorithm. Each subsequent guess is computed using the iterative equation: formula_49 When the difference between guesses formula_50 and formula_51 is sufficiently small, the algorithm is said to have "converged" on the solution. There is also a quantum algorithm for linear systems of equations. Homogeneous systems. A system of linear equations is homogeneous if all of the constant terms are zero: formula_52 A homogeneous system is equivalent to a matrix equation of the form formula_53 where "A" is an "m" × "n" matrix, x is a column vector with "n" entries, and 0 is the zero vector with "m" entries. Homogeneous solution set. Every homogeneous system has at least one solution, known as the "zero" (or "trivial") solution, which is obtained by assigning the value of zero to each of the variables. If the system has a non-singular matrix (det("A") ≠ 0) then it is also the only solution. If the system has a singular matrix then there is a solution set with an infinite number of solutions. This solution set has the following additional properties: These are exactly the properties required for the solution set to be a linear subspace of R"n". In particular, the solution set to a homogeneous system is the same as the null space of the corresponding matrix "A". Relation to nonhomogeneous systems. There is a close relationship between the solutions to a linear system and the solutions to the corresponding homogeneous system: formula_54 Specifically, if p is any specific solution to the linear system "A"x = b, then the entire solution set can be described as formula_55 Geometrically, this says that the solution set for "Ax = b is a translation of the solution set for "Ax = 0. Specifically, the flat for the first system can be obtained by translating the linear subspace for the homogeneous system by the vector p. This reasoning only applies if the system "A"x = b has at least one solution. This occurs if and only if the vector b lies in the image of the linear transformation "A". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\begin{cases}\n3x+2y-z=1\\\\\n2x-2y+4z=-2\\\\\n-x+\\frac{1}{2}y-z=0\n\\end{cases}" }, { "math_id": 1, "text": "(x,y,z)=(1,-2,-2)," }, { "math_id": 2, "text": "2x = 4" }, { "math_id": 3, "text": "x = 2." }, { "math_id": 4, "text": "\\begin{alignat}{5}\n2x &&\\; + \\;&& 3y &&\\; = \\;&& 6 & \\\\\n4x &&\\; + \\;&& 9y &&\\; = \\;&& 15&.\n\\end{alignat}" }, { "math_id": 5, "text": "x" }, { "math_id": 6, "text": "y" }, { "math_id": 7, "text": "x = 3 - \\frac{3}{2}y." }, { "math_id": 8, "text": "4\\left( 3 - \\frac{3}{2}y \\right) + 9y = 15." }, { "math_id": 9, "text": "y = 1" }, { "math_id": 10, "text": "x = \\frac{3}{2}" }, { "math_id": 11, "text": "\\begin{cases}\na_{11} x_1 + a_{12} x_2 +\\dots + a_{1n} x_n = b_1 \\\\\na_{21} x_1 + a_{22} x_2 + \\dots + a_{2n} x_n = b_2 \\\\ \n\\vdots\\\\\na_{m1} x_1 + a_{m2} x_2 + \\dots + a_{mn} x_n = b_m,\n\\end{cases}" }, { "math_id": 12, "text": "x_1, x_2,\\dots,x_n" }, { "math_id": 13, "text": "a_{11},a_{12},\\dots,a_{mn}" }, { "math_id": 14, "text": "b_1,b_2,\\dots,b_m" }, { "math_id": 15, "text": "\nx_1\\begin{bmatrix}a_{11}\\\\a_{21}\\\\ \\vdots \\\\a_{m1}\\end{bmatrix} +\nx_2\\begin{bmatrix}a_{12}\\\\a_{22}\\\\ \\vdots \\\\a_{m2}\\end{bmatrix} +\n\\dots +\nx_n\\begin{bmatrix}a_{1n}\\\\a_{2n}\\\\ \\vdots \\\\a_{mn}\\end{bmatrix} = \\begin{bmatrix}b_1\\\\b_2\\\\ \\vdots \\\\b_m\\end{bmatrix}\n" }, { "math_id": 16, "text": " A\\mathbf{x} = \\mathbf{b} " }, { "math_id": 17, "text": "\nA=\n\\begin{bmatrix}\na_{11} & a_{12} & \\cdots & a_{1n} \\\\\na_{21} & a_{22} & \\cdots & a_{2n} \\\\\n\\vdots & \\vdots & \\ddots & \\vdots \\\\\na_{m1} & a_{m2} & \\cdots & a_{mn}\n\\end{bmatrix},\\quad\n\\mathbf{x}=\n\\begin{bmatrix}\nx_1 \\\\\nx_2 \\\\\n\\vdots \\\\\nx_n\n\\end{bmatrix},\\quad\n\\mathbf{b}=\n\\begin{bmatrix}\nb_1 \\\\\nb_2 \\\\\n\\vdots \\\\\nb_m\n\\end{bmatrix}.\n" }, { "math_id": 18, "text": "3x+2y=6\\;\\;\\;\\;\\text{and}\\;\\;\\;\\;6x+4y=12" }, { "math_id": 19, "text": "\\begin{alignat}{5}\n x &&\\; - \\;&& 2y &&\\; = \\;&& -1 & \\\\\n 3x &&\\; + \\;&& 5y &&\\; = \\;&& 8 & \\\\\n 4x &&\\; + \\;&& 3y &&\\; = \\;&& 7 &\n\\end{alignat}" }, { "math_id": 20, "text": "3x+2y=6\\;\\;\\;\\;\\text{and}\\;\\;\\;\\;3x+2y=12" }, { "math_id": 21, "text": "\\begin{alignat}{7}\n x &&\\; + \\;&& y &&\\; = \\;&& 1 & \\\\\n2x &&\\; + \\;&& y &&\\; = \\;&& 1 & \\\\\n3x &&\\; + \\;&& 2y &&\\; = \\;&& 3 &\n\\end{alignat}" }, { "math_id": 22, "text": "(x=3, \\;y=-2,\\; z=6)" }, { "math_id": 23, "text": "(3, \\,-2,\\, 6)" }, { "math_id": 24, "text": "\\begin{alignat}{7}\n x &&\\; + \\;&& 3y &&\\; - \\;&& 2z &&\\; = \\;&& 5 & \\\\\n3x &&\\; + \\;&& 5y &&\\; + \\;&& 6z &&\\; = \\;&& 7 &\n\\end{alignat}" }, { "math_id": 25, "text": "x=-7z-1\\;\\;\\;\\;\\text{and}\\;\\;\\;\\;y=3z+2\\text{.}" }, { "math_id": 26, "text": "y=-\\frac{3}{7}x + \\frac{11}{7}\\;\\;\\;\\;\\text{and}\\;\\;\\;\\;z=-\\frac{1}{7}x-\\frac{1}{7}\\text{.}" }, { "math_id": 27, "text": "\\begin{cases}\nx+3y-2z=5\\\\\n3x+5y+6z=7\\\\\n2x+4y+3z=8\n\\end{cases}" }, { "math_id": 28, "text": "x=5+2z-3y" }, { "math_id": 29, "text": "\\begin{cases}\ny=3z+2\\\\\ny=\\tfrac{7}{2}z+1 \n\\end{cases}" }, { "math_id": 30, "text": "\\begin{align}\n3z+2=\\tfrac{7}{2}z+1\\\\ \\Rightarrow z=2\n\\end{align}\n " }, { "math_id": 31, "text": "(x,y,z)=(-15,8,2) " }, { "math_id": 32, "text": "\\left[\\begin{array}{rrr|r}\n1 & 3 & -2 & 5 \\\\\n3 & 5 & 6 & 7 \\\\\n2 & 4 & 3 & 8\n\\end{array}\\right]\\text{.}\n" }, { "math_id": 33, "text": "\\begin{align}\\left[\\begin{array}{rrr|r}\n1 & 3 & -2 & 5 \\\\\n3 & 
5 & 6 & 7 \\\\\n2 & 4 & 3 & 8\n\\end{array}\\right]&\\sim\n\\left[\\begin{array}{rrr|r}\n1 & 3 & -2 & 5 \\\\\n0 & -4 & 12 & -8 \\\\\n2 & 4 & 3 & 8\n\\end{array}\\right]\\sim\n\\left[\\begin{array}{rrr|r}\n1 & 3 & -2 & 5 \\\\\n0 & -4 & 12 & -8 \\\\\n0 & -2 & 7 & -2\n\\end{array}\\right]\\sim\n\\left[\\begin{array}{rrr|r}\n1 & 3 & -2 & 5 \\\\\n0 & 1 & -3 & 2 \\\\\n0 & -2 & 7 & -2\n\\end{array}\\right]\n\\\\\n&\\sim\n\\left[\\begin{array}{rrr|r}\n1 & 3 & -2 & 5 \\\\\n0 & 1 & -3 & 2 \\\\\n0 & 0 & 1 & 2\n\\end{array}\\right]\\sim\n\\left[\\begin{array}{rrr|r}\n1 & 3 & -2 & 5 \\\\\n0 & 1 & 0 & 8 \\\\\n0 & 0 & 1 & 2\n\\end{array}\\right]\\sim\n\\left[\\begin{array}{rrr|r}\n1 & 3 & 0 & 9 \\\\\n0 & 1 & 0 & 8 \\\\\n0 & 0 & 1 & 2\n\\end{array}\\right]\\sim\n\\left[\\begin{array}{rrr|r}\n1 & 0 & 0 & -15 \\\\\n0 & 1 & 0 & 8 \\\\\n0 & 0 & 1 & 2\n\\end{array}\\right].\\end{align}" }, { "math_id": 34, "text": "\\begin{alignat}{7}\n x &\\; + &\\; 3y &\\; - &\\; 2z &\\; = &\\; 5 \\\\\n3x &\\; + &\\; 5y &\\; + &\\; 6z &\\; = &\\; 7 \\\\\n2x &\\; + &\\; 4y &\\; + &\\; 3z &\\; = &\\; 8 \n\\end{alignat}" }, { "math_id": 35, "text": "\nx=\\frac\n{\\, \\begin{vmatrix}5&3&-2\\\\7&5&6\\\\8&4&3\\end{vmatrix} \\,}\n{\\, \\begin{vmatrix}1&3&-2\\\\3&5&6\\\\2&4&3\\end{vmatrix} \\,}\n,\\;\\;\\;\\;\ny=\\frac\n{\\, \\begin{vmatrix}1&5&-2\\\\3&7&6\\\\2&8&3\\end{vmatrix} \\,}\n{\\, \\begin{vmatrix}1&3&-2\\\\3&5&6\\\\2&4&3\\end{vmatrix} \\,}\n,\\;\\;\\;\\;\nz=\\frac\n{\\, \\begin{vmatrix}1&3&5\\\\3&5&7\\\\2&4&8\\end{vmatrix} \\,}\n{\\, \\begin{vmatrix}1&3&-2\\\\3&5&6\\\\2&4&3\\end{vmatrix} \\,}.\n" }, { "math_id": 36, "text": "A\\mathbf{x}=\\mathbf{b}" }, { "math_id": 37, "text": "\\mathbf{x}=A^{-1}\\mathbf{b}" }, { "math_id": 38, "text": "A^{-1}" }, { "math_id": 39, "text": "A^+" }, { "math_id": 40, "text": "\\mathbf{x}=A^+ \\mathbf{b} + \\left(I - A^+ A\\right)\\mathbf{w}" }, { "math_id": 41, "text": "\\mathbf{w}" }, { "math_id": 42, "text": "\\mathbf{w}=\\mathbf{0}" }, { "math_id": 43, "text": "AA^+ \\mathbf{b}=\\mathbf{b}." }, { "math_id": 44, "text": "\\mathbf{x}=A^{-1}\\mathbf{b} + \\left(I - A^{-1}A\\right)\\mathbf{w} = A^{-1}\\mathbf{b} + \\left(I-I\\right)\\mathbf{w} = A^{-1}\\mathbf{b}" }, { "math_id": 45, "text": "A" }, { "math_id": 46, "text": "D" }, { "math_id": 47, "text": "L+U" }, { "math_id": 48, "text": "{\\bold x}^{(0)}" }, { "math_id": 49, "text": "{\\bold x}^{(k+1)} = D^{-1}({\\bold b} - (L+U){\\bold x}^{(k)})" }, { "math_id": 50, "text": "{\\bold x}^{(k)}" }, { "math_id": 51, "text": "{\\bold x}^{(k+1)}" }, { "math_id": 52, "text": "\\begin{alignat}{7}\na_{11} x_1 &&\\; + \\;&& a_{12} x_2 &&\\; + \\cdots + \\;&& a_{1n} x_n &&\\; = \\;&&& 0 \\\\\na_{21} x_1 &&\\; + \\;&& a_{22} x_2 &&\\; + \\cdots + \\;&& a_{2n} x_n &&\\; = \\;&&& 0 \\\\\n && && && && && \\vdots\\;\\ &&& \\\\\na_{m1} x_1 &&\\; + \\;&& a_{m2} x_2 &&\\; + \\cdots + \\;&& a_{mn} x_n &&\\; = \\;&&& 0. \\\\\n\\end{alignat}" }, { "math_id": 53, "text": "A\\mathbf{x}=\\mathbf{0}" }, { "math_id": 54, "text": "A\\mathbf{x}=\\mathbf{b}\\qquad \\text{and}\\qquad A\\mathbf{x}=\\mathbf{0}." }, { "math_id": 55, "text": "\\left\\{ \\mathbf{p}+\\mathbf{v} : \\mathbf{v}\\text{ is any solution to }A\\mathbf{x}=\\mathbf{0} \\right\\}." } ]
https://en.wikipedia.org/wiki?curid=113087
11310261
Bit numbering
Convention to identify bit positions In computing, bit numbering is the convention used to identify the bit positions in a binary number. Bit significance and indexing. In computing, the least significant bit (LSb) is the bit position in a binary integer representing the binary 1s place of the integer. Similarly, the most significant bit (MSb) represents the highest-order place of the binary integer. The LSb is sometimes referred to as the "low-order bit" or "right-most bit", due to the convention in positional notation of writing less significant digits further to the right. The MSb is similarly referred to as the "high-order bit" or "left-most bit". In both cases, the LSb and MSb correlate directly to the least significant digit and most significant digit of a decimal integer. Bit indexing correlates to the positional notation of the value in base 2. For this reason, bit index is not affected by how the value is stored on the device, such as the value's byte order. Rather, it is a property of the numeric value in binary itself. This is often utilized in programming via bit shifting: A value of codice_0 corresponds to the "n"th bit of a binary integer (with a value of codice_1). Least significant bit in digital steganography. In digital steganography, sensitive messages may be concealed by manipulating and storing information in the least significant bits of an image or a sound file. The user may later recover this information by extracting the least significant bits of the manipulated pixels to recover the original message. This allows the storage or transfer of digital information to remain concealed. A diagram showing how manipulating the least significant bits of a color can have a very subtle and generally unnoticeable effect on the color. In this diagram, green is represented by its RGB value, both in decimal and in binary. The red box surrounding the last two bits illustrates the least significant bits changed in the binary representation. Unsigned integer example. As an example, consider the decimal value 149, which is 10010101 in binary; the LSb is the rightmost binary digit. In this particular example, the position of the unit value (decimal 1 or 0) is located in bit position 0 (n = 0). MSb stands for "most significant bit", while LSb stands for "least significant bit". Most- vs least-significant bit first. The expressions "most significant bit first" and "least significant bit first" indicate the ordering of the sequence of the bits in the bytes sent over a wire in a serial transmission protocol or in a stream (e.g. an audio stream). "Most significant bit first" means that the most significant bit will arrive first: hence e.g. the hexadecimal number codice_2, codice_3 in binary representation, will arrive as the sequence codice_4. "Least significant bit first" means that the least significant bit will arrive first: hence e.g. the same hexadecimal number codice_2, again codice_3 in binary representation, will arrive as the (reversed) sequence codice_7. LSb 0 bit numbering. When the bit numbering starts at zero for the least significant bit (LSb) the numbering scheme is called "LSb 0". This bit numbering method has the advantage that for any unsigned number the value of the number can be calculated by using exponentiation with the bit number and a base of 2. The value of an unsigned binary integer is therefore formula_0 where "bi" denotes the value of the bit with number "i", and "N" denotes the number of bits in total. MSb 0 bit numbering.
When the bit numbering starts at zero for the most significant bit (MSb), the numbering scheme is called "MSb 0". The value of an unsigned binary integer is therefore formula_1 LSb calculation. The LSb of a number can be calculated in time complexity formula_2 with the formula formula_3, where formula_4 denotes the bitwise "AND" operation and formula_5 denotes the bitwise "NOT" of formula_6. Other. For MSb 1 numbering, the value of an unsigned binary integer is formula_7 PL/I numbers BIT strings starting with 1 for the leftmost bit. The Fortran BTEST function uses LSb 0 numbering. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\sum_{i=0}^{N-1} b_i \\cdot 2^i " }, { "math_id": 1, "text": " \\sum_{i=0}^{N-1} b_i \\cdot 2^{N-1-i} " }, { "math_id": 2, "text": "O(n)" }, { "math_id": 3, "text": "a \\And (\\sim a+1)" }, { "math_id": 4, "text": "\\And" }, { "math_id": 5, "text": "\\sim" }, { "math_id": 6, "text": "a" }, { "math_id": 7, "text": " \\sum_{i=1}^{N} b_i \\cdot 2^{N-i} " } ]
https://en.wikipedia.org/wiki?curid=11310261
11310655
Marsaglia polar method
Method for generating pseudo-random numbers The Marsaglia polar method is a pseudo-random number sampling method for generating a pair of independent standard normal random variables. Standard normal random variables are frequently used in computer science, computational statistics, and in particular, in applications of the Monte Carlo method. The polar method works by choosing random points ("x", "y") in the square −1 &lt; "x" &lt; 1, −1 &lt; "y" &lt; 1 until formula_0 and then returning the required pair of normal random variables as formula_1 or, equivalently, formula_2 where formula_3 and formula_4 represent the cosine and sine of the angle that the vector ("x", "y") makes with the "x" axis. Theoretical basis. The underlying theory may be summarized as follows: If "u" is uniformly distributed in the interval 0 ≤ "u" &lt; 1, then the point (cos(2π"u"), sin(2π"u")) is uniformly distributed on the unit circumference "x"2 + "y"2 = 1, and multiplying that point by an independent random variable ρ whose distribution is formula_5 will produce a point formula_6 whose coordinates are jointly distributed as two independent standard normal random variables. History. This idea dates back to Laplace, whom Gauss credits with finding the above formula_7 by taking the square root of formula_8 The transformation to polar coordinates makes evident that θ is uniformly distributed (constant density) from 0 to 2π, and that the radial distance "r" has density formula_9 This method of producing a pair of independent standard normal variates by radially projecting a random point on the unit circumference to a distance given by the square root of a chi-square-2 variate is called the polar method for generating a pair of normal random variables. Practical considerations. A direct application of this idea, formula_10 is called the Box–Muller transform, in which the chi variate is usually generated as formula_11 but that transform requires logarithm, square root, sine and cosine functions. On some processors, the cosine and sine of the same argument can be calculated in parallel using a single instruction. Notably for Intel-based machines, one can use the fsincos assembler instruction or the expi instruction (available e.g. in D) to calculate the complex formula_12 and just separate the real and imaginary parts. Note: To explicitly calculate the complex-polar form, use the following substitutions in the general form. Let formula_13 and formula_14 Then formula_15 In contrast, the polar method here removes the need to calculate a cosine and sine. Instead, by solving for a point on the unit circle, these two functions can be replaced with the "x" and "y" coordinates normalized to the formula_16 radius. In particular, a random point ("x", "y") inside the unit circle is projected onto the unit circumference by setting formula_17 and forming the point formula_18 which is a faster procedure than calculating the cosine and sine. Some researchers argue that the conditional if instruction (for rejecting a point outside of the unit circle) can make programs slower on modern processors equipped with pipelining and branch prediction. Also this procedure requires about 27% more evaluations of the underlying random number generator (only formula_19 of generated points lie inside the unit circle). That random point on the circumference is then radially projected the required random distance by means of formula_20 using the same "s" because that "s" is independent of the random point on the circumference and is itself uniformly distributed from 0 to 1.
Implementation. Java. Simple implementation in Java using the mean and standard deviation:

private static double spare;
private static boolean hasSpare = false;

public static synchronized double generateGaussian(double mean, double stdDev) {
    if (hasSpare) {
        hasSpare = false;
        return spare * stdDev + mean;
    } else {
        double u, v, s;
        do {
            u = Math.random() * 2 - 1;
            v = Math.random() * 2 - 1;
            s = u * u + v * v;
        } while (s >= 1 || s == 0);
        s = Math.sqrt(-2.0 * Math.log(s) / s);
        spare = v * s;
        hasSpare = true;
        return mean + stdDev * u * s;
    }
}

C++. A non-thread safe implementation in C++ using the mean and standard deviation:

#include <cmath>    // for sqrt and log
#include <cstdlib>  // for rand and RAND_MAX

double generateGaussian(double mean, double stdDev) {
    static double spare;
    static bool hasSpare = false;

    if (hasSpare) {
        hasSpare = false;
        return spare * stdDev + mean;
    } else {
        double u, v, s;
        do {
            u = (rand() / ((double)RAND_MAX)) * 2.0 - 1.0;
            v = (rand() / ((double)RAND_MAX)) * 2.0 - 1.0;
            s = u * u + v * v;
        } while (s >= 1.0 || s == 0.0);
        s = sqrt(-2.0 * log(s) / s);
        spare = v * s;
        hasSpare = true;
        return mean + stdDev * u * s;
    }
}

C++11 GNU GCC libstdc++'s implementation of std::normal_distribution uses the Marsaglia polar method. Julia. A simple Julia implementation:

"""
    marsagliasample(N)

Generate `2N` samples from the standard normal distribution using the Marsaglia method.
"""
function marsagliasample(N)
    z = Array{Float64}(undef,N,2);
    for i in axes(z,1)
        s = Inf;
        while s > 1
            z[i,:] .= 2rand(2) .- 1;
            s = sum(abs2.(z[i,:]))
        end
        z[i,:] .*= sqrt(-2log(s)/s);
    end
    vec(z)
end

"""
    marsagliasample(n,μ,σ)

Generate `n` samples from the normal distribution with mean `μ` and standard deviation `σ` using the Marsaglia method.
"""
function marsagliasample(n,μ,σ)
    μ .+ σ*marsagliasample(cld(n,2))[1:n];
end

The for loop can be parallelized by using the codice_0 macro. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " 0 < s=x^2+y^2 < 1, \\," }, { "math_id": 1, "text": " x\\sqrt{\\frac{-2\\ln(s)}{s}}\\,,\\ \\ y\\sqrt{\\frac{-2\\ln(s)}{s}}," }, { "math_id": 2, "text": " \\frac{x}{\\sqrt{s}} \\sqrt{-2\\ln(s)}\\,,\\ \\ \\frac{y}{\\sqrt{s}} \\sqrt{-2\\ln(s)}," }, { "math_id": 3, "text": "x/\\sqrt{s}" }, { "math_id": 4, "text": "y/\\sqrt{s}" }, { "math_id": 5, "text": "\\Pr(\\rho<a)=\\int_0^a re^{-r^2/2}\\,dr " }, { "math_id": 6, "text": " \\left(\\rho\\cos(2\\pi u),\\rho\\sin(2\\pi u)\\right) " }, { "math_id": 7, "text": "I=\\int_{-\\infty}^\\infty e^{-x^2/2}\\,dx " }, { "math_id": 8, "text": "I^2 = \\int_{-\\infty}^\\infty\\int_{-\\infty}^\\infty e^{-(x^2+y^2)/2}\\,dx\\,dy\n =\\int_0^{2\\pi}\\int_0^\\infty re^{-r^2/2} \\, dr \\, d\\theta." }, { "math_id": 9, "text": "re^{-r^2/2}. \\, " }, { "math_id": 10, "text": "x=\\sqrt{-2\\ln(u_1)}\\cos(2\\pi u_2),\\quad y=\\sqrt{-2\\ln(u_1)}\\sin(2\\pi u_2)" }, { "math_id": 11, "text": "\\sqrt{-2\\ln(u_1)}," }, { "math_id": 12, "text": "\\operatorname{expi}(z) = e^{i z} = \\cos(z) + i \\sin(z), \\, " }, { "math_id": 13, "text": " r = \\sqrt{-2 \\ln(u_1)} " }, { "math_id": 14, "text": " z = 2 \\pi u_2. " }, { "math_id": 15, "text": " \\ re^{i z} = \\sqrt{-2 \\ln(u_1)} e^{i 2 \\pi u_2} =\\sqrt{-2 \\ln(u_1)}\\left[ \\cos(2 \\pi u_2) + i \\sin(2 \\pi u_2)\\right]." }, { "math_id": 16, "text": "\\sqrt{x^2 + y^2}" }, { "math_id": 17, "text": "s=x^2+y^2" }, { "math_id": 18, "text": "\\left( \\frac{x}{\\sqrt{s}}, \\frac{y}{\\sqrt{s}} \\right), \\, " }, { "math_id": 19, "text": "\\pi/4 \\approx 79\\%" }, { "math_id": 20, "text": "\\sqrt{-2\\ln(s)}, \\, " } ]
https://en.wikipedia.org/wiki?curid=11310655
1131073
Pentagonal cupola
5th Johnson solid (12 faces) In geometry, the pentagonal cupola is one of the Johnson solids ("J"5). It can be obtained as a slice of the rhombicosidodecahedron. The pentagonal cupola consists of 5 equilateral triangles, 5 squares, 1 pentagon, and 1 decagon. Properties. The pentagonal cupola's faces are five equilateral triangles, five squares, one regular pentagon, and one regular decagon. It has the property of convexity and regular polygonal faces, from which it is classified as the fifth Johnson solid. It is an example of an elementary polyhedron, meaning it cannot be separated by a plane into two or more smaller regular-faced polyhedra. The following formulae for the circumradius formula_0, height formula_1, surface area formula_2, and volume formula_3 may be applied if all faces are regular with edge length formula_4: formula_5 It has an axis of symmetry passing through the center of both the top and the base; the cupola is symmetric under rotations about this axis by one-, two-, three-, and four-fifths of a full turn. It is also mirror-symmetric relative to any perpendicular plane passing through a bisector of the decagonal base. Therefore, it has pyramidal symmetry, the cyclic group formula_6 of order ten. Related polyhedron. The pentagonal cupola can be used to construct many other polyhedra. A construction that involves the attachment of its base to another polyhedron is known as augmentation; attaching it to prisms or antiprisms is known as elongation or gyroelongation. Some of the Johnson solids with such constructions are: elongated pentagonal cupola formula_7, gyroelongated pentagonal cupola formula_8, pentagonal orthobicupola formula_9, pentagonal gyrobicupola formula_10, pentagonal orthocupolarotunda formula_11, pentagonal gyrocupolarotunda formula_12, elongated pentagonal orthobicupola formula_13, elongated pentagonal gyrobicupola formula_14, elongated pentagonal orthocupolarotunda formula_15, gyroelongated pentagonal bicupola formula_16, gyroelongated pentagonal cupolarotunda formula_17, augmented truncated dodecahedron formula_18, parabiaugmented truncated dodecahedron formula_19, metabiaugmented truncated dodecahedron formula_20, triaugmented truncated dodecahedron formula_21, gyrate rhombicosidodecahedron formula_22, parabigyrate rhombicosidodecahedron formula_23, metabigyrate rhombicosidodecahedron formula_24, and trigyrate rhombicosidodecahedron formula_25. Relatedly, a construction from polyhedra by removing one or more pentagonal cupolas is known as diminishment: diminished rhombicosidodecahedron formula_26, paragyrate diminished rhombicosidodecahedron formula_27, metagyrate diminished rhombicosidodecahedron formula_28, bigyrate diminished rhombicosidodecahedron formula_29, parabidiminished rhombicosidodecahedron formula_30, metabidiminished rhombicosidodecahedron formula_31, gyrate bidiminished rhombicosidodecahedron formula_32, and tridiminished rhombicosidodecahedron formula_33. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " R " }, { "math_id": 1, "text": " h " }, { "math_id": 2, "text": " A " }, { "math_id": 3, "text": " V " }, { "math_id": 4, "text": " a " }, { "math_id": 5, "text": " \\begin{align}\n h &= \\sqrt{\\frac{5 - \\sqrt{5}}{10}}a &\\approx 0.526a, \\\\\n R &= \\frac{\\sqrt{11+4\\sqrt{5}}}{2}a &\\approx 2.233a, \\\\\n A &= \\frac{20+5\\sqrt{3}+\\sqrt{5\\left(145+62\\sqrt{5}\\right)}}{4}a^2 &\\approx 16.580a^2, \\\\\n V &= \\frac{5+4\\sqrt{5}}{6}a^3 &\\approx 2.324a^3.\n\\end{align} " }, { "math_id": 6, "text": " C_{5\\mathrm{v}} " }, { "math_id": 7, "text": " J_{20} " }, { "math_id": 8, "text": " J_{24} " }, { "math_id": 9, "text": " J_{30} " }, { "math_id": 10, "text": " J_{31} " }, { "math_id": 11, "text": " J_{32} " }, { "math_id": 12, "text": " J_{33} " }, { "math_id": 13, "text": " J_{38} " }, { "math_id": 14, "text": " J_{39} " }, { "math_id": 15, "text": " J_{40} " }, { "math_id": 16, "text": " J_{46} " }, { "math_id": 17, "text": " J_{47} " }, { "math_id": 18, "text": " J_{68} " }, { "math_id": 19, "text": " J_{69} " }, { "math_id": 20, "text": " J_{70} " }, { "math_id": 21, "text": " J_{71} " }, { "math_id": 22, "text": " J_{72} " }, { "math_id": 23, "text": " J_{73} " }, { "math_id": 24, "text": " J_{74} " }, { "math_id": 25, "text": " J_{75} " }, { "math_id": 26, "text": " J_{76} " }, { "math_id": 27, "text": " J_{77} " }, { "math_id": 28, "text": " J_{78} " }, { "math_id": 29, "text": " J_{79} " }, { "math_id": 30, "text": " J_{80} " }, { "math_id": 31, "text": " J_{81} " }, { "math_id": 32, "text": " J_{82} " }, { "math_id": 33, "text": " J_{83} " } ]
https://en.wikipedia.org/wiki?curid=1131073
1131111
Gyrate rhombicosidodecahedron
72nd Johnson solid In geometry, the gyrate rhombicosidodecahedron is one of the Johnson solids ("J"72). It is also a canonical polyhedron. A Johnson solid is one of 92 strictly convex polyhedra that are composed of regular polygon faces but are not uniform polyhedra (that is, they are not Platonic solids, Archimedean solids, prisms, or antiprisms). They were named by Norman Johnson, who first listed these polyhedra in 1966. Construction. The gyrate rhombicosidodecahedron can be constructed similarly to the rhombicosidodecahedron: it is constructed from the parabidiminished rhombicosidodecahedron by attaching two regular pentagonal cupolas onto its decagonal faces. As a result, these pentagonal cupolas cover its decagonal faces, so the resulting polyhedron has 20 equilateral triangles, 30 squares, and 12 regular pentagons as its faces. The difference between those two polyhedrons is that one of the two pentagonal cupolas of the gyrate rhombicosidodecahedron is rotated through 36°. A convex polyhedron in which all faces are regular polygons is called a Johnson solid, and the gyrate rhombicosidodecahedron is among them, enumerated as the 72nd Johnson solid formula_0. Properties. Because the two aforementioned polyhedrons have similar construction, they have the same surface area and volume. A gyrate rhombicosidodecahedron with edge length "a" has a surface area obtained by adding the areas of all of its faces: formula_1 Its volume can be calculated by slicing it into two regular pentagonal cupolas and one parabidiminished rhombicosidodecahedron, and adding their volumes: formula_2 The gyrate rhombicosidodecahedron is one of the five Johnson solids that do not have the Rupert property, meaning a polyhedron of the same or larger size and the same shape as it cannot pass through a hole in it. The other Johnson solids with no such property are the parabigyrate rhombicosidodecahedron, metabigyrate rhombicosidodecahedron, trigyrate rhombicosidodecahedron, and paragyrate diminished rhombicosidodecahedron. See also. Alternative Johnson solids, constructed by rotating different cupolae of a rhombicosidodecahedron, are: the parabigyrate rhombicosidodecahedron formula_3, the metabigyrate rhombicosidodecahedron formula_4, and the trigyrate rhombicosidodecahedron formula_5. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " J_{72} " }, { "math_id": 1, "text": " \\left(30+5\\sqrt{3}+3\\sqrt{25+10\\sqrt{5}}\\right)a^2 \\approx 59.306a^2. " }, { "math_id": 2, "text": " \\frac{60+29\\sqrt{5}}{3}a^3 \\approx 41.615a^3. " }, { "math_id": 3, "text": " J_{73} " }, { "math_id": 4, "text": " J_{74} " }, { "math_id": 5, "text": " J_{75} " } ]
https://en.wikipedia.org/wiki?curid=1131111
1131243
Dedekind zeta function
Generalization of the Riemann zeta function for algebraic number fields In mathematics, the Dedekind zeta function of an algebraic number field "K", generally denoted ζ"K"("s"), is a generalization of the Riemann zeta function (which is obtained in the case where "K" is the field of rational numbers Q). It can be defined as a Dirichlet series, it has an Euler product expansion, it satisfies a functional equation, it has an analytic continuation to a meromorphic function on the complex plane C with only a simple pole at "s" = 1, and its values encode arithmetic data of "K". The extended Riemann hypothesis states that if "ζ""K"("s") = 0 and 0 &lt; Re("s") &lt; 1, then Re("s") = 1/2. The Dedekind zeta function is named for Richard Dedekind who introduced it in his supplement to Peter Gustav Lejeune Dirichlet's Vorlesungen über Zahlentheorie. Definition and basic properties. Let "K" be an algebraic number field. Its Dedekind zeta function is first defined for complex numbers "s" with real part Re("s") &gt; 1 by the Dirichlet series formula_0 where "I" ranges through the non-zero ideals of the ring of integers "O""K" of "K" and "N""K"/Q("I") denotes the absolute norm of "I" (which is equal to both the index ["O""K" : "I"] of "I" in "O""K" or equivalently the cardinality of quotient ring "O""K" / "I"). This sum converges absolutely for all complex numbers "s" with real part Re("s") &gt; 1. In the case "K" = Q, this definition reduces to that of the Riemann zeta function. Euler product. The Dedekind zeta function of formula_1 has an Euler product which is a product over all the non-zero prime ideals formula_2 of formula_3 formula_4 This is the expression in analytic terms of the uniqueness of prime factorization of ideals in formula_3. For formula_5 is non-zero. Analytic continuation and functional equation. Erich Hecke first proved that "ζ""K"("s") has an analytic continuation to a meromorphic function that is analytic at all points of the complex plane except for one simple pole at "s" = 1. The residue at that pole is given by the analytic class number formula and is made up of important arithmetic data involving invariants of the unit group and class group of "K". The Dedekind zeta function satisfies a functional equation relating its values at "s" and 1 − "s". Specifically, let Δ"K" denote the discriminant of "K", let "r"1 (resp. "r"2) denote the number of real places (resp. complex places) of "K", and let formula_6 and formula_7 where Γ("s") is the gamma function. Then, the functions formula_8 satisfy the functional equation formula_9 Special values. Analogously to the Riemann zeta function, the values of the Dedekind zeta function at integers encode (at least conjecturally) important arithmetic data of the field "K". For example, the analytic class number formula relates the residue at "s" = 1 to the class number "h"("K") of "K", the regulator "R"("K") of "K", the number "w"("K") of roots of unity in "K", the absolute discriminant of "K", and the number of real and complex places of "K". Another example is at "s" = 0 where it has a zero whose order "r" is equal to the rank of the unit group of "O""K" and the leading term is given by formula_10 It follows from the functional equation that formula_11. Combining the functional equation and the fact that Γ("s") is infinite at all integers less than or equal to zero yields that "ζ""K"("s") vanishes at all negative even integers. It even vanishes at all negative odd integers unless "K" is totally real (i.e. "r"2 = 0; e.g. 
Q or a real quadratic field). In the totally real case, Carl Ludwig Siegel showed that "ζ""K"("s") is a non-zero rational number at negative odd integers. Stephen Lichtenbaum conjectured specific values for these rational numbers in terms of the algebraic K-theory of "K". Relations to other "L"-functions. For the case in which "K" is an abelian extension of Q, its Dedekind zeta function can be written as a product of Dirichlet L-functions. For example, when "K" is a quadratic field this shows that the ratio formula_12 is the "L"-function "L"("s", χ), where χ is a Jacobi symbol used as a Dirichlet character. That the zeta function of a quadratic field is a product of the Riemann zeta function and a certain Dirichlet "L"-function is an analytic formulation of the quadratic reciprocity law of Gauss. In general, if "K" is a Galois extension of Q with Galois group "G", its Dedekind zeta function is the Artin "L"-function of the regular representation of "G" and hence has a factorization in terms of Artin "L"-functions of irreducible Artin representations of "G". The relation with Artin L-functions shows that if "L"/"K" is a Galois extension then formula_13 is holomorphic (formula_14 "divides" formula_15): for general extensions the result would follow from the Artin conjecture for L-functions. Additionally, "ζ""K"("s") is the Hasse–Weil zeta function of Spec "O""K" and the motivic "L"-function of the motive coming from the cohomology of Spec "K". Arithmetically equivalent fields. Two fields are called arithmetically equivalent if they have the same Dedekind zeta function. Wieb Bosma and Bart de Smit (2002) used Gassmann triples to give some examples of pairs of non-isomorphic fields that are arithmetically equivalent. In particular, some of these pairs have different class numbers, so the Dedekind zeta function of a number field does not determine its class number. It has also been shown that two number fields "K" and "L" are arithmetically equivalent if and only if all but finitely many prime numbers "p" have the same inertia degrees in the two fields, i.e., if formula_16 are the prime ideals in "K" lying over "p", then the tuples formula_17 need to be the same for "K" and for "L" for almost all "p". Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\zeta_K (s) = \\sum_{I \\subseteq \\mathcal{O}_K} \\frac{1}{(N_{K/\\mathbf{Q}} (I))^{s}}" }, { "math_id": 1, "text": "K" }, { "math_id": 2, "text": "\\mathfrak{p}" }, { "math_id": 3, "text": "\\mathcal{O}_K" }, { "math_id": 4, "text": "\\zeta_K (s) = \\prod_{\\mathfrak{p} \\subseteq \\mathcal{O}_K} \\frac{1}{1-N_{K/\\mathbf{Q}}(\\mathfrak{p})^{-s}},\\text{ for Re}(s)>1." }, { "math_id": 5, "text": "\\mathrm{Re}(s)>1,\\ \\zeta_K(s)" }, { "math_id": 6, "text": "\\Gamma_\\mathbf{R}(s)=\\pi^{-s/2}\\Gamma(s/2)" }, { "math_id": 7, "text": "\\Gamma_\\mathbf{C}(s)=\n(2\\pi)^{-s}\\Gamma(s)" }, { "math_id": 8, "text": "\\Lambda_K(s)=\\left|\\Delta_K\\right|^{s/2}\\Gamma_\\mathbf{R}(s)^{r_1}\\Gamma_\\mathbf{C}(s)^{r_2}\\zeta_K(s)\\qquad\n\\Xi_K(s)=\\tfrac12(s^2+\\tfrac14)\\Lambda_K(\\tfrac12+is)\n" }, { "math_id": 9, "text": "\\Lambda_K(s)=\\Lambda_K(1-s).\\qquad\n\\Xi_K(-s)=\\Xi_K(s)\\;" }, { "math_id": 10, "text": "\\lim_{s\\rightarrow0}s^{-r}\\zeta_K(s)=-\\frac{h(K)R(K)}{w(K)}." }, { "math_id": 11, "text": "r=r_1+r_2-1" }, { "math_id": 12, "text": "\\frac{\\zeta_K(s)}{\\zeta_{\\mathbf{Q}}(s)}" }, { "math_id": 13, "text": "\\frac{\\zeta_L(s)}{\\zeta_K(s)}" }, { "math_id": 14, "text": "\\zeta_K(s)" }, { "math_id": 15, "text": "\\zeta_L(s)" }, { "math_id": 16, "text": "\\mathfrak p_i" }, { "math_id": 17, "text": "(\\dim_{\\mathbf Z/p} \\mathcal O_K / \\mathfrak p_i)" } ]
https://en.wikipedia.org/wiki?curid=1131243
1131644
Weil pairing
Non-degenerate bilinear pairing defined on the torsion points of an abelian variety In mathematics, the Weil pairing is a pairing (bilinear form, though with multiplicative notation) on the points of order dividing "n" of an elliptic curve "E", taking values in "n"th roots of unity. More generally there is a similar Weil pairing between points of order "n" of an abelian variety and its dual. It was introduced by André Weil (1940) for Jacobians of curves, who gave an abstract algebraic definition; the corresponding results for elliptic functions were known, and can be expressed simply by use of the Weierstrass sigma function. Formulation. Choose an elliptic curve "E" defined over a field "K", and an integer "n" &gt; 0 (we require "n" to be coprime to char("K") if char("K") &gt; 0) such that "K" contains a primitive nth root of unity. Then the "n"-torsion on formula_0 is known to be a Cartesian product of two cyclic groups of order "n". The Weil pairing produces an "n"-th root of unity formula_1 by means of Kummer theory, for any two points formula_2, where formula_3 and formula_4. A down-to-earth construction of the Weil pairing is as follows. Choose a function "F" in the function field of "E" over the algebraic closure of "K" with divisor formula_5 So "F" has a simple zero at each point "P" + "kQ", and a simple pole at each point "kQ" if these points are all distinct. Then "F" is well-defined up to multiplication by a constant. If "G" is the translation of "F" by "Q", then by construction "G" has the same divisor, so the function "G/F" is constant. Therefore if we define formula_6 we shall have an "n"-th root of unity (as translating "n" times must give 1) other than 1. With this definition it can be shown that "w" is alternating and bilinear, giving rise to a non-degenerate pairing on the "n"-torsion. The Weil pairing does not extend to a pairing on all the torsion points (the direct limit of "n"-torsion points) because the pairings for different "n" are not the same. However they do fit together to give a pairing "T"ℓ("E") × "T"ℓ("E") → "T"ℓ(μ) on the Tate module "T"ℓ("E") of the elliptic curve "E" (the inverse limit of the ℓ"n"-torsion points) to the Tate module "T"ℓ(μ) of the multiplicative group (the inverse limit of ℓ"n" roots of unity). Generalisation to abelian varieties. For abelian varieties over an algebraically closed field "K", the Weil pairing is a nondegenerate pairing formula_7 for all "n" prime to the characteristic of "K". Here formula_8 denotes the dual abelian variety of "A". This is the so-called "Weil pairing" for higher dimensions. If "A" is equipped with a polarisation formula_9, then composition gives a (possibly degenerate) pairing formula_10 If "C" is a projective, nonsingular curve of genus ≥ 0 over "k", and "J" its Jacobian, then the theta-divisor of "J" induces a principal polarisation of "J", which in this particular case happens to be an isomorphism (see autoduality of Jacobians). Hence, composing the Weil pairing for "J" with the polarisation gives a nondegenerate pairing formula_11 for all "n" prime to the characteristic of "k". As in the case of elliptic curves, explicit formulae for this pairing can be given in terms of divisors of "C". Applications. The pairing is used in number theory and algebraic geometry, and has also been applied in elliptic curve cryptography and identity based encryption. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
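The structural properties stated above (bilinear, alternating, non-degenerate) already determine what the pairing looks like once a basis {P, Q} of the n-torsion is chosen: writing ζ = w(P, Q), one gets w(aP + bQ, cP + dQ) = ζ^(ad − bc). The Python sketch below is illustrative only; it works with exponents modulo n in that coordinate form and does not carry out the divisor construction on an actual curve.

n = 7  # any n coprime to the characteristic of K

def pairing_exponent(R, S):
    # R = (a, b) and S = (c, d) are coordinates with respect to a fixed basis {P, Q}
    # of the n-torsion; the return value e means w(R, S) = zeta^e with zeta = w(P, Q).
    a, b = R
    c, d = S
    return (a * d - b * c) % n

R, S, T = (2, 5), (3, 1), (4, 6)
assert pairing_exponent(R, R) == 0                                   # alternating: w(R, R) = 1
assert pairing_exponent(R, S) == (-pairing_exponent(S, R)) % n       # hence antisymmetric
sum_RT = ((R[0] + T[0]) % n, (R[1] + T[1]) % n)
assert pairing_exponent(sum_RT, S) == (pairing_exponent(R, S) + pairing_exponent(T, S)) % n  # bilinear
assert any(pairing_exponent(R, (c, d)) for c in range(n) for d in range(n))  # non-degenerate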
[ { "math_id": 0, "text": "E(\\overline{K})" }, { "math_id": 1, "text": "w(P,Q) \\in \\mu_n" }, { "math_id": 2, "text": "P,Q \\in E(K)[n]" }, { "math_id": 3, "text": "E(K)[n]=\\{T \\in E(K) \\mid n \\cdot T = O \\} " }, { "math_id": 4, "text": "\\mu_n = \\{x\\in K \\mid x^n =1 \\} " }, { "math_id": 5, "text": " \\mathrm{div}(F)= \\sum_{0 \\leq k < n}[P+k\\cdot Q] - \\sum_{0 \\leq k < n} [k\\cdot Q]. " }, { "math_id": 6, "text": " w(P,Q):=\\frac{G}{F}" }, { "math_id": 7, "text": "A[n] \\times A^\\vee[n] \\longrightarrow \\mu_n" }, { "math_id": 8, "text": "A^\\vee" }, { "math_id": 9, "text": "\\lambda: A \\longrightarrow A^\\vee" }, { "math_id": 10, "text": "A[n] \\times A[n] \\longrightarrow \\mu_n." }, { "math_id": 11, "text": " J[n]\\times J[n] \\longrightarrow \\mu_n" } ]
https://en.wikipedia.org/wiki?curid=1131644
1131674
Muzzle energy
Kinetic energy of a bullet Muzzle energy is the kinetic energy of a bullet as it is expelled from the muzzle of a firearm. Without consideration of factors such as aerodynamics and gravity for the sake of comparison, muzzle energy is used as a rough indication of the destructive potential of a given firearm or cartridge. The heavier the bullet and especially the faster it moves, the higher its muzzle energy and the more damage it will do. Kinetic energy. The general formula for the kinetic energy is formula_0 where "v" is the velocity of the bullet and "m" is the mass of the bullet. Although both mass and velocity contribute to the muzzle energy, the muzzle energy is proportional to the mass while proportional to the "square" of the velocity. The velocity of the bullet is a more important determinant of muzzle energy. For a constant velocity, if the mass is doubled, the energy is doubled; however, for a constant mass, if the velocity is doubled, the muzzle energy increases "four" times. In the SI system the above "E"k will be in unit joules if the mass, "m", is in kilograms, and the speed, "v", is in metres per second. Typical muzzle energies of common firearms and cartridges. Muzzle energy is dependent upon the factors previously listed, and velocity is highly variable depending upon the length of the barrel a projectile is fired from. Also the muzzle energy is only an upper limit for how much energy is transmitted to the target, and the effects of a ballistic trauma depend on several other factors as well. There is wide variation in commercial ammunition. A bullet fired from .357 Magnum handgun can achieve a muzzle energy of . A bullet fired from the same gun might only achieve of muzzle energy, depending upon the manufacturer of the cartridge. Some .45 Colt +P ammunition can produce of muzzle energy. Legal requirements on muzzle energy. Many parts of the world use muzzle energy to classify guns into categories that require different categories of licence. In general guns that have the potential to be more dangerous have tighter controls, while those of minimal energy, such as small air pistols or air rifles, require little more than user registration, or in some countries have no restrictions at all. Overview of gun laws by nation indicates the various approaches taken. Firearms regulation in the United Kingdom is a complicated example, but is demarked by muzzle energy as well as barrel length and ammunition diameter. Some jurisdictions also stipulate "minimum" muzzle energies for safe hunting. For example, in Denmark rifle ammunition used for hunting the largest types of game there such as red deer must have a kinetic energy "E"100 (i.e.: at range) of at least and a bullet mass of at least or alternatively an "E"100 of at least and a bullet mass of at least . Namibia specifies three levels of minimum muzzle energy for hunting depending on the size of the game, for game such as springbok, for game such as hartebeest, and for Big Five game, together with a minimum caliber of . In Germany, airsoft guns with a muzzle energy of no more than are exempt from the gun law, while air guns with a muzzle energy of no more than may be acquired without a firearms license. Mainland China uses a varied concept of "muzzle ratio kinetic energy" (), which is the quotient (ratio) of the muzzle energy divided by the bore cross sectional area, to distinguish genuine guns from "imitation" replicas like toy guns. 
The Ministry of Public Security unilaterally introduced the concept in 2008 leading up to the Beijing Olympic Games, dictating that anything over 1.8 J/cm2 to be defined as real firearms. This caused many existing toy gun products on the Chinese market (particularly airsoft) to become illegal overnight, as almost all airsoft guns shooting a standard pellet have a muzzle velocity over , which translates to more than of muzzle energy, or 2.0536 J/cm2 of "ratio energy". For comparison a standard baseball changeup thrown at has 1.951 J/cm2 of "ratio energy" which also exceeds the 1.8 J/cm2 of a real firearm while a fastball can reach over 3.5 J/cm2 or nearly double the level of a real firearm. The subsequent crackdowns by local law enforcement led to many seizures, arrests and prosecutions of individual owners for "trafficking and possession of illegal weapons" over the years for weapons that were previously permitted. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
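The kinetic-energy formula above, and the Chinese "muzzle ratio kinetic energy" derived from it, are straightforward to compute. The Python sketch below uses placeholder numbers chosen only for illustration (a 10 g projectile at 400 m/s, and a 6 mm projectile face); they are not figures taken from this article or from any specific cartridge.

from math import pi

def muzzle_energy(mass_kg, velocity_ms):
    # E_k = 1/2 m v^2, in joules
    return 0.5 * mass_kg * velocity_ms ** 2

def ratio_kinetic_energy(energy_j, bore_diameter_cm):
    # "Muzzle ratio kinetic energy": muzzle energy divided by the bore
    # cross-sectional area, in J/cm^2
    area_cm2 = pi * (bore_diameter_cm / 2) ** 2
    return energy_j / area_cm2

e = muzzle_energy(0.010, 400.0)            # placeholder projectile: 10 g at 400 m/s
print(e)                                   # 800.0 J
e_bb = muzzle_energy(0.00020, 100.0)       # placeholder 0.20 g pellet at 100 m/s -> 1.0 J
print(ratio_kinetic_energy(e_bb, 0.6))     # ~3.5 J/cm^2, above the 1.8 J/cm^2 threshold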
[ { "math_id": 0, "text": "E_\\mathrm{k} = \\frac{1}{2} mv^2," } ]
https://en.wikipedia.org/wiki?curid=1131674
11316994
Knot thickness
In knot theory, each realization of a link or knot has an assigned knot thickness. The thickness τ of a link allows us to introduce a scale with respect to which we can then define the ropelength of a link. Definition. There exist several possible definitions of thickness that coincide for smooth enough curves. Global radius of curvature. The thickness is defined using the simpler concept of the local thickness τ("x"). The local thickness at a point "x" on the link is defined as formula_0 where "x", "y", and "z" are points on the link, all distinct, and "r"("x", "y", "z") is the radius of the circle that passes through all three points ("x", "y", "z"). From this definition we can deduce that the local thickness is at most equal to the local radius of curvature. The thickness of a link is defined as formula_1 Injectivity radius. This definition ensures that a normal tube to the link with radius equal to τ("L") will not self intersect, and so we arrive at a "real world" knot made out of a thick string.
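For a curve known only through finitely many sample points, the local thickness can be approximated directly from the definition: evaluate r(x, y, z) as the circumradius of the triangle xyz and minimise over the other two sample points. The Python sketch below is a brute-force illustration (not from the source); collinear triples are treated as having infinite radius.

import numpy as np

def circumradius(x, y, z):
    # r(x, y, z): radius of the circle through the three points, |yz| |zx| |xy| / (4 * area).
    a, b, c = (np.linalg.norm(y - z), np.linalg.norm(z - x), np.linalg.norm(x - y))
    area = 0.5 * np.linalg.norm(np.cross(y - x, z - x))
    return float('inf') if area == 0 else a * b * c / (4 * area)

def local_thickness(points, i):
    # tau(x_i) ~ inf over pairs of other sample points y, z of r(x_i, y, z).
    n = len(points)
    return min(circumradius(points[i], points[j], points[k])
               for j in range(n) for k in range(j + 1, n)
               if j != i and k != i)

# On a sampled unit circle every triple of samples lies on the circle itself,
# so the local thickness at each sample should come out as (numerically) 1.
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
print(local_thickness(circle, 0))   # ~1.0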
[ { "math_id": 0, "text": " \\tau(x)=\\inf r(x,y,z),\\, " }, { "math_id": 1, "text": "\\tau(L) = \\inf \\tau(x)." } ]
https://en.wikipedia.org/wiki?curid=11316994
11317157
Ropelength
Knot invariant In physical knot theory, each realization of a link or knot has an associated ropelength. Intuitively this is the minimal length of an ideally flexible rope that is needed to tie a given link, or knot. Knots and links that minimize ropelength are called ideal knots and ideal links respectively. Definition. The ropelength of a knotted curve formula_0 is defined as the ratio formula_1, where formula_2 is the length of formula_0 and formula_3 is the knot thickness of formula_0. Ropelength can be turned into a knot invariant by defining the ropelength of a knot formula_4 to be the minimum ropelength over all curves that realize formula_4. Ropelength minimizers. One of the earliest knot theory questions was posed in the following terms: &lt;templatestyles src="Block indent/styles.css"/&gt;"Can I tie a knot on a foot-long rope that is one inch thick?" In terms of ropelength, this asks if there is a knot with ropelength formula_5. The answer is no: an argument using quadrisecants shows that the ropelength of any nontrivial knot has to be at least formula_6. However, the search for the answer has spurred research on both theoretical and computational ground. It has been shown that for each link type there is a ropelength minimizer although it may only be of differentiability class formula_7. For the simplest nontrivial knot, the trefoil knot, computer simulations have shown that its minimum ropelength is at most 16.372. Dependence on crossing number. An extensive search has been devoted to showing relations between ropelength and other knot invariants such as the crossing number of a knot. For every knot formula_4, the ropelength of formula_4 is at least proportional to formula_8, where formula_9 denotes the crossing number. There exist knots and links, namely the formula_10 torus knots and formula_11-Hopf links, for which this lower bound is tight. That is, for these knots (in big O notation), formula_12 On the other hand, there also exist knots whose ropelength is larger, proportional to the crossing number itself rather than to a smaller power of it. This is nearly tight, as for every knot, formula_13 The proof of this near-linear upper bound uses a divide-and-conquer argument to show that minimum projections of knots can be embedded as planar graphs in the cubic lattice. However, no one has yet observed a knot family with super-linear dependence of length on crossing number and it is conjectured that the tight upper bound should be linear. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
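Since ropelength is curve length divided by thickness, a crude numerical estimate for a sampled curve needs only the arc length of the polygon through the samples and the minimum circumradius over triples of samples (the discrete version of τ from the previous entry; the circumradius helper is repeated here so the fragment is self-contained). The Python sketch below is illustrative only; it checks the estimator on a sampled round circle, for which the definition above gives Len/τ = 2π.

import numpy as np
from itertools import combinations

def circumradius(x, y, z):
    # Radius of the circle through three points: |yz| |zx| |xy| / (4 * area of the triangle).
    a, b, c = (np.linalg.norm(y - z), np.linalg.norm(z - x), np.linalg.norm(x - y))
    area = 0.5 * np.linalg.norm(np.cross(y - x, z - x))
    return float('inf') if area == 0 else a * b * c / (4 * area)

def ropelength_estimate(points):
    # L(C) = Len(C) / tau(C), with tau estimated as the minimum circumradius over triples.
    length = sum(np.linalg.norm(points[(i + 1) % len(points)] - points[i])
                 for i in range(len(points)))
    thickness = min(circumradius(points[i], points[j], points[k])
                    for i, j, k in combinations(range(len(points)), 3))
    return length / thickness

theta = np.linspace(0, 2 * np.pi, 40, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
print(ropelength_estimate(circle))   # ~2 pi ~ 6.28 for a round circle, under this definition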
[ { "math_id": 0, "text": "C" }, { "math_id": 1, "text": "L(C) = \\operatorname{Len}(C)/\\tau(C)" }, { "math_id": 2, "text": "\\operatorname{Len}(C)" }, { "math_id": 3, "text": "\\tau(C)" }, { "math_id": 4, "text": "K" }, { "math_id": 5, "text": "12" }, { "math_id": 6, "text": "15.66" }, { "math_id": 7, "text": "C^1" }, { "math_id": 8, "text": "\\operatorname{Cr}(K)^{3/4}" }, { "math_id": 9, "text": "\\operatorname{Cr}(K)" }, { "math_id": 10, "text": "(k,k-1)" }, { "math_id": 11, "text": "k" }, { "math_id": 12, "text": "L(K)=O(\\operatorname{Cr}(K)^{3/4})." }, { "math_id": 13, "text": "L(K)= O(\\operatorname{Cr}(K)\\log^5(\\operatorname{Cr}(K)))." } ]
https://en.wikipedia.org/wiki?curid=11317157
1131721
Neutron flux
Total distance traveled by neutrons within a volume over a time period The neutron flux is a scalar quantity used in nuclear physics and nuclear reactor physics. It is the total distance travelled by all free neutrons per unit time and volume. Equivalently, it can be defined as the number of neutrons travelling through a small sphere of radius formula_0 in a time interval, divided by a maximal cross section of the sphere (the great disk area, formula_1) and by the duration of the time interval. The dimension of neutron flux is formula_2 and the usual unit is cm⁻²s⁻¹ (reciprocal square centimetre times reciprocal second). The neutron fluence is defined as the neutron flux integrated over a certain time period. So its dimension is formula_3 and its usual unit is cm⁻² (reciprocal square centimetre). An older term used instead of cm⁻² was "n.v.t." (neutrons, velocity, time). Natural neutron flux. Neutron flux in asymptotic giant branch stars and in supernovae is responsible for most of the natural nucleosynthesis producing elements heavier than iron. In stars there is a relatively low neutron flux on the order of 10⁵ to 10¹¹ cm⁻² s⁻¹, resulting in nucleosynthesis by the s-process (slow neutron-capture process). By contrast, after a core-collapse supernova, there is an extremely high neutron flux, on the order of 10³² cm⁻² s⁻¹, resulting in nucleosynthesis by the r-process (rapid neutron-capture process). Earth atmospheric neutron flux, apparently from thunderstorms, can reach levels of 3·10⁻² to 9·10⁺¹ cm⁻² s⁻¹. However, recent results (considered invalid by the original investigators) obtained with unshielded scintillation neutron detectors show a decrease in the neutron flux during thunderstorms. Recent research appears to support lightning generating 10¹³–10¹⁵ neutrons per discharge via photonuclear processes. Artificial neutron flux. Artificial neutron flux refers to neutron flux which is man-made, either as byproducts from weapons or nuclear energy production or for a specific application such as from a research reactor or by spallation. A flow of neutrons is often used to initiate the fission of unstable large nuclei. The additional neutron(s) may cause the nucleus to become unstable, causing it to decay (split) to form more stable products. This effect is essential in fission reactors and nuclear weapons. Within a nuclear fission reactor, the neutron flux is the primary quantity measured to control the reaction inside. The flux shape is the term applied to the density or relative strength of the flux as it moves around the reactor. Typically the strongest neutron flux occurs in the middle of the reactor core, becoming lower toward the edges. The higher the neutron flux the greater the chance of a nuclear reaction occurring as there are more neutrons going through an area per unit time. Reactor vessel wall neutron fluence. A reactor vessel of a typical nuclear power plant (PWR) endures in 40 years (32 full reactor years) of operation approximately 6.5×10¹⁹ cm⁻² ("E" &gt; 1 MeV) of neutron fluence. Neutron flux causes reactor vessels to suffer from neutron embrittlement. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
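Fluence is the time integral of flux, so for a roughly constant flux the two quantities are related by a single multiplication. The Python lines below are an illustrative back-of-the-envelope check using the figures quoted in this article: they recover the average flux implied by the vessel-wall fluence of 6.5×10¹⁹ cm⁻² accumulated over 32 full reactor years.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

fluence = 6.5e19            # neutrons per cm^2 (E > 1 MeV), from the figure quoted above
full_power_years = 32
average_flux = fluence / (full_power_years * SECONDS_PER_YEAR)
print(average_flux)         # ~6.4e10 neutrons cm^-2 s^-1, averaged over the vessel life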
[ { "math_id": 0, "text": "R" }, { "math_id": 1, "text": "\\pi R^2" }, { "math_id": 2, "text": "\\mathsf{L}^{-2}\\mathsf{T}^{-1}" }, { "math_id": 3, "text": "\\mathsf{L}^{-2}" } ]
https://en.wikipedia.org/wiki?curid=1131721
113192
ER
ER or Er may refer to: See also. This disambiguation page lists articles associated with the title ER.
[ { "math_id": 0, "text": "\\epsilon_r" } ]
https://en.wikipedia.org/wiki?curid=113192
11320637
Dimension function
In mathematics, the notion of an (exact) dimension function (also known as a gauge function) is a tool in the study of fractals and other subsets of metric spaces. Dimension functions are a generalisation of the simple "diameter to the dimension" power law used in the construction of "s"-dimensional Hausdorff measure. Motivation: "s"-dimensional Hausdorff measure. Consider a metric space ("X", "d") and a subset "E" of "X". Given a number "s" ≥ 0, the "s"-dimensional Hausdorff measure of "E", denoted "μ""s"("E"), is defined by formula_0 where formula_1 "μ""δ""s"("E") can be thought of as an approximation to the "true" "s"-dimensional area/volume of "E" given by calculating the minimal "s"-dimensional area/volume of a covering of "E" by sets of diameter at most "δ". As a function of increasing "s", "μ""s"("E") is non-increasing. In fact, for all values of "s", except possibly one, "H""s"("E") is either 0 or +∞; this exceptional value is called the Hausdorff dimension of "E", here denoted dimH("E"). Intuitively speaking, "μ""s"("E") = +∞ for "s" &lt; dimH("E") for the same reason as the 1-dimensional linear length of a 2-dimensional disc in the Euclidean plane is +∞; likewise, "μ""s"("E") = 0 for "s" &gt; dimH("E") for the same reason as the 3-dimensional volume of a disc in the Euclidean plane is zero. The idea of a dimension function is to use different functions of diameter than just diam("C")"s" for some "s", and to look for the same property of the Hausdorff measure being finite and non-zero. Definition. Let ("X", "d") be a metric space and "E" ⊆ "X". Let "h" : [0, +∞) → [0, +∞] be a function. Define "μ""h"("E") by formula_2 where formula_3 Then "h" is called an (exact) dimension function (or gauge function) for "E" if "μ""h"("E") is finite and strictly positive. There are many conventions as to the properties that "h" should have: Rogers (1998), for example, requires that "h" should be monotonically increasing for "t" ≥ 0, strictly positive for "t" &gt; 0, and continuous on the right for all "t" ≥ 0. Packing dimension. Packing dimension is constructed in a very similar way to Hausdorff dimension, except that one "packs" "E" from inside with pairwise disjoint balls of diameter at most "δ". Just as before, one can consider functions "h" : [0, +∞) → [0, +∞] more general than "h"("δ") = "δ""s" and call "h" an exact dimension function for "E" if the "h"-packing measure of "E" is finite and strictly positive. Example. Almost surely, a sample path "X" of Brownian motion in the Euclidean plane has Hausdorff dimension equal to 2, but the 2-dimensional Hausdorff measure "μ"2("X") is zero. The exact dimension function "h" is given by the logarithmic correction formula_4 I.e., with probability one, 0 &lt; "μ""h"("X") &lt; +∞ for a Brownian path "X" in R2. For Brownian motion in Euclidean "n"-space R"n" with "n" ≥ 3, the exact dimension function is formula_5 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
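To see why the power-law gauge h(δ) = δ^s singles out one special exponent, consider the middle-thirds Cantor set, which at stage n is covered by 2^n intervals of diameter 3^(−n). The Python sketch below is illustrative only; it evaluates the covering sums (which bound the measure from above but do not by themselves prove it is positive) and shows the sum Σ h(diam C_i) collapsing to 0, staying at 1, or blowing up depending on how s compares with log 2 / log 3.

from math import log

s_critical = log(2) / log(3)          # ~0.6309, the Hausdorff dimension of the Cantor set

def covering_sum(s, n):
    # Natural stage-n covering of the middle-thirds Cantor set:
    # 2^n intervals, each of diameter 3^(-n), with gauge h(d) = d^s.
    return (2 ** n) * (3 ** (-n)) ** s

for s in (0.5, s_critical, 0.8):
    print(s, [round(covering_sum(s, n), 4) for n in (5, 10, 20)])
# s = 0.5            -> sums grow without bound
# s = log 2 / log 3  -> sums equal 1 at every stage (up to rounding)
# s = 0.8            -> sums tend to 0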
[ { "math_id": 0, "text": "\\mu^{s} (E) = \\lim_{\\delta \\to 0} \\mu_{\\delta}^{s} (E)," }, { "math_id": 1, "text": "\\mu_{\\delta}^{s} (E) = \\inf \\left\\{ \\left. \\sum_{i = 1}^{\\infty} \\mathrm{diam} (C_{i})^{s} \\right| \\mathrm{diam} (C_{i}) \\leq \\delta, \\bigcup_{i = 1}^{\\infty} C_{i} \\supseteq E \\right\\}." }, { "math_id": 2, "text": "\\mu^{h} (E) = \\lim_{\\delta \\to 0} \\mu_{\\delta}^{h} (E)," }, { "math_id": 3, "text": "\\mu_{\\delta}^{h} (E) = \\inf \\left\\{ \\left. \\sum_{i = 1}^{\\infty} h \\left( \\mathrm{diam} (C_{i}) \\right) \\right| \\mathrm{diam} (C_{i}) \\leq \\delta, \\bigcup_{i = 1}^{\\infty} C_{i} \\supseteq E \\right\\}." }, { "math_id": 4, "text": "h(r) = r^{2} \\cdot \\log \\frac1{r} \\cdot \\log \\log \\log \\frac1{r}." }, { "math_id": 5, "text": "h(r) = r^{2} \\cdot \\log \\log \\frac1r." } ]
https://en.wikipedia.org/wiki?curid=11320637
1132068
Density altitude
Altitude relative to standard atmospheric conditions The density altitude is the altitude relative to standard atmospheric conditions at which the air density would be equal to the indicated air density at the place of observation. In other words, the density altitude is the air density given as a height above mean sea level. The density altitude can also be considered to be the pressure altitude adjusted for a non-standard temperature. Both an increase in the temperature and a decrease in the atmospheric pressure, and, to a much lesser degree, an increase in the humidity, will cause an increase in the density altitude. In hot and humid conditions, the density altitude at a particular location may be significantly higher than the true altitude. In aviation, the density altitude is used to assess an aircraft's aerodynamic performance under certain weather conditions. The lift generated by the aircraft's airfoils, and the relation between its indicated airspeed (IAS) and its true airspeed (TAS), are also subject to air-density changes. Furthermore, the power delivered by the aircraft's engine is affected by the density and composition of the atmosphere. Aircraft safety. Air density is perhaps the single most important factor affecting aircraft performance. It has a direct bearing on: Aircraft taking off from a “hot and high” airport, such as the Quito Airport or Mexico City, are at a significant aerodynamic disadvantage. The following effects result from a density altitude that is higher than the actual physical altitude: Due to these performance issues, an aircraft's takeoff weight may need to be lowered, or takeoffs may need to be scheduled for cooler times of the day. The wind direction and the runway slope may need to be taken into account. Skydiving. The density altitude is an important factor in skydiving, and one that can be difficult to judge properly, even for experienced skydivers. In addition to the general change in wing efficiency that is common to all aviation, skydiving has additional considerations. There is an increased risk due to the high mobility of jumpers (who will often travel to a drop zone with a completely different density altitude than they are used to, without being made consciously aware of it by the routine of calibrating to QNH/QFE). Another factor is the higher susceptibility to hypoxia at high density altitudes, which, combined especially with the unexpected higher free-fall rate, can create dangerous situations and accidents. Parachutes at higher altitudes fly more aggressively, making their effective area smaller, which is more demanding for a pilot's skill and can be especially dangerous for high-performance landings, which require accurate estimates and have a low margin of error before they become dangerous. Calculation. The density altitude can be calculated from the atmospheric pressure and the outside air temperature (assuming dry air) using the following formula: formula_0 In this formula, formula_1, density altitude in meters (m); formula_2, (static) atmospheric pressure; formula_3, standard sea-level atmospheric pressure, International Standard Atmosphere (ISA): 1013.25 hectopascals (hPa), or U.S. 
Standard Atmosphere: 29.92 inches of mercury (inHg); formula_4, outside air temperature in kelvins (K); formula_5 = 288.15K, ISA sea-level air temperature; formula_6 = 0.0065K/m, ISA temperature lapse rate (below 11km); formula_7 ≈ 8.3144598J/mol·K, ideal gas constant; formula_8 ≈ 9.80665m/s2, gravitational acceleration; formula_9 ≈ 0.028964kg/mol, molar mass of dry air. The National Weather Service (NWS) formula. The National Weather Service uses the following dry-air approximation to the formula for the density altitude above in its standard: formula_10 In this formula, formula_11, National Weather Service density altitude in feet (formula_12); formula_2, station pressure (static atmospheric pressure) in inches of mercury (inHg); formula_4, station temperature (outside air temperature) in degrees Fahrenheit (°F). Note that the NWS standard specifies that the density altitude should be rounded to the nearest 100ft. Approximation formula for calculating the density altitude from the pressure altitude. This is an easier formula to calculate (with great approximation) the "density altitude" from the "pressure altitude" and the "ISA temperature deviation": formula_13 In this formula, formula_14, pressure altitude in feet (ft) formula_15; formula_16, atmospheric pressure in millibars (mb) adjusted to mean sea level; formula_17, outside air temperature in degrees Celsius (°C); formula_18, assuming that the outside air temperature falls at the rate of 1.98°C per 1,000ft of altitude until the tropopause (at ) is reached. Rounding up 1.98°C to 2°C, this approximation simplifies to become formula_19 Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
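The dry-air formula above is easy to put into code. The Python sketch below is illustrative: the constants are the ISA values listed above, and the sample "hot day" inputs are made up for demonstration. It confirms that standard sea-level conditions give a density altitude of approximately zero.

def density_altitude_m(pressure_hpa, temperature_k):
    # DA = (T_SL / Gamma) * [1 - ((P/P_SL) / (T/T_SL)) ** (1 / (g*M/(Gamma*R) - 1))]
    P_SL = 1013.25          # hPa, ISA sea-level pressure
    T_SL = 288.15           # K, ISA sea-level temperature
    GAMMA = 0.0065          # K/m, ISA temperature lapse rate
    R = 8.3144598           # J/(mol K), ideal gas constant
    G = 9.80665             # m/s^2, gravitational acceleration
    M = 0.028964            # kg/mol, molar mass of dry air
    exponent = 1.0 / (G * M / (GAMMA * R) - 1.0)
    ratio = (pressure_hpa / P_SL) / (temperature_k / T_SL)
    return (T_SL / GAMMA) * (1.0 - ratio ** exponent)

print(density_altitude_m(1013.25, 288.15))   # ~0 m: ISA sea-level conditions
print(density_altitude_m(1013.25, 303.15))   # positive (roughly 500-550 m): a 30 degC day at sea-level pressure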
[ { "math_id": 0, "text": "\n\\text{DA} \\approx \\frac{T_\\text{SL}}{\\Gamma} \\left[ 1 - \\left( \\frac{P / P_\\text{SL}}{T / T_\\text{SL}} \\right)^{\\left(\\frac{g M}{\\Gamma R} - 1\\right)^{-1}} \\right].\n" }, { "math_id": 1, "text": " \\text{DA} " }, { "math_id": 2, "text": " P " }, { "math_id": 3, "text": " P_\\text{SL} " }, { "math_id": 4, "text": " T " }, { "math_id": 5, "text": " T_\\text{SL} " }, { "math_id": 6, "text": " \\Gamma " }, { "math_id": 7, "text": " R " }, { "math_id": 8, "text": " g " }, { "math_id": 9, "text": " M " }, { "math_id": 10, "text": "\n\\text{DA}_\\text{NWS} = 145442.16 ~ \\text{ft} \\left( 1 - \\left[ 17.326 ~ \\frac{^\\circ \\text{F}}{\\text{inHg}} \\ \\frac{P}{459.67 ~ {{}^\\circ \\text{F}} + T} \\right]^{0.235} \\right).\n" }, { "math_id": 11, "text": " \\text{DA}_\\text{NWS} " }, { "math_id": 12, "text": " \\text{ft} " }, { "math_id": 13, "text": "\n\\text{DA} \\approx \\text{PA} + 118.8 ~ \\frac{\\text{ft}}{{^\\circ \\text{C}}} \\left(T_\\text{OA} - T_\\text{ISA}\\right).\n" }, { "math_id": 14, "text": " \\text{PA} " }, { "math_id": 15, "text": " \\approx \\text{station elevation in feet} + 27 ~ \\frac{\\text{ft}}{\\text{mb}} (1013 ~ \\text{mb} - \\text{QNH}) " }, { "math_id": 16, "text": " \\text{QNH} " }, { "math_id": 17, "text": " T_\\text{OA}" }, { "math_id": 18, "text": " T_\\text{ISA} \\approx 15 ~ {{}^\\circ \\text{C}} - 1.98 ~ {{}^\\circ \\text{C}} \\, \\frac{\\text{PA}}{1000 ~ \\text{ft}} " }, { "math_id": 19, "text": "\\begin{align}\n \\text{DA}\n & \\approx \\text{PA} + 118.8 ~ \\frac{\\text{ft}}{^\\circ \\text{C}} \\left[ T_\\text{OA} + \\frac{\\text{PA}}{500 ~ \\text{ft}} {^\\circ \\text{C}} - 15 ~ {^\\circ \\text{C}} \\right] \\\\[3pt]\n & = 1.2376 \\, \\text{PA} + 118.8 ~ \\frac{\\text{ft}}{{}^\\circ \\text{C}} \\, T_\\text{OA} - 1782 ~ \\text{ft}.\n\\end{align}" } ]
https://en.wikipedia.org/wiki?curid=1132068
113222
Law of sines
Property of all triangles on a Euclidean plane In trigonometry, the law of sines, sine law, sine formula, or sine rule is an equation relating the lengths of the sides of any triangle to the sines of its angles. According to the law, formula_0 where "a", "b", and "c" are the lengths of the sides of a triangle, and "α", "β", and "γ" are the opposite angles (see figure 2), while "R" is the radius of the triangle's circumcircle. When the last part of the equation is not used, the law is sometimes stated using the reciprocals; formula_1 The law of sines can be used to compute the remaining sides of a triangle when two angles and a side are known—a technique known as triangulation. It can also be used when two sides and one of the non-enclosed angles are known. In some such cases, the triangle is not uniquely determined by this data (called the "ambiguous case") and the technique gives two possible values for the enclosed angle. The law of sines is one of two trigonometric equations commonly applied to find lengths and angles in scalene triangles, with the other being the law of cosines. The law of sines can be generalized to higher dimensions on surfaces with constant curvature. History. H. J. J. Winter's book "Eastern Science" states that the 7th century Indian mathematician Brahmagupta describes what we now know as the law of sines in his astronomical treatise Brāhmasphuṭasiddhānta. In his partial translation of this work, Colebrooke translates Brahmagupta's statement of the sine rule as: The product of the two sides of a triangle, divided by twice the perpendicular, is the central line; and the double of this is the diameter of the central line. According to Ubiratàn D'Ambrosio and Helaine Selin, the spherical law of sines was discovered in the 10th century. It is variously attributed to Abu-Mahmud Khojandi, Abu al-Wafa' Buzjani, Nasir al-Din al-Tusi and Abu Nasr Mansur. Ibn Muʿādh al-Jayyānī's "The book of unknown arcs of a sphere" in the 11th century contains the spherical law of sines. The plane law of sines was later stated in the 13th century by Nasīr al-Dīn al-Tūsī. In his "On the Sector Figure", he stated the law of sines for plane and spherical triangles, and provided proofs for this law. According to Glen Van Brummelen, "The Law of Sines is really Regiomontanus's foundation for his solutions of right-angled triangles in Book IV, and these solutions are in turn the bases for his solutions of general triangles." Regiomontanus was a 15th-century German mathematician. Proof. With the side of length a as the base, the triangle's altitude can be computed as "b" sin "γ" or as "c" sin "β". Equating these two expressions gives formula_2 and similar equations arise by choosing the side of length b or the side of length c as the base of the triangle. The ambiguous case of triangle solution. When using the law of sines to find a side of a triangle, an ambiguous case occurs when two separate triangles can be constructed from the data provided (i.e., there are two different possible solutions to the triangle). In the case shown below they are triangles "ABC" and "ABC′". 
&lt;templatestyles src="Block indent/styles.css"/&gt; Given a general triangle, the following conditions would need to be fulfilled for the case to be ambiguous: If all the above conditions are true, then each of angles "β" and "β′" produces a valid triangle, meaning that both of the following are true: formula_3 From there we can find the corresponding "β" and "b" or "β′" and "b′" if required, where "b" is the side bounded by vertices "A" and "C" and "b′" is bounded by "A" and "C′". Examples. The following are examples of how to solve a problem using the law of sines. Example 1. Given: side "a" = 20, side "c" = 24, and angle "γ" = 40°. Angle "α" is desired. Using the law of sines, we conclude that formula_4 formula_5 Note that the potential solution "α" = 147.61° is excluded because that would necessarily give "α" + "β" + "γ" &gt; 180°. Example 2. If the lengths of two sides of the triangle "a" and "b" are equal to "x", the third side has length "c", and the angles opposite the sides of lengths "a", "b", and "c" are "α", "β", and "γ" respectively then formula_6 Relation to the circumcircle. In the identity formula_7 the common value of the three fractions is actually the diameter of the triangle's circumcircle. This result dates back to Ptolemy. Proof. As shown in the figure, let there be a circle with inscribed formula_8 and another inscribed formula_9 that passes through the circle's center O. The formula_10 has a central angle of formula_11 and thus formula_12, by Thales's theorem. Since formula_13 is a right triangle, formula_14 where formula_15 is the radius of the circumscribing circle of the triangle. Angles formula_16 and formula_17 lie on the same circle and subtend the same chord "c"; thus, by the inscribed angle theorem, formula_18. Therefore, formula_19 Rearranging yields formula_20 Repeating the process of creating formula_9 with other points gives formula_21 Relationship to the area of the triangle. The area of a triangle is given by formula_22, where formula_23 is the angle enclosed by the sides of lengths "a" and "b". Substituting the sine law into this equation gives formula_24 Taking formula_25 as the circumscribing radius, formula_26 It can also be shown that this equality implies formula_27 where "T" is the area of the triangle and "s" is the semiperimeter formula_28 The second equality above readily simplifies to Heron's formula for the area. The sine rule can also be used in deriving the following formula for the triangle's area: denoting the semi-sum of the angles' sines as formula_29, we have formula_30 where formula_25 is the radius of the circumcircle: formula_31. The spherical law of sines. The spherical law of sines deals with triangles on a sphere, whose sides are arcs of great circles. Suppose the radius of the sphere is 1. Let "a", "b", and "c" be the lengths of the great-arcs that are the sides of the triangle. Because it is a unit sphere, "a", "b", and "c" are the angles at the center of the sphere subtended by those arcs, in radians. Let "A", "B", and "C" be the angles opposite those respective sides. These are dihedral angles between the planes of the three great circles. Then the spherical law of sines says: formula_32 Vector proof. Consider a unit sphere with three unit vectors OA, OB and OC drawn from the origin to the vertices of the triangle. Thus the angles "α", "β", and "γ" are the angles "a", "b", and "c", respectively. The arc BC subtends an angle of magnitude "a" at the centre. 
Introduce a Cartesian basis with OA along the "z"-axis and OB in the "xz"-plane making an angle "c" with the "z"-axis. The vector OC projects to ON in the "xy"-plane and the angle between ON and the "x"-axis is "A". Therefore, the three vectors have components: formula_33 The scalar triple product, OA ⋅ (OB × OC) is the volume of the parallelepiped formed by the position vectors of the vertices of the spherical triangle OA, OB and OC. This volume is invariant to the specific coordinate system used to represent OA, OB and OC. The value of the scalar triple product OA ⋅ (OB × OC) is the 3 × 3 determinant with OA, OB and OC as its rows. With the "z"-axis along OA the square of this determinant is formula_34 Repeating this calculation with the "z"-axis along OB gives (sin "c" sin "a" sin "B")2, while with the "z"-axis along OC it is (sin "a" sin "b" sin "C")2. Equating these expressions and dividing throughout by (sin "a" sin "b" sin "c")2 gives formula_35 where V is the volume of the parallelepiped formed by the position vector of the vertices of the spherical triangle. Consequently, the result follows. It is easy to see how for small spherical triangles, when the radius of the sphere is much greater than the sides of the triangle, this formula becomes the planar formula at the limit, since formula_36 and the same for sin "b" and sin "c". Geometric proof. Consider a unit sphere with: formula_37 Construct point formula_38 and point formula_39 such that formula_40 Construct point formula_41 such that formula_42 It can therefore be seen that formula_43 and formula_44 Notice that formula_41 is the projection of formula_45 on plane formula_46. Therefore formula_47 By basic trigonometry, we have: formula_48 But formula_49 Combining them we have: formula_50 By applying similar reasoning, we obtain the spherical law of sine: formula_51 Other proofs. A purely algebraic proof can be constructed from the spherical law of cosines. From the identity formula_52 and the explicit expression for formula_53 from the spherical law of cosines formula_54 Since the right hand side is invariant under a cyclic permutation of formula_55 the spherical sine rule follows immediately. The figure used in the Geometric proof above is used by and also provided in Banerjee (see Figure 3 in this paper) to derive the sine law using elementary linear algebra and projection matrices. Hyperbolic case. In hyperbolic geometry when the curvature is −1, the law of sines becomes formula_56 In the special case when "B" is a right angle, one gets formula_57 which is the analog of the formula in Euclidean geometry expressing the sine of an angle as the opposite side divided by the hypotenuse. The case of surfaces of constant curvature. Define a generalized sine function, depending also on a real parameter formula_58: formula_59 The law of sines in constant curvature formula_58 reads as formula_60 By substituting formula_61, formula_62, and formula_63, one obtains respectively formula_64, formula_65, and formula_66, that is, the Euclidean, spherical, and hyperbolic cases of the law of sines described above. Let formula_67 indicate the circumference of a circle of radius formula_68 in a space of constant curvature formula_58. Then formula_69. Therefore, the law of sines can also be expressed as: formula_70 This formulation was discovered by János Bolyai. Higher dimensions. A tetrahedron has four triangular facets. 
The absolute value of the polar sine (psin) of the normal vectors to the three facets that share a vertex of the tetrahedron, divided by the area of the fourth facet will not depend upon the choice of the vertex: formula_71 More generally, for an "n"-dimensional simplex (i.e., triangle ("n" = 2), tetrahedron ("n" = 3), pentatope ("n" = 4), etc.) in "n"-dimensional Euclidean space, the absolute value of the polar sine of the normal vectors of the facets that meet at a vertex, divided by the hyperarea of the facet opposite the vertex is independent of the choice of the vertex. Writing "V" for the hypervolume of the "n"-dimensional simplex and "P" for the product of the hyperareas of its ("n" − 1)-dimensional facets, the common ratio is formula_72 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
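Example 1 above is mechanical enough to script. The short Python fragment below is illustrative; it reproduces that worked example with a = 20, c = 24 and γ = 40°, taking the acute arcsine branch as in the text, and then completes the triangle.

from math import sin, asin, radians, degrees

a, c, gamma = 20.0, 24.0, radians(40.0)

alpha = asin(a * sin(gamma) / c)            # acute solution, as chosen in Example 1
beta = radians(180.0) - alpha - gamma       # remaining angle
b = a * sin(beta) / sin(alpha)              # law of sines again, for the third side

print(degrees(alpha))   # ~32.39 degrees
print(degrees(beta))    # ~107.61 degrees
print(b)                # ~35.6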
[ { "math_id": 0, "text": " \\frac{a}{\\sin{\\alpha}} \\,=\\, \\frac{b}{\\sin{\\beta}} \\,=\\, \\frac{c}{\\sin{\\gamma}} \\,=\\, 2R, " }, { "math_id": 1, "text": " \\frac{\\sin{\\alpha}}{a} \\,=\\, \\frac{\\sin{\\beta}}{b} \\,=\\, \\frac{\\sin{\\gamma}}{c}. " }, { "math_id": 2, "text": "\\frac{\\sin \\beta}{b} = \\frac{\\sin \\gamma}{c}\\,," }, { "math_id": 3, "text": " {\\gamma}' = \\arcsin\\frac{c \\sin{\\alpha}}{a} \\quad \\text{or} \\quad {\\gamma} = \\pi - \\arcsin\\frac{c \\sin{\\alpha}}{a}." }, { "math_id": 4, "text": "\\frac{\\sin \\alpha}{20} = \\frac{\\sin (40^\\circ)}{24}." }, { "math_id": 5, "text": " \\alpha = \\arcsin\\left( \\frac{20\\sin (40^\\circ)}{24} \\right) \\approx 32.39^\\circ. " }, { "math_id": 6, "text": "\\begin{align}\n& \\alpha = \\beta = \\frac{180^\\circ-\\gamma}{2}= 90^\\circ-\\frac{\\gamma}{2} \\\\[6pt]\n& \\sin \\alpha = \\sin \\beta = \\sin \\left(90^\\circ-\\frac{\\gamma}{2}\\right) = \\cos \\left(\\frac{\\gamma}{2}\\right) \\\\[6pt]\n& \\frac{c}{\\sin \\gamma}=\\frac{a}{\\sin \\alpha}=\\frac{x}{\\cos \\left(\\frac{\\gamma}{2}\\right)} \\\\[6pt]\n& \\frac{c \\cos \\left(\\frac{\\gamma}{2}\\right)}{\\sin \\gamma} = x\n\\end{align}" }, { "math_id": 7, "text": " \\frac{a}{\\sin{\\alpha}} = \\frac{b}{\\sin{\\beta}} = \\frac{c}{\\sin{\\gamma}}," }, { "math_id": 8, "text": " \\triangle ABC" }, { "math_id": 9, "text": " \\triangle ADB" }, { "math_id": 10, "text": " \\angle AOD" }, { "math_id": 11, "text": " 180^\\circ" }, { "math_id": 12, "text": " \\angle ABD = 90^\\circ" }, { "math_id": 13, "text": " \\triangle ABD" }, { "math_id": 14, "text": " \\sin{\\delta}= \\frac{\\text{opposite}}{\\text{hypotenuse}}= \\frac{c}{2R}," }, { "math_id": 15, "text": " R= \\frac{d}{2}" }, { "math_id": 16, "text": "{\\gamma}" }, { "math_id": 17, "text": "{\\delta}" }, { "math_id": 18, "text": "{\\gamma} = {\\delta}" }, { "math_id": 19, "text": " \\sin{\\delta} = \\sin{\\gamma} = \\frac{c}{2R}." }, { "math_id": 20, "text": " 2R = \\frac{c}{\\sin{\\gamma}}." }, { "math_id": 21, "text": " \\frac{a}{\\sin{\\alpha}} = \\frac{b}{\\sin{\\beta}} = \\frac{c}{\\sin{\\gamma}}=2R." }, { "math_id": 22, "text": "T = \\frac{1}{2}ab \\sin \\theta" }, { "math_id": 23, "text": "\\theta" }, { "math_id": 24, "text": "T=\\frac{1}{2}ab \\cdot \\frac {c}{2R}." }, { "math_id": 25, "text": "R" }, { "math_id": 26, "text": "T=\\frac{abc}{4R}." }, { "math_id": 27, "text": "\\begin{align}\n\\frac{abc} {2T}\n& = \\frac{abc} {2\\sqrt{s(s-a)(s-b)(s-c)}} \\\\[6pt]\n& = \\frac {2abc} {\\sqrt{{(a^2+b^2+c^2)}^2-2(a^4+b^4+c^4) }},\n\\end{align}" }, { "math_id": 28, "text": "s = \\frac{1}{2}\\left(a+b+c\\right)." }, { "math_id": 29, "text": "S =\\frac{1}{2}\\left(\\sin A + \\sin B + \\sin C\\right)" }, { "math_id": 30, "text": "T = 4R^{2} \\sqrt{S \\left(S - \\sin A\\right) \\left(S - \\sin B\\right) \\left(S - \\sin C\\right)}" }, { "math_id": 31, "text": "2R = \\frac{a}{\\sin A} = \\frac{b}{\\sin B} = \\frac{c}{\\sin C}" }, { "math_id": 32, "text": "\\frac{\\sin A}{\\sin a} = \\frac{\\sin B}{\\sin b} = \\frac{\\sin C}{\\sin c}." }, { "math_id": 33, "text": "\\mathbf{OA} = \\begin{pmatrix}0 \\\\ 0 \\\\ 1\\end{pmatrix}, \\quad\n\\mathbf{OB} = \\begin{pmatrix}\\sin c \\\\ 0 \\\\ \\cos c\\end{pmatrix}, \\quad\n\\mathbf{OC} = \\begin{pmatrix}\\sin b\\cos A \\\\ \\sin b\\sin A \\\\ \\cos b\\end{pmatrix}." 
}, { "math_id": 34, "text": " \\begin{align}\n\\bigl(\\mathbf{OA} \\cdot (\\mathbf{OB} \\times \\mathbf{OC})\\bigr)^2\n& = \\left(\\det \\begin{pmatrix}\\mathbf{OA} & \\mathbf{OB} & \\mathbf{OC}\\end{pmatrix}\\right)^2 \\\\[4pt]\n& = \\begin{vmatrix}\n0 & 0 & 1 \\\\ \n\\sin c & 0 & \\cos c \\\\ \n\\sin b \\cos A & \\sin b \\sin A & \\cos b \n\\end{vmatrix} ^2\n= \\left(\\sin b \\sin c \\sin A\\right)^2.\n\\end{align}" }, { "math_id": 35, "text": "\n\\frac{\\sin^2 A}{\\sin^2 a}\n= \\frac{\\sin^2 B}{\\sin^2 b}\n= \\frac{\\sin^2 C}{\\sin^2 c}\n= \\frac{V^2}{\\sin^2 (a) \\sin^2 (b) \\sin^2 (c)},\n" }, { "math_id": 36, "text": "\\lim_{a \\to 0} \\frac{\\sin a}{a} = 1" }, { "math_id": 37, "text": "OA = OB = OC = 1" }, { "math_id": 38, "text": "D" }, { "math_id": 39, "text": "E" }, { "math_id": 40, "text": "\\angle ADO = \\angle AEO = 90^\\circ" }, { "math_id": 41, "text": "A'" }, { "math_id": 42, "text": "\\angle A'DO = \\angle A'EO = 90^\\circ" }, { "math_id": 43, "text": "\\angle ADA' = B" }, { "math_id": 44, "text": "\\angle AEA' = C" }, { "math_id": 45, "text": "A" }, { "math_id": 46, "text": "OBC" }, { "math_id": 47, "text": "\\angle AA'D = \\angle AA'E = 90^\\circ" }, { "math_id": 48, "text": "\\begin{align}\n AD &= \\sin c \\\\\n AE &= \\sin b\n\\end{align}" }, { "math_id": 49, "text": "AA' = AD \\sin B = AE \\sin C " }, { "math_id": 50, "text": "\\begin{align}\n\\sin c \\sin B &= \\sin b \\sin C \\\\\n\\Rightarrow \\frac{\\sin B}{\\sin b} &=\\frac{\\sin C}{\\sin c}\n\\end{align}" }, { "math_id": 51, "text": "\\frac{\\sin A}{\\sin a} =\\frac{\\sin B}{\\sin b} =\\frac{\\sin C}{\\sin c} " }, { "math_id": 52, "text": "\\sin^2 A = 1 - \\cos^2 A" }, { "math_id": 53, "text": "\\cos A" }, { "math_id": 54, "text": "\\begin{align}\n \\sin^2\\!A &= 1-\\left(\\frac{\\cos a - \\cos b\\, \\cos c}{\\sin b \\,\\sin c}\\right)^2\\\\\n &=\\frac{\\left(1-\\cos^2\\!b\\right) \\left(1-\\cos^2\\!c\\right)-\\left(\\cos a - \\cos b\\, \\cos c\\right)^2}\n {\\sin^2\\!b \\,\\sin^2\\!c}\\\\[8pt]\n \\frac{\\sin A}{\\sin a}\n &= \\frac{\\left[1-\\cos^2\\!a-\\cos^2\\!b-\\cos^2\\!c + 2\\cos a\\cos b\\cos c\\right]^{1/2}}{\\sin a\\sin b\\sin c}.\n\\end{align}" }, { "math_id": 55, "text": "a,\\;b,\\;c" }, { "math_id": 56, "text": "\\frac{\\sin A}{\\sinh a} = \\frac{\\sin B}{\\sinh b} = \\frac{\\sin C}{\\sinh c} \\,." }, { "math_id": 57, "text": "\\sin C = \\frac{\\sinh c}{\\sinh b} " }, { "math_id": 58, "text": "\\kappa" }, { "math_id": 59, "text": "\\sin_\\kappa(x) = x - \\frac{\\kappa}{3!}x^3 + \\frac{\\kappa^2}{5!}x^5 - \\frac{\\kappa^3}{7!}x^7 + \\cdots = \\sum_{n=0}^\\infty \\frac{(-1)^n \\kappa^n}{(2n+1)!}x^{2n+1}." }, { "math_id": 60, "text": "\\frac{\\sin A}{\\sin_\\kappa a} = \\frac{\\sin B}{\\sin_\\kappa b} = \\frac{\\sin C}{\\sin_\\kappa c} \\,." }, { "math_id": 61, "text": "\\kappa=0" }, { "math_id": 62, "text": "\\kappa=1" }, { "math_id": 63, "text": "\\kappa=-1" }, { "math_id": 64, "text": "\\sin_{0}(x) = x" }, { "math_id": 65, "text": "\\sin_{1}(x) = \\sin x" }, { "math_id": 66, "text": "\\sin_{-1}(x) = \\sinh x" }, { "math_id": 67, "text": "p_\\kappa(r)" }, { "math_id": 68, "text": "r" }, { "math_id": 69, "text": "p_\\kappa(r)=2\\pi\\sin_\\kappa(r)" }, { "math_id": 70, "text": "\\frac{\\sin A}{p_\\kappa(a)} = \\frac{\\sin B}{p_\\kappa(b)} = \\frac{\\sin C}{p_\\kappa(c)} \\,." 
}, { "math_id": 71, "text": "\\begin{align}\n& \\frac{\\left|\\operatorname{psin}(\\mathbf{b}, \\mathbf{c}, \\mathbf{d})\\right|}{\\mathrm{Area}_a} =\n \\frac{\\left|\\operatorname{psin}(\\mathbf{a}, \\mathbf{c}, \\mathbf{d})\\right|}{\\mathrm{Area}_b} =\n \\frac{\\left|\\operatorname{psin}(\\mathbf{a}, \\mathbf{b}, \\mathbf{d})\\right|}{\\mathrm{Area}_c} =\n \\frac{\\left|\\operatorname{psin}(\\mathbf{a}, \\mathbf{b}, \\mathbf{c})\\right|}{\\mathrm{Area}_d} \\\\[4pt]\n= {} & \\frac{(3~\\mathrm{Volume}_\\mathrm{tetrahedron})^2}{2~\\mathrm{Area}_a \\mathrm{Area}_b \\mathrm{Area}_c \\mathrm{Area}_d}\\,.\n\\end{align}" }, { "math_id": 72, "text": "\\frac{\\left|\\operatorname{psin}(\\mathbf{b}, \\ldots, \\mathbf{z})\\right|}{\\mathrm{Area}_a} = \\cdots = \\frac{\\left|\\operatorname{psin}(\\mathbf{a}, \\ldots, \\mathbf{y})\\right|}{\\mathrm{Area}_z} = \\frac{(nV)^{n-1}}{(n-1)! P}." } ]
https://en.wikipedia.org/wiki?curid=113222
11323534
Papyrus 115
Papyrus 115 ("P. Oxy." 4499), designated by 𝔓115 (in the Gregory-Aland numbering of New Testament manuscripts) is a fragmented manuscript of the New Testament written in Greek on papyrus. It consists of 26 fragments of a codex containing parts of the Book of Revelation. Using the study of comparative writing styles (palaeography), the manuscript is dated to the third century, "c." 225-275 AD. Scholars Bernard Pyne Grenfell and Arthur Hunt discovered the papyrus in Oxyrhynchus, Egypt. 𝔓115 was not deciphered and published until 2011. It is currently housed at the Ashmolean Museum. Description. The manuscript is a codex (precursor to the modern book) although in a very fragmentary condition. In its original form it had 33-36 lines per page of 15.5 cm by 23.5 cm. The surviving text includes Revelation 2:1-3, 13-15, 27-29; 3:10-12; 5:8-9; 6:5-6; 8:3-8, 11-13; 9:1-5, 7-16, 18-21; 10:1-4, 8-11; 11:1-5, 8-15, 18-19; 12:1-5, 8-10, 12-17; 13:1-3, 6-16, 18; 14:1-3, 5-7, 10-11, 14-15, 18-20; 15:1, 4-7. The manuscript has evidence of the following nomina sacra (names/titles considered sacred in Christianity): ΙΗΛ ("Israel"), ΑΥΤΟΥ ("his"), ΠΡΣ ("Father"), ΘΩ/ΘΝ/ΘΥ ("God"), ΑΝΩΝ/ΑΝΟΥ ("man"), ΠΝΑ ("Spirit"), ΟΥΝΟΥ/ΟΥΝΟΝ/ΟΥΝΩ ("heaven"), ΚΥ ("Master/Lord"). The manuscript uses the Greek numeral system, with no number extant as being written out in full. The manuscript is considered to be a witness to the Alexandrian text-type, following the text of Codex Alexandrinus (A) and Codex Ephraemi Rescriptus (C). An interesting element of 𝔓115 is that it gives the number of the beast in Revelation 13:18 as 616 (chi, iota, stigma (ΧΙϚ)), rather than the majority reading of 666 (chi, xi, stigma (ΧΞϚ)), as does Codex Ephraemi Rescriptus. According to the transcription of the INTF, a conjectured reading of the manuscript, due to the space left, is [χξϛ] η χιϛ ("666 or 616"), therefore not giving a definite number to the beast. και το τριτον της σεληνης ("and a third of the moon"): omit. : 𝔓115 incl. : A ο απολλυων ("the Destroyer") : 𝔓115 1740 απολλυων ("Apollyon") : 𝔓47 "pc" gig 2344 "δ/τεσσάρων" ("fourth") incl. : 𝔓115 Majority of manuscripts vgcl syp, h omit. : 𝔓47 A 0207 1611 2053 2344 "pc" lat syh co λεγουσαι ("they said") : 𝔓115 𝔓47 C 051 1006 1611 1841 1854 2329 2344 formula_0A λεγοντες ("saying") : A 2053 2351 formula_0K. το ονομα ("name") : 𝔓115 formula_0 co Bea. τα ονοματα αυτων ("their names") : 𝔓47 P 051 1006 1841 2329 al lat το ονομα αυτου ("his name") : C 1854 2053 "pc" Irlat Prim. "εκ του ουρανου" ("out of heaven") omit. : 𝔓115 175. incl. : 𝔓47 A Majority of manuscripts κατοικουντας ("who inhabit") : 𝔓115 A 2049 69. καθημενους ("dwelling") : 𝔓47 C P 1611 1854 2053 2329 "pc" syp, h Or "βχ" (2600) : 𝔓115. "αχ" / "χιλιων εξακοσιων" (1600) : 𝔓47 c2 A 42 69 82 93 177 325 456 498 627 699 1849 2138 2329 Majority of manuscripts References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathfrak{M}" } ]
https://en.wikipedia.org/wiki?curid=11323534
11324792
Tensor-hom adjunction
In mathematics, the tensor-hom adjunction is that the tensor product formula_0 and hom-functor formula_1 form an adjoint pair: formula_2 This is made more precise below. The order of terms in the phrase "tensor-hom adjunction" reflects their relationship: tensor is the left adjoint, while hom is the right adjoint. General statement. Say "R" and "S" are (possibly noncommutative) rings, and consider the right module categories (an analogous statement holds for left modules): formula_3 Fix an formula_4-bimodule formula_5 and define functors formula_6 and formula_7 as follows: formula_8 formula_9 Then formula_10 is left adjoint to formula_11. This means there is a natural isomorphism formula_12 This is actually an isomorphism of abelian groups. More precisely, if formula_13 is an formula_14-bimodule and formula_15 is a formula_16-bimodule, then this is an isomorphism of formula_17-bimodules. This is one of the motivating examples of the structure in a closed bicategory. Counit and unit. Like all adjunctions, the tensor-hom adjunction can be described by its counit and unit natural transformations. Using the notation from the previous section, the counit formula_18 has components formula_19 given by evaluation: For formula_20 formula_21 The components of the unit formula_22 formula_23 are defined as follows: For formula_24 in formula_13, formula_25 is a right formula_26-module homomorphism given by formula_27 The counit and unit equations can now be explicitly verified. For formula_13 in formula_28, formula_29 is given on simple tensors of formula_30 by formula_31 Likewise, formula_32 For formula_33 in formula_34"," formula_35 is a right formula_26-module homomorphism defined by formula_36 and therefore formula_37 The Ext and Tor functors. The Hom functor formula_38 commutes with arbitrary limits, while the tensor product formula_39 functor commutes with arbitrary colimits that exist in their domain category. However, in general, formula_38 fails to commute with colimits, and formula_39 fails to commute with limits; this failure occurs even among finite limits or colimits. This failure to preserve short exact sequences motivates the definition of the Ext functor and the Tor functor. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
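A familiar special case of this adjunction pattern may help: in the category of sets, with the cartesian product playing the role of the tensor product, the natural isomorphism Hom(Y × X, Z) ≅ Hom(Y, Hom(X, Z)) is ordinary currying. The Python sketch below is an analogy, not the module-theoretic construction itself; it spells out the two directions of that bijection and checks that they are mutually inverse on a sample function.

def curry(f):
    # Hom(Y x X, Z)  ->  Hom(Y, Hom(X, Z))
    return lambda y: (lambda x: f(y, x))

def uncurry(g):
    # Hom(Y, Hom(X, Z))  ->  Hom(Y x X, Z)
    return lambda y, x: g(y)(x)

f = lambda y, x: 3 * y + x          # an element of Hom(Y x X, Z) with Y = X = Z = int
g = curry(f)

assert g(2)(5) == f(2, 5) == 11
assert uncurry(curry(f))(2, 5) == f(2, 5)   # the round trip is the identity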
[ { "math_id": 0, "text": "- \\otimes X" }, { "math_id": 1, "text": "\\operatorname{Hom}(X,-)" }, { "math_id": 2, "text": "\\operatorname{Hom}(Y \\otimes X, Z) \\cong \\operatorname{Hom}(Y,\\operatorname{Hom}(X,Z))." }, { "math_id": 3, "text": "\\mathcal{C} = \\mathrm{Mod}_S\\quad \\text{and} \\quad \\mathcal{D} = \\mathrm{Mod}_R ." }, { "math_id": 4, "text": "(R,S)" }, { "math_id": 5, "text": "X" }, { "math_id": 6, "text": "F \\colon \\mathcal D \\rightarrow \\mathcal C" }, { "math_id": 7, "text": "G \\colon \\mathcal C \\rightarrow \\mathcal D" }, { "math_id": 8, "text": "F(Y) = Y \\otimes_R X \\quad \\text{for } Y \\in \\mathcal{D}" }, { "math_id": 9, "text": "G(Z) = \\operatorname{Hom}_S (X, Z) \\quad \\text{for } Z \\in \\mathcal{C}" }, { "math_id": 10, "text": "F" }, { "math_id": 11, "text": "G" }, { "math_id": 12, "text": "\\operatorname{Hom}_S (Y \\otimes_R X, Z) \\cong \\operatorname{Hom}_R (Y , \\operatorname{Hom}_S (X, Z))." }, { "math_id": 13, "text": "Y" }, { "math_id": 14, "text": "(A,R)" }, { "math_id": 15, "text": "Z" }, { "math_id": 16, "text": "(B,S)" }, { "math_id": 17, "text": "(B,A)" }, { "math_id": 18, "text": "\\varepsilon : FG \\to 1_{\\mathcal{C}}" }, { "math_id": 19, "text": "\\varepsilon_Z : \\operatorname{Hom}_S (X, Z) \\otimes_R X \\to Z" }, { "math_id": 20, "text": "\\phi \\in \\operatorname{Hom}_S (X, Z) \\quad \\text{and} \\quad x \\in X," }, { "math_id": 21, "text": "\\varepsilon(\\phi \\otimes x) = \\phi(x)." }, { "math_id": 22, "text": "\\eta : 1_{\\mathcal{D}} \\to GF" }, { "math_id": 23, "text": "\\eta_Y : Y \\to \\operatorname{Hom}_S (X, Y \\otimes_R X)" }, { "math_id": 24, "text": "y" }, { "math_id": 25, "text": "\\eta_Y(y) \\in \\operatorname{Hom}_S (X, Y \\otimes_R X)" }, { "math_id": 26, "text": "S" }, { "math_id": 27, "text": "\\eta_Y(y)(t) = y \\otimes t \\quad \\text{for } t \\in X." }, { "math_id": 28, "text": "\\mathcal{D}" }, { "math_id": 29, "text": "\n\\varepsilon_{FY}\\circ F(\\eta_Y) : \nY \\otimes_R X \\to \n\\operatorname{Hom}_S (X , Y \\otimes_R X) \\otimes_R X \\to\nY \\otimes_R X\n" }, { "math_id": 30, "text": "Y \\otimes X" }, { "math_id": 31, "text": "\\varepsilon_{FY}\\circ F(\\eta_Y)(y \\otimes x) = \\eta_Y(y)(x) = y \\otimes x." }, { "math_id": 32, "text": "G(\\varepsilon_Z)\\circ\\eta_{GZ} :\n\\operatorname{Hom}_S (X, Z) \\to \n\\operatorname{Hom}_S (X, \\operatorname{Hom}_S (X , Z) \\otimes_R X) \\to\n\\operatorname{Hom}_S (X, Z).\n" }, { "math_id": 33, "text": "\\phi" }, { "math_id": 34, "text": " \\operatorname{Hom}_S (X, Z)" }, { "math_id": 35, "text": "G(\\varepsilon_Z)\\circ\\eta_{GZ}(\\phi)" }, { "math_id": 36, "text": "G(\\varepsilon_Z)\\circ\\eta_{GZ}(\\phi)(x) = \\varepsilon_{Z}(\\phi \\otimes x) = \\phi(x)" }, { "math_id": 37, "text": "G(\\varepsilon_Z)\\circ\\eta_{GZ}(\\phi) = \\phi." }, { "math_id": 38, "text": "\\hom(X,-)" }, { "math_id": 39, "text": "-\\otimes X" } ]
https://en.wikipedia.org/wiki?curid=11324792
11325244
Raising and lowering indices
Mathematical operations relating different types of tensor In mathematics and mathematical physics, raising and lowering indices are operations on tensors which change their type. Raising and lowering indices are a form of index manipulation in tensor expressions. Vectors, covectors and the metric. Mathematical formulation. Mathematically vectors are elements of a vector space formula_0 over a field formula_1, and for use in physics formula_0 is usually defined with formula_2 or formula_3. Concretely, if the dimension formula_4 of formula_0 is finite, then, after making a choice of basis, we can view such vector spaces as formula_5 or formula_6. The dual space is the space of linear functionals mapping formula_7. Concretely, in matrix notation these can be thought of as row vectors, which give a number when applied to column vectors. We denote this by formula_8, so that formula_9 is a linear map formula_10. Then under a choice of basis formula_11, we can view vectors formula_12 as an formula_13 vector with components formula_14 (vectors are taken by convention to have indices up). This picks out a choice of basis formula_15 for formula_16, defined by the set of relations formula_17. For applications, raising and lowering is done using a structure known as the (pseudo‑)metric tensor (the 'pseudo-' refers to the fact we allow the metric to be indefinite). Formally, this is a non-degenerate, symmetric bilinear form formula_18 formula_19 formula_20 In this basis, it has components formula_21, and can be viewed as a symmetric matrix in formula_22 with these components. The inverse metric exists due to non-degeneracy and is denoted formula_23, and as a matrix is the inverse to formula_24. Raising and lowering vectors and covectors. Raising and lowering is then done in coordinates. Given a vector with components formula_14, we can contract with the metric to obtain a covector: formula_25 and this is what we mean by lowering the index. Conversely, contracting a covector with the inverse metric gives a vector: formula_26 This process is called raising the index. Raising and then lowering the same index (or conversely) are inverse operations, which is reflected in the metric and inverse metric tensors being inverse to each other (as is suggested by the terminology): formula_27 where formula_28 is the Kronecker delta or identity matrix. Finite-dimensional real vector spaces with (pseudo-)metrics are classified up to signature, a coordinate-free property which is well-defined by Sylvester's law of inertia. Possible metrics on real space are indexed by signature formula_29. This is a metric associated to formula_30 dimensional real space. The metric has signature formula_29 if there exists a basis (referred to as an orthonormal basis) such that in this basis, the metric takes the form formula_31 with formula_32 positive ones and formula_33 negative ones. The concrete space with elements which are formula_34-vectors and this concrete realization of the metric is denoted formula_35, where the 2-tuple formula_36 is meant to make it clear that the underlying vector space of formula_37 is formula_5: equipping this vector space with the metric formula_24 is what turns the space into formula_37. Examples: Well-formulated expressions are constrained by the rules of Einstein summation: any index may appear at most twice and furthermore a raised index must contract with a lowered index. With these rules we can immediately see that an expression such as formula_42 is well formulated while formula_43 is not. 
Example in Minkowski spacetime. The covariant 4-position is given by formula_44 with components: formula_45 (where x,y,z are the usual Cartesian coordinates) and the Minkowski metric tensor with metric signature (− + + +) is defined as formula_46 in components: formula_47 To raise the index, contract with the inverse metric tensor: formula_48 then for "λ" = 0: formula_49 and for "λ" = "j" = 1, 2, 3: formula_50 So the index-raised contravariant 4-position is: formula_51 This operation is equivalent to the matrix multiplication formula_52 Given two vectors, formula_53 and formula_54, we can write down their (pseudo-)inner product in two ways: formula_55 By lowering indices, we can write this expression as formula_56 What is this in matrix notation? The first expression can be written as formula_57 while the second is, after lowering the indices of formula_53, formula_58 Coordinate free formalism. It is instructive to consider what raising and lowering means in the abstract linear algebra setting. We first fix definitions: formula_0 is a finite-dimensional vector space over a field formula_1. Typically formula_2 or formula_3. formula_59 is a non-degenerate bilinear form, that is, formula_60 is a map which is linear in both arguments, making it a bilinear form. By formula_59 being non-degenerate we mean that for each formula_12 such that formula_61, there is a formula_62 such that formula_63 In concrete applications, formula_59 is often considered a structure on the vector space, for example an inner product or more generally a metric tensor which is allowed to have indefinite signature, or a symplectic form formula_64. Together these cover the cases where formula_59 is either symmetric or anti-symmetric, but in full generality formula_59 need not be either of these cases. There is a partial evaluation map associated to formula_59, formula_65 where formula_66 denotes an argument which is to be evaluated, and formula_67 denotes an argument whose evaluation is deferred. Then formula_68 is an element of formula_16, which sends formula_69. We made a choice to define this partial evaluation map as being evaluated on the first argument. We could just as well have defined it on the second argument, and non-degeneracy is also independent of the argument chosen. Also, when formula_59 has well-defined (anti-)symmetry, evaluating on either argument is equivalent (up to a minus sign for anti-symmetry). Non-degeneracy shows that the partial evaluation map is injective, or equivalently that the kernel of the map is trivial. In finite dimension, the dual space formula_16 has the same dimension as formula_0, so non-degeneracy is enough to conclude that the map is a linear isomorphism. If formula_59 is a structure on the vector space, this is sometimes called the canonical isomorphism formula_70. It therefore has an inverse, formula_71 and this is enough to define an associated bilinear form on the dual: formula_72 where the repeated use of formula_73 is disambiguated by the argument taken. That is, formula_74 is the inverse map, while formula_75 is the bilinear form. Checking these expressions in coordinates makes it evident that this is what raising and lowering indices means abstractly. Tensors. We will not develop the abstract formalism for tensors straightaway. Formally, an formula_76 tensor is an object described via its components, and has formula_77 indices up and formula_78 indices down.
A generic formula_76 tensor is written formula_79 We can use the metric tensor to raise and lower tensor indices just as we raised and lowered vector indices and raised covector indices. Example of raising and lowering. For a (0,2) tensor, twice contracting with the inverse metric tensor and contracting in different indices raises each index: formula_84 Similarly, twice contracting with the metric tensor and contracting in different indices lowers each index: formula_85 Let's apply this to the theory of electromagnetism. The contravariant electromagnetic tensor in the (+ − − −) signature is given by formula_86 In components, formula_87 To obtain the covariant tensor Fαβ, contract with the metric tensor: formula_88 and since "F"00 = 0 and "F"0"i" = −"F""i"0, this reduces to formula_89 Now for "α" = 0, "β" = "k" = 1, 2, 3: formula_90 and by antisymmetry, for "α" = "k" = 1, 2, 3, "β" = 0: formula_91 then finally for "α" = "k" = 1, 2, 3, "β" = "l" = 1, 2, 3: formula_92 The (covariant) lower-indexed tensor is then: formula_93 This operation is equivalent to the matrix multiplication formula_94 General rank. For a tensor of order n, indices are raised by (compatible with above): formula_95 and lowered by: formula_96 and for a mixed tensor: formula_97 We need not raise or lower all indices at once: it is perfectly fine to raise or lower a single index. Lowering an index of an formula_76 tensor gives a formula_98 tensor, while raising an index gives a formula_99 tensor (where formula_100 have suitable values; for example, we cannot lower an index of a formula_101 tensor).
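The matrix computation formula_94 above can be checked numerically. The sketch below is illustrative only; it assumes the NumPy library, arbitrary sample values for the field components, and units in which "c" = 1:

```python
import numpy as np

Ex, Ey, Ez = 1.0, 2.0, 3.0        # arbitrary sample electric field components (units with c = 1)
Bx, By, Bz = 0.1, 0.2, 0.3        # arbitrary sample magnetic field components

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric used in the computation above

# Contravariant field tensor F^{alpha beta} as displayed above.
F_up = np.array([
    [0.0, -Ex, -Ey, -Ez],
    [ Ex, 0.0, -Bz,  By],
    [ Ey,  Bz, 0.0, -Bx],
    [ Ez, -By,  Bx, 0.0],
])

# Lowering both indices, F_{alpha beta} = eta_{alpha gamma} eta_{beta delta} F^{gamma delta},
# is the matrix product eta @ F_up @ eta for a diagonal metric.
F_down = np.einsum('ag,bd,gd->ab', eta, eta, F_up)
assert np.allclose(F_down, eta @ F_up @ eta)

# The electric-field entries change sign while the purely spatial block is unchanged.
assert np.allclose(F_down[0, 1:], [Ex, Ey, Ez])
assert np.allclose(F_down[1:, 1:], F_up[1:, 1:])
```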
[ { "math_id": 0, "text": "V" }, { "math_id": 1, "text": "K" }, { "math_id": 2, "text": "K=\\mathbb{R}" }, { "math_id": 3, "text": "\\mathbb{C}" }, { "math_id": 4, "text": "n=\\text{dim}(V)" }, { "math_id": 5, "text": "\\mathbb{R}^n" }, { "math_id": 6, "text": "\\mathbb{C}^n" }, { "math_id": 7, "text": "V\\rightarrow K" }, { "math_id": 8, "text": "V^*:= \\text{Hom}(V,K)" }, { "math_id": 9, "text": "\\alpha \\in V^*" }, { "math_id": 10, "text": "\\alpha:V\\rightarrow K" }, { "math_id": 11, "text": "\\{e_i\\}" }, { "math_id": 12, "text": "v\\in V" }, { "math_id": 13, "text": "K^n" }, { "math_id": 14, "text": "v^i" }, { "math_id": 15, "text": "\\{e^i\\}" }, { "math_id": 16, "text": "V^*" }, { "math_id": 17, "text": "e^i(e_j) = \\delta^i_j" }, { "math_id": 18, "text": "g:V\\times V\\rightarrow K \\text{ a bilinear form}" }, { "math_id": 19, "text": "g(u,v) = g(v,u) \\text{ for all }u,v\\in V \\text{ (Symmetric)}" }, { "math_id": 20, "text": "\\forall v\\in V \\text{ s.t. } v\\neq \\vec{0}, \\exists u\\in V \\text{ such that } g(v,u)\\neq 0 \\text{ (Non-degenerate)}" }, { "math_id": 21, "text": "g(e_i,e_j) = g_{ij}" }, { "math_id": 22, "text": "\\text{Mat}_{n\\times n}(K)" }, { "math_id": 23, "text": "g^{ij}" }, { "math_id": 24, "text": "g_{ij}" }, { "math_id": 25, "text": "g_{ij}v^j = v_i" }, { "math_id": 26, "text": "g^{ij}\\alpha_j=\\alpha^i." }, { "math_id": 27, "text": "g^{ij}g_{jk}=g_{kj}g^{ji}={\\delta^i}_k={\\delta_k}^i" }, { "math_id": 28, "text": "\\delta^i_j" }, { "math_id": 29, "text": "(p,q)" }, { "math_id": 30, "text": "n=p+q" }, { "math_id": 31, "text": "(g_{ij}) = \\text{diag}(+1, \\cdots, +1, -1, \\cdots, -1)" }, { "math_id": 32, "text": "p" }, { "math_id": 33, "text": "q" }, { "math_id": 34, "text": "n" }, { "math_id": 35, "text": "\\mathbb{R}^{p,q}=(\\mathbb{R}^n,g_{ij})" }, { "math_id": 36, "text": "(\\mathbb{R}^n, g_{ij})" }, { "math_id": 37, "text": "\\mathbb{R}^{p,q}" }, { "math_id": 38, "text": "\\mathbb{R}^3" }, { "math_id": 39, "text": "\\mathbb{R}^{n,0} = \\mathbb{R}^n" }, { "math_id": 40, "text": "g_{ij} = \\delta_{ij}" }, { "math_id": 41, "text": "\\mathbb{R}^{1,3}" }, { "math_id": 42, "text": "g_{ij}v^iu^j" }, { "math_id": 43, "text": "g_{ij}v_iu_j" }, { "math_id": 44, "text": "X_\\mu = (-ct, x, y, z)" }, { "math_id": 45, "text": "X_0 = -ct, \\quad X_1 = x, \\quad X_2 = y, \\quad X_3 = z" }, { "math_id": 46, "text": " \\eta_{\\mu \\nu} = \\eta^{\\mu \\nu} = \\begin{pmatrix}\n -1 & 0 & 0 & 0 \\\\\n 0 & 1 & 0 & 0 \\\\\n 0 & 0 & 1 & 0 \\\\\n 0 & 0 & 0 & 1\n\\end{pmatrix} " }, { "math_id": 47, "text": "\\eta_{00} = -1, \\quad \\eta_{i0} = \\eta_{0i} = 0,\\quad \\eta_{ij} = \\delta_{ij}\\,(i,j \\neq 0)." }, { "math_id": 48, "text": "X^\\lambda = \\eta^{\\lambda\\mu}X_\\mu = \\eta^{\\lambda 0}X_0 + \\eta^{\\lambda i}X_i" }, { "math_id": 49, "text": "X^0 = \\eta^{00}X_0 + \\eta^{0i}X_i = -X_0" }, { "math_id": 50, "text": "X^j = \\eta^{j0}X_0 + \\eta^{ji}X_i = \\delta^{ji}X_i = X_j \\,." }, { "math_id": 51, "text": "X^\\mu = (ct, x, y, z)\\,." }, { "math_id": 52, "text": " \\begin{pmatrix}\n -1 & 0 & 0 & 0 \\\\\n 0 & 1 & 0 & 0 \\\\\n 0 & 0 & 1 & 0 \\\\\n 0 & 0 & 0 & 1\n\\end{pmatrix} \\begin{pmatrix}\n -ct \\\\\n x \\\\\n y \\\\\n z \n\\end{pmatrix} = \\begin{pmatrix}\n ct \\\\\n x \\\\\n y \\\\\n z \n\\end{pmatrix}. " }, { "math_id": 53, "text": "X^\\mu" }, { "math_id": 54, "text": "Y^\\mu" }, { "math_id": 55, "text": "\\eta_{\\mu\\nu}X^\\mu Y^\\nu." }, { "math_id": 56, "text": "X_\\mu Y^\\mu." 
}, { "math_id": 57, "text": " \\begin{pmatrix} X^0 & X^1 & X^2 & X^3 \\end{pmatrix} \\begin{pmatrix}\n -1 & 0 & 0 & 0 \\\\\n 0 & 1 & 0 & 0 \\\\\n 0 & 0 & 1 & 0 \\\\\n 0 & 0 & 0 & 1\n\\end{pmatrix} \\begin{pmatrix}\n Y^0 \\\\\n Y^1 \\\\\n Y^2 \\\\\n Y^3\\end{pmatrix}\n" }, { "math_id": 58, "text": "\\begin{pmatrix} -X^0 & X^1 & X^2 & X^3 \\end{pmatrix}\\begin{pmatrix}\n Y^0 \\\\\n Y^1 \\\\\n Y^2 \\\\\n Y^3\\end{pmatrix}." }, { "math_id": 59, "text": "\\phi" }, { "math_id": 60, "text": "\\phi:V\\times V\\rightarrow K" }, { "math_id": 61, "text": "v\\neq 0" }, { "math_id": 62, "text": "u\\in V" }, { "math_id": 63, "text": "\\phi(v,u)\\neq 0." }, { "math_id": 64, "text": "\\omega" }, { "math_id": 65, "text": "\\phi(\\cdot, - ):V\\rightarrow V^*; v\\mapsto \\phi(v,\\cdot)" }, { "math_id": 66, "text": "\\cdot" }, { "math_id": 67, "text": "-" }, { "math_id": 68, "text": "\\phi(v,\\cdot)" }, { "math_id": 69, "text": "u\\mapsto \\phi(v,u)" }, { "math_id": 70, "text": "V\\rightarrow V^*" }, { "math_id": 71, "text": "\\phi^{-1}:V^*\\rightarrow V, " }, { "math_id": 72, "text": "\\phi^{-1}:V^*\\times V^*\\rightarrow K, \\phi^{-1}(\\alpha,\\beta) = \\phi(\\phi^{-1}(\\alpha),\\phi^{-1}(\\beta))." }, { "math_id": 73, "text": "\\phi^{-1}" }, { "math_id": 74, "text": "\\phi^{-1}(\\alpha)" }, { "math_id": 75, "text": "\\phi^{-1}(\\alpha,\\beta)" }, { "math_id": 76, "text": "(r,s)" }, { "math_id": 77, "text": "r" }, { "math_id": 78, "text": "s" }, { "math_id": 79, "text": "T^{\\mu_1\\cdots \\mu_r}{}_{\\nu_1\\cdots \\nu_s}." }, { "math_id": 80, "text": "\\mathbb{F}" }, { "math_id": 81, "text": "g_{\\mu\\nu}." }, { "math_id": 82, "text": "\\delta^\\mu{}_\\nu" }, { "math_id": 83, "text": "\\Lambda^\\mu{}_\\nu." }, { "math_id": 84, "text": "A^{\\mu\\nu}=g^{\\mu\\rho}g^{\\nu\\sigma}A_{\\rho \\sigma}." }, { "math_id": 85, "text": "A_{\\mu\\nu}=g_{\\mu\\rho}g_{\\nu\\sigma}A^{\\rho\\sigma}" }, { "math_id": 86, "text": "F^{\\alpha\\beta} = \\begin{pmatrix}\n0 & -\\frac{E_x}{c} & -\\frac{E_y}{c} & -\\frac{E_z}{c} \\\\\n\\frac{E_x}{c} & 0 & -B_z & B_y \\\\\n\\frac{E_y}{c} & B_z & 0 & -B_x \\\\\n\\frac{E_z}{c} & -B_y & B_x & 0\n\\end{pmatrix}." 
}, { "math_id": 87, "text": "F^{0i} = -F^{i0} = - \\frac{E^i}{c} ,\\quad F^{ij} = - \\varepsilon^{ijk} B_k " }, { "math_id": 88, "text": "\\begin{align}\nF_{\\alpha\\beta} & = \\eta_{\\alpha\\gamma} \\eta_{\\beta\\delta} F^{\\gamma\\delta} \\\\\n& = \\eta_{\\alpha 0} \\eta_{\\beta 0} F^{0 0} + \\eta_{\\alpha i} \\eta_{\\beta 0} F^{i 0}\n+ \\eta_{\\alpha 0} \\eta_{\\beta i} F^{0 i} + \\eta_{\\alpha i} \\eta_{\\beta j} F^{i j}\n\\end{align}" }, { "math_id": 89, "text": "F_{\\alpha\\beta} = \\left(\\eta_{\\alpha i} \\eta_{\\beta 0} - \\eta_{\\alpha 0} \\eta_{\\beta i} \\right) F^{i 0} + \\eta_{\\alpha i} \\eta_{\\beta j} F^{i j}" }, { "math_id": 90, "text": "\\begin{align}\nF_{0k} & = \\left(\\eta_{0i} \\eta_{k0} - \\eta_{00} \\eta_{ki} \\right) F^{i0} + \\eta_{0i} \\eta_{kj} F^{ij} \\\\\n& = \\bigl(0 - (-\\delta_{ki}) \\bigr) F^{i0} + 0 \\\\\n& = F^{k0} = - F^{0k} \\\\\n\\end{align}" }, { "math_id": 91, "text": " F_{k0} = - F^{k0} " }, { "math_id": 92, "text": "\\begin{align}\nF_{kl} & = \\left(\\eta_{ k i} \\eta_{ l 0} - \\eta_{ k 0} \\eta_{ l i} \\right) F^{i 0} + \\eta_{ k i} \\eta_{ l j} F^{i j} \\\\\n& = 0 + \\delta_{ k i} \\delta_{ l j} F^{i j} \\\\\n& = F^{k l} \\\\\n\\end{align}" }, { "math_id": 93, "text": "F_{\\alpha\\beta} = \\begin{pmatrix}\n0 & \\frac{E_x}{c} & \\frac{E_y}{c} & \\frac{E_z}{c} \\\\\n-\\frac{E_x}{c} & 0 & -B_z & B_y \\\\\n-\\frac{E_y}{c} & B_z & 0 & -B_x \\\\\n-\\frac{E_z}{c} & -B_y & B_x & 0\n\\end{pmatrix}" }, { "math_id": 94, "text": " \\begin{pmatrix}\n -1 & 0 & 0 & 0 \\\\\n 0 & 1 & 0 & 0 \\\\\n 0 & 0 & 1 & 0 \\\\\n 0 & 0 & 0 & 1\n\\end{pmatrix} \n\\begin{pmatrix}\n0 & -\\frac{E_x}{c} & -\\frac{E_y}{c} & -\\frac{E_z}{c} \\\\\n\\frac{E_x}{c} & 0 & -B_z & B_y \\\\\n\\frac{E_y}{c} & B_z & 0 & -B_x \\\\\n\\frac{E_z}{c} & -B_y & B_x & 0\n\\end{pmatrix}\n\\begin{pmatrix}\n -1 & 0 & 0 & 0 \\\\\n 0 & 1 & 0 & 0 \\\\\n 0 & 0 & 1 & 0 \\\\\n 0 & 0 & 0 & 1\n\\end{pmatrix} \n=\\begin{pmatrix}\n0 & \\frac{E_x}{c} & \\frac{E_y}{c} & \\frac{E_z}{c} \\\\\n-\\frac{E_x}{c} & 0 & -B_z & B_y \\\\\n-\\frac{E_y}{c} & B_z & 0 & -B_x \\\\\n-\\frac{E_z}{c} & -B_y & B_x & 0\n\\end{pmatrix}.\n" }, { "math_id": 95, "text": "g^{j_1i_1}g^{j_2i_2}\\cdots g^{j_ni_n}A_{i_1i_2\\cdots i_n} = A^{j_1j_2\\cdots j_n}" }, { "math_id": 96, "text": "g_{j_1i_1}g_{j_2i_2}\\cdots g_{j_ni_n}A^{i_1i_2\\cdots i_n} = A_{j_1j_2\\cdots j_n}" }, { "math_id": 97, "text": "g_{p_1i_1}g_{p_2i_2}\\cdots g_{p_ni_n}g^{q_1j_1}g^{q_2j_2}\\cdots g^{q_mj_m}{A^{i_1i_2\\cdots i_n}}_{j_1j_2\\cdots j_m} = {A_{p_1p_2\\cdots p_n}}^{q_1q_2\\cdots q_m}" }, { "math_id": 98, "text": "(r-1,s+1)" }, { "math_id": 99, "text": "(r+1,s-1)" }, { "math_id": 100, "text": "r,s" }, { "math_id": 101, "text": "(0,2)" } ]
https://en.wikipedia.org/wiki?curid=11325244
11326411
Positive current
In mathematics, more particularly in complex geometry, algebraic geometry and complex analysis, a positive current is a positive ("n-p","n-p")-form over an "n"-dimensional complex manifold, taking values in distributions. For a formal definition, consider a manifold "M". Currents on "M" are (by definition) differential forms with coefficients in distributions; integrating over "M", we may consider currents as "currents of integration", that is, functionals formula_0 on smooth forms with compact support. This way, currents are considered as elements in the dual space to the space formula_1 of forms with compact support. Now, let "M" be a complex manifold. The Hodge decomposition formula_2 is defined on currents, in a natural way, the "(p,q)"-currents being functionals on formula_3. A positive current is defined as a real current of Hodge type "(p,p)", taking non-negative values on all positive "(p,p)"-forms. Characterization of Kähler manifolds. Using the Hahn–Banach theorem, Harvey and Lawson proved the following criterion of existence of Kähler metrics. Theorem: Let "M" be a compact complex manifold. Then "M" does not admit a Kähler structure if and only if "M" admits a non-zero positive (1,1)-current formula_4 which is a (1,1)-part of an exact 2-current. Note that the de Rham differential maps 3-currents to 2-currents, hence formula_4 is a differential of a 3-current; if formula_4 is a current of integration of a complex curve, this means that this curve is a (1,1)-part of a boundary. When "M" admits a surjective map formula_5 to a Kähler manifold with 1-dimensional fibers, this theorem leads to the following result of complex algebraic geometry. Corollary: In this situation, "M" is non-Kähler if and only if the homology class of a generic fiber of formula_6 is a (1,1)-part of a boundary.
[ { "math_id": 0, "text": "\\eta \\mapsto \\int_M \\eta\\wedge \\rho" }, { "math_id": 1, "text": "\\Lambda_c^*(M)" }, { "math_id": 2, "text": "\\Lambda^i(M)=\\bigoplus_{p+q=i}\\Lambda^{p,q}(M)" }, { "math_id": 3, "text": "\\Lambda_c^{p, q}(M)" }, { "math_id": 4, "text": "\\Theta" }, { "math_id": 5, "text": "\\pi:\\; M \\mapsto X" }, { "math_id": 6, "text": "\\pi" } ]
https://en.wikipedia.org/wiki?curid=11326411
1132704
Denny's paradox
Question of animal locomotion on water In biology, Denny's paradox refers to the apparent impossibility of surface-dwelling animals such as the water strider generating enough propulsive force to move. It is named after biologist Mark Denny, and relates to animal locomotion on the surface layer of water. If capillary waves are assumed to generate the momentum transfer to the water, the animal's legs must move faster than the phase speed formula_0 of the waves, given by formula_1 where formula_2 is the acceleration due to gravity, formula_3 is the strength of surface tension, and formula_4 the density of water. For standard conditions, this works out to be about 0.23 m/s. In fact, water striders' legs move at speeds much less than this and, according to this physical picture, cannot move. Writing in the Journal of Fluid Mechanics, David Hu and John Bush state that Denny's paradox "rested on two flawed assumptions. First, water striders' motion was assumed to rely on the generation of capillary waves, since the propulsive force was thought to be that associated with wave drag on the driving leg. Second, in order to generate capillary waves, it was assumed that the strider leg speed must exceed the minimum wave speed, formula_5. We note that this second assumption is strictly true only for steady motions".
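The quoted figure of about 0.23 m/s can be reproduced directly from the formula for formula_0. The short sketch below is illustrative only and assumes typical property values for water near room temperature:

```python
g = 9.81        # gravitational acceleration, m/s^2
sigma = 0.0728  # surface tension of water, N/m (assumed room-temperature value)
rho = 1000.0    # density of water, kg/m^3

c_m = (4 * g * sigma / rho) ** 0.25
print(f"minimum capillary-gravity wave speed: {c_m:.3f} m/s")   # about 0.23 m/s
```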
[ { "math_id": 0, "text": "c_m" }, { "math_id": 1, "text": "c_m=\\left(\\frac{4g\\sigma}{\\rho}\\right)^{1/4}," }, { "math_id": 2, "text": "g" }, { "math_id": 3, "text": "\\sigma" }, { "math_id": 4, "text": "\\rho" }, { "math_id": 5, "text": "c_m=\\left(4g\\sigma/\\rho\\right)^{1/4}\\simeq 0.23\\mathrm{m/s}" } ]
https://en.wikipedia.org/wiki?curid=1132704
11327223
Goormaghtigh conjecture
In mathematics, the Goormaghtigh conjecture is a conjecture in number theory named for the Belgian mathematician René Goormaghtigh. The conjecture is that the only non-trivial integer solutions of the exponential Diophantine equation formula_0 satisfying formula_1 and formula_2 are formula_3 and formula_4 Partial results. It has been shown that, for each pair of fixed exponents formula_5 and formula_6, this equation has only finitely many solutions. But this proof depends on Siegel's finiteness theorem, which is ineffective. It has also been shown that, if formula_7 and formula_8 with formula_9, formula_10, and formula_11, then formula_12 is bounded by an effectively computable constant depending only on formula_13 and formula_14, and that for formula_15 and odd formula_6, this equation has no solution formula_16 other than the two solutions given above. Balasubramanian and Shorey proved in 1980 that there are only finitely many possible solutions formula_17 to the equations with prime divisors of formula_18 and formula_19 lying in a given finite set and that they may be effectively computed. It has further been shown that, for each fixed formula_18 and formula_19, this equation has at most one solution. For fixed "x" (or "y"), the equation has at most 15 solutions, and at most two unless "x" is either an odd prime power times a power of two, or in the finite set {15, 21, 30, 33, 35, 39, 45, 51, 65, 85, 143, 154, 713}, in which case there are at most three solutions. Furthermore, there is at most one solution if the odd part of "x" is squareful unless "x" has at most two distinct odd prime factors or "x" is in a finite set {315, 495, 525, 585, 630, 693, 735, 765, 855, 945, 1035, 1050, 1170, 1260, 1386, 1530, 1890, 1925, 1950, 1953, 2115, 2175, 2223, 2325, 2535, 2565, 2898, 2907, 3105, 3150, 3325, 3465, 3663, 3675, 4235, 5525, 5661, 6273, 8109, 17575, 39151}. If "x" is a power of two, there is at most one solution except for "x" = 2, in which case there are two known solutions. In fact, max("m", "n") < 4^"x" and "y" < 2^(2^"x"). Application to repunits. The Goormaghtigh conjecture may be expressed as saying that 31 (111 in base 5, 11111 in base 2) and 8191 (111 in base 90, 1111111111111 in base 2) are the only two numbers that are repunits with at least 3 digits in two different bases.
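A brute-force search makes the repunit formulation easy to explore. The sketch below is not from the article and its search bound is arbitrary; it lists every number below the chosen limit that is a repunit with at least three digits in two different bases, and finds exactly the two known solutions:

```python
def repunit(base, length):
    """Value of the repunit 11...1 with `length` digits in the given base."""
    return (base ** length - 1) // (base - 1)

LIMIT = 10 ** 7        # arbitrary bound on the repunit value
seen = {}              # value -> list of (base, length) representations

base = 2
while repunit(base, 3) <= LIMIT:
    length = 3
    while (value := repunit(base, length)) <= LIMIT:
        seen.setdefault(value, []).append((base, length))
        length += 1
    base += 1

for value, reps in sorted(seen.items()):
    if len(reps) > 1:
        print(value, reps)
# Expected output:
#   31 [(2, 5), (5, 3)]
#   8191 [(2, 13), (90, 3)]
```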
[ { "math_id": 0, "text": "\\frac{x^m - 1}{x-1} = \\frac{y^n - 1}{y-1}" }, { "math_id": 1, "text": "x>y>1" }, { "math_id": 2, "text": "n,m>2" }, { "math_id": 3, "text": "\\frac{5^3-1}{5-1} = \\frac{2^5-1}{2-1} = 31" }, { "math_id": 4, "text": "\\frac{90^3-1}{90-1} = \\frac{2^{13}-1}{2-1} = 8191." }, { "math_id": 5, "text": "m" }, { "math_id": 6, "text": "n" }, { "math_id": 7, "text": "m-1=dr" }, { "math_id": 8, "text": "n-1=ds" }, { "math_id": 9, "text": "d \\ge 2" }, { "math_id": 10, "text": "r \\ge 1" }, { "math_id": 11, "text": "s \\ge 1" }, { "math_id": 12, "text": "\\max(x,y,m,n)" }, { "math_id": 13, "text": "r" }, { "math_id": 14, "text": "s" }, { "math_id": 15, "text": "m=3" }, { "math_id": 16, "text": "(x,y,n)" }, { "math_id": 17, "text": "(x,y,m,n)" }, { "math_id": 18, "text": "x" }, { "math_id": 19, "text": "y" } ]
https://en.wikipedia.org/wiki?curid=11327223
11329698
Vitale's random Brunn–Minkowski inequality
In mathematics, Vitale's random Brunn–Minkowski inequality is a theorem due to Richard Vitale that generalizes the classical Brunn–Minkowski inequality for compact subsets of "n"-dimensional Euclidean space R"n" to random compact sets. Statement of the inequality. Let "X" be a random compact set in R"n"; that is, a Borel–measurable function from some probability space (Ω, Σ, Pr) to the space of non-empty, compact subsets of R"n" equipped with the Hausdorff metric. A random vector "V" : Ω → R"n" is called a selection of "X" if Pr("V" ∈ "X") = 1. If "K" is a non-empty, compact subset of R"n", let formula_0 and define the set-valued expectation E["X"] of "X" to be formula_1 Note that E["X"] is a subset of R"n". In this notation, Vitale's random Brunn–Minkowski inequality is that, for any random compact set "X" with formula_2, formula_3 where "formula_4" denotes "n"-dimensional Lebesgue measure. Relationship to the Brunn–Minkowski inequality. If "X" takes the values (non-empty, compact sets) "K" and "L" with probabilities 1 − "λ" and "λ" respectively, then Vitale's random Brunn–Minkowski inequality is simply the original Brunn–Minkowski inequality for compact sets.
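As noted above, when "X" takes just two compact convex values the set-valued expectation is the Minkowski combination (1 − "λ")"K" + "λL", and the inequality reduces to the classical Brunn–Minkowski inequality. The sketch below is illustrative only; it assumes the NumPy library and uses axis-aligned boxes, for which this combination is again a box, to check the inequality on randomly generated examples in the plane:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2                                    # dimension of the ambient space

for _ in range(1000):
    a = rng.uniform(0.1, 5.0, size=n)    # side lengths of the box K = [0, a_1] x [0, a_2]
    b = rng.uniform(0.1, 5.0, size=n)    # side lengths of the box L
    lam = rng.uniform(0.0, 1.0)          # X = L with probability lam, X = K otherwise

    # For two convex values the set-valued expectation is (1 - lam) K + lam L,
    # which for axis-aligned boxes is the box with averaged side lengths.
    mean_sides = (1 - lam) * a + lam * b

    lhs = np.prod(mean_sides) ** (1 / n)                                   # vol_n(E[X])^(1/n)
    rhs = (1 - lam) * np.prod(a) ** (1 / n) + lam * np.prod(b) ** (1 / n)  # E[vol_n(X)^(1/n)]
    assert lhs >= rhs - 1e-12
```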
[ { "math_id": 0, "text": "\\| K \\| = \\max \\left\\{ \\left. \\| v \\|_{\\mathbb{R}^{n}} \\right| v \\in K \\right\\}" }, { "math_id": 1, "text": "\\mathrm{E} [X] = \\{ \\mathrm{E} [V] | V \\mbox{ is a selection of } X \\mbox{ and } \\mathrm{E} \\| V \\| < + \\infty \\}." }, { "math_id": 2, "text": "E[\\|X\\|]<+\\infty" }, { "math_id": 3, "text": "\\left( \\mathrm{vol}_n \\left( \\mathrm{E} [X] \\right) \\right)^{1/n} \\geq \\mathrm{E} \\left[ \\mathrm{vol}_n (X)^{1/n} \\right]," }, { "math_id": 4, "text": "vol_n" } ]
https://en.wikipedia.org/wiki?curid=11329698
113302
Surface tension
Tendency of a liquid surface to shrink to reduce surface area Surface tension is the tendency of liquid surfaces at rest to shrink into the minimum surface area possible. Surface tension is what allows objects with a higher density than water such as razor blades and insects (e.g. water striders) to float on a water surface without becoming even partly submerged. At liquid–air interfaces, surface tension results from the greater attraction of liquid molecules to each other (due to cohesion) than to the molecules in the air (due to adhesion). There are two primary mechanisms in play. One is an inward force on the surface molecules causing the liquid to contract. Second is a tangential force parallel to the surface of the liquid. This "tangential" force is generally referred to as the surface tension. The net effect is the liquid behaves as if its surface were covered with a stretched elastic membrane. But this analogy must not be taken too far as the tension in an elastic membrane is dependent on the amount of deformation of the membrane while surface tension is an inherent property of the liquid"–"air or liquid"–"vapour interface. Because of the relatively high attraction of water molecules to each other through a web of hydrogen bonds, water has a higher surface tension (72.8 millinewtons (mN) per meter at 20 °C) than most other liquids. Surface tension is an important factor in the phenomenon of capillarity. Surface tension has the dimension of force per unit length, or of energy per unit area. The two are equivalent, but when referring to energy per unit of area, it is common to use the term surface energy, which is a more general term in the sense that it applies also to solids. In materials science, surface tension is used for either surface stress or surface energy. Causes. Due to the cohesive forces, a molecule located away from the surface is pulled equally in every direction by neighboring liquid molecules, resulting in a net force of zero. The molecules at the surface do not have the "same" molecules on all sides of them and therefore are pulled inward. This creates some internal pressure and forces liquid surfaces to contract to the minimum area. There is also a tension parallel to the surface at the liquid-air interface which will resist an external force, due to the cohesive nature of water molecules. The forces of attraction acting between molecules of the same type are called cohesive forces, while those acting between molecules of different types are called adhesive forces. The balance between the cohesion of the liquid and its adhesion to the material of the container determines the degree of wetting, the contact angle, and the shape of meniscus. When cohesion dominates (specifically, adhesion energy is less than half of cohesion energy) the wetting is low and the meniscus is convex at a vertical wall (as for mercury in a glass container). On the other hand, when adhesion dominates (when adhesion energy is more than half of cohesion energy) the wetting is high and the similar meniscus is concave (as in water in a glass). Surface tension is responsible for the shape of liquid droplets. Although easily deformed, droplets of water tend to be pulled into a spherical shape by the imbalance in cohesive forces of the surface layer. In the absence of other forces, drops of virtually all liquids would be approximately spherical. The spherical shape minimizes the necessary "wall tension" of the surface layer according to Laplace's law. 
Another way to view surface tension is in terms of energy. A molecule in contact with a neighbor is in a lower state of energy than if it were alone. The interior molecules have as many neighbors as they can possibly have, but the boundary molecules are missing neighbors (compared to interior molecules) and therefore have higher energy. For the liquid to minimize its energy state, the number of higher energy boundary molecules must be minimized. The minimized number of boundary molecules results in a minimal surface area. As a result of surface area minimization, a surface will assume a smooth shape. Physics. Physical units. Surface tension, represented by the symbol "γ" (alternatively "σ" or "T"), is measured in force per unit length. Its SI unit is newton per meter but the cgs unit of dyne per centimeter is also used. For example, formula_0 Definition. Surface tension can be defined in terms of force or energy. In terms of force. Surface tension γ of a liquid is the force per unit length. In the illustration on the right, consider a rectangular frame composed of three unmovable sides (black) that form a "U" shape, and a fourth movable side (blue) that can slide to the right. Surface tension will pull the blue bar to the left; the force F required to hold the movable side is proportional to the length L of the immobile side. Thus the ratio "F"/"L" depends only on the intrinsic properties of the liquid (composition, temperature, etc.), not on its geometry. For example, if the frame had a more complicated shape, the ratio "F"/"L", with L the length of the movable side and F the force required to stop it from sliding, is found to be the same for all shapes. We therefore define the surface tension as formula_1 The reason for the factor of 1/2 is that the film has two sides (two surfaces), each of which contributes equally to the force; so the force contributed by a single side is "γL" = "F"/2. In terms of energy. Surface tension γ of a liquid is the ratio of the change in the energy of the liquid to the change in the surface area of the liquid (that led to the change in energy). This can be easily related to the previous definition in terms of force: if F is the force required to stop the side from "starting" to slide, then this is also the force that would keep the side in the state of "sliding at a constant speed" (by Newton's Second Law). But if the side is moving to the right (in the direction the force is applied), then the surface area of the stretched liquid is increasing while the applied force is doing work on the liquid. This means that increasing the surface area increases the energy of the film. The work done by the force F in moving the side by distance Δ"x" is "W" = "F"Δ"x"; at the same time the total area of the film increases by Δ"A" = 2"L"Δ"x" (the factor of 2 is here because the liquid has two sides, two surfaces). Thus, multiplying both the numerator and the denominator of "γ" by Δ"x", we get formula_2 This work W is, by the usual arguments, interpreted as being stored as potential energy. Consequently, surface tension can also be measured in the SI system as joules per square meter and in the cgs system as ergs per cm2. Since mechanical systems try to find a state of minimum potential energy, a free droplet of liquid naturally assumes a spherical shape, which has the minimum surface area for a given volume. The equivalence of measurement of energy per unit area to force per unit length can be proven by dimensional analysis. Effects. Water. Several effects of surface tension can be seen with ordinary water, such as the beading of rain water on a waxy surface, the formation of near-spherical drops, and the flotation of objects denser than water when they are not wetted. Surfactants.
Surface tension is visible in other common phenomena, especially when surfactants are used to decrease it; soap bubbles and emulsions, for example, depend on surfactants lowering the surface tension of water. Surface curvature and pressure. If no force acts normal to a tensioned surface, the surface must remain flat. But if the pressure on one side of the surface differs from pressure on the other side, the pressure difference times surface area results in a normal force. In order for the surface tension forces to cancel the force due to pressure, the surface must be curved. The diagram shows how surface curvature of a tiny patch of surface leads to a net component of surface tension forces acting normal to the center of the patch. When all the forces are balanced, the resulting equation is known as the Young–Laplace equation: formula_3 where: Δ"p" is the pressure difference across the fluid interface (the Laplace pressure), "γ" is the surface tension, and "R"x and "R"y are the radii of curvature in each of the axes that are parallel to the surface. The quantity in parentheses on the right hand side is in fact (twice) the mean curvature of the surface (depending on normalisation). Solutions to this equation determine the shape of water drops, puddles, menisci, soap bubbles, and all other shapes determined by surface tension (such as the shape of the impressions that a water strider's feet make on the surface of a pond). The internal pressure of a water droplet increases with decreasing radius. For not very small drops the effect is subtle, but the pressure difference becomes enormous when the drop sizes approach the molecular size. (In the limit of a single molecule the concept becomes meaningless.) Floating objects. When an object is placed on a liquid, its weight "F"w depresses the surface, and if surface tension and downward force become equal, the weight is balanced by the surface tension forces on either side, "F"s, which are each parallel to the water's surface at the points where it contacts the object. Notice that a small movement of the body may cause the object to sink. As the angle of contact decreases, the vertical component of the surface tension force available to support the object decreases. The horizontal components of the two "F"s arrows point in opposite directions, so they cancel each other, but the vertical components point in the same direction and therefore add up to balance "F"w. The object's surface must not be wettable for this to happen, and its weight must be low enough for the surface tension to support it. If "m" denotes the mass of the needle and "g" the acceleration due to gravity, we have formula_4 Liquid surface. To find the shape of the minimal surface bounded by some arbitrarily shaped frame using strictly mathematical means can be a daunting task. Yet by fashioning the frame out of wire and dipping it in soap-solution, a locally minimal surface will appear in the resulting soap-film within seconds. The reason for this is that the pressure difference across a fluid interface is proportional to the mean curvature, as seen in the Young–Laplace equation. For an open soap film, the pressure difference is zero, hence the mean curvature is zero, and minimal surfaces have the property of zero mean curvature. Contact angles. The surface of any liquid is an interface between that liquid and some other medium. The top surface of a pond, for example, is an interface between the pond water and the air. Surface tension, then, is not a property of the liquid alone, but a property of the liquid's interface with another medium. If a liquid is in a container, then besides the liquid/air interface at its top surface, there is also an interface between the liquid and the walls of the container. The surface tension between the liquid and air is usually different (greater) than its surface tension with the walls of a container.
And where the two surfaces meet, their geometry must be such that all forces balance. Where the two surfaces meet, they form a contact angle, θ, which is the angle the tangent to the surface makes with the solid surface. Note that the angle is measured "through the liquid", as shown in the diagrams above. The diagram to the right shows two examples. Tension forces are shown for the liquid–air interface, the liquid–solid interface, and the solid–air interface. The example on the left is where the difference between the liquid–solid and solid–air surface tension, "γ"ls − "γ"sa, is less than the liquid–air surface tension, "γ"la, but is nevertheless positive, that is formula_5 In the diagram, both the vertical and horizontal forces must cancel exactly at the contact point, known as equilibrium. The horizontal component of "f"la is canceled by the adhesive force, "f"A. formula_6 The more telling balance of forces, though, is in the vertical direction. The vertical component of "f"la must exactly cancel the difference of the forces along the solid surface, "f"ls − "f"sa. formula_7 Since the forces are in direct proportion to their respective surface tensions, we also have: formula_8 where "γ"ls, "γ"sa, and "γ"la are the liquid–solid, solid–air, and liquid–air surface tensions, respectively, and "θ" is the contact angle. This means that although the difference between the liquid–solid and solid–air surface tension, "γ"ls − "γ"sa, is difficult to measure directly, it can be inferred from the liquid–air surface tension, "γ"la, and the equilibrium contact angle, θ, which is a function of the easily measurable advancing and receding contact angles (see main article contact angle). This same relationship exists in the diagram on the right. But in this case we see that because the contact angle is less than 90°, the liquid–solid/solid–air surface tension difference must be negative: formula_9 Special contact angles. Observe that in the special case of a water–silver interface where the contact angle is equal to 90°, the liquid–solid/solid–air surface tension difference is exactly zero. Another special case is where the contact angle is exactly 180°. Water with specially prepared Teflon approaches this. Contact angle of 180° occurs when the liquid–solid surface tension is exactly equal to the liquid–air surface tension. formula_10 Liquid in a vertical tube. An old style mercury barometer consists of a vertical glass tube about 1 cm in diameter partially filled with mercury, and with a vacuum (called Torricelli's vacuum) in the unfilled volume (see diagram to the right). Notice that the mercury level at the center of the tube is higher than at the edges, making the upper surface of the mercury dome-shaped. The center of mass of the entire column of mercury would be slightly lower if the top surface of the mercury were flat over the entire cross-section of the tube. But the dome-shaped top gives slightly less surface area to the entire mass of mercury. Again the two effects combine to minimize the total potential energy. Such a surface shape is known as a convex meniscus. We consider the surface area of the entire mass of mercury, including the part of the surface that is in contact with the glass, because mercury does not adhere to glass at all. So the surface tension of the mercury acts over its entire surface area, including where it is in contact with the glass. If instead of glass, the tube was made out of copper, the situation would be very different. Mercury aggressively adheres to copper.
So in a copper tube, the level of mercury at the center of the tube will be lower than at the edges (that is, it would be a concave meniscus). In a situation where the liquid adheres to the walls of its container, we consider the part of the fluid's surface area that is in contact with the container to have "negative" surface tension. The fluid then works to maximize the contact surface area. So in this case increasing the area in contact with the container decreases rather than increases the potential energy. That decrease is enough to compensate for the increased potential energy associated with lifting the fluid near the walls of the container. If a tube is sufficiently narrow and the liquid adhesion to its walls is sufficiently strong, surface tension can draw liquid up the tube in a phenomenon known as capillary action. The height to which the column is lifted is given by Jurin's law: formula_11 where "h" is the height the liquid is lifted, "γ"la is the liquid–air surface tension, "ρ" is the density of the liquid, "r" is the radius of the capillary, "g" is the acceleration due to gravity, and "θ" is the contact angle described above. Puddles on a surface. Pouring mercury onto a horizontal flat sheet of glass results in a puddle that has a perceptible thickness. The puddle will spread out only to the point where it is a little under half a centimetre thick, and no thinner. Again this is due to the action of mercury's strong surface tension. The liquid mass flattens out because that brings as much of the mercury to as low a level as possible, but the surface tension, at the same time, is acting to reduce the total surface area. The result of the compromise is a puddle of a nearly fixed thickness. The same surface tension demonstration can be done with water, lime water or even saline, but only on a surface made of a substance to which water does not adhere. Wax is such a substance. Water poured onto a smooth, flat, horizontal wax surface, say a waxed sheet of glass, will behave similarly to the mercury poured onto glass. The thickness of a puddle of liquid on a surface whose contact angle is 180° is given by: formula_12 where "h" is the depth of the puddle, "γ" is the surface tension of the liquid, "g" is the acceleration due to gravity and "ρ" is the density of the liquid. In reality, the thicknesses of the puddles will be slightly less than what is predicted by the above formula because very few surfaces have a contact angle of 180° with any liquid. When the contact angle is less than 180°, the thickness is given by: formula_13 For mercury on glass, "γ"Hg = 487 dyn/cm, "ρ"Hg = 13.5 g/cm3 and θ = 140°, which gives "h"Hg = 0.36 cm. For water on paraffin at 25 °C, γ = 72 dyn/cm, ρ = 1.0 g/cm3, and θ = 107° which gives "h"H2O = 0.44 cm. The formula also predicts that when the contact angle is 0°, the liquid will spread out into a micro-thin layer over the surface. Such a surface is said to be fully wettable by the liquid. Breakup of streams into drops. In day-to-day life all of us observe that a stream of water emerging from a faucet will break up into droplets, no matter how smoothly the stream is emitted from the faucet. This is due to a phenomenon called the Plateau–Rayleigh instability, which is entirely a consequence of the effects of surface tension. The explanation of this instability begins with the existence of tiny perturbations in the stream. These are always present, no matter how smooth the stream is. If the perturbations are resolved into sinusoidal components, we find that some components grow with time while others decay with time. Among those that grow with time, some grow at faster rates than others. Whether a component decays or grows, and how fast it grows is entirely a function of its wave number (a measure of how many peaks and troughs per centimeter) and the radius of the original cylindrical stream. Thermodynamics.
Thermodynamic theories of surface tension. J.W. Gibbs developed the thermodynamic theory of capillarity based on the idea of surfaces of discontinuity. Gibbs considered the case of a sharp mathematical surface being placed somewhere within the microscopically fuzzy physical interface that exists between two homogeneous substances. Realizing that the exact choice of the surface's location was somewhat arbitrary, he left it flexible. Since the interface exists in thermal and chemical equilibrium with the substances around it (having temperature T and chemical potentials "μ"i), Gibbs considered the case where the surface may have excess energy, excess entropy, and excess particles, finding the natural free energy function in this case to be formula_14, a quantity later named the grand potential and given the symbol formula_15. Considering a given subvolume formula_16 containing a surface of discontinuity, the volume is divided by the mathematical surface into two parts A and B, with volumes formula_17 and formula_18, with formula_19 exactly. Now, if the two parts A and B were homogeneous fluids (with pressures formula_20, formula_21) and remained perfectly homogeneous right up to the mathematical boundary, without any surface effects, the total grand potential of this volume would be simply formula_22. The surface effects of interest are a modification to this, and they can be all collected into a surface free energy term formula_23 so the total grand potential of the volume becomes: formula_24 For sufficiently macroscopic and gently curved surfaces, the surface free energy must simply be proportional to the surface area: formula_25 for surface tension formula_26 and surface area formula_27. As stated above, this implies the mechanical work needed to increase a surface area "A" is "dW" = "γ dA", assuming the volumes on each side do not change. Thermodynamics requires that for systems held at constant chemical potential and temperature, all spontaneous changes of state are accompanied by a decrease in this free energy formula_15, that is, an increase in total entropy taking into account the possible movement of energy and particles from the surface into the surrounding fluids. From this it is easy to understand why decreasing the surface area of a mass of liquid is always spontaneous, provided it is not coupled to any other energy changes. It follows that in order to increase surface area, a certain amount of energy must be added. Gibbs and other scientists have wrestled with the arbitrariness in the exact microscopic placement of the surface. For microscopic surfaces with very tight curvatures, it is not correct to assume the surface tension is independent of size, and topics like the Tolman length come into play. For a macroscopic-sized surface (and planar surfaces), the surface placement does not have a significant effect on γ; however, it does have a very strong effect on the values of the surface entropy, surface excess mass densities, and surface internal energy, which are the partial derivatives of the surface tension function formula_28. Gibbs emphasized that for solids, the surface free energy may be completely different from surface stress (what he called surface tension): the surface free energy is the work required to "form" the surface, while surface stress is the work required to "stretch" the surface.
In the case of a two-fluid interface, there is no distinction between forming and stretching because the fluids and the surface completely replenish their nature when the surface is stretched. For a solid, stretching the surface, even elastically, results in a fundamentally changed surface. Further, the surface stress on a solid is a directional quantity (a stress tensor) while surface energy is scalar. Fifteen years after Gibbs, J.D. van der Waals developed the theory of capillarity effects based on the hypothesis of a continuous variation of density. He added to the energy density the term formula_29 where "c" is the capillarity coefficient and "ρ" is the density. For the multiphase "equilibria", the results of the van der Waals approach practically coincide with the Gibbs formulae, but for modelling of the "dynamics" of phase transitions the van der Waals approach is much more convenient. The van der Waals capillarity energy is now widely used in the phase field models of multiphase flows. Such terms are also discovered in the dynamics of non-equilibrium gases. Thermodynamics of bubbles. The pressure inside an ideal spherical bubble can be derived from thermodynamic free energy considerations. The above free energy can be written as: formula_30 where formula_31 is the pressure difference between the inside (A) and outside (B) of the bubble, and formula_17 is the bubble volume. In equilibrium, "d"Ω = 0, and so, formula_32 For a spherical bubble, the volume and surface area are given simply by formula_33 and formula_34 Substituting these relations into the previous expression, we find formula_35 which is equivalent to the Young–Laplace equation when "R"x = "R"y. Influence of temperature. Surface tension is dependent on temperature. For that reason, when a value is given for the surface tension of an interface, temperature must be explicitly stated. The general trend is that surface tension decreases with the increase of temperature, reaching a value of 0 at the critical temperature. For further details see Eötvös rule. There are only empirical equations to relate surface tension and temperature. The Eötvös rule states that formula_36, where "V" is the molar volume of the substance, "T"C is the critical temperature and "k" is a constant valid for almost all substances. A variant described by Ramay and Shields, formula_37, adds a temperature offset of 6 K that gives the formula a better fit to reality at lower temperatures. The Guggenheim–Katayama relation is formula_38, where "γ"° is a constant for each liquid and "n" is an empirical factor; "γ"° may be expressed in terms of the critical temperature and critical pressure as formula_39. Both Guggenheim–Katayama and Eötvös take into account the fact that surface tension reaches 0 at the critical temperature, whereas Ramay and Shields fails to match reality at this endpoint. Influence of solute concentration. Solutes can have different effects on surface tension depending on the nature of the surface and the solute; for example, most inorganic salts raise the surface tension of water, while surfactants lower it progressively until a critical concentration is reached. What complicates the effect is that a solute can exist in a different concentration at the surface of a solvent than in its bulk. This difference varies from one solute–solvent combination to another. Gibbs isotherm states that: formula_40 where "Γ" is the surface excess concentration of the solute, "C" is its concentration in the bulk, "R" is the gas constant and "T" the temperature. Certain assumptions are taken in its deduction, therefore Gibbs isotherm can only be applied to ideal (very dilute) solutions with two components. Influence of particle size on vapor pressure. The Clausius–Clapeyron relation leads to another equation also attributed to Kelvin, as the Kelvin equation. It explains why, because of surface tension, the vapor pressure for small droplets of liquid in suspension is greater than standard vapor pressure of that same liquid when the interface is flat. That is to say that when a liquid is forming small droplets, the equilibrium concentration of its vapor in its surroundings is greater. This arises because the pressure inside the droplet is greater than outside. formula_41 Here "P"v° is the standard vapor pressure of the liquid at that temperature and pressure, "V" is the molar volume, "R" is the gas constant and "r"k is the Kelvin radius, the radius of the droplet. The effect explains supersaturation of vapors.
In the absence of nucleation sites, tiny droplets must form before they can evolve into larger droplets. This requires a vapor pressure many times the vapor pressure at the phase transition point. This equation is also used in catalyst chemistry to assess mesoporosity for solids. The effect can be viewed in terms of the average number of molecular neighbors of surface molecules (see diagram). The effect becomes clear for very small drop sizes, as a drop of 1 nm radius has about 100 molecules inside, which is a quantity small enough to require a quantum mechanics analysis. Methods of measurement. Because surface tension manifests itself in various effects, it offers a number of paths to its measurement. Which method is optimal depends upon the nature of the liquid being measured, the conditions under which its tension is to be measured, and the stability of its surface when it is deformed. An instrument that measures surface tension is called a tensiometer. Values. Surface tension of water. The surface tension of pure liquid water in contact with its vapor has been given by IAPWS as formula_42 where both T and the critical temperature "T"C = 647.096 K are expressed in kelvins. The region of validity is the entire vapor–liquid saturation curve, from the triple point (0.01 °C) to the critical point. It also provides reasonable results when extrapolated to metastable (supercooled) conditions, down to at least −25 °C. This formulation was originally adopted by IAPWS in 1976 and was adjusted in 1994 to conform to the International Temperature Scale of 1990. The uncertainty of this formulation is given over the full range of temperature by IAPWS. For temperatures below 100 °C, the uncertainty is ±0.5%. Surface tension of seawater. Nayar et al. published reference data for the surface tension of seawater over a range of salinities and temperatures at atmospheric pressure. The range of temperature and salinity encompasses both the oceanographic range and the range of conditions encountered in thermal desalination technologies. The uncertainty of the measurements varied from 0.18 to 0.37 mN/m with the average uncertainty being 0.22 mN/m. Nayar et al. correlated the data with the following equation: formula_43 where "γ"sw is the surface tension of seawater in mN/m, "γ"w is the surface tension of water in mN/m, S is the reference salinity in g/kg, and t is temperature in degrees Celsius. The average absolute percentage deviation between measurements and the correlation was 0.19% while the maximum deviation is 0.60%. The International Association for the Properties of Water and Steam (IAPWS) has adopted this correlation as an international standard guideline.
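Several of the formulas above lend themselves to quick numerical checks. The following sketch is illustrative only; the property values assumed for water and mercury are typical room-temperature figures and, apart from the puddle numbers quoted above, are not taken from the article's sources:

```python
import math

gamma_w = 0.0728                        # surface tension of water, N/m (assumed)

# Laplace pressure inside a spherical water droplet, delta_p = 2*gamma/r.
for r in (1e-3, 1e-6, 1e-8):            # radii of 1 mm, 1 um and 10 nm
    print(f"r = {r:.0e} m  ->  delta_p = {2 * gamma_w / r:.3e} Pa")

# Capillary rise from Jurin's law, h = 2*gamma*cos(theta) / (rho*g*r),
# for water in a clean glass tube (contact angle taken as 0) of radius 1 mm.
rho_w, g, r_tube = 1000.0, 9.81, 1e-3
h_rise = 2 * gamma_w * math.cos(0.0) / (rho_w * g * r_tube)
print(f"capillary rise: {100 * h_rise:.1f} cm")          # roughly 1.5 cm

# Puddle thickness h = sqrt(2*gamma*(1 - cos(theta)) / (g*rho)), in cgs units,
# using the numbers quoted above for mercury on glass and water on paraffin.
g_cgs = 980.0                            # cm/s^2
for name, gamma, rho, theta_deg in (("mercury on glass", 487.0, 13.5, 140.0),
                                     ("water on paraffin", 72.0, 1.0, 107.0)):
    theta = math.radians(theta_deg)
    h = math.sqrt(2 * gamma * (1 - math.cos(theta)) / (g_cgs * rho))
    print(f"{name}: h = {h:.2f} cm")     # about 0.36 cm and 0.44 cm respectively
```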
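The IAPWS correlation for pure water and the Nayar et al. seawater correlation quoted above can be evaluated directly as well. The following sketch is illustrative only; the example temperature and salinity are arbitrary:

```python
T_C = 647.096      # critical temperature of water, K

def surface_tension_water(T_kelvin):
    """IAPWS correlation for pure water (formula above); result in mN/m."""
    tau = 1.0 - T_kelvin / T_C
    return 235.8 * tau ** 1.256 * (1.0 - 0.625 * tau)

def surface_tension_seawater(S, t_celsius):
    """Nayar et al. correlation (formula above); S in g/kg, t in deg C, result in mN/m."""
    gamma_w = surface_tension_water(t_celsius + 273.15)
    return gamma_w * (1.0 + 3.766e-4 * S + 2.347e-6 * S * t_celsius)

print(surface_tension_water(293.15))          # about 72.7 mN/m for pure water at 20 deg C
print(surface_tension_seawater(35.0, 20.0))   # slightly higher for typical ocean salinity
```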
[ { "math_id": 0, "text": " \\gamma = 1 ~\\mathrm{\\frac{dyn}{cm}} = 1 ~\\mathrm{\\frac{erg}{cm^2}} = 1~\\mathrm{\\frac{10^{-7}\\,m\\cdot N}{10^{-4}\\, m^2}} = 0.001~\\mathrm{\\frac{N}{m}} = 0.001~\\mathrm{\\frac{J}{m^2}}." }, { "math_id": 1, "text": "\\gamma=\\frac{F}{2L}." }, { "math_id": 2, "text": "\\gamma=\\frac{F}{2L}=\\frac{F \\Delta x}{2 L \\Delta x}=\\frac{W}{\\Delta A} ." }, { "math_id": 3, "text": "\\Delta p = \\gamma \\left( \\frac{1}{R_x} + \\frac{1}{R_y} \\right)" }, { "math_id": 4, "text": " F_\\mathrm{w} = 2 F_\\mathrm{s} \\sin \\theta \\quad\\Leftrightarrow\\quad m g = 2 \\gamma L \\sin \\theta " }, { "math_id": 5, "text": "\\gamma_\\mathrm{la} > \\gamma_\\mathrm{ls} - \\gamma_\\mathrm{sa} > 0" }, { "math_id": 6, "text": "f_\\mathrm{A} = f_\\mathrm{la} \\sin \\theta" }, { "math_id": 7, "text": "f_\\mathrm{ls} - f_\\mathrm{sa} = -f_\\mathrm{la} \\cos \\theta" }, { "math_id": 8, "text": "\\gamma_\\mathrm{ls} - \\gamma_\\mathrm{sa} = -\\gamma_\\mathrm{la} \\cos \\theta" }, { "math_id": 9, "text": "\\gamma_\\mathrm{la} > 0 > \\gamma_\\mathrm{ls} - \\gamma_\\mathrm{sa}" }, { "math_id": 10, "text": "\\gamma_\\mathrm{la} = \\gamma_\\mathrm{ls} - \\gamma_\\mathrm{sa} > 0\\qquad \\theta = 180^\\circ" }, { "math_id": 11, "text": "h = \\frac {2\\gamma_\\mathrm{la} \\cos\\theta}{\\rho g r}" }, { "math_id": 12, "text": "h = 2 \\sqrt{\\frac{\\gamma} {g\\rho}}" }, { "math_id": 13, "text": "h = \\sqrt{\\frac{2\\gamma_\\mathrm{la}\\left( 1 - \\cos \\theta \\right)} {g\\rho}}." }, { "math_id": 14, "text": "U - TS - \\mu_1 N_1 - \\mu_2 N_2 \\cdots " }, { "math_id": 15, "text": "\\Omega" }, { "math_id": 16, "text": "V" }, { "math_id": 17, "text": "V_\\text{A}" }, { "math_id": 18, "text": "V_\\text{B}" }, { "math_id": 19, "text": "V = V_\\text{A} + V_\\text{B}" }, { "math_id": 20, "text": "p_\\text{A}" }, { "math_id": 21, "text": "p_\\text{B}" }, { "math_id": 22, "text": "-p_\\text{A} V_\\text{A} - p_\\text{B} V_\\text{B}" }, { "math_id": 23, "text": "\\Omega_\\text{S}" }, { "math_id": 24, "text": "\\Omega = -p_\\text{A} V_\\text{A} - p_\\text{B} V_\\text{B} + \\Omega_\\text{S}." }, { "math_id": 25, "text": "\\Omega_\\text{S} = \\gamma A," }, { "math_id": 26, "text": "\\gamma" }, { "math_id": 27, "text": "A" }, { "math_id": 28, "text": "\\gamma(T, \\mu_1, \\mu_2, \\cdots)" }, { "math_id": 29, "text": "c (\\nabla \\rho)^2," }, { "math_id": 30, "text": "\\Omega = -\\Delta P V_\\text{A} - p_\\text{B} V + \\gamma A" }, { "math_id": 31, "text": "\\Delta P = p_\\text{A} - p_\\text{B}" }, { "math_id": 32, "text": "\\Delta P\\,dV_\\text{A} = \\gamma\\, dA." }, { "math_id": 33, "text": "V_\\text{A} = \\tfrac43\\pi R^3 \\quad\\rightarrow\\quad dV_\\text{A} = 4\\pi R^2 \\,dR," }, { "math_id": 34, "text": "A = 4\\pi R^2 \\quad\\rightarrow\\quad dA = 8\\pi R\\, dR." }, { "math_id": 35, "text": "\\Delta P = \\frac{2}{R}\\gamma," }, { "math_id": 36, "text": "\\gamma V^{2/3} = k(T_\\mathrm{C}-T) ." 
}, { "math_id": 37, "text": "\\gamma V^{2/3} = k \\left(T_\\mathrm{C} - T - 6\\,\\mathrm{K}\\right)" }, { "math_id": 38, "text": "\\gamma = \\gamma^\\circ \\left( 1-\\frac{T}{T_\\mathrm C} \\right)^n " }, { "math_id": 39, "text": "K_2 T^{1/3}_\\mathrm{C} P^{2/3}_\\mathrm{C}," }, { "math_id": 40, "text": "\\Gamma = - \\frac{1}{RT} \\left( \\frac{\\partial \\gamma}{\\partial \\ln C} \\right)_{T,P} " }, { "math_id": 41, "text": "P_\\mathrm{v}^\\mathrm{fog}=P_\\mathrm{v}^\\circ e^{V 2\\gamma/(RT r_\\mathrm{k})}" }, { "math_id": 42, "text": "\\gamma_\\text{w} = 235.8\\left(1 - \\frac{T}{T_\\text{C}}\\right)^{1.256} \\left[1 - 0.625\\left(1 - \\frac{T}{T_\\text{C}}\\right)\\right]~\\text{mN/m}," }, { "math_id": 43, "text": " \\gamma_\\mathrm{sw} = \\gamma_\\mathrm{w} \\left( 1+ 3.766\\times 10^{-4} S +2.347\\times 10^{-6} S t\\right)" } ]
https://en.wikipedia.org/wiki?curid=113302
11333082
H tree
Right-angled fractal canopy In fractal geometry, the H tree is a fractal tree structure constructed from perpendicular line segments, each smaller by a factor of the square root of 2 from the next larger adjacent segment. It is so called because its repeating pattern resembles the letter "H". It has Hausdorff dimension 2, and comes arbitrarily close to every point in a rectangle. Its applications include VLSI design and microwave engineering. Construction. An H tree can be constructed by starting with a line segment of arbitrary length, drawing two shorter segments at right angles to the first through its endpoints, and continuing in the same vein, reducing (dividing) the length of the line segments drawn at each stage by formula_0. A variant of this construction could also be defined in which the length at each iteration is multiplied by a ratio less than formula_1, but for this variant the resulting shape covers only part of its bounding rectangle, with a fractal boundary. An alternative process that generates the same fractal set is to begin with a rectangle with sides in the ratio formula_2, and repeatedly bisect it into two smaller silver rectangles, at each stage connecting the two centroids of the two smaller rectangles by a line segment. A similar process can be performed with rectangles of any other shape, but the formula_2 rectangle leads to the line segment size decreasing uniformly by a formula_0 factor at each step while for other rectangles the length will decrease by different factors at odd and even levels of the recursive construction. Properties. The H tree is a self-similar fractal; its Hausdorff dimension is equal to 2. The points of the H tree come arbitrarily close to every point in a rectangle (the same as the starting rectangle in the construction by centroids of subdivided rectangles). However, it does not include all points of the rectangle; for instance, the points on the perpendicular bisector of the initial line segment (other than the midpoint of this segment) are not included. Applications. In VLSI design, the H tree may be used as the layout for a complete binary tree using a total area that is proportional to the number of nodes of the tree. Additionally, the H tree forms a space-efficient layout for trees in graph drawing, and is used as part of a construction of a point set for which the sum of squared edge lengths of the traveling salesman tour is large. It is commonly used as a clock distribution network for routing timing signals to all parts of a chip with equal propagation delays to each part, and has also been used as an interconnection network for VLSI multiprocessors. The planar H tree can be generalized to a three-dimensional structure by adding line segments in the direction perpendicular to the H tree plane. The resultant three-dimensional H tree has Hausdorff dimension equal to 3. The planar H tree and its three-dimensional version have been found to constitute artificial electromagnetic atoms in photonic crystals and metamaterials and might have potential applications in microwave engineering. Related sets. The H tree is an example of a fractal canopy, in which the angle between neighboring line segments is always 180 degrees. In its property of coming arbitrarily close to every point of its bounding rectangle, it also resembles a space-filling curve, although it is not itself a curve. Topologically, an H tree has properties similar to those of a dendroid.
However, they are not dendroids: dendroids must be closed sets, and H trees are not closed (their closure is the whole rectangle). Variations of the same tree structure with thickened polygonal branches in place of the line segments of the H tree have been defined by Benoit Mandelbrot, and are sometimes called the Mandelbrot tree. In these variations, to avoid overlaps between the leaves of the tree and their thickened branches, the scale factor by which the size is reduced at each level must be slightly greater than formula_0. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
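A minimal sketch in Python of the recursive construction described above (the segment representation as endpoint pairs and the depth parameter are choices made only for this illustration): starting from one segment, each level adds two perpendicular segments centered on the endpoints, shorter by a factor of the square root of 2.

import math

def h_tree(depth, length=1.0, x=0.0, y=0.0, horizontal=True):
    """Return a list of segments ((x1, y1), (x2, y2)) of an H tree of the given depth."""
    if depth == 0:
        return []
    half = length / 2
    if horizontal:
        seg = ((x - half, y), (x + half, y))
    else:
        seg = ((x, y - half), (x, y + half))
    segments = [seg]
    for ex, ey in seg:  # two shorter perpendicular segments through the endpoints
        segments += h_tree(depth - 1, length / math.sqrt(2), ex, ey, not horizontal)
    return segments

print(len(h_tree(5)))   # 1 + 2 + 4 + 8 + 16 = 31 segments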
[ { "math_id": 0, "text": "\\sqrt2" }, { "math_id": 1, "text": "1/\\sqrt2" }, { "math_id": 2, "text": "1:\\sqrt2" } ]
https://en.wikipedia.org/wiki?curid=11333082
1133565
Snub disphenoid
Convex polyhedron with 12 triangular faces In geometry, the snub disphenoid is a convex polyhedron with 12 equilateral triangles as its faces. It is an example of a deltahedron and of a Johnson solid, and it can be constructed in several ways. The shape is also known as the Siamese dodecahedron, triangular dodecahedron, trigonal dodecahedron, or dodecadeltahedron. The snub disphenoid appears in chemistry as an atom cluster surrounding a central atom, the dodecahedral molecular geometry. Its vertices may be placed on a sphere, and spheres centered at its vertices form the eight-sphere cluster with the minimum possible Lennard-Jones potential found in numerical experiments. The dual polyhedron of the snub disphenoid is the elongated gyrobifastigium. Construction. Involving polyhedron. The snub disphenoid can be constructed in different ways. As the name suggests, the snub disphenoid is obtained from a tetragonal disphenoid by snubbing it: the faces are cut loose along the edges and equilateral triangles, twisted at a certain angle, are added between them. This process is known as snubification. The snub disphenoid may also be constructed from a triangular bipyramid, by cutting its two edges along the apices. These apices can be pushed toward each other, pushing the two new vertices apart. Alternatively, the snub disphenoid can be constructed from a pentagonal bipyramid by cutting the two edges connecting the base of the bipyramid and then inserting two equilateral triangles between them. Another way to construct the snub disphenoid starts from the square antiprism, by replacing its two square faces with pairs of equilateral triangles. Another construction of the snub disphenoid is as a digonal gyrobianticupola. It has the same topology and symmetry but without equilateral triangles: it has 4 vertices in a square on a center plane, forming two anticupolae attached with rotational symmetry. A physical model of the snub disphenoid can be formed by folding a net formed by 12 equilateral triangles (a 12-iamond). An alternative net suggested by John Montroll has fewer concave vertices on its boundary, making it more convenient for origami construction. By Cartesian coordinates. The eight vertices of the snub disphenoid may be given Cartesian coordinates: formula_1 Here, formula_2 is the positive real root of the cubic polynomial formula_3. The three quantities formula_4, formula_5, and formula_6 are given by: formula_7 Because this construction involves the solution of a cubic equation, the snub disphenoid cannot be constructed with a compass and straightedge, unlike the other seven deltahedra. Properties. As a consequence of these constructions, the snub disphenoid has 12 equilateral triangles as faces. A deltahedron is a polyhedron in which all faces are equilateral triangles; there are eight convex deltahedra, one of which is the snub disphenoid. More generally, the convex polyhedra in which all faces are regular polygons are the Johnson solids, and every convex deltahedron is a Johnson solid. The snub disphenoid is among them, enumerated as the 84th Johnson solid formula_8. Measurement. A snub disphenoid with edge length formula_9 has surface area: formula_10 the total area of its 12 equilateral triangles. Its volume is: formula_11 Symmetry and geodesic.
The snub disphenoid has the same symmetries as a tetragonal disphenoid, the antiprismatic symmetry formula_0 of order 8: it has an axis of 180° rotational symmetry through the midpoints of its two opposite edges, two perpendicular planes of reflection symmetry through this axis, and four additional symmetry operations given by a reflection perpendicular to the axis followed by a quarter-turn and possibly another reflection parallel to the axis.. Up to symmetries and parallel translation, the snub disphenoid has five types of simple (non-self-crossing) closed geodesics. These are paths on the surface of the polyhedron that avoid the vertices and locally look like the shortest path: they follow straight line segments across each face of the polyhedron that they intersect, and when they cross an edge of the polyhedron they make complementary angles on the two incident faces to the edge. Intuitively, one could stretch a rubber band around the polyhedron along this path and it would stay in place: there is no way to locally change the path and make it shorter. For example, one type of geodesic crosses the two opposite edges of the snub disphenoid at their midpoints (where the symmetry axis exits the polytope) at an angle of formula_12. A second type of geodesic passes near the intersection of the snub disphenoid with the plane that perpendicularly bisects the symmetry axis (the equator of the polyhedron), crossing the edges of eight triangles at angles that alternate between formula_13 and formula_14. Shifting a geodesic on the surface of the polyhedron by a small amount (small enough that the shift does not cause it to cross any vertices) preserves the property of being a geodesic and preserves its length, so both of these examples have shifted versions of the same type that are less symmetrically placed. The lengths of the five simple closed geodesics on a snub disphenoid with unit-length edges are formula_15 (for the equatorial geodesic), formula_16, formula_17 (for the geodesic through the midpoints of opposite edges), formula_18, and formula_19. Except for the tetrahedron, which has infinitely many types of simple closed geodesics, the snub disphenoid has the most types of geodesics of any deltahedron. Representation by the graph. The snub disphenoid is 4-connected, meaning that it takes the removal of four vertices to disconnect the remaining vertices. It is one of only four 4-connected simplicial well-covered polyhedra, meaning that all of the maximal independent sets of its vertices have the same size. The other three polyhedra with this property are the regular octahedron, the pentagonal bipyramid, and an irregular polyhedron with 12 vertices and 20 triangular faces. Dual polyhedron. The dual polyhedron of the snub disphenoid is the elongated gyrobifastigium. It has right-angled pentagons and can self-tessellate space. Applications. Spheres centered at the vertices of the snub disphenoid form a cluster that, according to numerical experiments, has the minimum possible Lennard-Jones potential among all eight-sphere clusters. In the geometry of chemical compounds, a polyhedron may be visualized as the atom cluster surrounding a central atom. The dodecahedral molecular geometry describes the cluster for which it is a snub disphenoid. History and naming. This shape was called a "Siamese dodecahedron" in the paper by Hans Freudenthal and B. L. van der Waerden (1947) which first described the set of eight convex deltahedra. 
The "dodecadeltahedron" name was given to the same shape by , referring to the fact that it is a 12-sided deltahedron. There are other simplicial dodecahedra, such as the hexagonal bipyramid, but this is the only one that can be realized with equilateral faces. Bernal was interested in the shapes of holes left in irregular close-packed arrangements of spheres, so he used a restrictive definition of deltahedra, in which a deltahedron is a convex polyhedron with triangular faces that can be formed by the centers of a collection of congruent spheres, whose tangencies represent polyhedron edges, and such that there is no room to pack another sphere inside the cage created by this system of spheres. This restrictive definition disallows the triangular bipyramid (as forming two tetrahedral holes rather than a single hole), pentagonal bipyramid (because the spheres for its apexes interpenetrate, so it cannot occur in sphere packings), and icosahedron (because it has interior room for another sphere). Bernal writes that the snub disphenoid is "a very common coordination for the calcium ion in crystallography". In coordination geometry, it is usually known as the trigonal dodecahedron or simply as the dodecahedron. The "snub disphenoid" name comes from Norman Johnson's 1966 classification of the Johnson solids, convex polyhedra all of whose faces are regular. It exists first in a series of polyhedra with axial symmetry, so also can be given the name "digonal gyrobianticupola". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " D_{2 \\mathrm{d}} " }, { "math_id": 1, "text": " \\begin{align}\n (\\pm t, r, 0), &\\qquad (0, -r, \\pm t), \\\\\n (\\pm 1, -s, 0), &\\qquad (0, s, \\pm 1).\n\\end{align} " }, { "math_id": 2, "text": " q \\approx 0.16902 " }, { "math_id": 3, "text": " 2x^3 +11x^2+4x-1." }, { "math_id": 4, "text": " r " }, { "math_id": 5, "text": " s " }, { "math_id": 6, "text": " t " }, { "math_id": 7, "text": " r = \\sqrt{q} \\approx 0.41112, \\qquad s = \\sqrt{\\frac{1-q}{2q}} \\approx 1.56786, \\qquad t = 2rs = \\sqrt{2-2q} \\approx 1.28917." }, { "math_id": 8, "text": " J_{84} " }, { "math_id": 9, "text": " a " }, { "math_id": 10, "text": "A = 3 \\sqrt{3}a^2 \\approx 5.19615a^2, " }, { "math_id": 11, "text": "V \\approx 0.85949 a^3. " }, { "math_id": 12, "text": " \\pi/3 " }, { "math_id": 13, "text": " \\pi/2 " }, { "math_id": 14, "text": " \\pi/6 " }, { "math_id": 15, "text": "2\\sqrt{3}\\approx 3.464" }, { "math_id": 16, "text": "\\sqrt{13}\\approx 3.606" }, { "math_id": 17, "text": "4" }, { "math_id": 18, "text": "2\\sqrt{7}\\approx 5.292" }, { "math_id": 19, "text": "\\sqrt{19}\\approx 4.359" } ]
https://en.wikipedia.org/wiki?curid=1133565
11335800
Cyril Berry
British newspaper editor Cyril J J Berry (1918 – 4 November 2002) was a writer known for his book "First Steps in Winemaking", which has sold more than three million copies worldwide. Throughout the first half of the 20th century, homebrewing in Britain was limited by taxation, prohibition, and scarcity of ingredients during wartime. One of the earliest modern attempts to regulate private production was the Inland Revenue Act 1880 (43 & 44 Vict. c. 20) in the United Kingdom; this required a 5-shilling homebrewing licence. In April 1963, the UK Chancellor of the Exchequer, Reggie Maudling, removed the need for the 1880 brewing licence. Following the end of sugar rationing in 1953 after the Second World War, and the repeal of the brewing licence, interest in brewing at home started to thrive. Berry was instrumental in this phenomenon as one of the founders of the first British amateur winemakers' circles, in Andover, Hampshire and three other English counties, in the 1950s. The movement grew quickly from these beginnings. By 1960 there were 86 known wine circles in the UK and over 100 by 1961. A 1962 estimate of membership put numbers at 30,000 in the UK alone. There are now hundreds of wine circles throughout the country and even virtual wine circles with online chat sessions and organised tastings. Berry was one of the founders of the National Association of Winemakers (UK) and served as its first chairman from 1960 to 1967. In 1963 he was instrumental in establishing the Winemaking National Guild of Judges (now the National Guild of Wine and Beer Judges) and was one of its early chairmen. Berry also produced the "Amateur Winemaker" magazine and published "First Steps in Winemaking", "130 New Winemaking Recipes", and "Home Brewed Beers and Stouts". "First Steps in Winemaking" is notable as a resource for winemaking technique and recipes and is still in print following its original publication in 1960. It includes methods for traditional grape wines, as well as "country wines" using seasonal fruit and vegetables, tinned and dried ingredients, and commercial juices. It is the source for the simplest common method for measuring alcohol by volume in wine: formula_0 Prior to his retirement in 1967, Berry worked as a newspaper editor, most notably for the "Andover Advertiser". Berry served as mayor of Andover in 1972–73. He was an alumnus of Andover Grammar School and published "Old Andover", a collection of local photos and records dating from 1840 to 1960. Berry died in Nerja, Spain, in 2002. Publications. "First Steps in Winemaking", "130 New Winemaking Recipes", "Home-Brewed Beers and Stouts". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
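As a rough worked illustration of this rule (assuming, as is usual in home winemaking, that the gravities are read in "points", i.e. thousandths above 1.000): a must with a starting specific gravity of 1.090 that ferments out to 0.998 gives (1090 − 998) / 7.36 ≈ 12.5% alcohol by volume.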
[ { "math_id": 0, "text": "ABV = (Starting SG - Final SG)/7.36" } ]
https://en.wikipedia.org/wiki?curid=11335800
11335853
Pietro Mengoli
Italian mathematician Pietro Mengoli (1626, Bologna – June 7, 1686, Bologna) was an Italian mathematician and clergyman from Bologna, where he studied with Bonaventura Cavalieri at the University of Bologna, and succeeded him in 1647. He remained as professor there for the next 39 years of his life. Mengoli was a pivotal figure in the development of calculus. He established the divergence of the harmonic series nearly forty years before Jacob Bernoulli, to whom the discovery is generally attributed; he gave a series development of the logarithm thirteen years before Nicholas Mercator published his famous treatise "Logarithmotechnia". Mengoli also gave a definition of the definite integral which is not substantially different from that given more than a century later by Augustin-Louis Cauchy. Biography. Born in 1626, Pietro Mengoli studied mathematics and mechanics at the University of Bologna. After the death of his teacher, Bonaventura Cavalieri (1647), Mengoli became a lecturer in the new chair of mechanics from 1649–50 and subsequently taught mathematics at the University of Bologna in the years from 1678 to 1685. He was awarded a doctorate in philosophy in 1650 and, three years later, in civil and canon law. "Novae quadraturae arithmeticae" (1650), "Via regia ad mathematicas" (1655) and "Geometria" (1659), his earliest writings, earned him a wide reputation in Europe, especially in academic circles in London. In 1660 he was ordained a Catholic priest. A decade of silence followed until, in 1670, the "Speculationi di musica" and "Refrattioni e parallasse solare" were published. During the 1670s Mengoli devoted himself to constructing a theory of metaphysics, in which he tried to demonstrate revealed truths "more geometrico". "Circolo" (1672), "Anno" (1673), "Arithmetica rationalis" (1674) and "Il mese" (1681) are works devoted to the topics of "middle mathematics", cosmology and biblical chronology, logic and metaphysics. Mengoli also wrote a treatise on music theory, "Speculazioni di musica" [Speculations on music], much appreciated in his time and reviewed and partly translated by Henry Oldenburg in the "Philosophical Transactions of the Royal Society". Mengoli died in Bologna in 1686. Contributions. Mengoli first posed the famous Basel problem in 1650, solved in 1735 by Leonhard Euler. In 1650, he also proved that the sum of the alternating harmonic series is equal to the natural logarithm of 2. He also proved that the harmonic series has no upper bound, and provided a proof that Wallis' product for formula_0 is correct. Mengoli anticipated the modern idea of the limit of a sequence with his study of quasi-proportions in "Geometriae speciosae elementa" (1659). He used the term "quasi-infinite" for unbounded and "quasi-null" for vanishing. Mengoli proves theorems starting from clear hypotheses and explicitly stated properties, showing everything necessary, and proceeds to a step-by-step demonstration. In the margin he notes the theorems used in each line. Indeed, the work bears many similarities to a modern book and shows that Mengoli was ahead of his time in treating his subject with a high degree of rigor. Six square problem. Mengoli became enthralled with a Diophantine problem posed by Jacques Ozanam called the six-square problem: find three integers such that their differences are squares and such that the differences of their squares are also three squares. At first he thought that there was no solution, and in 1674 published his reasoning in "Theorema Arithmeticum".
But Ozanam then exhibited a solution: "x" = 2,288,168, "y" = 1,873,432, and "z" = 2,399,057. Humbled by his error, Mengoli made a study of Pythagorean triples to uncover the basis of this solution. He first solved an auxiliary Diophantine problem: find four numbers such that the sum of the first two is a square, the sum of the third and fourth is a square, their product is a square, and the ratio of the first two is greater than the ratio of the third to the fourth. He found two solutions: (112, 15, 35, 12) and (364, 27, 84, 13). Using these quadruples, and algebraic identities, he gave two solutions to the six-square problem beyond Ozanam’s solutions. Jacques de Billy also provided six-square problem solutions. Works. Pietro Mengoli's works were all published in Bologna: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
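A minimal check in Python (an illustration only; the three integers are the values quoted above) confirms that Ozanam's numbers satisfy the six-square conditions, i.e. that the three pairwise differences and the three differences of squares are all perfect squares:

import math

def is_square(n):
    r = math.isqrt(n)
    return r * r == n

x, y, z = 2288168, 1873432, 2399057
differences = [x - y, z - y, z - x]
square_differences = [x*x - y*y, z*z - y*y, z*z - x*x]
print(all(is_square(d) for d in differences + square_differences))  # prints True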
[ { "math_id": 0, "text": "\\pi" } ]
https://en.wikipedia.org/wiki?curid=11335853
11336042
Turbulent Prandtl number
The turbulent Prandtl number (Prt) is a non-dimensional term defined as the ratio between the momentum eddy diffusivity and the heat transfer eddy diffusivity. It is useful for solving the heat transfer problem of turbulent boundary layer flows. The simplest model for Prt is the Reynolds analogy, which yields a turbulent Prandtl number of 1. From experimental data, Prt has an average value of 0.85, but ranges from 0.7 to 0.9 depending on the Prandtl number of the fluid in question. Definition. The introduction of eddy diffusivity, and subsequently of the turbulent Prandtl number, provides a simple relationship between the extra shear stress and heat flux that are present in turbulent flow. If the momentum and thermal eddy diffusivities are zero (no apparent turbulent shear stress and heat flux), then the turbulent flow equations reduce to the laminar equations. We can define the eddy diffusivities for momentum transfer formula_0 and heat transfer formula_1 as formula_2 and formula_3 where formula_4 is the apparent turbulent shear stress and formula_5 is the apparent turbulent heat flux. The turbulent Prandtl number is then defined as formula_6 The turbulent Prandtl number has been shown to not generally equal unity (e.g. Malhotra and Kang, 1984; Kays, 1994; McEligot and Taylor, 1996; and Churchill, 2002). It is a strong function of the molecular Prandtl number, among other parameters, and the Reynolds analogy is not applicable when the molecular Prandtl number differs significantly from unity, as determined by Malhotra and Kang and elaborated by McEligot and Taylor and by Churchill. Application. Turbulent momentum boundary layer equation: formula_7 Turbulent thermal boundary layer equation: formula_8 Substituting the eddy diffusivities into the momentum and thermal equations yields formula_9 and formula_10 Substituting the definition of the turbulent Prandtl number into the thermal equation gives formula_11 Consequences. In the special case where the Prandtl number and turbulent Prandtl number both equal unity (as in the Reynolds analogy), the velocity profile and temperature profile are identical. This greatly simplifies the solution of the heat transfer problem. If the Prandtl number and turbulent Prandtl number are different from unity, then a solution is possible by knowing the turbulent Prandtl number, so that one can still solve the momentum and thermal equations. In a general case of three-dimensional turbulence, the concepts of eddy viscosity and eddy diffusivity are not valid. Consequently, the turbulent Prandtl number has no meaning. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
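A minimal numerical sketch of how the definition is used in practice (all numbers here are assumed values chosen only for illustration):

# assumed illustrative values
eps_M = 1.0e-3      # momentum eddy diffusivity, m^2/s
Pr_t  = 0.85        # turbulent Prandtl number (typical experimental average)
dT_dy = 40.0        # mean temperature gradient, K/m

eps_H = eps_M / Pr_t         # thermal eddy diffusivity, since Pr_t = eps_M / eps_H
heat_flux = eps_H * dT_dy    # modeled kinematic turbulent heat flux, -<v'T'> = eps_H * dT/dy
print(eps_H, heat_flux)      # about 1.18e-3 m^2/s and 0.047 K*m/s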
[ { "math_id": 0, "text": "\\varepsilon_M" }, { "math_id": 1, "text": "\\varepsilon_H" }, { "math_id": 2, "text": "-\\overline{u'v'} = \\varepsilon_M \\frac{\\partial \\bar{u}}{\\partial y}" }, { "math_id": 3, "text": "-\\overline{v'T'} = \\varepsilon_H \\frac{\\partial \\bar{T}}{\\partial y}" }, { "math_id": 4, "text": "-\\overline{u'v'}" }, { "math_id": 5, "text": "-\\overline{v'T'}" }, { "math_id": 6, "text": "\\mathrm{Pr}_\\mathrm{t} = \\frac{\\varepsilon_M}{\\varepsilon_H}." }, { "math_id": 7, "text": "\\bar {u} \\frac{\\partial \\bar{u}}{\\partial x} + \\bar {v} \\frac{\\partial \\bar{u}}{\\partial y} = -\\frac{1}{\\rho} \\frac{d\\bar{P}}{dx} + \\frac{\\partial}{\\partial y} \\left [(\\nu \\frac{\\partial \\bar{u}}{\\partial y} - \\overline{u'v'}) \\right]." }, { "math_id": 8, "text": "\\bar {u} \\frac{\\partial \\bar{T}}{\\partial x} + \\bar {v} \\frac{\\partial \\bar{T}}{\\partial y} = \\frac{\\partial}{\\partial y} \\left (\\alpha \\frac{\\partial \\bar{T}}{\\partial y} - \\overline{v'T'} \\right)." }, { "math_id": 9, "text": "\\bar {u} \\frac{\\partial \\bar{u}}{\\partial x} + \\bar {v} \\frac{\\partial \\bar{u}}{\\partial y} = -\\frac{1}{\\rho} \\frac{d\\bar{P}}{dx} + \\frac{\\partial}{\\partial y} \\left [(\\nu + \\varepsilon_M) \\frac{\\partial \\bar{u}}{\\partial y}\\right]" }, { "math_id": 10, "text": "\\bar {u} \\frac{\\partial \\bar{T}}{\\partial x} + \\bar {v} \\frac{\\partial \\bar{T}}{\\partial y} = \\frac{\\partial}{\\partial y} \\left [(\\alpha + \\varepsilon_H) \\frac{\\partial \\bar{T}}{\\partial y}\\right]." }, { "math_id": 11, "text": "\\bar {u} \\frac{\\partial \\bar{T}}{\\partial x} + \\bar {v} \\frac{\\partial \\bar{T}}{\\partial y} = \\frac{\\partial}{\\partial y} \\left [(\\alpha + \\frac{\\varepsilon_M}{\\mathrm{Pr}_\\mathrm{t}}) \\frac{\\partial \\bar{T}}{\\partial y}\\right]." } ]
https://en.wikipedia.org/wiki?curid=11336042
11336559
Traveler's dilemma
In game theory, the traveler's dilemma (sometimes abbreviated TD) is a non-zero-sum game in which each player proposes a payoff. The lower of the two proposals wins; the lowball player receives the lowball payoff plus a small bonus, and the highball player receives the same lowball payoff, minus a small penalty. Surprisingly, the Nash equilibrium is for both players to aggressively lowball. The traveler's dilemma is notable in that naive play appears to outperform the Nash equilibrium; this apparent paradox also appears in the centipede game and the finitely-iterated prisoner's dilemma. Formulation. The original game scenario was formulated in 1994 by Kaushik Basu and goes as follows: "An airline loses two suitcases belonging to two different travelers. Both suitcases happen to be identical and contain identical antiques. An airline manager tasked to settle the claims of both travelers explains that the airline is liable for a maximum of $100 per suitcase—he is unable to find out directly the price of the antiques." "To determine an honest appraised value of the antiques, the manager separates both travelers so they can't confer, and asks them to write down the amount of their value at no less than $2 and no larger than $100. He also tells them that if both write down the same number, he will treat that number as the true dollar value of both suitcases and reimburse both travelers that amount. However, if one writes down a smaller number than the other, this smaller number will be taken as the true dollar value, and both travelers will receive that amount along with a bonus/malus: $2 extra will be paid to the traveler who wrote down the lower value and a $2 deduction will be taken from the person who wrote down the higher amount. The challenge is: what strategy should both travelers follow to decide the value they should write down?" The two players attempt to maximize their own payoff, without any concern for the other player's payoff. Analysis. Backward induction only applies where there is perfect information. If it is used where there is information asymmetry (the airline manager does not know the value of the antiques), the result will be irrational behavior; this is what happens in the following analysis. One might expect a traveler's optimum choice to be $100; that is, the traveler values the antiques at the airline manager's maximum allowed price. Remarkably, and, to many, counter-intuitively, the Nash equilibrium solution is in fact just $2; that is, the traveler values the antiques at the airline manager's "minimum" allowed price. For an understanding of why $2 is the Nash equilibrium, consider the following argument: if both travelers wrote down $100, either one could do better by writing $99, receiving $101 instead of $100; but if one traveler is expected to write $99, the other does better still by writing $98, and so on, so that iterated elimination of dominated strategies unravels the claims all the way down to the minimum of $2. The above analysis depends crucially on (1) imperfect information (the airline manager does not know the true value) and (2) irrationality, in particular failure to use the Muth rational strategy. Another proof goes as follows: in any pair of claims other than ($2, $2), at least one traveler can strictly improve their payoff by lowering their claim, either by undercutting the other traveler by $1 or by matching a $2 claim; at ($2, $2), by contrast, raising one's claim only lowers the payoff from $2 to $0, so ($2, $2) is the unique Nash equilibrium. Experimental results. The ($2, $2) outcome in this instance is the Nash equilibrium of the game. By definition this means that if your opponent chooses this Nash equilibrium value then your best choice is that Nash equilibrium value of $2. This will not be the optimum choice if there is a chance of your opponent choosing a higher value than $2. When the game is played experimentally, most participants select a value higher than the Nash equilibrium and closer to $100 (corresponding to the Pareto optimal solution).
More precisely, the Nash equilibrium strategy solution proved to be a bad predictor of people's behavior in a traveler's dilemma with a small bonus/malus, and a rather good predictor if the bonus/malus parameter was big. Furthermore, the travelers are rewarded by deviating strongly from the Nash equilibrium in the game and obtain much higher rewards than would be realized with the purely rational strategy. These experiments (and others, such as focal points) show that the majority of people do not use purely rational strategies, but the strategies they do use are demonstrably optimal. This paradox could reduce the value of pure game theory analysis, but could also point to the benefit of an expanded reasoning that understands how it can be quite rational to make non-rational choices, at least in the context of games that have players that can be counted on to not play "rationally." For instance, Capraro has proposed a model where humans do not act a priori as single agents but forecast how the game would be played if they formed coalitions, and then act so as to maximize the forecast. His model fits the experimental data on the traveler's dilemma and similar games quite well. Recently, the traveler's dilemma was tested with decisions undertaken in groups rather than individually, in order to test the assumption that group decisions are more rational, delivering the message that, usually, two heads are better than one. Experimental findings show that groups are always more rational (i.e. their claims are closer to the Nash equilibrium) and more sensitive to the size of the bonus/malus. Some players appear to pursue a Bayesian Nash equilibrium. Similar games. The traveler's dilemma can be framed as a finitely repeated prisoner's dilemma. Similar paradoxes are attributed to the centipede game and to the p-beauty contest game (or more specifically, "Guess 2/3 of the average"). One variation of the original traveler's dilemma in which both travelers are offered only two integer choices, $2 or $3, is mathematically identical to the standard non-iterated prisoner's dilemma, and thus the traveler's dilemma can be viewed as an extension of the prisoner's dilemma. (The minimum guaranteed payout is $1, and each dollar beyond that may be considered equivalent to a year removed from a three-year prison sentence.) These games tend to involve deep iterative deletion of dominated strategies in order to demonstrate the Nash equilibrium, and tend to lead to experimental results that deviate markedly from classical game-theoretical predictions. Payoff matrix. The canonical payoff matrix, considering only integer claims, can be summarized as follows. Denoting by formula_0 the set of strategies available to both players and by formula_1 the payoff function of one of them, we can write formula_2 (Note that the other player receives formula_3 since the game is quantitatively symmetric.)
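A minimal sketch in Python of this payoff function, together with a brute-force search confirming that ($2, $2) is the only pure-strategy Nash equilibrium over the integer claims 2–100 (the function and variable names are chosen only for this illustration):

def payoff(x, y):
    """Payoff to the player claiming x when the other player claims y."""
    bonus = 2 if x < y else (-2 if x > y else 0)
    return min(x, y) + bonus

claims = range(2, 101)
equilibria = [
    (x, y)
    for x in claims
    for y in claims
    if payoff(x, y) >= max(payoff(a, y) for a in claims)    # x is a best response to y
    and payoff(y, x) >= max(payoff(b, x) for b in claims)   # y is a best response to x
]
print(equilibria)  # [(2, 2)]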
[ { "math_id": 0, "text": "S = \\{2,...,100\\}" }, { "math_id": 1, "text": "F: S \\times S \\rightarrow \\mathbb{R}" }, { "math_id": 2, "text": "F(x,y) = \\min(x,y) + 2\\cdot\\sgn(y-x)" }, { "math_id": 3, "text": "F(y,x)" } ]
https://en.wikipedia.org/wiki?curid=11336559
11336817
Filter (higher-order function)
Computer programming function In functional programming, filter is a higher-order function that processes a data structure (usually a list) in some order to produce a new data structure containing exactly those elements of the original data structure for which a given predicate returns the boolean value codice_0. Example. In Haskell, the code example filter even [1..10] evaluates to the list 2, 4, …, 10 by applying the predicate codice_1 to every element of the list of integers 1, 2, …, 10 in that order and creating a new list of those elements for which the predicate returns the boolean value true, thereby giving a list containing only the even members of that list. Conversely, the code example filter (not . even) [1..10] evaluates to the list 1, 3, …, 9 by collecting those elements of the list of integers 1, 2, …, 10 for which the predicate codice_1 returns the boolean value false (with codice_3 being the function composition operator). Visual example. The filter process can be followed step by step for a list of integers codice_4 according to the function formula_0. This function expresses that if formula_1 is even the return value is formula_2; otherwise it is formula_3. This is the predicate. Language comparison. Filter is a standard function for many programming languages, e.g., Haskell, OCaml, Standard ML, or Erlang. Common Lisp provides the functions codice_5 and codice_6. Scheme Requests for Implementation (SRFI) 1 provides an implementation of filter for the language Scheme. C++ provides the algorithms codice_7 (mutating) and codice_8 (non-mutating); C++11 additionally provides codice_9 (non-mutating). Smalltalk provides the codice_10 method for collections. Filter can also be realized using list comprehensions in languages that support them. In Haskell, codice_11 can be implemented like this:
filter :: (a -> Bool) -> [a] -> [a]
filter _ [] = []
filter p (x:xs) = [x | p x] ++ filter p xs
Here, codice_12 denotes the empty list, codice_13 the list concatenation operation, and codice_14 denotes a list conditionally holding a value, codice_15, if the condition codice_16 holds (evaluates to codice_17). Variants. Filter creates its result without modifying the original list. Many programming languages also provide variants that destructively modify the list argument instead for faster performance. Other variants of filter (e.g., Haskell codice_18 and codice_19) are also common. A common memory optimization for purely functional programming languages is to have the input list and filtered result share the longest common tail (tail-sharing). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
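A minimal sketch in Python of the same idea (the variable names are chosen only for this illustration): the built-in filter and an equivalent list comprehension select the even members of the list 1, 2, …, 10, mirroring the Haskell example above.

xs = list(range(1, 11))                         # the list 1, 2, ..., 10
evens = list(filter(lambda x: x % 2 == 0, xs))  # keep elements where the predicate is True
comp = [x for x in xs if x % 2 == 0]            # equivalent list comprehension
print(evens)          # [2, 4, 6, 8, 10]
print(comp == evens)  # True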
[ { "math_id": 0, "text": "f(x) = \\begin{cases}\n\\mathrm{True} &\\text{ if } x \\equiv 0 \\pmod{2}\\\\\n\\mathrm{False} & \\text{ if } x \\equiv 1 \\pmod{2}.\n\\end{cases}" }, { "math_id": 1, "text": "x" }, { "math_id": 2, "text": "\\mathrm{True}" }, { "math_id": 3, "text": "\\mathrm{False}" } ]
https://en.wikipedia.org/wiki?curid=11336817
11337263
Milman's reverse Brunn–Minkowski inequality
In mathematics, particularly, in asymptotic convex geometry, Milman's reverse Brunn–Minkowski inequality is a result due to Vitali Milman that provides a reverse inequality to the famous Brunn–Minkowski inequality for convex bodies in "n"-dimensional Euclidean space R"n". Namely, it bounds the volume of the Minkowski sum of two bodies from above in terms of the volumes of the bodies. Introduction. Let "K" and "L" be convex bodies in R"n". The Brunn–Minkowski inequality states that formula_0 where vol denotes "n"-dimensional Lebesgue measure and the + on the left-hand side denotes Minkowski addition. In general, no reverse bound is possible, since one can find convex bodies "K" and "L" of unit volume so that the volume of their Minkowski sum is arbitrarily large. Milman's theorem states that one can replace one of the bodies by its image under a properly chosen volume-preserving linear map so that the left-hand side of the Brunn–Minkowski inequality is bounded by a constant multiple of the right-hand side. The result is one of the main structural theorems in the local theory of Banach spaces. Statement of the inequality. There is a constant "C", independent of "n", such that for any two centrally symmetric convex bodies "K" and "L" in R"n", there are volume-preserving linear maps "φ" and "ψ" from R"n" to itself such that for any real numbers "s", "t" &gt; 0 formula_1 One of the maps may be chosen to be the identity. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
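A minimal illustration (the particular bodies are chosen only for this sketch) of why no reverse bound is possible without the volume-preserving maps: the axis-aligned boxes K = [0, n] × [0, 1/n] and L = [0, 1/n] × [0, n] each have unit volume, yet the volume of K + L grows roughly like n². In Python:

def minkowski_box_volume(k_sides, l_sides):
    # the Minkowski sum of axis-aligned boxes is the box whose side lengths are the sums
    return (k_sides[0] + l_sides[0]) * (k_sides[1] + l_sides[1])

for n in (10, 100, 1000):
    K = (n, 1 / n)   # unit-volume box stretched along x
    L = (1 / n, n)   # unit-volume box stretched along y
    print(n, minkowski_box_volume(K, L))   # about n^2, while vol(K) = vol(L) = 1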
[ { "math_id": 0, "text": " \\mathrm{vol}(K+L)^{1/n} \\geq \\mathrm{vol}(K)^{1/n} + \\mathrm{vol}(L)^{1/n}~," }, { "math_id": 1, "text": "\\mathrm{vol} ( s \\, \\varphi K + t \\, \\psi L )^{1/n} \\leq C \\left( s\\, \\mathrm{vol} ( \\varphi K )^{1/n} + t\\, \\mathrm{vol} ( \\psi L )^{1/n} \\right)~." } ]
https://en.wikipedia.org/wiki?curid=11337263
11338826
Admissible heuristic
Computer science pathfinding concept In computer science, specifically in algorithms related to pathfinding, a heuristic function is said to be admissible if it never overestimates the cost of reaching the goal, i.e. the cost it estimates to reach the goal is not higher than the lowest possible cost from the current point in the path. It is related to the concept of consistent heuristics. While all consistent heuristics are admissible, not all admissible heuristics are consistent. Search algorithms. An admissible heuristic is used to estimate the cost of reaching the goal state in an informed search algorithm. In order for a heuristic to be admissible for the search problem, the estimated cost must always be lower than or equal to the actual cost of reaching the goal state. The search algorithm uses the admissible heuristic to find an estimated optimal path to the goal state from the current node. For example, in A* search the evaluation function (where formula_0 is the current node) is: formula_1 where formula_2 is the evaluation function, formula_3 is the cost from the start node to the current node, and formula_4 is the estimated cost from the current node to the goal, calculated using the heuristic function. With a non-admissible heuristic, the A* algorithm could overlook the optimal solution to a search problem due to an overestimation in formula_2. Formulation. Here formula_0 is a node, formula_5 is a heuristic, formula_4 is the cost indicated by formula_5 to reach a goal from formula_0, and formula_6 is the optimal cost to reach a goal from formula_0. The heuristic formula_4 is admissible if formula_7 formula_8 Construction. An admissible heuristic can be derived from a relaxed version of the problem, or from information in pattern databases that store exact solutions to subproblems of the problem, or by using inductive learning methods. Examples. Two different examples of admissible heuristics apply to the fifteen puzzle problem: the Hamming distance and the Manhattan distance. The Hamming distance is the total number of misplaced tiles. It is clear that this heuristic is admissible since the total number of moves to order the tiles correctly is at least the number of misplaced tiles (each tile not in place must be moved at least once). The cost (number of moves) to the goal (an ordered puzzle) is at least the Hamming distance of the puzzle. The Manhattan distance of a puzzle is defined as: formula_9 Consider a puzzle in which the player wishes to move each tile such that the numbers are ordered. The Manhattan distance is an admissible heuristic in this case because every tile will have to be moved at least the number of spots in between itself and its correct position. For one such configuration, summing the Manhattan distance of each tile gives a total of: formula_10 Optimality proof. If an admissible heuristic is used in an algorithm that, per iteration, progresses only the path of lowest evaluation (current cost + heuristic) of several candidate paths, terminates the moment its exploration reaches the goal and, crucially, never closes all optimal paths before terminating (something that is possible with the A* search algorithm if special care isn't taken), then this algorithm can only terminate on an optimal path. To see why, consider the following proof by contradiction: Assume such an algorithm managed to terminate on a path T with a true cost Ttrue greater than the optimal path S with true cost Strue.
This means that before terminating, the evaluated cost of T was less than or equal to the evaluated cost of S (or else S would have been picked). Denote these evaluated costs Teval and Seval respectively. The above can be summarized as follows: Strue < Ttrue and Teval ≤ Seval. If our heuristic is admissible, it follows that at this penultimate step Teval = Ttrue, because any increase of the true cost by the heuristic on T would be inadmissible, and the heuristic cannot be negative. On the other hand, an admissible heuristic requires that Seval ≤ Strue, which combined with the above inequalities gives us Teval < Ttrue and more specifically Teval ≠ Ttrue. As Teval and Ttrue cannot be both equal and unequal, our assumption must have been false, and so it must be impossible to terminate on a more costly than optimal path. As an example, let us say we have costs as follows (the cost above/below a node is the heuristic, the cost at an edge is the actual cost):
   0      10      0      100      0
 START -------- O -------- GOAL
   |                         |
  0|                         |100
   |                         |
   O -------- O -------- O
  100    1   100    1   100
So clearly we would start off visiting the top middle node, since the expected total cost, i.e. formula_2, is formula_11. Then the goal would be a candidate, with formula_2 equal to formula_12. Then we would clearly pick the bottom nodes one after the other, followed by the updated goal, since they all have formula_2 lower than the formula_2 of the current goal, i.e. their formula_2 is formula_13. So even though the goal was a candidate, we could not pick it because there were still better paths out there. This way, an admissible heuristic can ensure optimality. However, note that although an admissible heuristic can guarantee final optimality, it is not necessarily efficient. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
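A minimal sketch in Python of the Manhattan-distance heuristic for the fifteen puzzle discussed above (the state encoding, a tuple of 16 entries in row-major order with 0 for the blank, and the choice to exclude the blank from the sum are assumptions of this sketch; excluding the blank keeps the heuristic admissible):

def manhattan(state):
    """Sum over tiles of |row - goal_row| + |col - goal_col| (blank excluded)."""
    total = 0
    for index, tile in enumerate(state):
        if tile == 0:
            continue                       # skip the blank
        goal = tile - 1                    # tile t belongs at index t - 1
        total += abs(index // 4 - goal // 4) + abs(index % 4 - goal % 4)
    return total

solved = tuple(list(range(1, 16)) + [0])
print(manhattan(solved))                   # 0
print(manhattan((2, 1) + solved[2:]))      # 2: two adjacent tiles swapped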
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "f(n) = g(n) + h(n)" }, { "math_id": 2, "text": "f(n)" }, { "math_id": 3, "text": "g(n)" }, { "math_id": 4, "text": "h(n)" }, { "math_id": 5, "text": "h" }, { "math_id": 6, "text": "h^*(n)" }, { "math_id": 7, "text": "\\forall n" }, { "math_id": 8, "text": "h(n) \\leq h^*(n)" }, { "math_id": 9, "text": "h(n)=\\sum_\\text{all tiles} \\mathit{distance}(\\text{tile, correct position})" }, { "math_id": 10, "text": "h(n)=3+1+0+1+2+3+3+4+3+2+4+4+4+1+1=36" }, { "math_id": 11, "text": "10 + 0 = 10" }, { "math_id": 12, "text": "10+100+0=110" }, { "math_id": 13, "text": "100, 101, 102, 102" } ]
https://en.wikipedia.org/wiki?curid=11338826
1133918
Snub square antiprism
85th Johnson solid (26 faces) In geometry, the snub square antiprism is the Johnson solid that can be constructed by snubbing the square antiprism. It is one of the elementary Johnson solids that do not arise from "cut and paste" manipulations of the Platonic and Archimedean solids, although it is a relative of the icosahedron that has fourfold symmetry instead of threefold. Construction and properties. Snubification is the process of constructing a polyhedron by cutting its faces loose along the edges, twisting them, and then attaching equilateral triangles along the freed edges. As the name suggests, the snub square antiprism is constructed by snubbing the square antiprism, and this construction results in 24 equilateral triangles and 2 squares as its faces. The Johnson solids are the convex polyhedra whose faces are regular, and the snub square antiprism is one of them, enumerated as formula_1, the 85th Johnson solid. Let formula_2 be the positive root of the cubic polynomial formula_3 Furthermore, let formula_4 be defined by formula_5 Then, Cartesian coordinates of a snub square antiprism with edge length 2 are given by the union of the orbits of the points formula_6 under the action of the group generated by a rotation around the formula_7-axis by 90° and by a rotation by 180° around a straight line perpendicular to the formula_7-axis and making an angle of 22.5° with the formula_8-axis. It has the three-dimensional symmetry of the dihedral group formula_0 of order 16. The surface area and volume of a snub square antiprism with edge length formula_9 can be calculated as: formula_10 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
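A minimal numerical sketch (the bisection root-finder is a choice made only for this illustration) that recovers the constants quoted above by solving the cubic for formula_2 and evaluating formula_4 from it:

import math

sqrt2, sqrt3, sqrt6 = math.sqrt(2), math.sqrt(3), math.sqrt(6)

def cubic(x):
    return 9*x**3 + 3*sqrt3*(5 - sqrt2)*x**2 - 3*(5 - 2*sqrt2)*x - 17*sqrt3 + 7*sqrt6

# bisection on [0, 1]; the polynomial is negative at 0 and positive at 1
lo, hi = 0.0, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    if cubic(mid) > 0:
        hi = mid
    else:
        lo = mid
k = (lo + hi) / 2

h = (sqrt2 + 8 + 2*sqrt3*k - 3*(2 + sqrt2)*k**2) / (4 * math.sqrt(3 - 3*k**2))
print(k, h)   # approx. 0.82354 and 1.35374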
[ { "math_id": 0, "text": " D_{4d} " }, { "math_id": 1, "text": " J_{85} " }, { "math_id": 2, "text": " k \\approx 0.82354 " }, { "math_id": 3, "text": " 9x^3+3\\sqrt{3}\\left(5-\\sqrt{2}\\right)x^2-3\\left(5-2\\sqrt{2}\\right)x-17\\sqrt{3}+7\\sqrt{6}. " }, { "math_id": 4, "text": " h \\approx 1.35374 " }, { "math_id": 5, "text": " h = \\frac{\\sqrt{2}+8+2\\sqrt{3}k-3\\left(2+\\sqrt{2}\\right)k^2}{4\\sqrt{3-3k^2}}. " }, { "math_id": 6, "text": " (1,1,h),\\,\\left(1+\\sqrt{3}k,0,h-\\sqrt{3-3k^2}\\right) " }, { "math_id": 7, "text": " z " }, { "math_id": 8, "text": " x " }, { "math_id": 9, "text": " a " }, { "math_id": 10, "text": " \\begin{align}\n A = \\left(2+6\\sqrt{3}\\right)a^2 &\\approx 12.392a^2, \\\\\n V &\\approx 3.602 a^3.\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=1133918
11339537
Multiply-with-carry pseudorandom number generator
Method for generating sequences of random integers In computer science, multiply-with-carry (MWC) is a method invented by George Marsaglia for generating sequences of random integers based on an initial set from two to many thousands of randomly chosen seed values. The main advantages of the MWC method are that it invokes simple computer integer arithmetic and leads to very fast generation of sequences of random numbers with immense periods, ranging from around formula_0 to formula_1. As with all pseudorandom number generators, the resulting sequences are functions of the supplied seed values. General theory. An MWC generator is a special form of Lehmer random number generator formula_2 which allows efficient implementation of a prime modulus formula_3 much larger than the machine word size. Normal Lehmer generator implementations choose a modulus close to the machine word size. An MWC generator instead maintains its state in base formula_4, so multiplying by formula_4 is done implicitly by shifting one word. The base formula_4 is typically chosen to equal the computer's word size, as this makes arithmetic modulo formula_4 trivial. This may vary from formula_5 for a microcontroller to formula_6. (This article uses formula_7 for examples.) The initial state ("seed") values are arbitrary, except that they must not be all zero, nor all at the maximum permitted values (formula_8 and formula_9). (This is commonly done by choosing formula_10 between 1 and formula_11.) The MWC sequence is then a sequence of pairs formula_12 determined by formula_13 This is called a lag-1 MWC sequence. Sometimes an odd base is preferred, in which case formula_14 can be used, which is almost as simple to implement. A lag-formula_15 sequence is a generalization of the lag-1 sequence allowing longer periods. The lag-formula_15 MWC sequence is then a sequence of pairs formula_12 (for formula_16) determined by formula_17 and the MWC generator output is the sequence of formula_18's, formula_19 In this case, the initial state ("seed") values must not be all zero nor formula_8 and formula_20. The MWC multiplier formula_21 and lag formula_15 determine the modulus formula_22. In practice, formula_21 is chosen so the modulus is prime and the sequence has long period. If the modulus is prime, the period of a lag-formula_15 MWC generator is the order of formula_4 in the multiplicative group of numbers modulo formula_3. While it is theoretically possible to choose a non-prime modulus, a prime modulus eliminates the possibility of the initial seed sharing a common divisor with the modulus, which would reduce the generator's period. Because 2 is a quadratic residue of numbers of the form formula_23, formula_24 cannot be a primitive root of formula_22. Therefore, MWC generators with base formula_25 have their parameters chosen so their period is ("ab"^"r" − 2)/2. This is one of the difficulties that use of "b" = 2^"k" − 1 overcomes. The basic form of an MWC generator has parameters "a", "b" and "r", and "r" + 1 words of state. The state consists of "r" residues modulo "b", 0 ≤ "x"0, "x"1, "x"2, ..., "x""r"−1 < "b", and a carry "c""r"−1 < "a". Although the theory of MWC generators permits "a" > "b", "a" is almost always chosen smaller for convenience of implementation. The state transformation function of an MWC generator is one step of Montgomery reduction modulo "p". The state is a large integer with most significant word "c""n"−1 and least significant word "x""n"−"r". Each step, "x""n"−"r"·("ab"^"r" − 1) is added to this integer.
This is done in two parts: −1·"x""n"−"r" is added to "x""n"−"r", resulting in a least significant word of zero. And second, "a"·"x""n"−"r" is added to the carry. This makes the integer one word longer, producing two new most significant words "x"n and "c"n. So far, this has simply added a multiple of "p" to the state, resulting in a different representative of the same residue class modulo "p". But finally, the state is shifted down one word, dividing by "b". This discards the least significant word of zero (which, in practice, is never computed at all) and effectively multiplies the state by "b"^−1 (mod "p"). Thus, a multiply-with-carry generator "is" a Lehmer generator with modulus "p" and multiplier "b"^−1 (mod "p"). This is the same as a generator with multiplier "b", but producing output in reverse order, which does not affect the quality of the resultant pseudorandom numbers. Couture and L'Ecuyer have proved the surprising result that the lattice associated with a multiply-with-carry generator is very close to the lattice associated with the Lehmer generator it simulates. Thus, the mathematical techniques developed for Lehmer generators (such as the spectral test) can be applied to multiply-with-carry generators. Efficiency. A linear congruential generator with base "b" = 2^32 is implemented as formula_26 where "c" is a constant. If "a" ≡ 1 (mod 4) and "c" is odd, the resulting base-2^32 congruential sequence will have period 2^32. This can be computed using only the low 32 bits of the product of "a" and the current "x". However, many microprocessors can compute a full 64-bit product in almost the same time as the low 32 bits. Indeed, many compute the 64-bit product and then ignore the high half. A lag-1 multiply-with-carry generator allows us to make the period nearly 2^63 by using almost the same computer operations, except that the top half of the 64-bit product is not ignored after the product is computed. Instead, a 64-bit sum is computed, and the top half is used as a "new" carry value "c" rather than the fixed additive constant of the standard congruential sequence: Compute "ax" + "c" in 64 bits, then use the top half as the new "c", and the bottom half as the new "x". Choice of multiplier. With multiplier "a" specified, each pair of input values "x", "c" is converted to a new pair, formula_27 If "x" and "c" are not both zero, then the period of the resulting multiply-with-carry sequence will be the order of "b" = 2^32 in the multiplicative group of residues modulo "ab"^"r" − 1, that is, the smallest "n" such that "b"^"n" ≡ 1 (mod "ab"^"r" − 1). If "p" = "ab"^"r" − 1 is prime, then Fermat's little theorem guarantees that the order of any element must divide "p" − 1 = "ab"^"r" − 2, so one way to ensure a large order is to choose "a" such that "p" is a "safe prime", that is, both "p" and ("p" − 1)/2 = "ab"^"r"/2 − 1 are prime. In such a case, for "b" = 2^32 and "r" = 1, the period will be "ab"^"r"/2 − 1, approaching 2^63, which in practice may be an acceptably large subset of the number of possible 32-bit pairs ("x", "c"). More specifically, in such a case, the order of "any" element divides "p" − 1, and there are only four possible divisors: 1, 2, "ab"^"r"/2 − 1, or "ab"^"r" − 2. The first two apply only to the elements 1 and −1, and quadratic reciprocity arguments show that the fourth option cannot apply to "b", so only the third option remains.
Following are some maximal values of "a" for computer applications which satisfy the above safe prime condition, for lag-1 generators: While a safe prime ensures that almost "any" element of the group has a large order, the period of the generator is specifically the order of "b". For small moduli, more computationally expensive methods can be used to find multipliers "a" where the period is "ab"/2 − 1. The following are again maximum values of "a" of various sizes. MWC generators as repeating decimals. The output of a multiply-with-carry generator is equivalent to the radix-"b" expansion of a fraction with denominator "p" = "ab"^"r" − 1. Here is an example for the simple case of "b" = 10 and "r" = 1, so the result is a repeating decimal. Starting with formula_28, the MWC sequence formula_29 produces this sequence of states: 10,01,07,49,67,55,40,04,28,58,61,13,22,16,43,25,37,52,19,64,34,31, 10,01,07... with period 22. Consider just the sequence of "x"i: 0,1,7,9,7,5,0,4,8,8,1,3,2,6,3,5,7,2,9,4,4,1, 0,1,7,9,7,5,0... Notice that if those repeated segments of "x" values are put "in reverse order": formula_30 we get the expansion "j"/("ab" − 1) with "a" = 7, "b" = 10, "j" = 10: formula_31 This is true in general: the sequence of "x"s produced by a lag-"r" MWC generator: formula_32 when put in reverse order, will be the radix-"b" expansion of a fraction "j"/("ab"^"r" − 1) for some 0 < "j" < "ab"^"r". Equivalence to linear congruential generator. Continuing the above example, if we start with formula_33, and generate the ordinary congruential sequence formula_34, we get the period 22 sequence 31,10,1,7,49,67,55,40,4,28,58,61,13,22,16,43,25,37,52,19,64,34, 31,10,1,7... and that sequence, reduced mod 10, is 1,0,1,7,9,7,5,0,4,8,8,1,3,2,6,3,5,7,2,9,4,4, 1,0,1,7,9,7,5,0... the same sequence of "x"'s resulting from the MWC sequence. This is true in general (but apparently only for lag-1 MWC sequences): Given initial values formula_35, the sequence formula_36 resulting from the lag-1 MWC sequence formula_37 is exactly the Lehmer random number generator output sequence "y"n = "a"·"y"n−1 mod ("ab" − 1), reduced modulo "b". Choosing a different initial value "y"0 merely rotates the cycle of "x"'s. Complementary-multiply-with-carry generators. Establishing the period of a lag-"r" MWC generator usually entails choosing multiplier "a" so that "p" = "ab"^"r" − 1 is prime. Then "p" − 1 will have to be factored in order to find the order of "b" mod "p". If "p" is a safe prime, then this is simple, and the order of "b" will be either "p" − 1 or ("p" − 1)/2. In other cases, "p" − 1 may be difficult to factor. However, the algorithm also permits a "negative" multiplier. This leads to a slight modification of the MWC procedure, and produces a modulus "p" = |−"ab"^"r" − 1| = "ab"^"r" + 1. This makes "p" − 1 = "ab"^"r" easy to factor, making it practical to establish the period of very large generators. The modified procedure is called complementary-multiply-with-carry (CMWC), and the setup is the same as that for lag-"r" MWC: multiplier "a", base "b", and "r" + 1 seeds, "x"0, "x"1, "x"2, ..., "x""r"−1, and "c""r"−1. The modification is to the generation of a new pair ("x", "c").
Rearranging the computation to avoid negative numbers, the new "x" value is complemented by subtracting it from "b" − 1: formula_38 The resulting sequence of "x"'s produced by the CMWC RNG will have period the order of "b" in the multiplicative group of residues modulo "ab"^"r" + 1, and the output "x"'s, in reverse order, will form the base "b" expansion of "j"/("ab"^"r" + 1) for some 0 < "j" < "ab"^"r". Use of lag-"r" CMWC makes it much easier to find periods for "r"'s as large as 512, 1024, 2048, etc. Another advantage of this modified procedure is that the period is a multiple of "b", so the output is exactly equidistributed mod "b". (The ordinary MWC, over its full period, produces each possible output an equal number of times "except" that zero is produced one time less, a bias which is negligible if the period is long enough.) One "disadvantage" of the CMWC construction is that, with a power-of-two base, the maximum achievable period is less than for a similar-sized MWC generator; you lose several bits. Thus, an MWC generator is usually preferable for small lags. This can be remedied by using "b" = 2^"k" − 1, or choosing a lag one word longer to compensate. Some examples: With "b" = 2^32 and "a" = 109111 or 108798 or 108517, the period of the lag-1024 CMWC formula_39 will be "a"·2^32762 = "ab"^1024/64, about 10^9867. With "b" = 2^32 and "a" = 3636507990, "p" = "ab"^1359 − 1 is a safe prime, so the MWC sequence based on that "a" has period 3636507990·2^43487 ≈ 10^13101. With "b" = 2^32, a CMWC RNG with near record period may be based on the prime "p" = 15455296·"b"^42658 + 1. The order of "b" for that prime is 241489·2^1365056 ≈ 10^410928. More general moduli. The MWC modulus of "ab"^"r" − 1 is chosen to make computation particularly simple, but brings with it some disadvantages, notably that the period is at most half the modulus. There are several ways to generalize this, at the cost of more multiplications per iteration. First, it is possible to add additional terms to the product, producing a modulus of the form "a"r·"b"^"r" + "a"s·"b"^"s" − 1. This requires computing "c"n·"b" + "x"n = "a"r·"x"n−r + "a"s·"x"n−s. (The carry is limited to one word if "a"r + "a"s ≤ "b".) However, this does not fix the period issue, which depends on the low bits of the modulus. Fortunately, the Montgomery reduction algorithm permits other moduli, as long as they are relatively prime to the base "b", and this can be applied to permit a modulus of the form "a"r·"b"^"r" − "a"0, for a wide range of values "a"0. Goresky and Klapper developed the theory of these generalized multiply-with-carry generators, proving, in particular, that choosing a negative "a"0 with "a"r − "a"0 < "b", the carry value is always smaller than "b", making the implementation efficient. The more general form of the modulus also improves the quality of the generator, albeit one cannot always get full period. To implement a Goresky-Klapper generator one precomputes the inverse of "a"0 (mod "b"), and changes the iteration as follows: formula_40 In the common case that "b" = 2^"k", "a"0 must be odd for the inverse to exist. Implementation. The following is an implementation of the CMWC algorithm in the C programming language. Also included in the program is a sample initialization function. In this implementation the base is 2^32 − 1 and the lag is "r" = 4096. The period of the resulting generator is about formula_41.
// C99 Complementary Multiply With Carry generator
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

// How many bits in rand()?
// https://stackoverflow.com/a/27593398
// The original listing derived RAND_BITS from RAND_MAX with a macro; the portable
// minimum of 15 bits guaranteed by the C standard is assumed here instead.
#define RAND_BITS 15

// Lag (a power of two, matching r = 4096 above) and bound on the initial carry;
// the CMWC_C_MAX value follows Marsaglia's published seeding bound for CMWC4096.
#define CMWC_CYCLE 4096
#define CMWC_C_MAX 809430660

// CMWC working parts
struct cmwc_state {
	uint32_t Q[CMWC_CYCLE];
	uint32_t c;	// must be limited with CMWC_C_MAX
	unsigned i;
};

// Collect 32 bits of rand(). You are encouraged to use a better source instead.
uint32_t rand32(void)
{
	uint32_t result = rand();
	for (int bits = RAND_BITS; bits < 32; bits += RAND_BITS)
		result = result << RAND_BITS | rand();
	return result;
}

// Init the state with seed
void initCMWC(struct cmwc_state *state, unsigned int seed)
{
	srand(seed);
	for (int i = 0; i < CMWC_CYCLE; i++)
		state->Q[i] = rand32();
	do
		state->c = rand32();
	while (state->c >= CMWC_C_MAX);
	state->i = CMWC_CYCLE - 1;
}

// CMWC engine
uint32_t randCMWC(struct cmwc_state *state)
{
	uint64_t const a = 18782;	// as Marsaglia recommends
	uint32_t const m = 0xfffffffe;	// as Marsaglia recommends
	uint64_t t;
	uint32_t x;

	state->i = (state->i + 1) & (CMWC_CYCLE - 1);
	t = a * state->Q[state->i] + state->c;
	/* Let c = t / 0xffffffff, x = t mod 0xffffffff */
	state->c = t >> 32;
	x = t + state->c;
	if (x < state->c) {
		x++;
		state->c++;
	}
	return state->Q[state->i] = m - x;
}

int main()
{
	struct cmwc_state cmwc;
	unsigned int seed = time(NULL);

	initCMWC(&cmwc, seed);
	printf("Random CMWC: %u\n", randCMWC(&cmwc));
}
The following are implementations of small-state MWC generators with 64-bit output using 128-bit multiplications. (The multiplier constants MWC_A1 and MWC_A3 are assumed to be #defined elsewhere, chosen so that the corresponding modulus MWC_A1·2^64 − 1, respectively MWC_A3·2^192 − 1, is a safe prime.)
// C99 + __uint128_t MWC, 128 bits of state, period approx. 2^127

/* The state must be neither all zero, nor x = 2^64 - 1, c = MWC_A1 - 1.
   The condition 0 < c < MWC_A1 - 1 is thus sufficient. */
uint64_t x, c = 1;

uint64_t inline next() {
	const __uint128_t t = MWC_A1 * (__uint128_t)x + c;
	c = t >> 64;
	return x = t;
}
// C99 + __uint128_t MWC, 256 bits of state, period approx. 2^255

/* The state must be neither all zero, nor x = y = z = 2^64 - 1, c = MWC_A3 - 1.
   The condition 0 < c < MWC_A3 - 1 is thus sufficient. */
uint64_t x, y, z, c = 1;

uint64_t inline next() {
	const __uint128_t t = MWC_A3 * (__uint128_t)x + c;
	x = y;
	y = z;
	c = t >> 64;
	return z = t;
}
The following are implementations of small-state Goresky-Klapper's generalized MWC generators with 64-bit output using 128-bit multiplications. (The constants GMWC_A1, GMWC_A3, GMWC_MINUSA0 and GMWC_A0INV are likewise assumed to be #defined elsewhere; GMWC_MINUSA0 is the negation of the coefficient a0 and GMWC_A0INV its inverse modulo 2^64.)
// C99 + __uint128_t Goresky-Klapper GMWC, 128 bits of state, period approx. 2^127

/* The state must be neither all zero, nor x = 2^64 - 1, c = GMWC_A1 + GMWC_MINUSA0.
   The condition 0 < c < GMWC_A1 + GMWC_MINUSA0 is thus sufficient. */
uint64_t x = 0, c = 1;

uint64_t inline next() {
	const __uint128_t t = GMWC_A1 * (__uint128_t)x + c;
	x = GMWC_A0INV * (uint64_t)t;
	c = (t + GMWC_MINUSA0 * (__uint128_t)x) >> 64;
	return x;
}
// C99 + __uint128_t Goresky-Klapper GMWC, 256 bits of state, period approx. 2^255

/* The state must be neither all zero, nor x = y = z = 2^64 - 1, c = GMWC_A3 + GMWC_MINUSA0.
   The condition 0 < c < GMWC_A3 + GMWC_MINUSA0 is thus sufficient. */
uint64_t x, y, z, c = 1;	/* The state can be seeded with any set of values, not all zeroes. */

uint64_t inline next() {
	const __uint128_t t = GMWC_A3 * (__uint128_t)x + c;
	x = y;
	y = z;
	z = GMWC_A0INV * (uint64_t)t;
	c = (t + GMWC_MINUSA0 * (__uint128_t)z) >> 64;
	return z;
}
Usage. Because of its simplicity and speed, CMWC is known to be used in game development, particularly in modern roguelike games. It is informally known as the Mother of All PRNGs, a name originally coined by Marsaglia himself. In libtcod, CMWC4096 replaced MT19937 as the default PRNG. References.
[ { "math_id": 0, "text": "2^{60}" }, { "math_id": 1, "text": "2^{2000000}" }, { "math_id": 2, "text": "x_n = bx_{n-1} \\bmod p" }, { "math_id": 3, "text": "p" }, { "math_id": 4, "text": "b" }, { "math_id": 5, "text": "b = 2^8" }, { "math_id": 6, "text": "b = 2^{64}" }, { "math_id": 7, "text": "b = 2^{32}" }, { "math_id": 8, "text": "x_0 = b-1" }, { "math_id": 9, "text": "c_0 = a-1" }, { "math_id": 10, "text": "c_0" }, { "math_id": 11, "text": "a-2" }, { "math_id": 12, "text": "x_n, c_n" }, { "math_id": 13, "text": "x_n=(ax_{n-1}+c_{n-1})\\,\\bmod\\,b,\\ c_n=\\left\\lfloor\\frac{ax_{n-1}+c_{n-1}}{b}\\right\\rfloor" }, { "math_id": 14, "text": "b = 2^k-1" }, { "math_id": 15, "text": "r" }, { "math_id": 16, "text": "n > r" }, { "math_id": 17, "text": "x_n=(ax_{n-r}+c_{n-1})\\,\\bmod\\,b,\\ c_n=\\left\\lfloor\\frac{ax_{n-r}+c_{n-1}}{b}\\right\\rfloor" }, { "math_id": 18, "text": "x" }, { "math_id": 19, "text": "x_r, x_{r+1}, x_{r+2}, ..." }, { "math_id": 20, "text": "c_{r-1} = a-1" }, { "math_id": 21, "text": "a" }, { "math_id": 22, "text": "p = ab^r-1" }, { "math_id": 23, "text": "8k \\pm 1" }, { "math_id": 24, "text": "b = 2^k" }, { "math_id": 25, "text": "2^k" }, { "math_id": 26, "text": "x_{n+1}=(ax_n+c)\\ \\bmod\\,2^{32}," }, { "math_id": 27, "text": "x\\leftarrow (ax+c)\\,\\bmod\\,2^{32},\\ \\ c\\leftarrow \\left\\lfloor\\frac{ax+c}{2^{32}}\\right\\rfloor." }, { "math_id": 28, "text": "c_0=1,x_0=0" }, { "math_id": 29, "text": "x_n=(7x_{n-1}+c_{n-1})\\,\\bmod\\,10,\\ c_n=\\left\\lfloor\\frac{7x_{n-1}+c_{n-1}}{10}\\right\\rfloor," }, { "math_id": 30, "text": "1449275\\cdots 9710\\,1449275\\cdots 9710\\,144\\cdots" }, { "math_id": 31, "text": "\\frac{10}{69}=0.1449275362318840579710\\,14492753623\\ldots" }, { "math_id": 32, "text": " x_n=(ax_{n-r}+c_{n-1})\\bmod\\,b\\,,\\ \\ c_n=\\left\\lfloor\\frac{ax_{n-r}+c_{n-1}}{b}\\right\\rfloor," }, { "math_id": 33, "text": "x_0=34" }, { "math_id": 34, "text": "x_n=7x_{n-1}\\,\\bmod\\,69" }, { "math_id": 35, "text": "x_0,c_0" }, { "math_id": 36, "text": "x_1,x_2,\\ldots" }, { "math_id": 37, "text": "x_n=(ax_{n-1}+c_{n-1})\\,\\bmod b\\,,\\ \\ c_n=\\left\\lfloor\\frac{ax_{n-1}+c_{n-1}}{b}\\right\\rfloor" }, { "math_id": 38, "text": "x_n=(b-1)-(ax_{n-r}+c_{n-1})\\,\\bmod\\,b,\\ c_n=\\left\\lfloor\\frac{ax_{n-r}+c_{n-1}}{b}\\right\\rfloor." }, { "math_id": 39, "text": "x_n=(b-1)-(ax_{n-1024}+c_{n-1})\\,\\bmod\\,b,\\ c_n=\\left\\lfloor\\frac{ax_{n-1024}+c_{n-1}}{b}\\right\\rfloor." }, { "math_id": 40, "text": "t = c_{n-1} + \\sum_1^r a_i x_{n-i},\n\\ x_n = a_0^{-1}t \\bmod b, \\ c_n = \\left\\lfloor\\frac{t - a_0 x_n}{b}\\right\\rfloor." }, { "math_id": 41, "text": "2^{131104}" } ]
https://en.wikipedia.org/wiki?curid=11339537
113424
Chi-squared distribution
Probability distribution and special case of gamma distribution In probability theory and statistics, the chi-squared distribution (also chi-square or formula_1-distribution) with formula_0 degrees of freedom is the distribution of a sum of the squares of formula_0 independent standard normal random variables. The chi-squared distribution formula_2 is a special case of the gamma distribution and the univariate Wishart distribution. Specifically if formula_3 then formula_4 (where formula_5 is the shape parameter and formula_6 the scale parameter of the gamma distribution) and formula_7. The scaled chi-squared distribution formula_8 is a reparametrization of the gamma distribution and the univariate Wishart distribution. Specifically if formula_9 then formula_10 and formula_11. The chi-squared distribution is one of the most widely used probability distributions in inferential statistics, notably in hypothesis testing and in the construction of confidence intervals. This distribution is sometimes called the central chi-squared distribution, a special case of the more general noncentral chi-squared distribution. The chi-squared distribution is used in the common chi-squared tests for goodness of fit of an observed distribution to a theoretical one, the independence of two criteria of classification of qualitative data, and in finding the confidence interval for estimating the population standard deviation of a normal distribution from a sample standard deviation. Many other statistical tests also use this distribution, such as Friedman's analysis of variance by ranks. Definitions. If "Z"1, ..., "Z""k" are independent, standard normal random variables, then the sum of their squares, formula_12 is distributed according to the chi-squared distribution with k degrees of freedom. This is usually denoted as formula_13 The chi-squared distribution has one parameter: a positive integer k that specifies the number of degrees of freedom (the number of random variables being summed, the "Z""i"s). Introduction. The chi-squared distribution is used primarily in hypothesis testing, and to a lesser extent for confidence intervals for the population variance when the underlying distribution is normal. Unlike more widely known distributions such as the normal distribution and the exponential distribution, the chi-squared distribution is not as often applied in the direct modeling of natural phenomena. It arises in several common hypothesis tests, such as the chi-squared tests of goodness of fit and of independence and the likelihood-ratio test. It is also a component of the definition of the "t"-distribution and the "F"-distribution used in "t"-tests, analysis of variance, and regression analysis. The primary reason for which the chi-squared distribution is extensively used in hypothesis testing is its relationship to the normal distribution. Many hypothesis tests use a test statistic, such as the "t"-statistic in a "t"-test. For these hypothesis tests, as the sample size, n, increases, the sampling distribution of the test statistic approaches the normal distribution (central limit theorem). Because the test statistic (such as t) is asymptotically normally distributed, provided the sample size is sufficiently large, the distribution used for hypothesis testing may be approximated by a normal distribution. Testing hypotheses using a normal distribution is well understood and relatively easy. The simplest chi-squared distribution is the square of a standard normal distribution. So wherever a normal distribution could be used for a hypothesis test, a chi-squared distribution could be used.
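The defining construction can be checked directly by simulation. The following short sketch (using NumPy and SciPy, with an arbitrarily chosen seed, sample size, and number of degrees of freedom) compares empirical quantiles of a sum of k squared standard normal variables with the corresponding chi-squared quantiles:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)          # arbitrary seed
k, n_samples = 5, 100_000               # degrees of freedom and sample size (arbitrary choices)

# Sum of squares of k independent standard normal variables
q = (rng.standard_normal((n_samples, k)) ** 2).sum(axis=1)

# The empirical quantiles should closely match the chi-squared(k) quantiles
for p in (0.25, 0.50, 0.90):
    print(p, np.quantile(q, p), stats.chi2.ppf(p, df=k))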
Suppose that formula_14 is a random variable sampled from the standard normal distribution, where the mean is formula_15 and the variance is formula_16: formula_17. Now, consider the random variable formula_18. The distribution of the random variable formula_19 is an example of a chi-squared distribution: formula_20. The subscript 1 indicates that this particular chi-squared distribution is constructed from only 1 standard normal distribution. A chi-squared distribution constructed by squaring a single standard normal distribution is said to have 1 degree of freedom. Thus, as the sample size for a hypothesis test increases, the distribution of the test statistic approaches a normal distribution. Just as extreme values of the normal distribution have low probability (and give small p-values), extreme values of the chi-squared distribution have low probability. An additional reason that the chi-squared distribution is widely used is that it turns up as the large sample distribution of generalized likelihood ratio tests (LRT). LRTs have several desirable properties; in particular, simple LRTs commonly provide the highest power to reject the null hypothesis (Neyman–Pearson lemma) and this leads also to optimality properties of generalised LRTs. However, the normal and chi-squared approximations are only valid asymptotically. For this reason, it is preferable to use the "t" distribution rather than the normal approximation or the chi-squared approximation for a small sample size. Similarly, in analyses of contingency tables, the chi-squared approximation will be poor for a small sample size, and it is preferable to use Fisher's exact test. Ramsey shows that the exact binomial test is always more powerful than the normal approximation. Lancaster shows the connections among the binomial, normal, and chi-squared distributions, as follows. De Moivre and Laplace established that a binomial distribution could be approximated by a normal distribution. Specifically they showed the asymptotic normality of the random variable formula_21 where formula_22 is the observed number of successes in formula_23 trials, where the probability of success is formula_24, and formula_25. Squaring both sides of the equation gives formula_26 Using formula_27, formula_28, and formula_25, this equation can be rewritten as formula_29 The expression on the right is of the form that Karl Pearson would generalize to the form formula_30 where formula_31 = Pearson's cumulative test statistic, which asymptotically approaches a formula_1 distribution; formula_32 = the number of observations of type formula_33; formula_34 = the expected (theoretical) frequency of type formula_33, asserted by the null hypothesis that the fraction of type formula_33 in the population is formula_35; and formula_36 = the number of cells in the table. In the case of a binomial outcome (flipping a coin), the binomial distribution may be approximated by a normal distribution (for sufficiently large formula_37). Because the square of a standard normal distribution is the chi-squared distribution with one degree of freedom, the probability of a result such as 1 heads in 10 trials can be approximated either by using the normal distribution directly, or the chi-squared distribution for the normalised, squared difference between observed and expected value. However, many problems involve more than the two possible outcomes of a binomial, and instead require 3 or more categories, which leads to the multinomial distribution. 
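As a concrete illustration of the equations above, the coin-flipping case just described can be computed directly. The following sketch (using SciPy, purely for illustration) evaluates Pearson's statistic for 1 observed head in 10 tosses of a fair coin and shows that it equals the square of the normal-approximation statistic:

import numpy as np
from scipy import stats

N, p = 10, 0.5                        # ten tosses of a fair coin (the example above)
m = 1                                 # observed number of heads
observed = np.array([m, N - m])       # heads, tails
expected = np.array([N * p, N * (1 - p)])

chi2_stat = ((observed - expected) ** 2 / expected).sum()   # Pearson's statistic, 6.4 here
z = (m - N * p) / np.sqrt(N * p * (1 - p))                  # normal-approximation statistic

print(chi2_stat, z ** 2)                  # identical values: 6.4
print(stats.chi2.sf(chi2_stat, df=1))     # approximate p-value from the chi-squared distribution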
Just as de Moivre and Laplace sought for and found the normal approximation to the binomial, Pearson sought for and found a degenerate multivariate normal approximation to the multinomial distribution (the numbers in each category add up to the total sample size, which is considered fixed). Pearson showed that the chi-squared distribution arose from such a multivariate normal approximation to the multinomial distribution, taking careful account of the statistical dependence (negative correlations) between numbers of observations in different categories. Probability density function. The probability density function (pdf) of the chi-squared distribution is formula_38 where formula_39 denotes the gamma function, which has closed-form values for integer formula_0. For derivations of the pdf in the cases of one, two and formula_0 degrees of freedom, see Proofs related to chi-squared distribution. Cumulative distribution function. Its cumulative distribution function is: formula_40 where formula_41 is the lower incomplete gamma function and formula_42 is the regularized gamma function. In a special case of formula_43 this function has the simple form: formula_44 which can be easily derived by integrating formula_45 directly. The integer recurrence of the gamma function makes it easy to compute formula_46 for other small, even formula_0. Tables of the chi-squared cumulative distribution function are widely available and the function is included in many spreadsheets and all statistical packages. Letting formula_47, Chernoff bounds on the lower and upper tails of the CDF may be obtained. For the cases when formula_48 (which include all of the cases when this CDF is less than half): formula_49 The tail bound for the cases when formula_50, similarly, is formula_51 For another approximation for the CDF modeled after the cube of a Gaussian, see under Noncentral chi-squared distribution. Properties. Cochran's theorem. The following is a special case of Cochran's theorem. Theorem. If formula_52 are independent identically distributed (i.i.d.), standard normal random variables, then formula_53 where formula_54 &lt;templatestyles src="Template:Hidden begin/styles.css"/&gt;[Proof] Proof. Let formula_55 be a vector of formula_37 independent normally distributed random variables, and formula_56 their average. Then formula_57 where formula_58 is the identity matrix and formula_59 the all ones vector. formula_60 has one eigenvector formula_61 with eigenvalue formula_15, and formula_62 eigenvectors formula_63 (all orthogonal to formula_64) with eigenvalue formula_16, which can be chosen so that formula_65 is an orthogonal matrix. Since also formula_66, we have formula_67 which proves the claim. Additivity. It follows from the definition of the chi-squared distribution that the sum of independent chi-squared variables is also chi-squared distributed. Specifically, if formula_68 are independent chi-squared variables with formula_69, formula_70 degrees of freedom, respectively, then formula_71 is chi-squared distributed with formula_72 degrees of freedom. Sample mean. The sample mean of formula_37 i.i.d. 
chi-squared variables of degree formula_0 is distributed according to a gamma distribution with shape formula_5 and scale formula_6 parameters: formula_73 Asymptotically, given that for a shape parameter formula_74 going to infinity, a Gamma distribution converges towards a normal distribution with expectation formula_75 and variance formula_76, the sample mean converges towards: formula_77 Note that we would have obtained the same result invoking instead the central limit theorem, noting that for each chi-squared variable of degree formula_0 the expectation is formula_78 , and its variance formula_79 (and hence the variance of the sample mean formula_80 being formula_81). Entropy. The differential entropy is given by formula_82 where formula_83 is the Digamma function. The chi-squared distribution is the maximum entropy probability distribution for a random variate formula_84 for which formula_85 and formula_86 are fixed. Since the chi-squared is in the family of gamma distributions, this can be derived by substituting appropriate values in the Expectation of the log moment of gamma. For derivation from more basic principles, see the derivation in moment-generating function of the sufficient statistic. Noncentral moments. The noncentral moments (raw moments) of a chi-squared distribution with formula_0 degrees of freedom are given by formula_87 Cumulants. The cumulants are readily obtained by a power series expansion of the logarithm of the characteristic function: formula_88 Concentration. The chi-squared distribution exhibits strong concentration around its mean. The standard Laurent-Massart bounds are: formula_89 formula_90 One consequence is that, if formula_91 is a gaussian random vector in formula_92, then as the dimension formula_37 grows, the squared length of the vector is concentrated tightly around formula_37 with a width formula_93:formula_94where the exponent formula_5 can be chosen as any value in formula_95. Asymptotic properties. By the central limit theorem, because the chi-squared distribution is the sum of formula_0 independent random variables with finite mean and variance, it converges to a normal distribution for large formula_0. For many practical purposes, for formula_96 the distribution is sufficiently close to a normal distribution, so the difference is ignorable. Specifically, if formula_97, then as formula_0 tends to infinity, the distribution of formula_98 tends to a standard normal distribution. However, convergence is slow as the skewness is formula_99 and the excess kurtosis is formula_100. The sampling distribution of formula_101 converges to normality much faster than the sampling distribution of formula_1, as the logarithmic transform removes much of the asymmetry. Other functions of the chi-squared distribution converge more rapidly to a normal distribution. Some examples are: *As a special case, if formula_115 then formula_116 has the chi-squared distribution formula_117 Related distributions. A chi-squared variable with formula_0 degrees of freedom is defined as the sum of the squares of formula_0 independent standard normal random variables. If formula_146 is a formula_0-dimensional Gaussian random vector with mean vector formula_147 and rank formula_0 covariance matrix formula_148, then formula_149 is chi-squared distributed with formula_0 degrees of freedom. 
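This last relationship is easy to verify numerically. The following sketch (using NumPy and SciPy, with an arbitrarily chosen mean vector and covariance matrix) samples a Gaussian random vector and checks that the quadratic form behaves like a chi-squared variable with the corresponding number of degrees of freedom:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
k, n_samples = 3, 200_000                   # dimension and sample size (arbitrary choices)
mu = np.array([1.0, -2.0, 0.5])             # example mean vector
A = rng.standard_normal((k, k))
C = A @ A.T + k * np.eye(k)                 # a full-rank covariance matrix

Y = rng.multivariate_normal(mu, C, size=n_samples)
d = Y - mu
X = np.einsum('ij,jk,ik->i', d, np.linalg.inv(C), d)   # (Y - mu)^T C^{-1} (Y - mu)

print(X.mean())                                           # close to k = 3
print(np.quantile(X, 0.95), stats.chi2.ppf(0.95, df=k))   # both close to 7.81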
The sum of squares of statistically independent unit-variance Gaussian variables which do "not" have mean zero yields a generalization of the chi-squared distribution called the noncentral chi-squared distribution. If formula_146 is a vector of formula_0 i.i.d. standard normal random variables and formula_150 is a formula_151 symmetric, idempotent matrix with rank formula_152, then the quadratic form formula_153 is chi-square distributed with formula_152 degrees of freedom. If formula_154 is a formula_155 positive-semidefinite covariance matrix with strictly positive diagonal entries, then for formula_156 and formula_157 a random formula_24-vector independent of formula_84 such that formula_158 and formula_159 then formula_160 The chi-squared distribution is also naturally related to other distributions arising from the Gaussian. In particular, Generalizations. The chi-squared distribution is obtained as the sum of the squares of k independent, zero-mean, unit-variance Gaussian random variables. Generalizations of this distribution can be obtained by summing the squares of other types of Gaussian random variables. Several such distributions are described below. Linear combination. If formula_169 are chi square random variables and formula_170, then the distribution of formula_171 is a special case of a Generalized Chi-squared Distribution. A closed expression for this distribution is not known. It may be, however, approximated efficiently using the property of characteristic functions of chi-square random variables. Chi-squared distributions. Noncentral chi-squared distribution. The noncentral chi-squared distribution is obtained from the sum of the squares of independent Gaussian random variables having unit variance and "nonzero" means. Generalized chi-squared distribution. The generalized chi-squared distribution is obtained from the quadratic form z'Az where z is a zero-mean Gaussian vector having an arbitrary covariance matrix, and A is an arbitrary matrix. Gamma, exponential, and related distributions. The chi-squared distribution formula_172 is a special case of the gamma distribution, in that formula_173 using the rate parameterization of the gamma distribution (or formula_174 using the scale parameterization of the gamma distribution) where k is an integer. Because the exponential distribution is also a special case of the gamma distribution, we also have that if formula_175, then formula_176 is an exponential distribution. The Erlang distribution is also a special case of the gamma distribution and thus we also have that if formula_177 with even formula_0, then formula_84 is Erlang distributed with shape parameter formula_178 and scale parameter formula_179. Occurrence and applications. The chi-squared distribution has numerous applications in inferential statistics, for instance in chi-squared tests and in estimating variances. It enters the problem of estimating the mean of a normally distributed population and the problem of estimating the slope of a regression line via its role in Student's t-distribution. It enters all analysis of variance problems via its role in the F-distribution, which is the distribution of the ratio of two independent chi-squared random variables, each divided by their respective degrees of freedom. Following are some of the most common situations in which the chi-squared distribution arises from a Gaussian-distributed sample. The chi-squared distribution is also often encountered in magnetic resonance imaging. Computational methods. 
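In practice these quantities are computed with statistical software rather than read from printed tables. For example, with SciPy (one library choice among several) the p-values and quantiles discussed in the next subsection can be evaluated directly:

from scipy import stats

# p-value of an observed statistic: the upper tail (survival function) of the CDF
print(stats.chi2.sf(3.84, df=1))    # approximately 0.05

# Quantile function (inverse CDF), e.g. the value quoted for p = 0.05 and 7 degrees of freedom
print(stats.chi2.ppf(0.05, df=7))   # approximately 2.167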
Table of "χ"2 values vs p-values. The formula_185-value is the probability of observing a test statistic "at least" as extreme in a chi-squared distribution. Accordingly, since the cumulative distribution function (CDF) for the appropriate degrees of freedom "(df)" gives the probability of having obtained a value "less extreme" than this point, subtracting the CDF value from 1 gives the "p"-value. A low "p"-value, below the chosen significance level, indicates statistical significance, i.e., sufficient evidence to reject the null hypothesis. A significance level of 0.05 is often used as the cutoff between significant and non-significant results. The table below gives a number of "p"-values matching to formula_186 for the first 10 degrees of freedom. These values can be calculated evaluating the quantile function (also known as "inverse CDF" or "ICDF") of the chi-squared distribution; e. g., the χ2 ICDF for "p" = 0.05 and df = 7 yields 2.1673 ≈ 2.17 as in the table above, noticing that 1 – "p" is the "p"-value from the table. History. This distribution was first described by the German geodesist and statistician Friedrich Robert Helmert in papers of 1875–6, where he computed the sampling distribution of the sample variance of a normal population. Thus in German this was traditionally known as the "Helmert'sche" ("Helmertian") or "Helmert distribution". The distribution was independently rediscovered by the English mathematician Karl Pearson in the context of goodness of fit, for which he developed his Pearson's chi-squared test, published in 1900, with computed table of values published in , collected in . The name "chi-square" ultimately derives from Pearson's shorthand for the exponent in a multivariate normal distribution with the Greek letter Chi, writing −½χ2 for what would appear in modern notation as −½xTΣ−1x (Σ being the covariance matrix). The idea of a family of "chi-squared distributions", however, is not due to Pearson but arose as a further development due to Fisher in the 1920s. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Further reading. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "\\chi^2" }, { "math_id": 2, "text": " \\chi^2_k " }, { "math_id": 3, "text": " X \\sim \\chi^2_k " }, { "math_id": 4, "text": " X \\sim \\text{Gamma}(\\alpha=\\frac{k}{2}, \\theta=2) " }, { "math_id": 5, "text": "\\alpha" }, { "math_id": 6, "text": "\\theta" }, { "math_id": 7, "text": " X \\sim \\text{W}_1(1,k) " }, { "math_id": 8, "text": "s^2 \\chi^2_k " }, { "math_id": 9, "text": " X \\sim s^2 \\chi^2_k " }, { "math_id": 10, "text": " X \\sim \\text{Gamma}(\\alpha=\\frac{k}{2}, \\theta=2 s^2) " }, { "math_id": 11, "text": " X \\sim \\text{W}_1(s^2,k) " }, { "math_id": 12, "text": "Q\\ = \\sum_{i=1}^k Z_i^2," }, { "math_id": 13, "text": " Q\\ \\sim\\ \\chi^2(k)\\ \\ \\text{or}\\ \\ Q\\ \\sim\\ \\chi^2_k." }, { "math_id": 14, "text": "Z" }, { "math_id": 15, "text": "0" }, { "math_id": 16, "text": "1" }, { "math_id": 17, "text": "Z \\sim N(0,1)" }, { "math_id": 18, "text": "Q = Z^2" }, { "math_id": 19, "text": "Q" }, { "math_id": 20, "text": "\\ Q\\ \\sim\\ \\chi^2_1" }, { "math_id": 21, "text": " \\chi = {m - Np \\over \\sqrt{Npq}} " }, { "math_id": 22, "text": "m" }, { "math_id": 23, "text": "N" }, { "math_id": 24, "text": "p" }, { "math_id": 25, "text": "q = 1 - p" }, { "math_id": 26, "text": " \\chi^2 = {(m - Np)^2\\over Npq} " }, { "math_id": 27, "text": "N = Np + N(1 - p)" }, { "math_id": 28, "text": "N = m + (N - m)" }, { "math_id": 29, "text": " \\chi^2 = {(m - Np)^2\\over Np} + {(N - m - Nq)^2\\over Nq} " }, { "math_id": 30, "text": " \\chi^2 = \\sum_{i=1}^n \\frac{(O_i - E_i)^2}{E_i} " }, { "math_id": 31, "text": " \\chi^2" }, { "math_id": 32, "text": "O_i" }, { "math_id": 33, "text": "i" }, { "math_id": 34, "text": "E_i = N p_i" }, { "math_id": 35, "text": " p_i" }, { "math_id": 36, "text": "n" }, { "math_id": 37, "text": "n" }, { "math_id": 38, "text": "\nf(x;\\,k) =\n\\begin{cases}\n \\dfrac{x^{k/2 -1} e^{-x/2}}{2^{k/2} \\Gamma\\left(\\frac k 2 \\right)}, & x > 0; \\\\ 0, & \\text{otherwise}.\n\\end{cases}\n" }, { "math_id": 39, "text": "\\Gamma(k/2)" }, { "math_id": 40, "text": "\n F(x;\\,k) = \\frac{\\gamma(\\frac{k}{2},\\,\\frac{x}{2})}{\\Gamma(\\frac{k}{2})} = P\\left(\\frac{k}{2},\\,\\frac{x}{2}\\right),\n " }, { "math_id": 41, "text": "\\gamma(s,t)" }, { "math_id": 42, "text": "P(s,t)" }, { "math_id": 43, "text": "k = 2" }, { "math_id": 44, "text": "\n F(x;\\,2) = 1 - e^{-x/2}\n " }, { "math_id": 45, "text": "f(x;\\,2)=\\frac{1}{2}e^{-x/2}" }, { "math_id": 46, "text": "F(x;\\,k)" }, { "math_id": 47, "text": "z \\equiv x/k" }, { "math_id": 48, "text": "0 < z < 1" }, { "math_id": 49, "text": " F(z k;\\,k) \\leq (z e^{1-z})^{k/2}." }, { "math_id": 50, "text": "z > 1" }, { "math_id": 51, "text": "\n 1-F(z k;\\,k) \\leq (z e^{1-z})^{k/2}.\n " }, { "math_id": 52, "text": "Z_1,...,Z_n" }, { "math_id": 53, "text": "\\sum_{t=1}^n(Z_t - \\bar Z)^2 \\sim \\chi^2_{n-1}" }, { "math_id": 54, "text": "\\bar Z = \\frac{1}{n} \\sum_{t=1}^n Z_t." 
}, { "math_id": 55, "text": "Z\\sim\\mathcal{N}(\\bar 0,1\\!\\!1)" }, { "math_id": 56, "text": "\\bar Z" }, { "math_id": 57, "text": "\n \\sum_{t=1}^n(Z_t-\\bar Z)^2 ~=~ \\sum_{t=1}^n Z_t^2 -n\\bar Z^2 ~=~ Z^\\top[1\\!\\!1 -{\\textstyle\\frac1n}\\bar 1\\bar 1^\\top]Z ~=:~ Z^\\top\\!M Z\n" }, { "math_id": 58, "text": "1\\!\\!1" }, { "math_id": 59, "text": "\\bar 1" }, { "math_id": 60, "text": "M" }, { "math_id": 61, "text": "b_1:={\\textstyle\\frac{1}{\\sqrt{n}}} \\bar 1" }, { "math_id": 62, "text": "n-1" }, { "math_id": 63, "text": "b_2,...,b_n" }, { "math_id": 64, "text": "b_1" }, { "math_id": 65, "text": "Q:=(b_1,...,b_n)" }, { "math_id": 66, "text": "X:=Q^\\top\\!Z\\sim\\mathcal{N}(\\bar 0,Q^\\top\\!1\\!\\!1 Q) =\\mathcal{N}(\\bar 0,1\\!\\!1)" }, { "math_id": 67, "text": "\n \\sum_{t=1}^n(Z_t-\\bar Z)^2 ~=~ Z^\\top\\!M Z ~=~ X^\\top\\!Q^\\top\\!M Q X ~=~ X_2^2+...+X_n^2 ~\\sim~ \\chi^2_{n-1},\n" }, { "math_id": 68, "text": "X_i,i=\\overline{1,n}" }, { "math_id": 69, "text": "k_i" }, { "math_id": 70, "text": "i=\\overline{1,n} " }, { "math_id": 71, "text": "Y = X_1 + \\cdots + X_n" }, { "math_id": 72, "text": "k_1 + \\cdots + k_n" }, { "math_id": 73, "text": " \\overline X = \\frac{1}{n} \\sum_{i=1}^n X_i \\sim \\operatorname{Gamma}\\left(\\alpha=n\\, k /2, \\theta= 2/n \\right) \\qquad \\text{where } X_i \\sim \\chi^2(k)" }, { "math_id": 74, "text": " \\alpha " }, { "math_id": 75, "text": " \\mu = \\alpha\\cdot \\theta " }, { "math_id": 76, "text": " \\sigma^2 = \\alpha\\, \\theta^2 " }, { "math_id": 77, "text": " \\overline X \\xrightarrow{n \\to \\infty} N(\\mu = k, \\sigma^2 = 2\\, k /n ) " }, { "math_id": 78, "text": " k " }, { "math_id": 79, "text": " 2\\,k " }, { "math_id": 80, "text": " \\overline{X}" }, { "math_id": 81, "text": " \\sigma^2 = \\frac{2k}{n} " }, { "math_id": 82, "text": "\n h = \\int_{0}^\\infty f(x;\\,k)\\ln f(x;\\,k) \\, dx\n = \\frac k 2 + \\ln \\left[2\\,\\Gamma \\left(\\frac k 2 \\right)\\right] + \\left(1-\\frac k 2 \\right)\\, \\psi\\!\\left(\\frac k 2 \\right),\n " }, { "math_id": 83, "text": "\\psi(x)" }, { "math_id": 84, "text": "X" }, { "math_id": 85, "text": "\\operatorname{E}(X)=k" }, { "math_id": 86, "text": "\\operatorname{E}(\\ln(X))=\\psi(k/2)+\\ln(2)" }, { "math_id": 87, "text": "\n \\operatorname{E}(X^m) = k (k+2) (k+4) \\cdots (k+2m-2) = 2^m \\frac{\\Gamma\\left(m+\\frac{k}{2}\\right)}{\\Gamma\\left(\\frac{k}{2}\\right)}.\n " }, { "math_id": 88, "text": "\\kappa_n = 2^{n-1}(n-1)!\\,k" }, { "math_id": 89, "text": "\\operatorname{P}(X - k \\ge 2 \\sqrt{k x} + 2x) \\le \\exp(-x)" }, { "math_id": 90, "text": "\\operatorname{P}(k - X \\ge 2 \\sqrt{k x}) \\le \\exp(-x)" }, { "math_id": 91, "text": "v \\sim N(0, 1)^n" }, { "math_id": 92, "text": "\\R^n" }, { "math_id": 93, "text": "n^{1/2 + \\alpha}" }, { "math_id": 94, "text": "Pr(\\|v\\|^2 \\in [n - 2n^{1/2+\\alpha}, n + 2n^{1/2+\\alpha} + 2n^{\\alpha}]) \\geq 1-e^{-n^\\alpha}" }, { "math_id": 95, "text": "(0, 1/2)" }, { "math_id": 96, "text": "k>50" }, { "math_id": 97, "text": "X \\sim \\chi^2(k)" }, { "math_id": 98, "text": "(X-k)/\\sqrt{2k}" }, { "math_id": 99, "text": "\\sqrt{8/k}" }, { "math_id": 100, "text": "12/k" }, { "math_id": 101, "text": "\\ln(\\chi^2)" }, { "math_id": 102, "text": "\\sqrt{2X}" }, { "math_id": 103, "text": "\\sqrt{2k-1}" }, { "math_id": 104, "text": "\\sqrt[3]{X/k}" }, { "math_id": 105, "text": " 1-\\frac{2}{9k}" }, { "math_id": 106, "text": "\\frac{2}{9k} ." 
}, { "math_id": 107, "text": "k\\bigg(1-\\frac{2}{9k}\\bigg)^3\\;" }, { "math_id": 108, "text": "k\\to\\infty" }, { "math_id": 109, "text": " (\\chi^2_k-k)/\\sqrt{2k} ~ \\xrightarrow{d}\\ N(0,1) \\," }, { "math_id": 110, "text": " \\chi_k^2 \\sim {\\chi'}^2_k(0)" }, { "math_id": 111, "text": " \\lambda = 0 " }, { "math_id": 112, "text": "Y \\sim \\mathrm{F}(\\nu_1, \\nu_2)" }, { "math_id": 113, "text": "X = \\lim_{\\nu_2 \\to \\infty} \\nu_1 Y" }, { "math_id": 114, "text": "\\chi^2_{\\nu_{1}}" }, { "math_id": 115, "text": "Y \\sim \\mathrm{F}(1, \\nu_2)\\," }, { "math_id": 116, "text": "X = \\lim_{\\nu_2 \\to \\infty} Y\\," }, { "math_id": 117, "text": "\\chi^2_{1}" }, { "math_id": 118, "text": " \\|\\boldsymbol{N}_{i=1,\\ldots,k} (0,1) \\|^2 \\sim \\chi^2_k " }, { "math_id": 119, "text": "X \\sim \\chi^2_\\nu\\," }, { "math_id": 120, "text": "c>0 \\," }, { "math_id": 121, "text": "cX \\sim \\Gamma(k=\\nu/2, \\theta=2c)\\," }, { "math_id": 122, "text": "X \\sim \\chi^2_k" }, { "math_id": 123, "text": "\\sqrt{X} \\sim \\chi_k" }, { "math_id": 124, "text": "X \\sim \\chi^2_2" }, { "math_id": 125, "text": "X \\sim \\operatorname{Exp}(1/2)" }, { "math_id": 126, "text": "X \\sim \\chi^2_{2k}" }, { "math_id": 127, "text": "X \\sim \\operatorname{Erlang}(k, 1/2)" }, { "math_id": 128, "text": " X \\sim \\operatorname{Erlang}(k,\\lambda)" }, { "math_id": 129, "text": " 2\\lambda X\\sim \\chi^2_{2k}" }, { "math_id": 130, "text": "X \\sim \\operatorname{Rayleigh}(1)\\," }, { "math_id": 131, "text": "X^2 \\sim \\chi^2_2\\," }, { "math_id": 132, "text": "X \\sim \\operatorname{Maxwell}(1)\\," }, { "math_id": 133, "text": "X^2 \\sim \\chi^2_3\\," }, { "math_id": 134, "text": "X \\sim \\chi^2_\\nu" }, { "math_id": 135, "text": "\\tfrac{1}{X} \\sim \\operatorname{Inv-}\\chi^2_\\nu\\, " }, { "math_id": 136, "text": "X \\sim \\chi^2_{\\nu_1}\\," }, { "math_id": 137, "text": "Y \\sim \\chi^2_{\\nu_2}\\," }, { "math_id": 138, "text": "\\tfrac{X}{X+Y} \\sim \\operatorname{Beta}(\\tfrac{\\nu_1}{2}, \\tfrac{\\nu_2}{2})\\," }, { "math_id": 139, "text": " X \\sim \\operatorname{U}(0,1)\\, " }, { "math_id": 140, "text": " -2\\log(X) \\sim \\chi^2_2\\," }, { "math_id": 141, "text": "X_i \\sim \\operatorname{Laplace}(\\mu,\\beta)\\," }, { "math_id": 142, "text": "\\sum_{i=1}^n \\frac{2 |X_i-\\mu|}{\\beta} \\sim \\chi^2_{2n}\\," }, { "math_id": 143, "text": "X_i" }, { "math_id": 144, "text": "\\mu,\\alpha,\\beta" }, { "math_id": 145, "text": "\\sum_{i=1}^n \\frac{2 |X_i-\\mu|^\\beta}{\\alpha} \\sim \\chi^2_{2n/\\beta}\\," }, { "math_id": 146, "text": "Y" }, { "math_id": 147, "text": "\\mu" }, { "math_id": 148, "text": "C" }, { "math_id": 149, "text": "X = (Y-\\mu )^{T}C^{-1}(Y-\\mu)" }, { "math_id": 150, "text": "A" }, { "math_id": 151, "text": "k\\times k" }, { "math_id": 152, "text": "k-n" }, { "math_id": 153, "text": "Y^TAY" }, { "math_id": 154, "text": "\\Sigma" }, { "math_id": 155, "text": "p\\times p" }, { "math_id": 156, "text": "X\\sim N(0,\\Sigma)" }, { "math_id": 157, "text": "w" }, { "math_id": 158, "text": "w_1+\\cdots+w_p=1" }, { "math_id": 159, "text": "w_i\\geq 0, i=1,\\ldots,p," }, { "math_id": 160, "text": "\\frac{1}{\\left(\\frac{w_1}{X_1},\\ldots,\\frac{w_p}{X_p}\\right)\\Sigma\\left(\\frac{w_1}{X_1},\\ldots,\\frac{w_p}{X_p}\\right)^\\top} \\sim \\chi_1^2." 
}, { "math_id": 161, "text": "Y \\sim F(k_1, k_2)" }, { "math_id": 162, "text": "Y = \\frac{ {X_1}/{k_1} }{ {X_2}/{k_2} }" }, { "math_id": 163, "text": "X_1 \\sim \\chi^2_{k_1}" }, { "math_id": 164, "text": "X_2 \\sim \\chi^2_{k_2}" }, { "math_id": 165, "text": "X_1 + X_2\\sim \\chi^2_{k_1+k_2}" }, { "math_id": 166, "text": "X_1" }, { "math_id": 167, "text": "X_2" }, { "math_id": 168, "text": "X_1+X_2" }, { "math_id": 169, "text": "X_1,\\ldots,X_n" }, { "math_id": 170, "text": "a_1,\\ldots,a_n\\in\\mathbb{R}_{>0}" }, { "math_id": 171, "text": "X=\\sum_{i=1}^n a_i X_i" }, { "math_id": 172, "text": "X \\sim \\chi_k^2" }, { "math_id": 173, "text": "X \\sim \\Gamma \\left(\\frac{k}2,\\frac{1}2\\right)" }, { "math_id": 174, "text": "X \\sim \\Gamma \\left(\\frac{k}2,2 \\right)" }, { "math_id": 175, "text": "X \\sim \\chi_2^2" }, { "math_id": 176, "text": "X\\sim \\operatorname{Exp}\\left(\\frac 1 2\\right)" }, { "math_id": 177, "text": "X \\sim\\chi_k^2" }, { "math_id": 178, "text": "k/2" }, { "math_id": 179, "text": "1/2" }, { "math_id": 180, "text": "X_1, ..., X_n" }, { "math_id": 181, "text": "N(\\mu, \\sigma^2)" }, { "math_id": 182, "text": "\\sum_{i=1}^n(X_i - \\overline{X_i})^2 \\sim \\sigma^2 \\chi^2_{n-1}" }, { "math_id": 183, "text": "\\overline{X_i} = \\frac{1}{n} \\sum_{i=1}^n X_i" }, { "math_id": 184, "text": "X_i \\sim N(\\mu_i, \\sigma^2_i), i= 1, \\ldots, k" }, { "math_id": 185, "text": "p" }, { "math_id": 186, "text": " \\chi^2 " }, { "math_id": 187, "text": "(0, \\infty)" }, { "math_id": 188, "text": " f(x)= \\frac{2\\beta^{\\alpha/2} x^{\\alpha-1} \\exp(-\\beta x^2+ \\gamma x )}{\\Psi{\\left(\\frac{\\alpha}{2}, \\frac{ \\gamma}{\\sqrt{\\beta}}\\right)}}" }, { "math_id": 189, "text": "\\Psi(\\alpha,z)={}_1\\Psi_1\\left(\\begin{matrix}\\left(\\alpha,\\frac{1}{2}\\right)\\\\(1,0)\\end{matrix};z \\right)" } ]
https://en.wikipedia.org/wiki?curid=113424
11344467
Quillen adjunction
A special kind of adjunction between categories named after Daniel Quillen In homotopy theory, a branch of mathematics, a Quillen adjunction between two closed model categories C and D is a special kind of adjunction between categories that induces an adjunction between the homotopy categories Ho(C) and Ho(D) via the total derived functor construction. Quillen adjunctions are named in honor of the mathematician Daniel Quillen. Formal definition. Given two closed model categories C and D, a Quillen adjunction is a pair ("F", "G"): C formula_0 D of adjoint functors with "F" left adjoint to "G" such that "F" preserves cofibrations and trivial cofibrations or, equivalently by the closed model axioms, such that "G" preserves fibrations and trivial fibrations. In such an adjunction "F" is called the left Quillen functor and "G" is called the right Quillen functor. Properties. It is a consequence of the axioms that a left (right) Quillen functor preserves weak equivalences between cofibrant (fibrant) objects. The total derived functor theorem of Quillen says that the total left derived functor L"F": Ho(C) → Ho(D) is a left adjoint to the total right derived functor R"G": Ho(D) → Ho(C). This adjunction (L"F", R"G") is called the derived adjunction. If ("F", "G") is a Quillen adjunction as above such that "F"("c") → "d" with "c" cofibrant and "d" fibrant is a weak equivalence in D if and only if "c" → "G"("d") is a weak equivalence in C, then it is called a Quillen equivalence of the closed model categories C and D. In this case the derived adjunction is an adjoint equivalence of categories, so that L"F"("c") → "d" is an isomorphism in Ho(D) if and only if "c" → R"G"("d") is an isomorphism in Ho(C).
[ { "math_id": 0, "text": "\\leftrightarrows" } ]
https://en.wikipedia.org/wiki?curid=11344467
1134614
Arnoldi iteration
Iterative method for approximating eigenvectors In numerical linear algebra, the Arnoldi iteration is an eigenvalue algorithm and an important example of an iterative method. Arnoldi finds an approximation to the eigenvalues and eigenvectors of general (possibly non-Hermitian) matrices by constructing an orthonormal basis of the Krylov subspace, which makes it particularly useful when dealing with large sparse matrices. The Arnoldi method belongs to a class of linear algebra algorithms that give a partial result after a small number of iterations, in contrast to so-called "direct methods" which must complete to give any useful results (see for example, Householder transformation). The partial result in this case is the first few vectors of the basis the algorithm is building. When applied to Hermitian matrices it reduces to the Lanczos algorithm. The Arnoldi iteration was invented by W. E. Arnoldi in 1951. Krylov subspaces and the power iteration. An intuitive method for finding the largest (in absolute value) eigenvalue of a given "m" × "m" matrix formula_0 is the power iteration: starting with an arbitrary initial vector b, calculate "Ab", "A"^2"b", "A"^3"b", ... normalizing the result after every application of the matrix "A". This sequence converges to the eigenvector corresponding to the eigenvalue with the largest absolute value, formula_1. However, much potentially useful computation is wasted by using only the final result, formula_2. This suggests that instead, we form the so-called "Krylov matrix": formula_3 The columns of this matrix are not in general orthogonal, but we can extract an orthogonal basis, via a method such as Gram–Schmidt orthogonalization. The resulting set of vectors is thus an orthogonal basis of the "Krylov subspace", formula_4. We may expect the vectors of this basis to span good approximations of the eigenvectors corresponding to the formula_5 largest eigenvalues, for the same reason that formula_2 approximates the dominant eigenvector. The Arnoldi iteration. The Arnoldi iteration uses the modified Gram–Schmidt process to produce a sequence of orthonormal vectors, "q"1, "q"2, "q"3, ..., called the "Arnoldi vectors", such that for every "n", the vectors "q"1, ..., "q""n" span the Krylov subspace formula_6. Explicitly, the algorithm is as follows:

Start with an arbitrary vector "q"1 with norm 1.
Repeat for "k" = 2, 3, ...
    "q""k" := "A" "q""k"−1
    for "j" from 1 to "k" − 1
        "h""j","k"−1 := "q""j"* "q""k"
        "q""k" := "q""k" − "h""j","k"−1 "q""j"
    "h""k","k"−1 := ‖"q""k"‖
    "q""k" := "q""k" / "h""k","k"−1

The "j"-loop projects out the component of formula_7 in the directions of formula_8. This ensures the orthogonality of all the generated vectors. The algorithm breaks down when "q""k" is the zero vector. This happens when the minimal polynomial of "A" is of degree "k". In most applications of the Arnoldi iteration, including the eigenvalue algorithm below and GMRES, the algorithm has converged at this point. Every step of the "k"-loop takes one matrix-vector product and approximately 4"mk" floating point operations. In the programming language Python with support of the NumPy library:

import numpy as np

def arnoldi_iteration(A, b, n: int):
    """Compute a basis of the (n + 1)-Krylov subspace of the matrix A.

    This is the space spanned by the vectors {b, Ab, ..., A^n b}.

    Parameters
    ----------
    A : array_like
        An m × m array.
    b : array_like
        Initial vector (length m).
    n : int
        One less than the dimension of the Krylov subspace, or equivalently the
        *degree* of the Krylov space. Must be >= 1.

    Returns
    -------
    Q : numpy.array
        An m x (n + 1) array, where the columns are an orthonormal basis of the
        Krylov subspace.
    h : numpy.array
        An (n + 1) x n array. The projection of A onto the basis Q. It is upper
        Hessenberg.
    """
    eps = 1e-12
    h = np.zeros((n + 1, n))
    Q = np.zeros((A.shape[0], n + 1))
    # Normalize the input vector
    Q[:, 0] = b / np.linalg.norm(b, 2)  # Use it as the first Krylov vector
    for k in range(1, n + 1):
        v = np.dot(A, Q[:, k - 1])  # Generate a new candidate vector
        for j in range(k):  # Subtract the projections on previous vectors
            h[j, k - 1] = np.dot(Q[:, j].conj(), v)
            v = v - h[j, k - 1] * Q[:, j]
        h[k, k - 1] = np.linalg.norm(v, 2)
        if h[k, k - 1] > eps:  # Add the produced vector to the list, unless
            Q[:, k] = v / h[k, k - 1]
        else:  # If that happens, stop iterating.
            return Q, h
    return Q, h
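For illustration, the function above can be exercised on a small test problem; the matrix, seed, and subspace size below are arbitrary choices. The Ritz values discussed in the eigenvalue section further down are obtained from the square upper-Hessenberg block of "h":

import numpy as np

rng = np.random.default_rng(0)
m = 100
A = np.diag(np.arange(1.0, m + 1))   # test matrix with known eigenvalues 1, 2, ..., 100
b = rng.standard_normal(m)

Q, h = arnoldi_iteration(A, b, 20)
ritz_values = np.linalg.eigvals(h[:-1, :])   # eigenvalues of the square Hessenberg block
print(np.sort(ritz_values.real)[-3:])        # the largest Ritz values approximate the largest eigenvalues of A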
Properties of the Arnoldi iteration. Let "Q""n" denote the "m"-by-"n" matrix formed by the first "n" Arnoldi vectors "q"1, "q"2, ..., "q""n", and let "H""n" be the (upper Hessenberg) matrix formed by the numbers "h""j","k" computed by the algorithm: formula_9 The orthogonalization method has to be specifically chosen such that the lower Arnoldi/Krylov components are removed from higher Krylov vectors. As formula_10 can be expressed in terms of "q"1, ..., "q""i"+1 by construction, they are orthogonal to "q""i"+2, ..., "q""n". We then have formula_11 The matrix "H""n" can be viewed as "A" in the subspace formula_6 with the Arnoldi vectors as an orthogonal basis; "A" is orthogonally projected onto formula_6. The matrix "H""n" can be characterized by the following optimality condition. The characteristic polynomial of "H""n" minimizes ||"p"("A")"q"1||2 among all monic polynomials of degree "n". This optimality problem has a unique solution if and only if the Arnoldi iteration does not break down. The relation between the "Q" matrices in subsequent iterations is given by formula_12 where formula_13 is an ("n"+1)-by-"n" matrix formed by adding an extra row to "H""n". Finding eigenvalues with the Arnoldi iteration. The idea of the Arnoldi iteration as an eigenvalue algorithm is to compute the eigenvalues in the Krylov subspace. The eigenvalues of "H""n" are called the "Ritz eigenvalues". Since "H""n" is a Hessenberg matrix of modest size, its eigenvalues can be computed efficiently, for instance with the QR algorithm or, somewhat related, Francis' algorithm. Also Francis' algorithm itself can be considered to be related to power iterations, operating on nested Krylov subspaces. In fact, the most basic form of Francis' algorithm appears to be to choose "b" equal to "Ae"1 and to extend "n" to the full dimension of "A". Improved versions include one or more shifts, and higher powers of "A" may be applied in a single step. This is an example of the Rayleigh-Ritz method. It is often observed in practice that some of the Ritz eigenvalues converge to eigenvalues of "A". Since "H""n" is "n"-by-"n", it has at most "n" eigenvalues, and not all eigenvalues of "A" can be approximated. Typically, the Ritz eigenvalues converge to the largest eigenvalues of "A". To get the smallest eigenvalues of "A", the inverse (operation) of "A" should be used instead. This can be related to the characterization of "H""n" as the matrix whose characteristic polynomial minimizes ||"p"("A")"q"1|| in the following way. A good way to get "p"("A") small is to choose the polynomial "p" such that "p"("x") is small whenever "x" is an eigenvalue of "A".
Hence, the zeros of "p" (and thus the Ritz eigenvalues) will be close to the eigenvalues of "A". However, the details are not fully understood yet. This is in contrast to the case where "A" is Hermitian. In that situation, the Arnoldi iteration becomes the Lanczos iteration, for which the theory is more complete. Restarted Arnoldi iteration. Due to practical storage considerations, common implementations of Arnoldi methods typically restart after a fixed number of iterations. One approach is the Implicitly Restarted Arnoldi Method (IRAM) by Lehoucq and Sorensen, which was popularized in the free and open source software package ARPACK. Another approach is the Krylov-Schur Algorithm by G. W. Stewart, which is more stable and simpler to implement than IRAM. See also. The generalized minimal residual method (GMRES) is a method for solving "Ax" = "b" based on Arnoldi iteration. References.
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "\\lambda_{1}" }, { "math_id": 2, "text": "A^{n-1}b" }, { "math_id": 3, "text": "K_{n} = \\begin{bmatrix}b & Ab & A^{2}b & \\cdots & A^{n-1}b \\end{bmatrix}." }, { "math_id": 4, "text": "\\mathcal{K}_{n}" }, { "math_id": 5, "text": "n" }, { "math_id": 6, "text": "\\mathcal{K}_n" }, { "math_id": 7, "text": "q_k" }, { "math_id": 8, "text": "q_1,\\dots,q_{k-1}" }, { "math_id": 9, "text": " H_n = Q_n^* A Q_n. " }, { "math_id": 10, "text": " A q_i " }, { "math_id": 11, "text": " H_n = \\begin{bmatrix}\n h_{1,1} & h_{1,2} & h_{1,3} & \\cdots & h_{1,n} \\\\\n h_{2,1} & h_{2,2} & h_{2,3} & \\cdots & h_{2,n} \\\\\n 0 & h_{3,2} & h_{3,3} & \\cdots & h_{3,n} \\\\\n \\vdots & \\ddots & \\ddots & \\ddots & \\vdots \\\\\n 0 & \\cdots & 0 & h_{n,n-1} & h_{n,n}\n\\end{bmatrix}. " }, { "math_id": 12, "text": " A Q_n = Q_{n+1} \\tilde{H}_n " }, { "math_id": 13, "text": " \\tilde{H}_n = \\begin{bmatrix}\n h_{1,1} & h_{1,2} & h_{1,3} & \\cdots & h_{1,n} \\\\\n h_{2,1} & h_{2,2} & h_{2,3} & \\cdots & h_{2,n} \\\\\n 0 & h_{3,2} & h_{3,3} & \\cdots & h_{3,n} \\\\\n \\vdots & \\ddots & \\ddots & \\ddots & \\vdots \\\\\n \\vdots & & 0 & h_{n,n-1} & h_{n,n} \\\\\n 0 & \\cdots & \\cdots & 0 & h_{n+1,n}\n\\end{bmatrix} " } ]
https://en.wikipedia.org/wiki?curid=1134614
1134659
Block code
Family of error-correcting codes that encode data in blocks In coding theory, block codes are a large and important family of error-correcting codes that encode data in blocks. There is a vast number of examples for block codes, many of which have a wide range of practical applications. The abstract definition of block codes is conceptually useful because it allows coding theorists, mathematicians, and computer scientists to study the limitations of "all" block codes in a unified way. Such limitations often take the form of "bounds" that relate different parameters of the block code to each other, such as its rate and its ability to detect and correct errors. Examples of block codes are Reed–Solomon codes, Hamming codes, Hadamard codes, Expander codes, Golay codes, Reed–Muller codes and Polar codes. These examples also belong to the class of linear codes, and hence they are called linear block codes. More particularly, these codes are known as algebraic block codes, or cyclic block codes, because they can be generated using boolean polynomials. Algebraic block codes are typically hard-decoded using algebraic decoders. The term "block code" may also refer to any error-correcting code that acts on a block of formula_0 bits of input data to produce formula_1 bits of output data formula_2. Consequently, the block coder is a "memoryless" device. Under this definition codes such as turbo codes, terminated convolutional codes and other iteratively decodable codes (turbo-like codes) would also be considered block codes. A non-terminated convolutional encoder would be an example of a non-block (unframed) code, which has "memory" and is instead classified as a "tree code". This article deals with "algebraic block codes". The block code and its parameters. Error-correcting codes are used to reliably transmit digital data over unreliable communication channels subject to channel noise. When a sender wants to transmit a possibly very long data stream using a block code, the sender breaks the stream up into pieces of some fixed size. Each such piece is called "message" and the procedure given by the block code encodes each message individually into a codeword, also called a "block" in the context of block codes. The sender then transmits all blocks to the receiver, who can in turn use some decoding mechanism to (hopefully) recover the original messages from the possibly corrupted received blocks. The performance and success of the overall transmission depends on the parameters of the channel and the block code. Formally, a block code is an injective mapping formula_3. Here, formula_4 is a finite and nonempty set and formula_0 and formula_1 are integers. The meaning and significance of these three parameters and other parameters related to the code are described below. The alphabet Σ. The data stream to be encoded is modeled as a string over some alphabet formula_4. The size formula_5 of the alphabet is often written as formula_6. If formula_7, then the block code is called a "binary" block code. In many applications it is useful to consider formula_6 to be a prime power, and to identify formula_4 with the finite field formula_8. The message length "k". Messages are elements formula_9 of formula_10, that is, strings of length formula_0. Hence the number formula_0 is called the message length or dimension of a block code. The block length "n". The block length formula_1 of a block code is the number of symbols in a block. 
Hence, the elements formula_11 of formula_12 are strings of length formula_1 and correspond to blocks that may be received by the receiver. Hence they are also called received words. If formula_13 for some message formula_9, then formula_11 is called the codeword of formula_9. The rate "R". The rate of a block code is defined as the ratio between its message length and its block length: formula_14. A large rate means that the amount of actual message per transmitted block is high. In this sense, the rate measures the transmission speed and the quantity formula_15 measures the overhead that occurs due to the encoding with the block code. It is a simple information theoretical fact that the rate cannot exceed formula_16 since data cannot in general be losslessly compressed. Formally, this follows from the fact that the code formula_17 is an injective map. The distance "d". The distance or minimum distance d of a block code is the minimum number of positions in which any two distinct codewords differ, and the relative distance formula_18 is the fraction formula_19. Formally, for received words formula_20, let formula_21 denote the Hamming distance between formula_22 and formula_23, that is, the number of positions in which formula_22 and formula_23 differ. Then the minimum distance formula_24 of the code formula_17 is defined as formula_25. Since any code has to be injective, any two codewords will disagree in at least one position, so the distance of any code is at least formula_16. Besides, the distance equals the minimum weight for linear block codes because: formula_26. A larger distance allows for more error correction and detection. For example, if we only consider errors that may change symbols of the sent codeword but never erase or add them, then the number of errors is the number of positions in which the sent codeword and the received word differ. A code with distance d allows the receiver to detect up to formula_27 transmission errors since changing formula_27 positions of a codeword can never accidentally yield another codeword. Furthermore, if no more than formula_28 transmission errors occur, the receiver can uniquely decode the received word to a codeword. This is because every received word has at most one codeword at distance formula_28. If more than formula_28 transmission errors occur, the receiver cannot uniquely decode the received word in general as there might be several possible codewords. One way for the receiver to cope with this situation is to use list decoding, in which the decoder outputs a list of all codewords in a certain radius. Popular notation. The notation formula_29 describes a block code over an alphabet formula_4 of size formula_6, with a block length formula_1, message length formula_0, and distance formula_24. If the block code is a linear block code, then the square brackets in the notation formula_30 are used to represent that fact. For binary codes with formula_7, the index is sometimes dropped. For maximum distance separable codes, the distance is always formula_31, but sometimes the precise distance is not known, non-trivial to prove or state, or not needed. In such cases, the formula_24-component may be missing. Sometimes, especially for non-block codes, the notation formula_32 is used for codes that contain formula_33 codewords of length formula_1. For block codes with messages of length formula_0 over an alphabet of size formula_6, this number would be formula_34. Examples. 
As mentioned above, there are a vast number of error-correcting codes that are actually block codes. The first error-correcting code was the Hamming(7,4) code, developed by Richard W. Hamming in 1950. This code transforms a message consisting of 4 bits into a codeword of 7 bits by adding 3 parity bits. Hence this code is a block code. It turns out that it is also a linear code and that it has distance 3. In the shorthand notation above, this means that the Hamming(7,4) code is a formula_35 code. Reed–Solomon codes are a family of formula_30 codes with formula_31 and formula_6 being a prime power. Rank codes are a family of formula_30 codes with formula_36. Hadamard codes are a family of formula_37 codes with formula_38 and formula_39. Error detection and correction properties. A codeword formula_40 can be considered as a point in the formula_1-dimensional space formula_12, and the code formula_41 is a subset of formula_12. That a code formula_41 has distance formula_24 means that formula_42, there is no other codeword in the "Hamming ball" centered at formula_11 with radius formula_27, which is defined as the collection of formula_1-dimensional words whose "Hamming distance" to formula_11 is no more than formula_27. Similarly, a code formula_43 with (minimum) distance formula_24 inherits the properties described above: it can detect up to "d" − 1 transmission errors and uniquely correct up to ("d" − 1)/2 of them. Lower and upper bounds of block codes. Family of codes. formula_48 is called a "family of codes", where formula_49 is an formula_50 code with monotonically increasing formula_51. The rate of a family of codes C is defined as formula_52 The relative distance of a family of codes C is defined as formula_53 To explore the relationship between formula_54 and formula_55, a set of lower and upper bounds of block codes is known. Hamming bound. formula_56 Singleton bound. The Singleton bound is that the sum of the rate and the relative distance of a block code cannot be much larger than 1: formula_57. In other words, every block code satisfies the inequality formula_58. Reed–Solomon codes are non-trivial examples of codes that satisfy the Singleton bound with equality. Plotkin bound. For formula_7, formula_59. In other words, formula_60. For the general case, the following Plotkin bounds hold for any formula_61 with distance d: formula_62 formula_63 For any q-ary code with distance formula_18, formula_64 Gilbert–Varshamov bound. formula_65, where formula_66, formula_67 is the q-ary entropy function. Johnson bound. Define formula_68. Let formula_69 be the maximum number of codewords in a Hamming ball of radius e for any code formula_70 of distance d. Then we have the "Johnson bound": formula_71, if formula_72 formula_73 Sphere packings and lattices. Block codes are tied to the sphere packing problem which has received some attention over the years. In two dimensions, it is easy to visualize. Take a bunch of pennies flat on the table and push them together. The result is a hexagon pattern like a bee's nest. But block codes rely on more dimensions which cannot easily be visualized. The powerful Golay code used in deep space communications uses 24 dimensions. If used as a binary code (which it usually is), the dimensions refer to the length of the codeword as defined above. The theory of coding uses the "N"-dimensional sphere model. For example, how many pennies can be packed into a circle on a tabletop or, in 3 dimensions, how many marbles can be packed into a globe. Other considerations enter the choice of a code. For example, hexagon packing into the constraint of a rectangular box will leave empty space at the corners.
As the dimensions get larger, the percentage of empty space grows smaller. But at certain dimensions, the packing uses all the space and these codes are the so-called perfect codes. There are very few of these codes. Another property is the number of neighbors a single codeword may have. Again, consider pennies as an example. First we pack the pennies in a rectangular grid. Each penny will have 4 near neighbors (and 4 at the corners which are farther away). In a hexagon, each penny will have 6 near neighbors. Respectively, in three and four dimensions, the maximum packing is given by the 12-face and 24-cell with 12 and 24 neighbors, respectively. When we increase the dimensions, the number of near neighbors increases very rapidly. In general, the value is given by the kissing numbers. The result is that the number of ways for noise to make the receiver choose a neighbor (hence an error) grows as well. This is a fundamental limitation of block codes, and indeed all codes. It may be harder to cause an error to a single neighbor, but the number of neighbors can be large enough so the total error probability actually suffers. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "k" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "(n,k)" }, { "math_id": 3, "text": "C:\\Sigma^k \\to \\Sigma^n" }, { "math_id": 4, "text": "\\Sigma" }, { "math_id": 5, "text": "|\\Sigma|" }, { "math_id": 6, "text": "q" }, { "math_id": 7, "text": "q=2" }, { "math_id": 8, "text": "\\mathbb F_q" }, { "math_id": 9, "text": "m" }, { "math_id": 10, "text": "\\Sigma^k" }, { "math_id": 11, "text": "c" }, { "math_id": 12, "text": "\\Sigma^n" }, { "math_id": 13, "text": "c=C(m)" }, { "math_id": 14, "text": "R=k/n" }, { "math_id": 15, "text": "1-R" }, { "math_id": 16, "text": "1" }, { "math_id": 17, "text": "C" }, { "math_id": 18, "text": "\\delta" }, { "math_id": 19, "text": "d/n" }, { "math_id": 20, "text": "c_1,c_2\\in\\Sigma^n" }, { "math_id": 21, "text": "\\Delta(c_1,c_2)" }, { "math_id": 22, "text": "c_1" }, { "math_id": 23, "text": "c_2" }, { "math_id": 24, "text": "d" }, { "math_id": 25, "text": "d := \\min_{m_1,m_2\\in\\Sigma^k\\atop m_1\\neq m_2} \\Delta[C(m_1),C(m_2)]" }, { "math_id": 26, "text": "\\min_{m_1,m_2\\in\\Sigma^k\\atop m_1\\neq m_2} \\Delta[C(m_1),C(m_2)] = \\min_{m_1,m_2\\in\\Sigma^k\\atop m_1\\neq m_2} \\Delta[\\mathbf{0},C(m_1)+C(m_2)] = \\min_{m\\in\\Sigma^k\\atop m\\neq\\mathbf{0}} w[C(m)] = w_\\min" }, { "math_id": 27, "text": "d-1" }, { "math_id": 28, "text": "(d-1)/2" }, { "math_id": 29, "text": "(n,k,d)_q" }, { "math_id": 30, "text": "[n,k,d]_q" }, { "math_id": 31, "text": "d=n-k+1" }, { "math_id": 32, "text": "(n,M,d)_q" }, { "math_id": 33, "text": "M" }, { "math_id": 34, "text": "M=q^k" }, { "math_id": 35, "text": "[7,4,3]_2" }, { "math_id": 36, "text": "d \\leq n-k+1" }, { "math_id": 37, "text": "[n,k,d]_2" }, { "math_id": 38, "text": "n=2^{k-1}" }, { "math_id": 39, "text": "d=2^{k-2}" }, { "math_id": 40, "text": "c \\in \\Sigma^n" }, { "math_id": 41, "text": "\\mathcal{C}" }, { "math_id": 42, "text": "\\forall c\\in \\mathcal{C}" }, { "math_id": 43, "text": " \\mathcal{C}" }, { "math_id": 44, "text": "\\textstyle\\left\\lfloor {{d-1} \\over 2}\\right\\rfloor" }, { "math_id": 45, "text": "\\textstyle\\left \\lfloor {{d-1} \\over 2}\\right \\rfloor" }, { "math_id": 46, "text": "y" }, { "math_id": 47, "text": "i^{th}" }, { "math_id": 48, "text": "C =\\{C_i\\}_{i\\ge1}" }, { "math_id": 49, "text": "C_i" }, { "math_id": 50, "text": "(n_i,k_i,d_i)_q" }, { "math_id": 51, "text": "n_i" }, { "math_id": 52, "text": "R(C)=\\lim_{i\\to\\infty}{k_i \\over n_i}" }, { "math_id": 53, "text": "\\delta(C)=\\lim_{i\\to\\infty}{d_i \\over n_i}" }, { "math_id": 54, "text": "R(C)" }, { "math_id": 55, "text": "\\delta(C)" }, { "math_id": 56, "text": " R \\le 1- {1 \\over n} \\cdot \\log_{q} \\cdot \\left[\\sum_{i=0}^{\\left\\lfloor {{\\delta \\cdot n-1}\\over 2}\\right\\rfloor}\\binom{n}{i}(q-1)^i\\right]" }, { "math_id": 57, "text": " R + \\delta \\le 1+\\frac{1}{n}" }, { "math_id": 58, "text": "k+d \\le n+1 " }, { "math_id": 59, "text": "R+2\\delta\\le1" }, { "math_id": 60, "text": "k + 2d \\le n" }, { "math_id": 61, "text": "C \\subseteq \\mathbb{F}_q^{n} " }, { "math_id": 62, "text": "d=\\left(1-{1 \\over q}\\right)n, |C| \\le 2qn " }, { "math_id": 63, "text": "d > \\left(1-{1 \\over q}\\right)n, |C| \\le {qd \\over {qd -\\left(q-1\\right)n}} " }, { "math_id": 64, "text": "R \\le 1- \\left({q \\over {q-1}}\\right) \\delta + o\\left(1\\right)" }, { "math_id": 65, "text": "R\\ge1-H_q\\left(\\delta\\right)-\\epsilon" }, { "math_id": 66, "text": "0 \\le \\delta \\le 1-{1\\over q}, 0\\le \\epsilon \\le 1- H_q\\left(\\delta\\right)" }, { "math_id": 
67, "text": " H_q\\left(x\\right) ~\\overset{\\underset{\\mathrm{def}}{}}{=}~ -x\\cdot\\log_q{x \\over {q-1}}-\\left(1-x\\right)\\cdot\\log_q{\\left(1-x\\right)} " }, { "math_id": 68, "text": "J_q\\left(\\delta\\right) ~\\overset{\\underset{\\mathrm{def}}{}}{=}~ \\left(1-{1\\over q}\\right)\\left(1-\\sqrt{1-{q \\delta \\over{q-1}}}\\right) " }, { "math_id": 69, "text": "J_q\\left(n, d, e\\right)" }, { "math_id": 70, "text": "C \\subseteq \\mathbb{F}_q^n" }, { "math_id": 71, "text": "J_q\\left(n,d,e\\right)\\le qnd" }, { "math_id": 72, "text": "{e \\over n} \\le {{q-1}\\over q}\\left( {1-\\sqrt{1-{q \\over{q-1}}\\cdot{d \\over n}}}\\, \\right)=J_q\\left({d \\over n}\\right)" }, { "math_id": 73, "text": "R={\\log_q{|C|} \\over n} \\le 1-H_q\\left(J_q\\left(\\delta\\right)\\right)+o\\left(1\\right) " } ]
https://en.wikipedia.org/wiki?curid=1134659
113469
Joule–Thomson effect
Phenomenon of non-ideal fluids changing temperature while being forced through small spaces In thermodynamics, the Joule–Thomson effect (also known as the Joule–Kelvin effect or Kelvin–Joule effect) describes the temperature change of a "real" gas or liquid (as differentiated from an ideal gas) when it expands, typically as a result of the pressure loss from flow through a valve or porous plug, while it is kept insulated so that no heat is exchanged with the environment. This procedure is called a "throttling process" or "Joule–Thomson process". The effect is entirely due to deviation from ideal-gas behaviour, as an ideal gas shows no Joule–Thomson effect. At room temperature, all gases except hydrogen, helium, and neon cool upon expansion by the Joule–Thomson process when being throttled through an orifice; these three gases rise in temperature when forced through a porous plug at room temperature, but cool when the throttling starts from a sufficiently low temperature. Most liquids, such as hydraulic oils, are warmed by the Joule–Thomson throttling process. The temperature at which the JT effect switches sign is the inversion temperature. The gas-cooling throttling process is commonly exploited in refrigeration, for example in the liquefiers used in industrial air separation. In hydraulics, the warming effect from Joule–Thomson throttling can be used to find internally leaking valves, as these produce heat which can be detected by a thermocouple or a thermal-imaging camera. Throttling is a fundamentally irreversible process. The throttling due to the flow resistance in supply lines, heat exchangers, regenerators, and other components of (thermal) machines is a source of losses that limits their performance. Since it is a constant-enthalpy process, it can be used to experimentally measure the lines of constant enthalpy (isenthalps) on the formula_0 diagram of a gas. Combined with the specific heat capacity "at constant pressure" formula_1 it allows the complete measurement of the thermodynamic potential for the gas. History. The effect is named after James Prescott Joule and William Thomson, 1st Baron Kelvin, who discovered it in 1852. It followed upon earlier work by Joule on Joule expansion, in which a gas undergoes free expansion in a vacuum and the temperature is unchanged, if the gas is ideal. Description. The "adiabatic" (no heat exchanged) expansion of a gas may be carried out in a number of ways. The change in temperature experienced by the gas during expansion depends not only on the initial and final pressure, but also on the manner in which the expansion is carried out. The temperature change produced during a Joule–Thomson expansion is quantified by the Joule–Thomson coefficient, formula_2. This coefficient may be either positive (corresponding to cooling) or negative (heating); the regions where each occurs for molecular nitrogen, N2, are shown in the figure. Note that most conditions in the figure correspond to N2 being a supercritical fluid, where it has some properties of a gas and some of a liquid, but cannot really be described as either. The coefficient is negative at both very high and very low temperatures; at very high pressure it is negative at all temperatures. The maximum inversion temperature (621 K for N2) occurs as zero pressure is approached. For N2 gas at low pressures, formula_2 is negative at high temperatures and positive at low temperatures. At temperatures below the gas-liquid coexistence curve, N2 condenses to form a liquid and the coefficient again becomes negative.
Thus, for N2 gas below 621 K, a Joule–Thomson expansion can be used to cool the gas until liquid N2 forms. Physical mechanism. There are two factors that can change the temperature of a fluid during an adiabatic expansion: a change in internal energy or the conversion between potential and kinetic internal energy. Temperature is the measure of thermal kinetic energy (energy associated with molecular motion); so a change in temperature indicates a change in thermal kinetic energy. The internal energy is the sum of thermal kinetic energy and thermal potential energy. Thus, even if the internal energy does not change, the temperature can change due to conversion between kinetic and potential energy; this is what happens in a free expansion and typically produces a decrease in temperature as the fluid expands. If work is done on or by the fluid as it expands, then the total internal energy changes. This is what happens in a Joule–Thomson expansion and can produce larger heating or cooling than observed in a free expansion. In a Joule–Thomson expansion the enthalpy remains constant. The enthalpy, formula_3, is defined as formula_4 where formula_5 is internal energy, formula_6 is pressure, and formula_7 is volume. Under the conditions of a Joule–Thomson expansion, the change in formula_8 represents the work done by the fluid (see the proof below). If formula_8 increases, with formula_3 constant, then formula_5 must decrease as a result of the fluid doing work on its surroundings. This produces a decrease in temperature and results in a positive Joule–Thomson coefficient. Conversely, a decrease in formula_8 means that work is done on the fluid and the internal energy increases. If the increase in kinetic energy exceeds the increase in potential energy, there will be an increase in the temperature of the fluid and the Joule–Thomson coefficient will be negative. For an ideal gas, formula_8 does not change during a Joule–Thomson expansion. As a result, there is no change in internal energy; since there is also no change in thermal potential energy, there can be no change in thermal kinetic energy and, therefore, no change in temperature. In real gases, formula_8 does change. The ratio of the value of formula_8 to that expected for an ideal gas at the same temperature is called the compressibility factor, formula_9. For a gas, this is typically less than unity at low temperature and greater than unity at high temperature (see the discussion in compressibility factor). At low pressure, the value of formula_9 always moves towards unity as a gas expands. Thus at low temperature, formula_9 and formula_8 will increase as the gas expands, resulting in a positive Joule–Thomson coefficient. At high temperature, formula_9 and formula_8 decrease as the gas expands; if the decrease is large enough, the Joule–Thomson coefficient will be negative. For liquids, and for supercritical fluids under high pressure, formula_8 increases as pressure increases. This is due to molecules being forced together, so that the volume can barely decrease due to higher pressure. Under such conditions, the Joule–Thomson coefficient is negative, as seen in the figure above. The physical mechanism associated with the Joule–Thomson effect is closely related to that of a shock wave, although a shock wave differs in that the change in bulk kinetic energy of the gas flow is not negligible. The Joule–Thomson (Kelvin) coefficient. 
The rate of change of temperature formula_10 with respect to pressure formula_6 in a Joule–Thomson process (that is, at constant enthalpy formula_3) is the "Joule–Thomson (Kelvin) coefficient" formula_2. This coefficient can be expressed in terms of the gas's specific volume formula_7, its heat capacity at constant pressure formula_11, and its coefficient of thermal expansion formula_12 as: formula_13 See below for the proof of this relation. The value of formula_2 is typically expressed in °C/bar (SI units: K/Pa) and depends on the type of gas and on the temperature and pressure of the gas before expansion. Its pressure dependence is usually only a few percent for pressures up to 100 bar. All real gases have an "inversion point" at which the value of formula_2 changes sign. The temperature of this point, the "Joule–Thomson inversion temperature", depends on the pressure of the gas before expansion. In a gas expansion the pressure decreases, so the sign of formula_14 is negative by definition. With that in mind, the behaviour of a real gas can be summarised as follows: if the gas temperature is below its inversion temperature, formula_2 is positive, so (with formula_14 always negative) the temperature change is negative and the gas cools; if the gas temperature is above its inversion temperature, formula_2 is negative, the temperature change is positive, and the gas warms. Helium and hydrogen are two gases whose Joule–Thomson inversion temperatures at a pressure of one atmosphere are very low (e.g., about 40 K, −233 °C for helium). Thus, helium and hydrogen warm when expanded at constant enthalpy at typical room temperatures. On the other hand, nitrogen and oxygen, the two most abundant gases in air, have inversion temperatures of 621 K (348 °C) and 764 K (491 °C) respectively: these gases can be cooled from room temperature by the Joule–Thomson effect. For an ideal gas, formula_15 is always equal to zero: ideal gases neither warm nor cool upon being expanded at constant enthalpy. Theoretical models. For a Van der Waals gas, the coefficient is formula_17 with inversion temperature formula_18. For the Dieterici gas, the reduced inversion temperature is formula_19, and the relation between reduced pressure and reduced inversion temperature is formula_16. This is plotted on the right. The critical point falls inside the region where the gas cools on expansion. The outside region is where the gas warms on expansion. Applications. In practice, the Joule–Thomson effect is achieved by allowing the gas to expand through a throttling device (usually a valve) which must be very well insulated to prevent any heat transfer to or from the gas. No external work is extracted from the gas during the expansion (the gas must not be expanded through a turbine, for example). The cooling produced in the Joule–Thomson expansion makes it a valuable tool in refrigeration. The effect is applied in the Linde technique as a standard process in the petrochemical industry, where the cooling effect is used to liquefy gases, and in many cryogenic applications (e.g. for the production of liquid oxygen, nitrogen, and argon). A gas must be below its inversion temperature to be liquefied by the Linde cycle. For this reason, simple Linde cycle liquefiers, starting from ambient temperature, cannot be used to liquefy helium, hydrogen, or neon. They must first be cooled below their inversion temperatures, which are −233 °C (helium), −71 °C (hydrogen), and −42 °C (neon). Proof that the specific enthalpy remains constant. In thermodynamics so-called "specific" quantities are quantities per unit mass (kg) and are denoted by lower-case characters. So "h", "u", and "v" are the specific enthalpy, specific internal energy, and specific volume (volume per unit mass, or reciprocal density), respectively.
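Before turning to the proof in the next paragraph, the statements above can also be checked symbolically. The following sketch (an illustration only; it assumes the standard ideal-gas and van der Waals equations of state and uses sympy, none of which is taken from this article) verifies that formula_13 gives zero for an ideal gas and reproduces the van der Waals inversion temperature formula_18:

import sympy as sp

T, P, R, Cp, Vm, a, b = sp.symbols('T P R C_p V_m a b', positive=True)

# (1) Ideal gas: V_m = R*T/P, so alpha = (1/V)(dV/dT)_P = 1/T and mu_JT vanishes.
V_ideal = R * T / P
alpha_ideal = sp.diff(V_ideal, T) / V_ideal
mu_ideal = V_ideal / Cp * (alpha_ideal * T - 1)
print(sp.simplify(mu_ideal))        # 0 (Joule's second law for an ideal gas)

# (2) Van der Waals gas: (P + a/Vm**2)*(Vm - b) = R*T. Differentiate implicitly at
# constant P to get (dVm/dT)_P, form alpha, and solve alpha*T = 1 for T.
F = (P + a / Vm**2) * (Vm - b) - R * T
dVm_dT = -sp.diff(F, T) / sp.diff(F, Vm)          # implicit differentiation
alpha_vdw = dVm_dT / Vm
inversion_condition = (alpha_vdw * T - 1).subs(P, R * T / (Vm - b) - a / Vm**2)
T_inv = sp.solve(inversion_condition, T)[0]
print(sp.simplify(T_inv))           # equivalent to (2*a/(b*R)) * (1 - b/Vm)**2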
In a Joule–Thomson process the specific enthalpy "h" remains constant. To prove this, the first step is to compute the net work done when a mass "m" of the gas moves through the plug. This amount of gas has a volume of "V"1 = "m" "v"1 in the region at pressure "P"1 (region 1) and a volume "V"2 = "m" "v"2 when in the region at pressure "P"2 (region 2). Then in region 1, the "flow work" done "on" the amount of gas by the rest of the gas is: W1 = "m" "P"1"v"1. In region 2, the work done "by" the amount of gas on the rest of the gas is: W2 = "m" "P"2"v"2. So, the total work done "on" the mass "m" of gas is formula_20 The change in internal energy minus the total work done "on" the amount of gas is, by the first law of thermodynamics, the total heat supplied to the amount of gas. formula_21 In the Joule–Thomson process, the gas is insulated, so no heat is absorbed. This means that formula_22 where "u"1 and "u"2 denote the specific internal energies of the gas in regions 1 and 2, respectively. Using the definition of the specific enthalpy "h = u + Pv", the above equation implies that formula_23 where "h"1 and "h"2 denote the specific enthalpies of the amount of gas in regions 1 and 2, respectively. Throttling in the "T"-"s" diagram. A very convenient way to get a quantitative understanding of the throttling process is by using diagrams such as "h"-"T" diagrams, "h"-"P" diagrams, and others. Commonly used are the so-called "T"-"s" diagrams. Figure 2 shows the "T"-"s" diagram of nitrogen as an example; various points on it are referred to below. As shown before, throttling keeps "h" constant. For example, throttling from 200 bar and 300 K (point a in fig. 2) follows the isenthalp (line of constant specific enthalpy) of 430 kJ/kg. At 1 bar it results in point b, which has a temperature of 270 K. So throttling from 200 bar to 1 bar gives a cooling from room temperature to below the freezing point of water. Throttling from 200 bar and an initial temperature of 133 K (point c in fig. 2) to 1 bar results in point d, which is in the two-phase region of nitrogen at a temperature of 77.2 K. Since the enthalpy is an extensive parameter, the enthalpy in d ("h"d) is equal to the enthalpy in e ("h"e) multiplied by the mass fraction of the liquid in d ("x"d) plus the enthalpy in f ("h"f) multiplied by the mass fraction of the gas in d (1 − "x"d). So formula_24 With numbers: 150 = "x"d 28 + (1 − "x"d) 230, so "x"d is about 0.40. This means that the mass fraction of the liquid in the liquid–gas mixture leaving the throttling valve is 40%. Derivation of the Joule–Thomson coefficient. It is difficult to think physically about what the Joule–Thomson coefficient, formula_2, represents. Also, modern determinations of formula_2 do not use the original method used by Joule and Thomson, but instead measure a different, closely related quantity. Thus, it is useful to derive relationships between formula_2 and other, more conveniently measured quantities, as described below. The first step in obtaining these results is to note that the Joule–Thomson coefficient involves the three variables "T", "P", and "H". A useful result is immediately obtained by applying the cyclic rule; in terms of these three variables that rule may be written formula_25 Each of the three partial derivatives in this expression has a specific meaning.
The first is formula_2, the second is the constant pressure heat capacity, formula_11, defined by formula_26 and the third is the inverse of the "isothermal Joule–Thomson coefficient", formula_27, defined by formula_28. This last quantity is more easily measured than formula_2 . Thus, the expression from the cyclic rule becomes formula_29 This equation can be used to obtain Joule–Thomson coefficients from the more easily measured isothermal Joule–Thomson coefficient. It is used in the following to obtain a mathematical expression for the Joule–Thomson coefficient in terms of the volumetric properties of a fluid. To proceed further, the starting point is the fundamental equation of thermodynamics in terms of enthalpy; this is formula_30 Now "dividing through" by d"P", while holding temperature constant, yields formula_31 The partial derivative on the left is the isothermal Joule–Thomson coefficient, formula_27, and the one on the right can be expressed in terms of the coefficient of thermal expansion via a Maxwell relation. The appropriate relation is formula_32 where "α" is the cubic coefficient of thermal expansion. Replacing these two partial derivatives yields formula_33 This expression can now replace formula_27 in the earlier equation for formula_2 to obtain: formula_34 This provides an expression for the Joule–Thomson coefficient in terms of the commonly available properties heat capacity, molar volume, and thermal expansion coefficient. It shows that the Joule–Thomson inversion temperature, at which formula_2 is zero, occurs when the coefficient of thermal expansion is equal to the inverse of the temperature. Since this is true at all temperatures for ideal gases (see expansion in gases), the Joule–Thomson coefficient of an ideal gas is zero at all temperatures. Joule's second law. It is easy to verify that for an ideal gas defined by suitable microscopic postulates that "αT" = 1, so the temperature change of such an ideal gas at a Joule–Thomson expansion is zero. For such an ideal gas, this theoretical result implies that: "The internal energy of a fixed mass of an ideal gas depends only on its temperature (not pressure or volume)." This rule was originally found by Joule experimentally for real gases and is known as Joule's second law. More refined experiments found important deviations from it. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(p, T)" }, { "math_id": 1, "text": "c_P = (\\partial h / \\partial T)_P" }, { "math_id": 2, "text": "\\mu_{\\mathrm{JT}}" }, { "math_id": 3, "text": "H" }, { "math_id": 4, "text": "H = U + PV" }, { "math_id": 5, "text": "U" }, { "math_id": 6, "text": "P" }, { "math_id": 7, "text": "V" }, { "math_id": 8, "text": "PV" }, { "math_id": 9, "text": "Z" }, { "math_id": 10, "text": "T" }, { "math_id": 11, "text": "C_{\\mathrm{p}}" }, { "math_id": 12, "text": "\\alpha" }, { "math_id": 13, "text": "\\mu_{\\mathrm{JT}} = \\left( {\\partial T \\over \\partial P} \\right)_H = \\frac V {C_{\\mathrm{p}}}(\\alpha T - 1)\\," }, { "math_id": 14, "text": "\\partial P" }, { "math_id": 15, "text": "\\mu_\\text{JT}" }, { "math_id": 16, "text": "\\tilde p = (8-\\tilde T_I) e^{\\frac 52 - \\frac 4{8-\\tilde T_I}}" }, { "math_id": 17, "text": "\\mu_\\text{JT}=-\\frac{V_m}{C_{p}} \\frac{R T V_m^{2} b-2 a(V_m-b)^{2}}{R T V_m^{3}-2 a(V_m-b)^{2}}." }, { "math_id": 18, "text": "\\frac{2a}{bR}\\left(1 - \\frac{b}{V_m}\\right)^2" }, { "math_id": 19, "text": "\\tilde T_I = 8 - 4/\\tilde V_m" }, { "math_id": 20, "text": "W = mP_1 v_1 - mP_2 v_2." }, { "math_id": 21, "text": " U - W = Q " }, { "math_id": 22, "text": "\\begin{align}\n (mu_2 - mu_1) &- (mP_1 v_1 - mP_2 v_2) = 0 \\\\\n mu_1 + mP_1 v_1 &= mu_2 + mP_2 v_2 \\\\\n u_1 + P_1 v_1 &= u_2 + P_2 v_2\n\\end{align}" }, { "math_id": 23, "text": "h_1 = h_2" }, { "math_id": 24, "text": "h_d = x_d h_e + (1 - x_d) h_f." }, { "math_id": 25, "text": "\\left(\\frac{\\partial T}{\\partial P}\\right)_H\\left(\\frac{\\partial H}{\\partial T}\\right)_P \\left(\\frac{\\partial P}{\\partial H}\\right)_T = -1." }, { "math_id": 26, "text": "C_{\\mathrm{p}} = \\left(\\frac{\\partial H}{\\partial T}\\right)_P " }, { "math_id": 27, "text": "\\mu_{\\mathrm{T}}" }, { "math_id": 28, "text": "\\mu_{\\mathrm{T}} = \\left(\\frac{\\partial H}{\\partial P}\\right)_T " }, { "math_id": 29, "text": "\\mu_{\\mathrm{JT}} = - \\frac{\\mu_{\\mathrm{T}}} {C_p}." }, { "math_id": 30, "text": "\\mathrm{d}H = T \\mathrm{d}S + V \\mathrm{d}P." }, { "math_id": 31, "text": "\\left(\\frac{\\partial H}{\\partial P}\\right)_T = T\\left(\\frac{\\partial S}{\\partial P}\\right)_T + V" }, { "math_id": 32, "text": "\\left(\\frac{\\partial S}{\\partial P}\\right)_T= -\\left(\\frac{\\partial V}{\\partial T}\\right)_P= -V\\alpha\\," }, { "math_id": 33, "text": "\\mu_{\\mathrm{T}} = - T V\\alpha\\ + V. " }, { "math_id": 34, "text": "\\mu_{\\mathrm{JT}} \\equiv \\left( \\frac{\\partial T}{\\partial P} \\right)_H = \\frac V {C_{\\mathrm{p}}} (\\alpha T - 1).\\," } ]
https://en.wikipedia.org/wiki?curid=113469
11347091
Valve audio amplifier technical specification
Technical specifications and detailed information on the valve audio amplifier Technical specifications and detailed information on the valve audio amplifier, including its development history. Circuitry and performance. Characteristics of valves. Valves (also known as vacuum tubes) are devices with very high input impedance (near infinite in most circuits) and high output impedance. They are also high-voltage / low-current devices. The characteristics of valves as gain devices have direct implications for their use as audio amplifiers, notably that power amplifiers need output transformers (OPTs) to translate a high-output-impedance high-voltage low-current signal into a lower-voltage high-current signal needed to drive modern low-impedance loudspeakers (cf. transistors and FETs, which are relatively low voltage devices but able to carry large currents directly). Another consequence is that since the output of one stage is often at ~100 V offset from the input of the next stage, direct coupling is normally not possible and stages need to be coupled using a capacitor or transformer. Capacitors have little effect on the performance of amplifiers. Interstage transformer coupling is a source of distortion and phase shift, and was avoided from the 1940s for high-quality applications; transformers also add cost, bulk, and weight. Basic circuits. The following circuits are simplified conceptual circuits only; real world circuits also require a smoothed or regulated power supply, a heater supply for the filaments (the details depending on whether the selected valve types are directly or indirectly heated), and the cathode resistors are often bypassed, etc. The common cathode gain stage. The basic gain stage for a valve amplifier is the auto-biased common cathode stage, in which an anode resistor, the valve, and a cathode resistor form a potential divider across the supply rails. The resistance of the valve varies as a function of the voltage on the grid, relative to the voltage on the cathode. In the auto-bias configuration, the "operating point" is obtained by setting the DC potential of the input grid at zero volts relative to ground via a high-value "grid leak" resistor. The anode current is set by the value of the grid voltage relative to the cathode, and this voltage is in turn dependent upon the value of the resistance selected for the cathode branch of the circuit. The anode resistor acts as the load for the circuit and is typically of the order of 3-4 times the anode resistance of the valve type in use. The output from the circuit is the voltage at the junction between the anode and anode resistor. This output varies relative to changes in the input voltage and is a function of the voltage amplification of the valve "mu" and the values chosen for the various circuit elements. Almost all audio preamplifier circuits are built using cascaded common cathode stages. The signal is usually coupled from stage to stage via a coupling capacitor or a transformer, although direct coupling is done in unusual cases. The cathode resistor may or may not be bypassed with a capacitor. Feedback may also be applied to the cathode resistor. The single-ended triode (SET) power amplifier. A simple SET power amplifier can be constructed by cascading two stages, using an output transformer as the load. Differential stages. Two triodes with their cathodes coupled together form a differential pair.
This stage has the ability to cancel common mode (equal on both inputs) signals, and if operated in class A it also has the merit of largely rejecting any supply variations (since they affect both sides of the differential stage equally). Conversely, the total current drawn by the stage is almost constant (if one side instantaneously draws more, the other draws less), resulting in minimal variation in supply rail sag and thus possibly also less interstage distortion. Two power valves (which may be triodes or tetrodes) can be differentially driven to form a push–pull output stage, driving a push–pull transformer load. This output stage makes much better use of the transformer core than the single-ended output stage. The long-tail pair. A "long tail" is a constant current (CC) load used as the shared cathode feed to a differential pair. In theory, the more constant the current, the more linear the differential stage. The CC may be approximated by a resistor dropping a large voltage, or may be generated by an active circuit (either valve, transistor or FET based). The long-tail pair can also be used as a phase splitter. It is often used in guitar amplifiers (where it is referred to as the "phase inverter") to drive the power section. The concertina phase splitter. As an alternative to the long-tail pair, the "concertina" uses a single triode as a variable resistance within a potential divider formed by Ra and Rk either side of the valve. The result is that the voltage at the anode swings exactly equal and opposite to the voltage at the cathode, giving a perfectly balanced phase split. The disadvantage of this stage (cf. the differential long-tail pair) is that it does not give any gain. Using a double triode (typically octal or noval) to form a SET input buffer (giving gain) which then feeds a concertina phase splitter is a classic push–pull front end, typically followed by a driver (triode) and a (triode or pentode) output stage (in many cases in ultra-linear configuration) to form the classic push–pull amplifier circuit. The push–pull power amplifier. The push–pull output circuit shown is a simplified variation of the Williamson topology, which comprises four stages: an input stage, a phase splitter, a driver, and the push–pull output stage. Cascode. The cascode (a contraction of the phrase "cascade to cathode") is a two-stage amplifier composed of a transconductance amplifier followed by a current buffer. In valve circuits, the cascode is often constructed from two triodes connected in series, with one operating as a common-grid stage and thus acting as a voltage regulator, providing a nearly constant anode voltage to the other, which operates as a common cathode stage. This improves input-output isolation (or reverse transmission) by eliminating the Miller effect and thus contributes to a much higher bandwidth, higher input impedance, high output impedance, and higher gain than a single-triode stage. Tetrode/pentode stages. The tetrode has a screen grid (g2) which is between the anode and the first grid and normally serves, like the cascode, to eliminate the Miller effect and therefore also allows a higher bandwidth and/or higher gain than a triode, but at the expense of linearity and noise performance. A pentode has an additional suppressor grid (g3) to eliminate the tetrode kink. This is used for improved performance rather than extra gain and is usually not accessible externally. Some of these valves use aligned grids to minimise grid current and beam plates instead of a third grid; these are known as "beam tetrodes".
It was realised (and many pentodes were specifically designed to permit this) that by strapping the screen grid to the anode, a tetrode/pentode simply becomes a triode again, making these late-design valves very flexible. "Triode strapped" tetrodes are often used in modern amplifier designs that are optimised for quality rather than power output. Ultra-linear. In 1937, Alan Blumlein originated a configuration intermediate between a "triode strapped" tetrode and a normal tetrode, which connects the extra grid (screen) of a tetrode to a tap on the OPT "part way between" the anode voltage and the supply voltage. This electrical compromise gives gain and linearity that approach the best traits of both extremes. In a 1951 engineering paper, David Hafler and Herbert Keroes determined that when the screen tap was set to approximately 43% of anode voltage, an optimized condition within the output stage occurred, which they referred to as "ultra-linear". By the late 1950s, this design became the dominant configuration for high-fidelity PP amplifiers. Output transformerless. Julius Futterman pioneered a type of amplifier known as "output transformerless" (OTL). These use paralleled valves to match speaker impedances (typically 8 ohms). Such designs require numerous valves, run hot, and, because they attempt to match impedances in a way fundamentally different from a transformer, they often have a unique sound quality. 6080 triodes, designed for regulated power supplies, were low-impedance types sometimes pressed into transformerless use. Single-ended triode (SET) power amplifiers. Some valve amplifiers use the single-ended triode (SET) topology, which uses the gain device in class A. SETs are extremely simple and have a low parts count. Such amplifiers are expensive because of the output transformers required. This type of design results in an extremely simple distortion spectrum comprising a monotonically decaying series of harmonics. Some consider this distortion characteristic to be a factor in the attractiveness of the sound such designs produce. Compared with modern designs SETs adopt a minimalist approach, and often have just two stages: a single-stage triode voltage amplifier followed by a triode power stage. However, variations using some form of active current source or load (not considered a gain stage) are used. The typical valve using this topology in (rare) current commercial production is the 300B, which yields about 5 watts in SE mode. Rare amplifiers of this type use valves such as the 211 or 845, capable of about 18 watts. These valves are bright emitter transmitting valves, and have thoriated tungsten filaments which glow like light bulbs when powered. See paragraphs further down regarding high-power commercially available SET amplifiers offering up to 40 watts with no difficulty, following the development of output transformers to overcome the above restrictions. The pictures below are of a commercial SET amplifier, and also a prototype of a hobbyist amplifier. One reason for SETs being (usually) limited to low power is the extreme difficulty (and consequent expense) of making an output transformer that can handle the plate current without saturating, while avoiding excessively large capacitive parasitics. Push–pull (PP) / differential power amplifiers.
The use of differential ("push–pull") output stages cancels the standing bias current drawn through the output transformer by each of the output valves individually, greatly reducing the problem of core saturation and thus facilitating the construction of more powerful amplifiers at the same time as using smaller, wider bandwidth and cheaper transformers. The cancellation in the differential output stage also largely cancels the (dominant) even-order harmonic distortion products of the output stage, resulting in less THD, albeit now dominated by odd-order harmonics and no longer monotonic. Ideally, cancellation of even-order distortion is perfect, but in the real world it is not, even with closely matched valves. PP OPTs usually have a gap to prevent saturation, though less than required by a single-ended circuit. Since the 1950s the vast majority of high-quality valve amplifiers, and almost all higher-power valve amplifiers, have been of the push–pull type. Push–pull output stages can use triodes for lowest Zout and best linearity, but often use tetrodes or pentodes, which give greater gain and power. Many output valves such as the KT88, EL34, and EL84 were specifically designed to be operated in either triode or tetrode mode, and some amplifiers can be switched between these modes. Post-Williamson, most commercial amplifiers have used tetrodes in the "ultra-linear" configuration. Class A. Class A pure triode PP stages are sufficiently linear that they can be operated without feedback, although modest NFB to reduce distortion, reduce Zout, and control gain may be desirable. Their power efficiency is, however, much less than class AB (and, of course, class B); significantly less output power is available for the same anode dissipation. Class A PP designs have no crossover distortion, and distortion becomes negligible as signal amplitude is reduced. The effect of this is that class A amplifiers perform extremely well with music that has a low average level (with negligible distortion) with momentary peaks. A disadvantage of class A operation for power valves is a shortened life, because the valves are always fully "on" and dissipate maximum power all of the time. Signal amplifier valves not operating at high power are not affected in this way. Power supply regulation (variation of voltage available with current drawn) is not an issue, as average current is essentially constant; AB amplifiers, which draw current dependent upon signal level, require attention to supply regulation. Class AB and B. Class B and AB amplifiers are more efficient than class A, and can deliver higher power output levels from a given power supply and set of valves. However, the price for this is that they suffer from crossover distortion, of more or less constant amplitude regardless of signal amplitude. This means that class AB and B amplifiers produce their lowest distortion percentage at near maximum amplitude, with poorer distortion performance at low levels. As the circuit changes from pure class A, through AB1 and AB2, to B, open-loop crossover distortion worsens. Class AB and B amplifiers use NFB to reduce open-loop distortion. Measured distortion spectra from such amplifiers show that distortion percentage is dramatically reduced by NFB, but the residual distortion is shifted towards higher harmonics. In a class B push–pull amplifier, the output valve current which must be provided by the power supply ranges from nearly zero for zero signal to a maximum at maximum signal.
Consequently, for linear response to transient signal changes the power supply must have good regulation. Only class A can be used in single-ended mode, as part of the signal would otherwise be cut off. The driver stage for class AB2 and class B valve amplifiers must be capable of supplying some signal current to the power valve grids ("driving power"). Biasing. The biasing of a push–pull output stage can be adjusted (at the design stage, usually not in a finished amplifier) between class A (giving best open-loop linearity), through classes AB1 and AB2, to class B (giving greatest power and efficiency from a given power supply, output valves and output transformer). Most commercial valve amplifiers operate in class AB1 (typically pentodes in the ultra-linear configuration), trading open-loop linearity against higher power; some run in pure class A. Circuit topology. The typical topology for a PP amplifier has an input stage, a phase splitter, a driver and the output stage, although there are many variations of the input stage / phase splitter, and sometimes two of the listed functions are combined in one valve stage. The dominant phase splitter topologies today are the concertina, the floating paraphase, and some variation of the long-tail pair. The gallery shows a modern home-constructed, fully differential, pure class A amplifier of about 15 watts output power without negative feedback, using 6SN7 low-power dual triodes and KT88 power tetrodes. Output transformers. Because of their inability to drive low impedance loads directly, valve audio amplifiers must employ output transformers to step down the impedance to match the loudspeakers. Output transformers are not perfect devices and will always introduce some odd harmonic distortion and amplitude variation with frequency to the output signal. In addition, transformers introduce frequency-dependent phase shifts which limit the overall negative feedback which can be used, to keep within the Nyquist stability criteria at high frequencies and avoid oscillation. In recent years, however, the development of improved transformer designs and winding techniques has greatly reduced these unwanted effects within the desired pass-band, moving them further out to the margins. Negative feedback (NFB). Following its invention by Harold Stephen Black, negative feedback (NFB) has been almost universally adopted in amplifiers of all types, to substantially reduce distortion, flatten frequency response, and reduce the effect of component variations. This is especially needed with non-class-A amplifiers. Feedback very much reduces the distortion percentage, but the distortion spectrum becomes more complex, with a far higher contribution from higher harmonics; the high harmonics, if at an audible level, are much more undesirable than lower ones, so that the improvement due to lower overall distortion is partly cancelled by its nature. It is reported that under some circumstances the absolute amplitude of higher harmonics may increase with feedback, although total distortion decreases. NFB reduces output impedance (Zout) (which may vary as a function of frequency in some circuits). This has two important consequences: the amplifier damps the loudspeaker better, and the frequency response becomes less dependent on the variation of the loudspeaker's impedance with frequency. Valve noise and noise figure. Like any amplifying device, valves add noise to the signal to be amplified. Noise is due to device imperfections plus unavoidable temperature-dependent thermal fluctuations (systems are usually assumed to be at room temperature, "T" = 295 K).
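The next paragraph quantifies this thermal noise. As a quick numerical sketch (an illustration only, using the standard Johnson–Nyquist expression together with, as assumed example values, the 25 kΩ source resistance and roughly 10 kHz bandwidth discussed below), the open-circuit thermal noise voltage of a resistor comes out at about 2 μV:

from math import sqrt

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 295.0            # room temperature, K
B = 10e3             # bandwidth, Hz (roughly the 25 Hz - 10 kHz audio band used below)
R = 25e3             # source resistance, ohms

v_noise = sqrt(4 * k_B * T * B * R)             # open-circuit noise voltage
print(round(v_noise * 1e6, 2), "microvolts")    # about 2.0, matching the EF86 figure quoted below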
Thermal fluctuations cause an electrical noise power of formula_0, where formula_1 is the Boltzmann constant and "B" the bandwidth. Correspondingly, the voltage noise of a resistance "R" into an open circuit is formula_2 and the current noise into a short circuit is formula_3. The noise figure is defined as the ratio of the noise power at the output of the amplifier to the noise power that would be present at the output if the amplifier were noiseless (due to amplification of the thermal noise of the signal source). An equivalent definition is: the noise figure is the factor by which insertion of the amplifier degrades the signal to noise ratio. It is often expressed in decibels (dB). An amplifier with a 0 dB noise figure would be perfect. The noise properties of valves at audio frequencies can be modelled well by a perfect noiseless valve having a source of voltage noise in series with the grid. For the EF86 low-noise audio pentode valve, for example, this voltage noise is specified (see e.g., the Valvo, Telefunken or Philips data sheets) as 2 microvolts integrated over a frequency range of approximately 25 Hz to 10 kHz. (This refers to the integrated noise, see below for the frequency dependence of the noise spectral density.) This equals the voltage noise of a 25 kΩ resistor. Thus, if the signal source has an impedance of 25 kΩ or more, the noise of the valve is actually smaller than the noise of the source. For a source of 25 kΩ, the noise generated by valve and source are the same; the two uncorrelated noise powers simply add, so the total noise power at the output of the amplifier is twice the noise power at the output of the perfect amplifier, and the combined noise voltage is the square root of two times either contribution alone. The noise figure is then 2, or about 3 dB. For higher impedances, such as 250 kΩ, the EF86's noise power is only one tenth of the source's own noise power, and the noise figure is about 0.4 dB. For a low-impedance source of 250 Ω, on the other hand, the noise voltage contribution of the valve is 10 times larger than that of the signal source (a hundred times the noise power), and the noise figure is approximately 20 dB. To obtain a low noise figure, the impedance of the source can be increased by a transformer. This is eventually limited by the input capacitance of the valve, which sets a limit on how high the signal impedance can be made if a certain bandwidth is desired. The noise voltage density of a given valve is a function of frequency. At frequencies above 10 kHz or so, it is basically constant ("white noise"). White noise is often expressed by an equivalent noise resistance, which is defined as the resistance which produces the same voltage noise as present at the valve input. For triodes, it is approximately (2-3)/"g"m, where "g"m is the transconductance. For pentodes, it is higher, about (5-7)/"g"m. Valves with high "g"m thus tend to have lower noise at high frequencies. In the audio frequency range (below roughly 1–100 kHz, depending on the valve), "1/"f"" noise becomes dominant, rising like 1/"f" towards lower frequencies. Thus, valves with low noise at high frequency do not necessarily have low noise in the audio frequency range. For special low-noise audio valves, the frequency at which 1/"f" noise takes over is reduced as far as possible, maybe to something like a kilohertz. It can be reduced by choosing very pure materials for the cathode nickel, and running the valve at an optimized (generally low) anode current. Microphony.
Unlike solid-state devices, valves are assemblies of mechanical parts whose arrangement determines their functioning, and which cannot be totally rigid. If a valve is jarred, either by the equipment being moved or by acoustic vibrations from the loudspeakers or any other sound source, it will produce an output signal, as if it were some sort of microphone (the effect is consequently called microphony). All valves are subject to this to some extent; low-level voltage amplifier valves for audio are designed to be resistant to this effect, with extra internal supports. The EF86 mentioned in the context of noise is also designed for low microphony, though its high gain makes it particularly susceptible. Modern audiophile hi-fi amplification. For high-end audio, where cost is not the primary consideration, valve amplifiers have remained popular and indeed during the 1990s made a commercial resurgence. Circuits designed since then in most cases remain similar to circuits from the valve age, but benefit from advances in ancillary component quality (including capacitors) as well as general progress across the electronics industry, which gives designers increasingly powerful insight into circuit operation. Solid-state power supplies are more compact, efficient, and can have very good regulation. Semiconductor power amplifiers do not have the severe limitations on output power imposed by thermionic devices; accordingly loudspeaker design has evolved in the direction of smaller, more convenient loudspeakers, trading off power efficiency for small size, giving speakers of similar quality but smaller size which require much greater power for the same loudness than hitherto. In response, many modern valve push–pull amplifiers are more powerful than earlier designs, reflecting the need to drive inefficient speakers. Modern valve preamplifiers. When valve amplifiers were the norm, user-adjustable "tone controls" (a simple two-band non-graphic equaliser) and electronic filters were used to allow the listener to change frequency response according to taste and room acoustics; this has become uncommon. Some modern equipment uses graphic equalisers, but valve preamplifiers tend not to supply these facilities (except for RIAA and similar equalisation needed for vinyl and shellac discs). Modern signal sources, unlike vinyl discs, supply line level signals without need for equalisation. It is common to drive valve power amps directly from such sources, using passive volume and input source switching integrated into the amplifier, or with a minimalist "line level" control amplifier which is little more than passive volume and switching, plus a buffer amplifier stage to drive the interconnects. However, there is some small demand for valve preamps and filter circuits for studio microphone amplifiers, equalising preamplifiers for vinyl discs, and exceptionally for active crossovers. Modern valve power amplifiers. Commercial single-ended triode amplifiers. When valve amplifiers were the norm, SETs more-or-less disappeared from western products except for low-power designs (up to 5 watts), with push–pull indirectly heated triodes or triode-connected valves such as the EL84 becoming the norm. However, the far east never abandoned valves, and especially the SET circuit; indeed the extreme interest in all things audiophile in Japan and other far eastern countries sustained great interest in this approach.
Since the 1990s a niche market has developed again in the west for low-power commercial SET amplifiers (up to 7 watts), notably using the 300B valve in recent years, which has become fashionable and expensive. Lower-power amplifiers based on other vintage valve types such as the 2A3 and 45 are also made. Even more rarely, higher powered SETs are produced commercially, usually using the 211 or 845 transmitting valves, which are able to deliver 20 watts, operating at 1000 V. Notable amplifiers in this class are those from the Audio Note corporation (designed in Japan), including the "Ongaku", voted amplifier of the year during the late 1990s. A very small number of hand-built products of this class sell at very high prices (from US$10,000). The Wavac 833 may be the world's most expensive hi-fi amplifier, delivering around 150 watts using an 833A valve. Aside from this Wavac and a very few other high-power SETs, SET amplifiers usually need to be carefully paired with very efficient speakers, notably horn and transmission-line enclosures and full-range drivers such as those made by Klipsch and Lowther, which invariably have their own quirks, offsetting their advantages of very high efficiency and minimalism. Some companies such as the Chinese company "Ming Da" make low power SETs using valves other than the 300B, such as the KT90 (a development of the KT88) and up to the more powerful sister of the 845, the 805ASE, with output power of 40 watts over the full audio range from 20 Hz. This is made possible by an output transformer design which does not saturate at high levels and has high efficiency. Commercial push–pull (PP) amplifiers. Mainstream modern loudspeakers give good sound quality in a compact size, but are much less power-efficient than older designs and require powerful amplifiers to drive them. This makes them unsuitable for use with valve amplifiers, particularly lower-power single-ended designs. Valve hi-fi power amplifier designs since the 1970s have had to move mainly to class AB1 push–pull (PP) circuits. Tetrodes and pentodes, sometimes in ultra-linear configuration, with significant negative feedback, are the usual configuration. Some class A push–pull amplifiers are made commercially. Some amplifiers can be switched between classes A and AB; some can be switched into triode mode. Several major manufacturers remain active in the PP valve amplifier market. Hobbyist amplifier construction. The simplicity of valve amplifiers, especially single-ended designs, makes them viable for home construction, which has several advantages. Construction. Point-to-point hand-wiring tends to be used rather than circuit boards in low-volume high-end commercial constructions as well as by hobbyists. This construction style is satisfactory because of its ease of construction, because it suits the number of physically large and chassis-mounted components (valve sockets, large supply capacitors, transformers) and the need to twist heater wiring to minimise hum, and because, as a side effect, "flying" wiring minimises capacitive effects. One picture below shows a circuit constructed using "standard" modern industrial parts (630 V MKP capacitors/metal film resistors). One advantage a hobbyist has over a commercial producer is the ability to use higher quality parts that are not reliably available in production volumes (or at a commercially viable cost price). For example, the "silver top getter" Sylvania brown base 6SN7s in use in the external picture date from the 1960s.
Another picture shows exactly the same circuit constructed using Russian military production Teflon capacitors and non-inductive planar film resistors, of the same values. The wiring of a commercial amplifier is also shown for comparison Unusual designs. Very high power SETs. Very occasionally, very-high-power valves (usually designed for use in radio transmitters) from decades ago are pressed into service to create one-off SET designs (usually at very high cost). Examples include valves 211 and 833. The main problem with these designs is constructing output transformers able to sustain the plate current and resultant flux density without core saturation over the full audio-frequency spectrum. This problem increases with power level. Another problem is that the voltages for such amplifiers often pass well beyond 1 kV, which forms an effective disincentive to commercial products of this type. Parallel push–pull (PPP) amplifiers. Many modern commercial amplifiers (and some hobbyist constructions) place multiple pairs of output valves of readily obtainable types in parallel to increase power, operating from the same voltage required by a single pair. A beneficial side effect is that the output impedance of the valves, and thus the transformer turns ratio needed, is reduced, making it easier to construct a wide bandwidth transformer. Some high-power commercial amplifiers use arrays of standard valves (e.g. EL34, KT88) in the parallel push–pull (PPP) configuration (e.g. Jadis, Audio Research, McIntosh, Ampeg SVT). Some home-constructed amplifiers use pairs of high-power transmitting valves (e.g. 813) to yield 100 watts or more of output power per pair in class AB1 (ultra-linear). Output transformerless amplifiers (OTL). The output transformer (OPT) is a major component in all mainstream valve power amplifiers, accounting for significant cost, size, and weight. It is a compromise, balancing the needs for low stray capacitance, low losses in iron and copper, operation without saturation at the required direct current, good linearity, etc. One approach to avoid the problems of OPTs is to avoid the OPT entirely, and directly couple the amplifier to the loudspeaker, as is done with most solid-state amplifiers. Some designs without output transformers (OTLs) were produced by Julius Futterman in the 1960s and '70s, and more recently in different embodiments by others. Valves normally match much higher impedances than that of a loudspeaker. Low-impedance valve types and purpose-designed circuits are required. Reasonable efficiency and moderate Zout (damping factor) can be achieved. These effects mean that OTLs have selective speaker load requirements, just like any other amplifier. Generally a speaker of at least 8 ohms is required, although larger OTLs are often quite comfortable with 4 ohm loads. Electrostatic speakers (often considered difficult to drive) often work especially well with OTLs. The more recent and more successful OTL circuits employ an output circuit generally known as a Circlotron. The Circlotron has about one-half the output impedance of the Futterman-style (totem-pole) circuits. The Circlotron is fully symmetrical and does not require large amounts of feedback to reduce output impedance and distortion. Successful embodiments use the 6AS7G and the Russian 6C33-CB power triodes. A common myth is that a short-circuit in an output valve may result in the loudspeaker being connected directly across the power supply and destroyed. 
In practice, the older Futterman-style amplifiers have been known to damage speakers, due not to shorts but to oscillation. The Circlotron amplifiers often feature direct-coupled outputs, but proper engineering (with a few well-placed fuses) ensures that damage to a speaker is no more likely than with an output transformer. Modern OTLs are often more reliable, sound better, and are less expensive than many transformer-coupled valve approaches. Direct coupled amplifiers for electrostatics and headphones. In a sense this niche is a subset of OTLs however it merits treating separately because unlike an OTL for a loudspeaker, which has to push the extremes of a valve circuit's ability to deliver relatively high currents at low voltages into a low impedance load, some headphone types have impedances high enough for normal valve types to drive reasonably as OTLs, and in particular electrostatic loudspeakers and headphones which can be driven directly at hundreds of volts but minimal currents. Once more there are some safety issues associated with direct drive for electrostatic loudspeakers, which in extremis may use transmitting valves operating at over 1 kV. Such systems are potentially lethal. Class A push-pull amplifiers. Shunt Regulated Push-Pull amplifier (SRPP) and Mu-follower is a family of class A push-pull amplifiers featuring two valves. They form high gain and low output impedance inverting stage, providing very low non-linear distortion and great PSRR. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "k_B T B" }, { "math_id": 1, "text": "k_B" }, { "math_id": 2, "text": "\\sqrt{4 k_B\\cdot T\\cdot B\\cdot R}" }, { "math_id": 3, "text": "\\sqrt{4 k_B\\cdot T\\cdot B/R}" } ]
https://en.wikipedia.org/wiki?curid=11347091
11347127
Borell–Brascamp–Lieb inequality
In mathematics, the Borell–Brascamp–Lieb inequality is an integral inequality due to many different mathematicians but named after Christer Borell, Herm Jan Brascamp and Elliott Lieb. The result was proved for "p" &gt; 0 by Henstock and Macbeath in 1953. The case "p" = 0 is known as the Prékopa–Leindler inequality and was re-discovered by Brascamp and Lieb in 1976, when they proved the general version below; working independently, Borell had done the same in 1975. The nomenclature of "Borell–Brascamp–Lieb inequality" is due to Cordero-Erausquin, McCann and Schmuckenschläger, who in 2001 generalized the result to Riemannian manifolds such as the sphere and hyperbolic space. Statement of the inequality in R"n". Let 0 &lt; "λ" &lt; 1, let −1 / "n" ≤ "p" ≤ +∞, and let "f", "g", "h" : R"n" → [0, +∞) be integrable functions such that, for all "x" and "y" in R"n", formula_0 where formula_1 and formula_2. Then formula_3
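The case "p" = 0 can be illustrated numerically. The sketch below (an illustration only, not part of the original statement) takes "f" = "g" = "h" to be the one-dimensional Gaussian kernel, for which the pointwise hypothesis holds by convexity of the square, and checks the resulting integral inequality on a grid:

import numpy as np

lam = 0.3
x = np.linspace(-8.0, 8.0, 4001)
dx = x[1] - x[0]
gauss = np.exp(-x**2 / 2)          # take f = g = h = exp(-x^2/2)

# Pointwise hypothesis h((1-lam)*x + lam*y) >= f(x)**(1-lam) * g(y)**lam,
# checked on a coarse grid of (x, y) pairs; it holds because t -> t**2 is convex.
pts = np.linspace(-4.0, 4.0, 81)
hypothesis_ok = all(
    np.exp(-((1 - lam) * u + lam * v) ** 2 / 2)
    >= np.exp(-u**2 / 2) ** (1 - lam) * np.exp(-v**2 / 2) ** lam - 1e-12
    for u in pts for v in pts
)

# Integral inequality: int h >= M_0(int f, int g, lam) = (int f)**(1-lam) * (int g)**lam.
lhs = gauss.sum() * dx
rhs = (gauss.sum() * dx) ** (1 - lam) * (gauss.sum() * dx) ** lam
print(hypothesis_ok, lhs >= rhs - 1e-9)    # True True (here the two sides coincide)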
[ { "math_id": 0, "text": "h \\left( (1 - \\lambda) x + \\lambda y \\right) \\geq M_{p} \\left( f(x), g(y), \\lambda \\right)," }, { "math_id": 1, "text": "\n\\begin{align}\nM_{p} (a, b, \\lambda) =\n\\begin{cases}\n &\\left( (1 - \\lambda) a^{p} + \\lambda b^{p} \\right)^{1/p} \\; \\quad \\text{if} \\quad ab\\neq 0\\\\\n&0 \\quad \\text{if} \\quad ab=0\n\\end{cases}\n\\end{align}\n" }, { "math_id": 2, "text": "M_{0}(a,b,\\lambda) = a^{1-\\lambda}b^{\\lambda}" }, { "math_id": 3, "text": "\\int_{\\mathbb{R}^{n}} h(x) \\, \\mathrm{d} x \\geq M_{p / (n p + 1)} \\left( \\int_{\\mathbb{R}^{n}} f(x) \\, \\mathrm{d} x, \\int_{\\mathbb{R}^{n}} g(x) \\, \\mathrm{d} x, \\lambda \\right)." } ]
https://en.wikipedia.org/wiki?curid=11347127
113507
Spatial anti-aliasing
Minimising distortion artifacts when representing a high-resolution image at a lower resolution In digital signal processing, spatial anti-aliasing is a technique for minimizing the distortion artifacts (aliasing) when representing a high-resolution image at a lower resolution. Anti-aliasing is used in digital photography, computer graphics, digital audio, and many other applications. Anti-aliasing means removing signal components that have a higher frequency than is able to be properly resolved by the recording (or sampling) device. This removal is done before (re)sampling at a lower resolution. When sampling is performed without removing this part of the signal, it causes undesirable artifacts such as black-and-white noise. In signal acquisition and audio, anti-aliasing is often done using an analog anti-aliasing filter to remove the out-of-band component of the input signal prior to sampling with an analog-to-digital converter. In digital photography, optical anti-aliasing filters made of birefringent materials smooth the signal in the spatial optical domain. The anti-aliasing filter essentially blurs the image slightly in order to reduce the resolution to or below that achievable by the digital sensor (the larger the pixel pitch, the lower the achievable resolution at the sensor level). Examples. In computer graphics, anti-aliasing improves the appearance of "jagged" polygon edges, or "jaggies", so they are smoothed out on the screen. However, it incurs a performance cost for the graphics card and uses more video memory. The level of anti-aliasing determines how smooth polygon edges are (and how much video memory it consumes). Near the top of an image with a receding checker-board pattern, the image is both difficult to recognise and not aesthetically appealing. In contrast, when anti-aliased the checker-board near the top blends into grey, which is usually the desired effect when the resolution is insufficient to show the detail. Even near the bottom of the image, the edges appear much smoother in the anti-aliased image. Multiple methods exist, including the sinc filter, which is considered a better anti-aliasing algorithm. When magnified, it can be seen how anti-aliasing interpolates the brightness of the pixels at the boundaries to produce grey pixels since the space is occupied by both black and white tiles. These help make the sinc filter antialiased image appear much smoother than the original. In a simple diamond image, anti-aliasing blends the boundary pixels; this reduces the aesthetically jarring effect of the sharp, step-like boundaries that appear in the aliased graphic. Anti-aliasing is often applied in rendering text on a computer screen, to suggest smooth contours that better emulate the appearance of text produced by conventional ink-and-paper printing. Particularly with fonts displayed on typical LCD screens, it is common to use subpixel rendering techniques like ClearType. Sub-pixel rendering requires special colour-balanced anti-aliasing filters to turn what would be severe colour distortion into barely-noticeable colour fringes. Equivalent results can be had by making individual sub-pixels addressable as if they were full pixels, and supplying a hardware-based anti-aliasing filter as is done in the OLPC XO-1 laptop's display controller. Pixel geometry affects all of this, whether the anti-aliasing and sub-pixel addressing are done in software or hardware. Simplest approach to anti-aliasing. 
The most basic approach to anti-aliasing a pixel is determining what percentage of the pixel is occupied by a given region in the vector graphic - in this case a pixel-sized square, possibly transposed over several pixels - and using that percentage as the colour. A very basic plot of a single, white-on-black anti-aliased point using that method can be done as follows:

from math import ceil, floor

def plot_antialiased_point(x: float, y: float):
    """Plot a single, white-on-black anti-aliased point."""
    for rounded_x in range(floor(x), ceil(x) + 1):
        for rounded_y in range(floor(y), ceil(y) + 1):
            percent_x = 1 - abs(x - rounded_x)
            percent_y = 1 - abs(y - rounded_y)
            percent = percent_x * percent_y  # coverage of this pixel, in the range 0-1
            # draw_pixel is assumed to be provided by the rendering back end
            draw_pixel(coordinates=(rounded_x, rounded_y), color=percent)

This method is generally best suited for simple graphics, such as basic lines or curves, and applications that would otherwise have to convert absolute coordinates to pixel-constrained coordinates, such as 3D graphics. It is a fairly fast function, but it is relatively low-quality, and gets slower as the complexity of the shape increases. For purposes requiring very high-quality graphics or very complex vector shapes, this will probably not be the best approach. Note: The draw_pixel routine above cannot blindly set the colour value to the percent calculated. It must add the new value to the existing value at that location up to a maximum of 1. Otherwise, the brightness of each pixel will simply be whatever value was calculated last for that location, which produces a very bad result. For example, if one point sets a brightness level of 0.90 for a given pixel and another point calculated later barely touches that pixel and has a brightness of 0.05, the final value set for that pixel should be 0.95, not 0.05. For more sophisticated shapes, the algorithm may be generalized as rendering the shape to a pixel grid with higher resolution than the target display surface (usually a multiple that is a power of 2 to reduce distortion), then using bicubic interpolation to determine the average intensity of each real pixel on the display surface. Signal processing approach to anti-aliasing. In this approach, the ideal image is regarded as a "signal". The image displayed on the screen is taken as samples, at each ("x,y") pixel position, of a filtered version of the signal. Ideally, one would understand how the human brain would process the original signal, and provide an on-screen image that will yield the most similar response by the brain. The most widely accepted analytic tool for such problems is the Fourier transform; this decomposes a signal into basis functions of different frequencies, known as frequency components, and gives us the amplitude of each frequency component in the signal. The waves are of the form: formula_0 where "j" and "k" are arbitrary non-negative integers. There are also frequency components involving the sine functions in one or both dimensions, but for the purpose of this discussion, the cosine will suffice. The numbers "j" and "k" together are the "frequency" of the component: "j" is the frequency in the "x" direction, and "k" is the frequency in the "y" direction. The goal of an anti-aliasing filter is to greatly reduce frequencies above a certain limit, known as the Nyquist frequency, so that the signal will be accurately represented by its samples, or nearly so, in accordance with the sampling theorem; there are many different choices of detailed algorithm, with different filter transfer functions.
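As a concrete illustration of the supersample-then-filter idea (an added sketch, not part of the original text), the crudest possible choice of filter is a box average over each block of supersampled values; the sketch assumes NumPy, a greyscale image, and dimensions that are exact multiples of the factor.

import numpy as np

def downsample_box(hi_res: np.ndarray, factor: int = 4) -> np.ndarray:
    """Average each factor-by-factor block of a supersampled greyscale image.

    A box filter is the simplest (and weakest) anti-aliasing low-pass filter;
    better results come from kernels shaped closer to the ideal transfer function.
    """
    h, w = hi_res.shape
    blocks = hi_res.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))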
Current knowledge of human visual perception is not sufficient, in general, to say what approach will look best. Two dimensional considerations. The previous discussion assumes that the rectangular mesh sampling is the dominant part of the problem. The filter usually considered optimal is not rotationally symmetrical, as shown in this first figure; this is because the data is sampled on a square lattice, not using a continuous image. This sampling pattern is the justification for doing signal processing along each axis, as it is traditionally done on one dimensional data. Lanczos resampling is based on convolution of the data with a discrete representation of the sinc function. If the resolution is not limited by the rectangular sampling rate of either the source or target image, then one should ideally use rotationally symmetrical filter or interpolation functions, as though the data were a two dimensional function of continuous x and y. The sinc function of the radius has too long a tail to make a good filter (it is not even square-integrable). A more appropriate analog to the one-dimensional sinc is the two-dimensional Airy disc amplitude, the 2D Fourier transform of a circular region in 2D frequency space, as opposed to a square region. One might consider a Gaussian plus enough of its second derivative to flatten the top (in the frequency domain) or sharpen it up (in the spatial domain), as shown. Functions based on the Gaussian function are natural choices, because convolution with a Gaussian gives another Gaussian whether applied to x and y or to the radius. Similarly to wavelets, another of its properties is that it is halfway between being localized in the configuration (x and y) and in the spectral (j and k) representation. As an interpolation function, a Gaussian alone seems too spread out to preserve the maximum possible detail, and thus the second derivative is added. As an example, when printing a photographic negative with plentiful processing capability and on a printer with a hexagonal pattern, there is no reason to use sinc function interpolation. Such interpolation would treat diagonal lines differently from horizontal and vertical lines, which is like a weak form of aliasing. Practical real-time anti-aliasing approximations. There are only a handful of primitives used at the lowest level in a real-time rendering engine (either software or hardware accelerated). These include "points", "lines" and "triangles". If one is to draw such a primitive in white against a black background, it is possible to design such a primitive to have fuzzy edges, achieving some sort of anti-aliasing. However, this approach has difficulty dealing with adjacent primitives (such as triangles that share an edge). To approximate the uniform averaging algorithm, one may use an extra buffer for sub-pixel data. The initial (and least memory-hungry) approach used 16 extra bits per pixel, in a 4 × 4 grid. If one renders the primitives in a careful order, such as front-to-back, it is possible to create a reasonable image. Since this requires that the primitives be in some order, and hence interacts poorly with an application programming interface such as OpenGL, the latest methods simply have two or more full sub-pixels per pixel, including full color information for each sub-pixel. Some information may be shared between the sub-pixels (such as the Z-buffer.) Mipmapping. 
There is also an approach specialised for texture mapping called mipmapping, which works by creating lower resolution, pre-filtered versions of the texture map. When rendering the image, the appropriate-resolution mipmap is chosen and hence the texture pixels (texels) are already filtered when they arrive on the screen. Mipmapping is generally combined with various forms of texture filtering in order to improve the final result. Example of an image with extreme pseudo-random aliasing. Because fractals have unlimited detail and no noise other than arithmetic round-off error, they illustrate aliasing more clearly than do photographs or other measured data. The escape times, which are converted to colours at the exact centres of the pixels, go to infinity at the border of the set, so colours from centres near borders are unpredictable, due to aliasing. This example has edges in about half of its pixels, so it shows much aliasing. The first image is uploaded at its original sampling rate. (Since most modern software anti-aliases, one may have to download the full-size version to see all of the aliasing.) The second image is calculated at five times the sampling rate and down-sampled with anti-aliasing. Assuming that one would really like something like the average colour over each pixel, this one is getting closer. It is clearly more orderly than the first. In order to properly compare these images, viewing them at full-scale is necessary. It happens that, in this case, there is additional information that can be used. By re-calculating with a "distance estimator" algorithm, points were identified that are very close to the edge of the set, so that unusually fine detail is aliased in from the rapidly changing escape times near the edge of the set. The colours derived from these calculated points have been identified as unusually unrepresentative of their pixels. The set changes more rapidly there, so a single point sample is less representative of the whole pixel. Those points were replaced, in the third image, by interpolating the points around them. This reduces the noisiness of the image but has the side effect of brightening the colours. So this image is not exactly the same that would be obtained with an even larger set of calculated points. To show what was discarded, the rejected points, blended into a grey background, are shown in the fourth image. Finally, "Budding Turbines" is so regular that systematic (Moiré) aliasing can clearly be seen near the main "turbine axis" when it is downsized by taking the nearest pixel. The aliasing in the first image appears random because it comes from all levels of detail, below the pixel size. When the lower level aliasing is suppressed, to make the third image and then that is down-sampled once more, without anti-aliasing, to make the fifth image, the order on the scale of the third image appears as systematic aliasing in the fifth image. Pure down-sampling of an image has the following effect (viewing at full-scale is recommended): Super sampling / full-scene anti-aliasing. Super sampling anti-aliasing (SSAA), also called full-scene anti-aliasing (FSAA), is used to avoid aliasing (or "jaggies") on full-screen images. SSAA was the first type of anti-aliasing available with early video cards. But due to its tremendous computational cost and the advent of multisample anti-aliasing (MSAA) support on GPUs, it is no longer widely used in real time applications. 
MSAA provides somewhat lower graphic quality, but also tremendous savings in computational power. The resulting image of SSAA may seem softer, and should also appear more realistic. However, while useful for photo-like images, a simple anti-aliasing approach (such as super-sampling and then averaging) may actually worsen the appearance of some types of line art or diagrams (making the image appear fuzzy), especially where most lines are horizontal or vertical. In these cases, a prior grid-fitting step may be useful (see hinting). In general, super-sampling is a technique of collecting data points at a greater resolution (usually by a power of two) than the final data resolution. These data points are then combined (down-sampled) to the desired resolution, often just by a simple average. The combined data points have less visible aliasing artifacts (or moiré patterns). Full-scene anti-aliasing by super-sampling usually means that each full frame is rendered at double (2x) or quadruple (4x) the display resolution, and then down-sampled to match the display resolution. Thus, a 2x FSAA would render 4 super-sampled pixels for each single pixel of each frame. Rendering at larger resolutions will produce better results; however, more processor power is needed, which can degrade performance and frame rate. Sometimes FSAA is implemented in hardware in such a way that a graphical application is unaware the images are being super-sampled and then down-sampled before being displayed. Object-based anti-aliasing. A graphics rendering system creates an image based on objects constructed of polygonal primitives; the aliasing effects in the image can be reduced by applying an anti-aliasing scheme only to the areas of the image representing silhouette edges of the objects. The silhouette edges are anti-aliased by creating anti-aliasing primitives which vary in opacity. These anti-aliasing primitives are joined to the silhouetted edges, and create a region in the image where the objects appear to blend into the background. The method has some important advantages over classical methods based on the accumulation buffer since it generates full-scene anti-aliasing in only two passes and does not require the use of additional memory required by the accumulation buffer. Object-based anti-aliasing was first developed at Silicon Graphics for their Indy workstation. Anti-aliasing and gamma compression. Digital images are usually stored in a gamma-compressed format, but most optical anti-aliasing filters are linear. So to down-sample an image in a way that would match optical blurring, one should first convert it to a linear format, then apply the anti-aliasing filter, and finally convert it back to a gamma compressed format. Using linear arithmetic on a gamma-compressed image results in values which are slightly different from the ideal filter. This error is larger when dealing with high contrast areas, causing high contrast areas to become dimmer: bright details (such as a cat's whiskers) become visually thinner, and dark details (such as tree branches) become thicker, relative to the optically anti-aliased image. Each pixel is individually distorted, meaning outlines become unsmooth after anti-aliasing. Because the conversion to and from a linear format greatly slows down the process, and because the differences are usually subtle, most image editing software, including Final Cut Pro and Adobe Photoshop, process images in the gamma-compressed domain. 
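The linear-light workflow described above can be sketched as follows (an added illustration, not from the original text); it assumes NumPy, greyscale pixel values in the range 0-1, and uses a plain power of 2.2 as a rough stand-in for the exact sRGB transfer curve.

import numpy as np

def downsample_gamma_correct(img: np.ndarray, factor: int = 2, gamma: float = 2.2) -> np.ndarray:
    """Decode to (approximately) linear light, box-average, then re-encode."""
    linear = img ** gamma                      # gamma-compressed -> roughly linear
    h, w = linear.shape
    blocks = linear.reshape(h // factor, factor, w // factor, factor)
    filtered = blocks.mean(axis=(1, 3))        # the anti-aliasing (box) filter
    return filtered ** (1.0 / gamma)           # back to the gamma-compressed domain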
Most modern GPUs support storing textures in memory in sRGB format, and can perform transformation to linear space and back transparently, with essentially no loss in performance. History. Important early works in the history of anti-aliasing include: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\ \\cos (2j \\pi x) \\cos (2k \\pi y)" } ]
https://en.wikipedia.org/wiki?curid=113507
11350899
Unduloid
In geometry, an unduloid, or onduloid, is a surface with constant nonzero mean curvature obtained as a surface of revolution of an elliptic catenary: that is, by rolling an ellipse along a fixed line, tracing the focus, and revolving the resulting curve around the line. In 1841 Delaunay proved that the only surfaces of revolution with constant mean curvature were the surfaces obtained by rotating the roulettes of the conics. These are the plane, cylinder, sphere, the catenoid, the unduloid and nodoid. Formula. Let formula_0 represent the normal Jacobi sine function and formula_1 be the normal Jacobi elliptic function and let formula_2 represent the normal elliptic integral of the first kind and formula_3 represent the normal elliptic integral of the second kind. Let "a" be the length of the ellipse's major axis, and "e" be the eccentricity of the ellipse. Let "k" be a fixed value between 0 and 1 called the modulus. Given these variables, formula_4 formula_5 The formula for the surface of revolution that is the unduloid is then formula_6 Properties. One interesting property of the unduloid is that the mean curvature is constant. In fact, the mean curvature across the entire surface is always the reciprocal of twice the major axis length: 1/(2"a"). Also, geodesics on an unduloid obey the Clairaut relation, and their behavior is therefore predictable. Occurrence in material science. Unduloids are not a common pattern in nature. However, there are a few circumstances in which they form. First documented in 1970, passing a strong electric current through a thin (0.16—1.0mm), horizontally mounted, hard-drawn (non-tempered) silver wire will result in unduloids forming along its length. This phenomenon was later discovered to also occur in molybdenum wire. Unduloids have also been formed with ferrofluids. By passing a current axially through a cylinder coated with a viscous magnetic fluid film, the magnetic dipoles of the fluid interact with the magnetic field of the current, creating a droplet pattern along the cylinder’s length. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\operatorname{sn}(u,k)" }, { "math_id": 1, "text": "\\operatorname{dn}(u,k)" }, { "math_id": 2, "text": "\\operatorname{F}(z,k)" }, { "math_id": 3, "text": "\\operatorname{E}(z,k)" }, { "math_id": 4, "text": "\\operatorname{x}(u) = -a(1-e)( \\operatorname{F}(\\operatorname{sn}(u,k),k) + \\operatorname{F}(1,k)) - a(1+e)( \\operatorname{E}( \\operatorname{sn}(u,k),k) + \\operatorname{E}(1,k)) \\, " }, { "math_id": 5, "text": "\\operatorname{y}(u) = a(1+e)\\operatorname{dn}(u,k) \\, " }, { "math_id": 6, "text": "\\operatorname{X}(u,v) = \\langle \\operatorname{x}(u), \\operatorname{y}(u) \\cos(v), \\operatorname{y}(u) \\sin(v)\\rangle \\, " } ]
https://en.wikipedia.org/wiki?curid=11350899
11351089
Propensity probability
Interpretation of probability The propensity theory of probability is a probability interpretation in which the probability is thought of as a physical propensity, disposition, or tendency of a given type of situation to yield an outcome of a certain kind, or to yield a long-run relative frequency of such an outcome. Propensities are not relative frequencies, but purported "causes" of the observed stable relative frequencies. Propensities are invoked to "explain why" repeating a certain kind of experiment will generate a given outcome type at a persistent rate. Stable long-run frequencies are a manifestation of invariant "single-case" probabilities. Frequentists are unable to take this approach, since relative frequencies do not exist for single tosses of a coin, but only for large ensembles or collectives. These single-case probabilities are known as propensities or chances. In addition to explaining the emergence of stable relative frequencies, the idea of propensity is motivated by the desire to make sense of single-case probability attributions in quantum mechanics, such as the probability of decay of a particular atom at a particular moment. History. A propensity theory of probability was given by Charles Sanders Peirce. Karl Popper. A later propensity theory was proposed by philosopher Karl Popper, who had only slight acquaintance with the writings of Charles S. Peirce, however. Popper noted that the outcome of a physical experiment is produced by a certain set of "generating conditions". When we repeat an experiment, as the saying goes, we really perform another experiment with a (more or less) similar set of generating conditions. To say that a set of generating conditions "G" has propensity "p" of producing the outcome "E" means that those exact conditions, if repeated indefinitely, would produce an outcome sequence in which "E" occurred with limiting relative frequency "p". Thus the propensity p for E to occur depends upon G:formula_0. For Popper then, a deterministic experiment would have propensity 0 or 1 for each outcome, since those generating conditions would have the same outcome on each trial. In other words, non-trivial propensities (those that differ from 0 and 1) imply something less than determinism and yet still causal dependence on the generating conditions. Recent work. A number of other philosophers, including David Miller and Donald A. Gillies, have proposed propensity theories somewhat similar to Popper's, in that propensities are defined in terms of either long-run or infinitely long-run relative frequencies. Other propensity theorists ("e.g." Ronald Giere) do not explicitly define propensities at all, but rather see propensity as defined by the theoretical role it plays in science. They argue, for example, that physical magnitudes such as electrical charge cannot be explicitly defined either, in terms of more basic things, but only in terms of what they do (such as attracting and repelling other electrical charges). In a similar way, propensity is whatever fills the various roles that physical probability plays in science. Other theories have been offered by D. H. Mellor, and Ian Hacking. Ballentine developed an axiomatic propensity theory building on the work of Paul Humphreys. They show that the causal nature of the condition in propensity conflicts with an axiom needed for Bayes' theorem. Principal principle of David Lewis. What roles does physical probability play in science? What are its properties? 
One central property of chance is that, when known, it constrains rational belief to take the same numerical value. David Lewis called this the principal principle. Roughly stated, the principle says that a rational agent's credence in an outcome, conditional on knowing the objective chance of that outcome, should equal that chance. Thus, for example, suppose you are certain that a particular biased coin has propensity 0.32 to land heads every time it is tossed. What is then the correct credence? According to the Principal Principle, the correct credence is 0.32. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "Pr(E, G)=p" } ]
https://en.wikipedia.org/wiki?curid=11351089
113515
Black–Scholes model
Mathematical model of financial markets The Black–Scholes or Black–Scholes–Merton model is a mathematical model for the dynamics of a financial market containing derivative investment instruments. From the parabolic partial differential equation in the model, known as the Black–Scholes equation, one can deduce the Black–Scholes formula, which gives a theoretical estimate of the price of European-style options and shows that the option has a "unique" price given the risk of the security and its expected return (instead replacing the security's expected return with the risk-neutral rate). The equation and model are named after economists Fischer Black and Myron Scholes. Robert C. Merton, who first wrote an academic paper on the subject, is sometimes also credited. The main principle behind the model is to hedge the option by buying and selling the underlying asset in a specific way to eliminate risk. This type of hedging is called "continuously revised delta hedging" and is the basis of more complicated hedging strategies such as those used by investment banks and hedge funds. The model is widely used, although often with some adjustments, by options market participants. The model's assumptions have been relaxed and generalized in many directions, leading to a plethora of models that are currently used in derivative pricing and risk management. The insights of the model, as exemplified by the Black–Scholes formula, are frequently used by market participants, as distinguished from the actual prices. These insights include no-arbitrage bounds and risk-neutral pricing (thanks to continuous revision). Further, the Black–Scholes equation, a partial differential equation that governs the price of the option, enables pricing using numerical methods when an explicit formula is not possible. The Black–Scholes formula has only one parameter that cannot be directly observed in the market: the average future volatility of the underlying asset, though it can be found from the price of other options. Since the option value (whether put or call) is increasing in this parameter, it can be inverted to produce a "volatility surface" that is then used to calibrate other models, e.g. for OTC derivatives. History. Economists Fischer Black and Myron Scholes demonstrated in 1968 that a dynamic revision of a portfolio removes the expected return of the security, thus inventing the "risk neutral argument". They based their thinking on work previously done by market researchers and practitioners including Paul Samuelson, Louis Bachelier, Sheen Kassouf, Edward O. Thorp and Case Sprenkle. Black and Scholes then attempted to apply the formula to the markets, but incurred financial losses, due to a lack of risk management in their trades. In 1970, they decided to return to the academic environment. After three years of efforts, the formula—named in honor of them for making it public—was finally published in 1973 in an article titled "The Pricing of Options and Corporate Liabilities", in the "Journal of Political Economy". Robert C. Merton was the first to publish a paper expanding the mathematical understanding of the options pricing model, and coined the term "Black–Scholes options pricing model". The formula led to a boom in options trading and provided mathematical legitimacy to the activities of the Chicago Board Options Exchange and other options markets around the world. 
Merton and Scholes received the 1997 Nobel Memorial Prize in Economic Sciences for their work, the committee citing their discovery of the risk neutral dynamic revision as a breakthrough that separates the option from the risk of the underlying security. Although ineligible for the prize because of his death in 1995, Black was mentioned as a contributor by the Swedish Academy. Fundamental hypotheses. The Black–Scholes model assumes that the market consists of at least one risky asset, usually called the stock, and one riskless asset, usually called the money market, cash, or bond. The following assumptions are made about the assets (which relate to the names of the assets): The assumptions about the market are: With these assumptions, suppose there is a derivative security also trading in this market. It is specified that this security will have a certain payoff at a specified date in the future, depending on the values taken by the stock up to that date. Even though the path the stock price will take in the future is unknown, the derivative's price can be determined at the current time. For the special case of a European call or put option, Black and Scholes showed that "it is possible to create a hedged position, consisting of a long position in the stock and a short position in the option, whose value will not depend on the price of the stock". Their dynamic hedging strategy led to a partial differential equation which governs the price of the option. Its solution is given by the Black–Scholes formula. Several of these assumptions of the original model have been removed in subsequent extensions of the model. Modern versions account for dynamic interest rates (Merton, 1976), transaction costs and taxes (Ingersoll, 1976), and dividend payout. Notation. The notation used in the analysis of the Black-Scholes model is defined as follows (definitions grouped by subject): General and market related: formula_0 is a time in years; with formula_1 generally representing the present year. formula_2 is the annualized risk-free interest rate, continuously compounded (also known as the "force of interest"). Asset related: formula_3 is the price of the underlying asset at time "t", also denoted as formula_4. formula_5 is the drift rate of formula_6, annualized. formula_7 is the standard deviation of the stock's returns. This is the square root of the quadratic variation of the stock's log price process, a measure of its volatility. Option related: formula_8 is the price of the option as a function of the underlying asset "S" at time "t," in particular: formula_9 is the price of a European call option and formula_10 is the price of a European put option. formula_11 is the time of option expiration. formula_12 is the time until maturity: formula_13. formula_14 is the strike price of the option, also known as the exercise price. formula_15 denotes the standard normal cumulative distribution function: formula_16 formula_17 denotes the standard normal probability density function: formula_18 Black–Scholes equation. The Black–Scholes equation is a parabolic partial differential equation that describes the price formula_19 of the option, where formula_6 is the price of the underlying and formula_0 is time: formula_20 A key financial insight behind the equation is that one can perfectly hedge the option by buying and selling the underlying asset and the bank account asset (cash) in such a way as to "eliminate risk". 
This implies that there is a unique price for the option given by the Black–Scholes formula (see the next section). Black–Scholes formula. The Black–Scholes formula calculates the price of European put and call options. This price is consistent with the Black–Scholes equation. This follows since the formula can be obtained by solving the equation for the corresponding terminal and boundary conditions: formula_21 The value of a call option for a non-dividend-paying underlying stock in terms of the Black–Scholes parameters is: formula_22 The price of a corresponding put option based on put–call parity with discount factor formula_23 is: formula_24 Alternative formulation. Introducing auxiliary variables allows for the formula to be simplified and reformulated in a form that can be more convenient (this is a special case of the Black '76 formula): formula_25 where: formula_26 is the discount factor formula_27 is the forward price of the underlying asset, and formula_28 Given put–call parity, which is expressed in these terms as: formula_29 the price of a put option is: formula_30 Interpretation. It is possible to have intuitive interpretations of the Black–Scholes formula, with the main subtlety being the interpretation of formula_31 and why there are two different terms. The formula can be interpreted by first decomposing a call option into the difference of two binary options: an asset-or-nothing call minus a cash-or-nothing call (long an asset-or-nothing call, short a cash-or-nothing call). A call option exchanges cash for an asset at expiry, while an asset-or-nothing call just yields the asset (with no cash in exchange) and a cash-or-nothing call just yields cash (with no asset in exchange). The Black–Scholes formula is a difference of two terms, and these two terms are equal to the values of the binary call options. These binary options are less frequently traded than vanilla call options, but are easier to analyze. Thus the formula: formula_32 breaks up as: formula_33 where formula_34 is the present value of an asset-or-nothing call and formula_35 is the present value of a cash-or-nothing call. The "D" factor is for discounting, because the expiration date is in future, and removing it changes "present" value to "future" value (value at expiry). Thus formula_36 is the future value of an asset-or-nothing call and formula_37 is the future value of a cash-or-nothing call. In risk-neutral terms, these are the expected value of the asset and the expected value of the cash in the risk-neutral measure. A naive, and slightly incorrect, interpretation of these terms is that formula_38 is the probability of the option expiring in the money formula_39, multiplied by the value of the underlying at expiry "F," while formula_40 is the probability of the option expiring in the money formula_41 multiplied by the value of the cash at expiry "K." This interpretation is incorrect because either both binaries expire in the money or both expire out of the money (either cash is exchanged for the asset or it is not), but the probabilities formula_39 and formula_42 are not equal. In fact, formula_31 can be interpreted as measures of moneyness (in standard deviations) and formula_43 as probabilities of expiring ITM ("percent moneyness"), in the respective numéraire, as discussed below. 
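The decomposition into an asset-or-nothing and a cash-or-nothing term can be made concrete with a small numerical sketch (added here for illustration, not part of the original text); it assumes SciPy for the standard normal CDF and writes the terms in spot rather than forward form.

from math import exp, log, sqrt
from scipy.stats import norm

def black_scholes_call(S: float, K: float, T: float, r: float, sigma: float) -> float:
    """European call price as the difference of its two binary components."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    asset_or_nothing = S * norm.cdf(d1)               # D * N(d+) * F, written in spot terms
    cash_or_nothing = K * exp(-r * T) * norm.cdf(d2)  # D * N(d-) * K
    return asset_or_nothing - cash_or_nothing

The corresponding put price follows from put-call parity rather than from a separate derivation.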
Simply put, the interpretation of the cash option, formula_40, is correct, as the value of the cash is independent of movements of the underlying asset, and thus can be interpreted as a simple product of "probability times value", while the formula_38 is more complicated, as the probability of expiring in the money and the value of the asset at expiry are not independent. More precisely, the value of the asset at expiry is variable in terms of cash, but is constant in terms of the asset itself (a fixed quantity of the asset), and thus these quantities are independent if one changes numéraire to the asset rather than cash. If one uses spot "S" instead of forward "F," in formula_31 instead of the formula_44 term there is formula_45 which can be interpreted as a drift factor (in the risk-neutral measure for appropriate numéraire). The use of "d"− for moneyness rather than the standardized moneyness formula_46 – in other words, the reason for the formula_44 factor – is due to the difference between the median and mean of the log-normal distribution; it is the same factor as in Itō's lemma applied to geometric Brownian motion. In addition, another way to see that the naive interpretation is incorrect is that replacing formula_39 by formula_42 in the formula yields a negative value for out-of-the-money call options. In detail, the terms formula_47 are the "probabilities of the option expiring in-the-money" under the equivalent exponential martingale probability measure (numéraire=stock) and the equivalent martingale probability measure (numéraire=risk free asset), respectively. The risk neutral probability density for the stock price formula_48 is formula_49 where formula_50 is defined as above. Specifically, formula_42 is the probability that the call will be exercised provided one assumes that the asset drift is the risk-free rate. formula_39, however, does not lend itself to a simple probability interpretation. formula_51 is correctly interpreted as the present value, using the risk-free interest rate, of the expected asset price at expiration, given that the asset price at expiration is above the exercise price. For related discussion – and graphical representation – see Datar–Mathews method for real option valuation. The equivalent martingale probability measure is also called the risk-neutral probability measure. Note that both of these are "probabilities" in a measure theoretic sense, and neither of these is the true probability of expiring in-the-money under the real-world probability measure. To calculate the probability under the real ("physical") probability measure, additional information is required—the drift term in the physical measure, or equivalently, the market price of risk. Derivations. A standard derivation for solving the Black–Scholes PDE is given in the article Black–Scholes equation. The Feynman–Kac formula says that the solution to this type of PDE, when discounted appropriately, is actually a martingale. Thus the option price is the expected value of the discounted payoff of the option. Computing the option price via this expectation is the risk neutrality approach and can be done without knowledge of PDEs. Note the expectation of the option payoff is not done under the real world probability measure, but an artificial risk-neutral measure, which differs from the real world measure. For the underlying logic see the section "risk neutral valuation" under Rational pricing, as well as the section on derivatives pricing under Mathematical finance; for details, once again, see Hull. The Options Greeks. 
"The Greeks" measure the sensitivity of the value of a derivative product or a financial portfolio to changes in parameter values while holding the other parameters fixed. They are partial derivatives of the price with respect to the parameter values. One Greek, "gamma" (as well as others not listed here) is a partial derivative of another Greek, "delta" in this case. The Greeks are important not only in the mathematical theory of finance, but also for those actively trading. Financial institutions will typically set (risk) limit values for each of the Greeks that their traders must not exceed. Delta is the most important Greek since this usually confers the largest risk. Many traders will zero their delta at the end of the day if they are not speculating on the direction of the market and following a delta-neutral hedging approach as defined by Black–Scholes. When a trader seeks to establish an effective delta-hedge for a portfolio, the trader may also seek to neutralize the portfolio's gamma, as this will ensure that the hedge will be effective over a wider range of underlying price movements. The Greeks for Black–Scholes are given in closed form below. They can be obtained by differentiation of the Black–Scholes formula. Note that from the formulae, it is clear that the gamma is the same value for calls and puts and so too is the vega the same value for calls and puts options. This can be seen directly from put–call parity, since the difference of a put and a call is a forward, which is linear in "S" and independent of "σ" (so a forward has zero gamma and zero vega). N' is the standard normal probability density function. In practice, some sensitivities are usually quoted in scaled-down terms, to match the scale of likely changes in the parameters. For example, rho is often reported divided by 10,000 (1 basis point rate change), vega by 100 (1 vol point change), and theta by 365 or 252 (1 day decay based on either calendar days or trading days per year). Note that "Vega" is not a letter in the Greek alphabet; the name arises from misreading the Greek letter nu (variously rendered as formula_52, ν, and ν) as a V. Extensions of the model. The above model can be extended for variable (but deterministic) rates and volatilities. The model may also be used to value European options on instruments paying dividends. In this case, closed-form solutions are available if the dividend is a known proportion of the stock price. American options and options on stocks paying a known cash dividend (in the short term, more realistic than a proportional dividend) are more difficult to value, and a choice of solution techniques is available (for example lattices and grids). Instruments paying continuous yield dividends. For options on indices, it is reasonable to make the simplifying assumption that dividends are paid continuously, and that the dividend amount is proportional to the level of the index. The dividend payment paid over the time period formula_53 is then modelled as: formula_54 for some constant formula_55 (the dividend yield). Under this formulation the arbitrage-free price implied by the Black–Scholes model can be shown to be: formula_56 and formula_57 where now formula_58 is the modified forward price that occurs in the terms formula_59: formula_60 and formula_61. Instruments paying discrete proportional dividends. It is also possible to extend the Black–Scholes framework to options on instruments paying discrete proportional dividends. 
This is useful when the option is struck on a single stock. A typical model is to assume that a proportion formula_62 of the stock price is paid out at pre-determined times formula_63. The price of the stock is then modelled as: formula_64 where formula_65 is the number of dividends that have been paid by time formula_0. The price of a call option on such a stock is again: formula_66 where now formula_67 is the forward price for the dividend paying stock. American options. The problem of finding the price of an American option is related to the optimal stopping problem of finding the time to execute the option. Since the American option can be exercised at any time before the expiration date, the Black–Scholes equation becomes a variational inequality of the form: formula_68 together with formula_69 where formula_70 denotes the payoff at stock price formula_6 and the terminal condition: formula_71. In general this inequality does not have a closed form solution, though an American call with no dividends is equal to a European call and the Roll–Geske–Whaley method provides a solution for an American call with one dividend; see also Black's approximation. Barone-Adesi and Whaley is a further approximation formula. Here, the stochastic differential equation (which is valid for the value of any derivative) is split into two components: the European option value and the early exercise premium. With some assumptions, a quadratic equation that approximates the solution for the latter is then obtained. This solution involves finding the critical value, formula_72, such that one is indifferent between early exercise and holding to maturity. Bjerksund and Stensland provide an approximation based on an exercise strategy corresponding to a trigger price. Here, if the underlying asset price is greater than or equal to the trigger price it is optimal to exercise, and the value must equal formula_73, otherwise the option "boils down to: (i) a European up-and-out call option… and (ii) a rebate that is received at the knock-out date if the option is knocked out prior to the maturity date". The formula is readily modified for the valuation of a put option, using put–call parity. This approximation is computationally inexpensive and the method is fast, with evidence indicating that the approximation may be more accurate in pricing long dated options than Barone-Adesi and Whaley. Perpetual put. Despite the lack of a general analytical solution for American put options, it is possible to derive such a formula for the case of a perpetual option – meaning that the option never expires (i.e., formula_74). In this case, the time decay of the option is equal to zero, which leads to the Black–Scholes PDE becoming an ODE:formula_75Let formula_76 denote the lower exercise boundary, below which it is optimal to exercise the option. The boundary conditions are:formula_77The solutions to the ODE are a linear combination of any two linearly independent solutions:formula_78For formula_79, substitution of this solution into the ODE for formula_80 yields:formula_81Rearranging the terms gives:formula_82Using the quadratic formula, the solutions for formula_83 are:formula_84In order to have a finite solution for the perpetual put, since the boundary conditions imply upper and lower finite bounds on the value of the put, it is necessary to set formula_85, leading to the solution formula_86. 
From the first boundary condition, it is known that:formula_87Therefore, the value of the perpetual put becomes:formula_88The second boundary condition yields the location of the lower exercise boundary:formula_89To conclude, for formula_90, the perpetual American put option is worth:formula_91 Binary options. By solving the Black–Scholes differential equation with the Heaviside function as a boundary condition, one ends up with the pricing of options that pay one unit above some predefined strike price and nothing below. In fact, the Black–Scholes formula for the price of a vanilla call option (or put option) can be interpreted by decomposing a call option into an asset-or-nothing call option minus a cash-or-nothing call option, and similarly for a put—the binary options are easier to analyze, and correspond to the two terms in the Black–Scholes formula. Cash-or-nothing call. This pays out one unit of cash if the spot is above the strike at maturity. Its value is given by: formula_92 Cash-or-nothing put. This pays out one unit of cash if the spot is below the strike at maturity. Its value is given by: formula_93 Asset-or-nothing call. This pays out one unit of asset if the spot is above the strike at maturity. Its value is given by: formula_94 Asset-or-nothing put. This pays out one unit of asset if the spot is below the strike at maturity. Its value is given by: formula_95 Foreign Exchange (FX). Denoting by "S" the FOR/DOM exchange rate (i.e., 1 unit of foreign currency is worth S units of domestic currency) one can observe that paying out 1 unit of the domestic currency if the spot at maturity is above or below the strike is exactly like a cash-or nothing call and put respectively. Similarly, paying out 1 unit of the foreign currency if the spot at maturity is above or below the strike is exactly like an asset-or nothing call and put respectively. Hence by taking formula_96, the foreign interest rate, formula_97, the domestic interest rate, and the rest as above, the following results can be obtained: In the case of a digital call (this is a call FOR/put DOM) paying out one unit of the domestic currency gotten as present value: formula_98 In the case of a digital put (this is a put FOR/call DOM) paying out one unit of the domestic currency gotten as present value: formula_99 In the case of a digital call (this is a call FOR/put DOM) paying out one unit of the foreign currency gotten as present value: formula_100 In the case of a digital put (this is a put FOR/call DOM) paying out one unit of the foreign currency gotten as present value: formula_101 Skew. In the standard Black–Scholes model, one can interpret the premium of the binary option in the risk-neutral world as the expected value = probability of being in-the-money * unit, discounted to the present value. The Black–Scholes model relies on symmetry of distribution and ignores the skewness of the distribution of the asset. Market makers adjust for such skewness by, instead of using a single standard deviation for the underlying asset formula_7 across all strikes, incorporating a variable one formula_102 where volatility depends on strike price, thus incorporating the volatility skew into account. The skew matters because it affects the binary considerably more than the regular options. A binary call option is, at long expirations, similar to a tight call spread using two vanilla options. 
One can model the value of a binary cash-or-nothing option, "C", at strike "K", as an infinitesimally tight spread, where formula_103 is a vanilla European call: formula_104 Thus, the value of a binary call is the negative of the derivative of the price of a vanilla call with respect to strike price: formula_105 When one takes volatility skew into account, formula_7 is a function of formula_14: formula_106 The first term is equal to the premium of the binary option ignoring skew: formula_107 formula_108 is the Vega of the vanilla call; formula_109 is sometimes called the "skew slope" or just "skew". If the skew is typically negative, the value of a binary call will be higher when taking skew into account. formula_110 Relationship to vanilla options' Greeks. Since a binary call is a mathematical derivative of a vanilla call with respect to strike, the price of a binary call has the same shape as the delta of a vanilla call, and the delta of a binary call has the same shape as the gamma of a vanilla call. Black–Scholes in practice. The assumptions of the Black–Scholes model are not all empirically valid. The model is widely employed as a useful approximation to reality, but proper application requires understanding its limitations – blindly following the model exposes the user to unexpected risk. Among the most significant limitations are: In short, while in the Black–Scholes model one can perfectly hedge options by simply Delta hedging, in practice there are many other sources of risk. Results using the Black–Scholes model differ from real world prices because of simplifying assumptions of the model. One significant limitation is that in reality security prices do not follow a strict stationary log-normal process, nor is the risk-free interest actually known (and is not constant over time). The variance has been observed to be non-constant leading to models such as GARCH to model volatility changes. Pricing discrepancies between empirical and the Black–Scholes model have long been observed in options that are far out-of-the-money, corresponding to extreme price changes; such events would be very rare if returns were lognormally distributed, but are observed much more often in practice. Nevertheless, Black–Scholes pricing is widely used in practice, because it is: The first point is self-evidently useful. The others can be further discussed: Useful approximation: although volatility is not constant, results from the model are often helpful in setting up hedges in the correct proportions to minimize risk. Even when the results are not completely accurate, they serve as a first approximation to which adjustments can be made. Basis for more refined models: The Black–Scholes model is "robust" in that it can be adjusted to deal with some of its failures. Rather than considering some parameters (such as volatility or interest rates) as "constant," one considers them as "variables," and thus added sources of risk. This is reflected in the Greeks (the change in option value for a change in these parameters, or equivalently the partial derivatives with respect to these variables), and hedging these Greeks mitigates the risk caused by the non-constant nature of these parameters. Other defects cannot be mitigated by modifying the model, however, notably tail risk and liquidity risk, and these are instead managed outside the model, chiefly by minimizing these risks and by stress testing. 
Explicit modeling: this feature means that, rather than "assuming" a volatility "a priori" and computing prices from it, one can use the model to solve for volatility, which gives the implied volatility of an option at given prices, durations and exercise prices. Solving for volatility over a given set of durations and strike prices, one can construct an implied volatility surface. In this application of the Black–Scholes model, a coordinate transformation from the "price domain" to the "volatility domain" is obtained. Rather than quoting option prices in terms of dollars per unit (which are hard to compare across strikes, durations and coupon frequencies), option prices can thus be quoted in terms of implied volatility, which leads to trading of volatility in option markets. The volatility smile. One of the attractive features of the Black–Scholes model is that the parameters in the model other than the volatility (the time to maturity, the strike, the risk-free interest rate, and the current underlying price) are unequivocally observable. All other things being equal, an option's theoretical value is a monotonic increasing function of implied volatility. By computing the implied volatility for traded options with different strikes and maturities, the Black–Scholes model can be tested. If the Black–Scholes model held, then the implied volatility for a particular stock would be the same for all strikes and maturities. In practice, the volatility surface (the 3D graph of implied volatility against strike and maturity) is not flat. The typical shape of the implied volatility curve for a given maturity depends on the underlying instrument. Equities tend to have skewed curves: compared to at-the-money, implied volatility is substantially higher for low strikes, and slightly lower for high strikes. Currencies tend to have more symmetrical curves, with implied volatility lowest at-the-money, and higher volatilities in both wings. Commodities often have the reverse behavior to equities, with higher implied volatility for higher strikes. Despite the existence of the volatility smile (and the violation of all the other assumptions of the Black–Scholes model), the Black–Scholes PDE and Black–Scholes formula are still used extensively in practice. A typical approach is to regard the volatility surface as a fact about the market, and use an implied volatility from it in a Black–Scholes valuation model. This has been described as using "the wrong number in the wrong formula to get the right price". This approach also gives usable values for the hedge ratios (the Greeks). Even when more advanced models are used, traders prefer to think in terms of Black–Scholes implied volatility as it allows them to evaluate and compare options of different maturities, strikes, and so on. For a discussion as to the various alternative approaches developed here, see . Valuing bond options. Black–Scholes cannot be applied directly to bond securities because of pull-to-par. As the bond reaches its maturity date, all of the prices involved with the bond become known, thereby decreasing its volatility, and the simple Black–Scholes model does not reflect this process. A large number of extensions to Black–Scholes, beginning with the Black model, have been used to deal with this phenomenon. See . Interest rate curve. In practice, interest rates are not constant—they vary by tenor (coupon frequency), giving an interest rate curve which may be interpolated to pick an appropriate rate to use in the Black–Scholes formula. 
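As a toy illustration of that interpolation step (added here, not from the original text; the tenors and rates are made-up numbers), one might linearly interpolate a zero-rate curve at the option's maturity before using the rate in the formula, assuming NumPy.

import numpy as np

# Hypothetical zero-rate curve: tenors in years, continuously compounded rates.
tenors = np.array([0.25, 0.5, 1.0, 2.0, 5.0])
zero_rates = np.array([0.030, 0.032, 0.035, 0.037, 0.040])

def rate_for_maturity(T: float) -> float:
    """Risk-free rate for an option expiring at time T, by linear interpolation."""
    return float(np.interp(T, tenors, zero_rates))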
Another consideration is that interest rates vary over time. This volatility may make a significant contribution to the price, especially of long-dated options. This is simply like the interest rate and bond price relationship which is inversely related. Short stock rate. Taking a short stock position, as inherent in the derivation, is not typically free of cost; equivalently, it is possible to lend out a long stock position for a small fee. In either case, this can be treated as a continuous dividend for the purposes of a Black–Scholes valuation, provided that there is no glaring asymmetry between the short stock borrowing cost and the long stock lending income. Criticism and comments. Espen Gaarder Haug and Nassim Nicholas Taleb argue that the Black–Scholes model merely recasts existing widely used models in terms of practically impossible "dynamic hedging" rather than "risk", to make them more compatible with mainstream neoclassical economic theory. They also assert that Boness in 1964 had already published a formula that is "actually identical" to the Black–Scholes call option pricing equation. Edward Thorp also claims to have guessed the Black–Scholes formula in 1967 but kept it to himself to make money for his investors. Emanuel Derman and Taleb have also criticized dynamic hedging and state that a number of researchers had put forth similar models prior to Black and Scholes. In response, Paul Wilmott has defended the model. In his 2008 letter to the shareholders of Berkshire Hathaway, Warren Buffett wrote: "I believe the Black–Scholes formula, even though it is the standard for establishing the dollar liability for options, produces strange results when the long-term variety are being valued... The Black–Scholes formula has approached the status of holy writ in finance ... If the formula is applied to extended time periods, however, it can produce absurd results. In fairness, Black and Scholes almost certainly understood this point well. But their devoted followers may be ignoring whatever caveats the two men attached when they first unveiled the formula." British mathematician Ian Stewart, author of the 2012 book entitled "", said that Black–Scholes had "underpinned massive economic growth" and the "international financial system was trading derivatives valued at one quadrillion dollars per year" by 2007. He said that the Black–Scholes equation was the "mathematical justification for the trading"—and therefore—"one ingredient in a rich stew of financial irresponsibility, political ineptitude, perverse incentives and lax regulation" that contributed to the financial crisis of 2007–08. He clarified that "the equation itself wasn't the real problem", but its abuse in the financial industry. The Black–Scholes model assumes positive underlying prices; if the underlying has a negative price, the model does not work directly. When dealing with options whose underlying can go negative, practitioners may use a different model such as the Bachelier model or simply add a constant offset to the prices. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "t" }, { "math_id": 1, "text": " t = 0 " }, { "math_id": 2, "text": "r" }, { "math_id": 3, "text": "S(t)" }, { "math_id": 4, "text": "S_t" }, { "math_id": 5, "text": "\\mu" }, { "math_id": 6, "text": "S" }, { "math_id": 7, "text": "\\sigma" }, { "math_id": 8, "text": "V(S, t)" }, { "math_id": 9, "text": "C(S, t)" }, { "math_id": 10, "text": "P(S, t)" }, { "math_id": 11, "text": "T" }, { "math_id": 12, "text": "\\tau" }, { "math_id": 13, "text": "\\tau = T - t" }, { "math_id": 14, "text": "K" }, { "math_id": 15, "text": "N(x)" }, { "math_id": 16, "text": "N(x) = \\frac{1}{\\sqrt{2\\pi}}\\int_{-\\infty}^x e^{-z^2/2}\\, dz." }, { "math_id": 17, "text": "N'(x)" }, { "math_id": 18, "text": "N'(x) = \\frac{dN(x)}{dx} = \\frac{1}{\\sqrt{2\\pi}} e^{-x^2/2}. " }, { "math_id": 19, "text": " V(S, t) " }, { "math_id": 20, "text": "\\frac{\\partial V}{\\partial t} + \\frac{1}{2}\\sigma^2 S^2 \\frac{\\partial^2 V}{\\partial S^2} + rS\\frac{\\partial V}{\\partial S} - rV = 0" }, { "math_id": 21, "text": "\\begin{align}\n& C(0, t) = 0\\text{ for all }t \\\\\n& C(S, t) \\rightarrow S - K \\text{ as }S \\rightarrow \\infty \\\\\n& C(S, T) = \\max\\{S - K, 0\\}\n\\end{align}" }, { "math_id": 22, "text": "\\begin{align}\n C(S_t, t) &= N(d_+)S_t - N(d_-)Ke^{-r(T - t)} \\\\\n d_+ &= \\frac{1}{\\sigma\\sqrt{T - t}}\\left[\\ln\\left(\\frac{S_t}{K}\\right) + \\left(r + \\frac{\\sigma^2}{2}\\right)(T - t)\\right] \\\\\n d_- &= d_+ - \\sigma\\sqrt{T - t} \\\\\n\\end{align}" }, { "math_id": 23, "text": "e^{-r(T-t)}" }, { "math_id": 24, "text": "\\begin{align}\n P(S_t, t) &= Ke^{-r(T - t)} - S_t + C(S_t, t) \\\\\n &= N(-d_-) Ke^{-r(T - t)} - N(-d_+) S_t\n\\end{align}\\," }, { "math_id": 25, "text": "\\begin{align}\n C(F, \\tau) &= D \\left[ N(d_+) F - N(d_-) K \\right] \\\\\n d_+ &=\n \\frac{1}{\\sigma\\sqrt{\\tau}}\\left[\\ln\\left(\\frac{F}{K}\\right) + \\frac{1}{2}\\sigma^2\\tau\\right] \\\\\n d_- &= d_+ - \\sigma\\sqrt{\\tau}\n\\end{align}" }, { "math_id": 26, "text": "D = e^{-r\\tau}" }, { "math_id": 27, "text": "F = e^{r\\tau} S = \\frac{S}{D}" }, { "math_id": 28, "text": "S = DF" }, { "math_id": 29, "text": "C - P = D(F - K) = S - D K" }, { "math_id": 30, "text": "P(F, \\tau) = D \\left[ N(-d_-) K - N(-d_+) F \\right]" }, { "math_id": 31, "text": "d_\\pm" }, { "math_id": 32, "text": "C = D \\left[ N(d_+) F - N(d_-) K \\right]" }, { "math_id": 33, "text": "C = D N(d_+) F - D N(d_-) K," }, { "math_id": 34, "text": "D N(d_+) F" }, { "math_id": 35, "text": "D N(d_-) K" }, { "math_id": 36, "text": "N(d_+) ~ F" }, { "math_id": 37, "text": "N(d_-) ~ K" }, { "math_id": 38, "text": "N(d_+) F" }, { "math_id": 39, "text": "N(d_+)" }, { "math_id": 40, "text": "N(d_-) K" }, { "math_id": 41, "text": "N(d_-)," }, { "math_id": 42, "text": "N(d_-)" }, { "math_id": 43, "text": "N(d_\\pm)" }, { "math_id": 44, "text": "\\frac{1}{2}\\sigma^2" }, { "math_id": 45, "text": "\\left(r \\pm \\frac{1}{2}\\sigma^2\\right)\\tau," }, { "math_id": 46, "text": "m = \\frac{1}{\\sigma\\sqrt{\\tau}}\\ln\\left(\\frac{F}{K}\\right)" }, { "math_id": 47, "text": "N(d_+), N(d_-)" }, { "math_id": 48, "text": "S_T \\in (0, \\infty)" }, { "math_id": 49, "text": "p(S, T) = \\frac{N^\\prime [d_-(S_T)]}{S_T \\sigma\\sqrt{T}}" }, { "math_id": 50, "text": "d_- = d_-(K)" }, { "math_id": 51, "text": "SN(d_+)" }, { "math_id": 52, "text": "\\nu" }, { "math_id": 53, "text": "[t, t + dt]" }, { "math_id": 54, "text": "qS_t\\,dt" }, { "math_id": 55, "text": "q" }, { "math_id": 56, "text": "C(S_t, t) = e^{-r(T - t)}[FN(d_1) - KN(d_2)]\\," }, 
{ "math_id": 57, "text": "P(S_t, t) = e^{-r(T - t)}[KN(-d_2) - FN(-d_1)]\\," }, { "math_id": 58, "text": "F = S_t e^{(r - q)(T - t)}\\," }, { "math_id": 59, "text": "d_1, d_2" }, { "math_id": 60, "text": "d_1 = \\frac{1}{\\sigma\\sqrt{T - t}}\\left[\\ln\\left(\\frac{S_t}{K}\\right) + \\left(r - q + \\frac{1}{2}\\sigma^2\\right)(T - t)\\right]" }, { "math_id": 61, "text": "d_2 = d_1 - \\sigma\\sqrt{T - t} = \\frac{1}{\\sigma\\sqrt{T - t}}\\left[\\ln\\left(\\frac{S_t}{K}\\right) + \\left(r - q - \\frac{1}{2}\\sigma^2\\right)(T - t)\\right]" }, { "math_id": 62, "text": "\\delta" }, { "math_id": 63, "text": "t_1, t_2, \\ldots, t_n " }, { "math_id": 64, "text": "S_t = S_0(1 - \\delta)^{n(t)}e^{ut + \\sigma W_t}" }, { "math_id": 65, "text": "n(t)" }, { "math_id": 66, "text": "C(S_0, T) = e^{-rT}[FN(d_1) - KN(d_2)]\\," }, { "math_id": 67, "text": "F = S_{0}(1 - \\delta)^{n(T)}e^{rT}\\," }, { "math_id": 68, "text": "\\frac{\\partial V}{\\partial t} + \\frac{1}{2}\\sigma^2 S^2 \\frac{\\partial^2 V}{\\partial S^2} + rS\\frac{\\partial V}{\\partial S} - rV \\leq 0" }, { "math_id": 69, "text": "V(S, t) \\geq H(S)" }, { "math_id": 70, "text": "H(S)" }, { "math_id": 71, "text": "V(S, T) = H(S)" }, { "math_id": 72, "text": "s*" }, { "math_id": 73, "text": "S - X" }, { "math_id": 74, "text": "T\\rightarrow \\infty" }, { "math_id": 75, "text": "{1\\over{2}}\\sigma^{2}S^{2}{d^{2}V\\over{dS^{2}}} + (r-q)S{dV\\over{dS}} - rV = 0" }, { "math_id": 76, "text": "S_{-}" }, { "math_id": 77, "text": "V(S_{-}) = K-S_{-}, \\quad V_{S}(S_{-}) = -1, \\quad V(S) \\leq K" }, { "math_id": 78, "text": "V(S) = A_{1}S^{\\lambda_{1}} + A_{2}S^{\\lambda_{2}}" }, { "math_id": 79, "text": "S_{-} \\leq S" }, { "math_id": 80, "text": "i = {1,2}" }, { "math_id": 81, "text": "\\left[ {1\\over{2}}\\sigma^{2}\\lambda_{i}(\\lambda_{i}-1) + (r-q)\\lambda_{i} - r \\right]S^{\\lambda_{i}} = 0" }, { "math_id": 82, "text": "{1\\over{2}}\\sigma^{2}\\lambda_{i}^{2} + \\left(r-q - {1\\over{2}} \\sigma^{2}\\right)\\lambda_{i} - r = 0" }, { "math_id": 83, "text": "\\lambda_{i}" }, { "math_id": 84, "text": "\\begin{aligned}\n\\lambda_{1} &= {-\\left(r-q-{1\\over{2}}\\sigma^{2} \\right ) + \\sqrt{\\left(r-q-{1\\over{2}}\\sigma^{2} \\right )^{2} + 2\\sigma^{2}r}\\over{\\sigma^{2}}} \\\\\n\\lambda_{2} &= {-\\left(r-q-{1\\over{2}}\\sigma^{2} \\right ) - \\sqrt{\\left(r-q-{1\\over{2}}\\sigma^{2} \\right )^{2} + 2\\sigma^{2}r}\\over{\\sigma^{2}}}\n\\end{aligned}" }, { "math_id": 85, "text": "A_{1} = 0" }, { "math_id": 86, "text": "V(S) = A_{2}S^{\\lambda_{2}}" }, { "math_id": 87, "text": "V(S_{-}) = A_{2}(S_{-})^{\\lambda_{2}} = K-S_{-} \\implies A_{2} = {K-S_{-}\\over{(S_{-})^{\\lambda_{2}}}}" }, { "math_id": 88, "text": "V(S) = (K-S_{-})\\left( {S\\over{S_{-}}} \\right)^{\\lambda_{2}}" }, { "math_id": 89, "text": "V_{S}(S_{-}) = \\lambda_{2}{K-S_{-}\\over{S_{-}}} = -1 \\implies S_{-} = {\\lambda_{2}K\\over{\\lambda_{2}-1}}" }, { "math_id": 90, "text": "S \\geq S_{-} = {\\lambda_{2}K\\over{\\lambda_{2}-1}}" }, { "math_id": 91, "text": "V(S) = {K\\over{1-\\lambda_{2}}} \\left( {\\lambda_{2}-1\\over{\\lambda_{2}}}\\right)^{\\lambda_{2}} \\left( {S\\over{K}} \\right)^{\\lambda_{2}}" }, { "math_id": 92, "text": " C =e^{-r (T-t)}N(d_2). \\," }, { "math_id": 93, "text": " P = e^{-r (T-t)}N(-d_2). \\," }, { "math_id": 94, "text": " C = Se^{-q (T-t)}N(d_1). 
\\," }, { "math_id": 95, "text": " P = Se^{-q (T-t)}N(-d_1)," }, { "math_id": 96, "text": "r_{f}" }, { "math_id": 97, "text": "r_{d}" }, { "math_id": 98, "text": " C = e^{-r_{d} T}N(d_2) \\," }, { "math_id": 99, "text": " P = e^{-r_{d}T}N(-d_2) \\," }, { "math_id": 100, "text": " C = Se^{-r_{f} T}N(d_1) \\," }, { "math_id": 101, "text": " P = Se^{-r_{f}T}N(-d_1) \\," }, { "math_id": 102, "text": "\\sigma(K)" }, { "math_id": 103, "text": "C_v" }, { "math_id": 104, "text": " C = \\lim_{\\epsilon \\to 0} \\frac{C_v(K-\\epsilon) - C_v(K)}{\\epsilon} " }, { "math_id": 105, "text": " C = -\\frac{dC_v}{dK} " }, { "math_id": 106, "text": " C = -\\frac{dC_v(K,\\sigma(K))}{dK} = -\\frac{\\partial C_v}{\\partial K} - \\frac{\\partial C_v}{\\partial \\sigma} \\frac{\\partial \\sigma}{\\partial K}" }, { "math_id": 107, "text": " -\\frac{\\partial C_v}{\\partial K} = -\\frac{\\partial (S N(d_1) - Ke^{-r(T-t)} N(d_2))}{\\partial K} = e^{-r (T-t)} N(d_2) = C_\\text{no skew}" }, { "math_id": 108, "text": "\\frac{\\partial C_v}{\\partial \\sigma}" }, { "math_id": 109, "text": "\\frac{\\partial \\sigma}{\\partial K}" }, { "math_id": 110, "text": " C = C_\\text{no skew} - \\text{Vega}_v \\cdot \\text{Skew}" } ]
https://en.wikipedia.org/wiki?curid=113515
1135199
Stroboscopic effect
Visual phenomenon The stroboscopic effect is a visual phenomenon caused by aliasing that occurs when continuous rotational or other cyclic motion is represented by a series of short or instantaneous samples (as opposed to a continuous view) at a sampling rate close to the frequency of the motion. It accounts for the "wagon-wheel effect", so-called because in video, spoked wheels (such as on horse-drawn wagons) sometimes appear to be turning backwards. A strobe fountain, a stream of water droplets falling at regular intervals lit with a strobe light, is an example of the stroboscopic effect being applied to a cyclic motion that is not rotational. When viewed under normal light, this is a normal water fountain. When viewed under a strobe light with its frequency tuned to the rate at which the droplets fall, the droplets appear to be suspended in mid-air. Adjusting the strobe frequency can make the droplets seemingly move slowly up or down. Stroboscopic principles, and their ability to create an illusion of motion, underlie the theory behind animation, film, and other moving pictures. Simon Stampfer, who coined the term in his 1833 patent application for his "stroboscopische Scheiben" (better known as the "phenakistiscope"), explained how the illusion of motion occurs when, during unnoticed, regular and very short interruptions of light, one figure is replaced by a similar figure in a slightly different position. Any series of figures can thus be manipulated to show movements in any desired direction. Explanation. Consider the stroboscope as used in mechanical analysis. This may be a "strobe light" that is fired at an adjustable rate. For example, suppose an object is rotating at 60 revolutions per second: if it is viewed with a series of short flashes at 60 times per second, each flash illuminates the object at the same position in its rotational cycle, so it appears that the object is stationary. Furthermore, at a frequency of 60 flashes per second, persistence of vision smooths out the sequence of flashes so that the perceived image is continuous. If the same rotating object is viewed at 61 flashes per second, each flash will illuminate it at a slightly earlier part of its rotational cycle. Sixty-one flashes will occur before the object is seen in the same position again, and the series of images will be perceived as if it is rotating backwards once per second. The same effect occurs if the object is viewed at 59 flashes per second, except that each flash illuminates it a little later in its rotational cycle and so the object will seem to be rotating forwards. The same reasoning applies at other frequencies, such as the 50 Hz characteristic of the electric distribution grids of most countries in the world. In the case of motion pictures, action is captured as a rapid series of still images and the same stroboscopic effect can occur. Audio conversion from light patterns. The stroboscopic effect also plays a role in audio playback. Compact discs rely on strobing reflections of the laser from the surface of the disc in order to be processed (it is also used for computer data). DVDs and Blu-ray Discs have similar functions. The stroboscopic effect also plays a role for laser microphones. Wagon-wheel effect. Motion-picture cameras conventionally film at 24 frames per second. Although the wheels of a vehicle are not likely to be turning at 24 revolutions per second (as that would be extremely fast), suppose each wheel has 12 spokes and rotates at only two revolutions per second.
Filmed at 24 frames per second, the spokes in each frame will appear in exactly the same position. Hence, the wheel will be perceived to be stationary. In fact, each photographically captured spoke in any one position will be a different actual spoke in each successive frame, but since the spokes are close to identical in shape and color, no difference will be perceived. Thus, as long as the wheel advances by a whole number of spoke spacings from one frame to the next (here, any whole multiple of two revolutions per second), the wheel will appear to be stationary. If the wheel rotates a little more slowly than two revolutions per second, the position of the spokes is seen to fall a little further behind in each successive frame and therefore the wheel will seem to be turning backwards. Unwanted effects in common lighting. The stroboscopic effect is one of the temporal light artefacts. In common lighting applications, the stroboscopic effect is an unwanted effect which may become visible if a person is looking at a moving or rotating object which is illuminated by a time-modulated light source. The temporal light modulation may come from fluctuations of the light source itself or may be due to the application of certain dimming or light level regulation technologies. Another cause of light modulations may be lamps with unfiltered pulse-width modulation type external dimmers. Whether a lamp exhibits such modulation may be tested with a rotating fidget spinner. Effects. Various scientific committees have assessed the potential health, performance and safety-related aspects resulting from temporal light modulations (TLMs) including stroboscopic effect. Adverse effects in common lighting application areas include annoyance, reduced task performance, visual fatigue and headache. The visibility aspects of stroboscopic effect are given in a technical note of the CIE, see CIE TN 006:2016, and in the thesis of Perz. Stroboscopic effects may also lead to unsafe situations in workplaces with fast moving or rotating machinery. If the frequency of fast rotating machinery or moving parts coincides with the frequency, or multiples of the frequency, of the light modulation, the machinery can appear to be stationary, or to move with another speed, potentially leading to hazardous situations. Stroboscopic effects that become visible in rotating objects are also referred to as the wagon-wheel effect. In general, undesired effects in the visual perception of a human observer induced by light intensity fluctuations are called Temporal Light Artefacts (TLAs). Further background and explanations on the different TLA phenomena including stroboscopic effect are given in a recorded webinar "Is it all just flicker?". In some special applications, TLMs may also induce desired effects. For instance, a stroboscope is a tool that produces short repetitive flashes of light that can be used for measurement of movement frequencies or for analysis or timing of moving objects. Also, stroboscopic visual training (SVT) is a recent tool aimed at improving the visual and perceptual performance of athletes by executing activities under conditions of modulated lighting or intermittent vision. Root causes. Light emitted from lighting equipment such as luminaires and lamps may vary in strength as a function of time, either intentionally or unintentionally. Intentional light variations are applied for warning, signalling (e.g. traffic-light signalling, flashing aviation light signals) and entertainment (like stage lighting), with the purpose that the flicker is perceived by people.
Generally, the light output of lighting equipment may also have residual unintentional light level modulations due to the lighting equipment technology in connection with the type of electrical mains connection. For example, lighting equipment connected to a single-phase mains supply will typically have residual TLMs of twice the mains frequency, either at 100 or 120 Hz (depending on country). The magnitude, shape, periodicity and frequency of the TLMs will depend on many factors such as the type of light source, the electrical mains-supply frequency, the driver or ballast technology and type of light regulation technology applied (e.g. pulse-width modulation). If the modulation frequency is below the flicker fusion threshold and if the magnitude of the TLM exceeds a certain level, then such TLMs are perceived as flicker. Light modulations with modulation frequencies beyond the flicker fusion threshold are not directly perceived, but illusions in the form of stroboscopic effect may become visible (example see Figure 1). LEDs do not intrinsically produce temporal modulations; they just reproduce the input current waveform very well, and any ripple in the current waveform is reproduced by a light ripple because LEDs have a fast response; therefore, compared to conventional lighting technologies (incandescent, fluorescent), for LED lighting more variety in the TLA properties is seen. Many types and topologies of LED driver circuits are applied; simpler electronics and limited or no buffer capacitors often result in larger residual current ripple and thus larger temporal light modulation. Dimming technologies of either externally applied dimmers (incompatible dimmers) or internal light-level regulators may have additional impact on the level of stroboscopic effect; the level of temporal light modulation generally increases at lower light levels. "NOTE – The root cause temporal light modulation is often referred to as flicker. Also, stroboscopic effect is often referred to as flicker. Flicker is however a directly visible effect resulting from light modulations at relatively low modulation frequencies, typically below 80 Hz, whereas stroboscopic effect in common (residential) applications may become visible if light modulations are present with modulation frequencies, typically above 80 Hz." Mitigation. Generally, undesirable stroboscopic effect can be avoided by reducing the level of TLMs. Design of lighting equipment to reduce the TLMs of the light sources is typically a tradeoff for other product properties and generally increases cost and size, shortens lifetime or lowers energy efficiency. For instance, to reduce the modulation in the current to drive LEDs, which also reduces the visibility of TLAs, a large storage capacitor, such as electrolytic capacitor, is required. However, use of such capacitors significantly shortens the lifetime of the LED, as they are found to have the highest failure rate among all components. Another solution to lower the visibility of TLAs is to increase the frequency of the driving current, however this decreases the efficiency of the system and it increases its overall size. Visibility. Stroboscopic effect becomes visible if the modulation frequency of the TLM is in the range of 80 Hz to 2000 Hz and if the magnitude of the TLM exceeds a certain level. 
Several other factors, many of them observer-related, also determine the visibility of TLMs as stroboscopic effect. All observer-related influence quantities are stochastic parameters, because not all humans perceive the effect of the same light ripple in the same way. That is why perception of stroboscopic effect is always expressed with a certain probability. For light levels encountered in common applications and for moderate speeds of movement of objects (corresponding to speeds that can be produced by humans), an average sensitivity curve has been derived based on perception studies. The average sensitivity curve for sinusoidally modulated light waveforms, also called the stroboscopic effect contrast threshold function, as a function of frequency "f" is as follows: formula_0 The contrast threshold function is depicted in Figure 2. Stroboscopic effect becomes visible if the modulation frequency of the TLM is in the region between approximately 10 Hz and 2000 Hz and if the magnitude of the TLM exceeds a certain level. The contrast threshold function shows that at modulation frequencies near 100 Hz, stroboscopic effect will be visible at relatively low magnitudes of modulation. Although stroboscopic effect in theory is also visible in the frequency range below 100 Hz, in practice visibility of flicker will dominate over stroboscopic effect in the frequency range up to 60 Hz. Moreover, large magnitudes of intentional repetitive TLMs with frequencies below 100 Hz are unlikely to occur in practice because residual TLMs generally occur at modulation frequencies that are twice the mains frequency (100 Hz or 120 Hz). Detailed explanations on the visibility of stroboscopic effect and other temporal light artefacts are also given in CIE TN 006:2016 and in a recorded webinar "Is it all just flicker?". Objective assessment of stroboscopic effect. Stroboscopic effect visibility meter. For objective assessment of stroboscopic effect, the "stroboscopic effect visibility measure" (SVM) has been developed. The specification of the stroboscopic effect visibility meter and the test method for objective assessment of lighting equipment is published in IEC technical report IEC TR 63158. "SVM" is calculated using the following summation formula: formula_1 where "C"m is the relative amplitude of the m-th Fourier component (trigonometric Fourier series representation) of the relative illuminance (relative to the DC-level); "T"m is the stroboscopic effect contrast threshold function for visibility of stroboscopic effect of a sine wave at the frequency of the m-th Fourier component (see above). "SVM" can be used for objective assessment of the stroboscopic effects of temporal light modulation of lighting equipment that are visible to a human observer in general indoor applications, with typical indoor light levels (> 100 lx) and with moderate movements of an observer or a nearby handled object (< 4 m/s). For assessing unwanted stroboscopic effects in other applications, such as the misperception of rapidly rotating or moving machinery in a workshop for example, other metrics and methods can be required or the assessment can be done by subjective testing (observation). "NOTE – Several alternative metrics such as modulation depth, flicker percentage or flicker index are being applied for specifying the stroboscopic effect performance of lighting equipment. None of these metrics is suitable to predict actual human perception, because human perception is impacted by modulation depth, modulation frequency, wave shape and, if applicable, the duty cycle of the TLM."
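The summation formula above can be evaluated numerically from a sampled light waveform. What follows is a minimal Python sketch rather than the full IEC TR 63158 measurement procedure: it uses the contrast threshold function given above, takes the Fourier components of the relative illuminance directly from an FFT, and restricts the sum to an assumed 80 Hz to 2000 Hz band; the function names and the example parameters are illustrative only.

import numpy as np

def threshold(f):
    # Stroboscopic effect contrast threshold function T(f) given above
    # (average observer, sinusoidally modulated light).
    return 2.865e-5 * f**1.543 + 0.225

def svm(waveform, fs):
    # Minimal sketch of the SVM summation formula: Fourier components C_m of the
    # relative illuminance (relative to the DC level), each normalised by the
    # threshold T_m at its frequency, combined with a Minkowski norm of order 3.7.
    x = np.asarray(waveform, dtype=float)
    dc = x.mean()
    spectrum = np.fft.rfft(x) / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    c = 2.0 * np.abs(spectrum[1:]) / dc          # relative amplitudes C_m
    f = freqs[1:]
    band = (f >= 80.0) & (f <= 2000.0)           # assumed frequency range of interest
    return np.sum((c[band] / threshold(f[band])) ** 3.7) ** (1.0 / 3.7)

# Example: 40% sinusoidal modulation at 100 Hz, sampled for 1 s at 20 kHz.
fs = 20000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
light = 1.0 + 0.4 * np.sin(2 * np.pi * 100.0 * t)
print(round(svm(light, fs), 2))                  # about 1.5, i.e. above the visibility threshold

Because of the large exponent, a single dominant Fourier component essentially determines the resulting SVM value.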
Matlab toolbox. A Matlab stroboscopic effect visibility measure toolbox including a function for calculating "SVM" and some application examples are available on Matlab Central via the Mathworks Community. Acceptance criterion. If the value of SVM equals one, the input modulation of the light waveform produces a stroboscopic effect that is just visible, i.e. at the visibility threshold. This means that an average observer will be able to detect the artefact with a probability of 50%. If the value of the visibility measure is above unity, the effect has a probability of detection of more than 50%. If the value of the visibility measure is smaller than unity, the probability of detection is less than 50%. These visibility thresholds describe the average detection performance of an average human observer in a population. This does not, however, guarantee acceptability. For some less critical applications, the acceptability level of an artefact might be well above the visibility threshold. For other applications, the acceptable levels might be below the visibility threshold. NEMA 77-2017, amongst others, gives guidance for acceptance criteria in different applications. Test and measurement applications. A typical test setup for stroboscopic effect testing is shown in Figure 3. The stroboscopic effect visibility meter can be applied for several different purposes (see IEC TR 63158). Dangers in workplaces. Stroboscopic effect may lead to unsafe situations in workplaces with fast moving or rotating machinery. If the frequency of fast rotating machinery or moving parts coincides with the frequency, or multiples of the frequency, of the light modulation, the machinery can appear to be stationary, or to move with another speed, potentially leading to hazardous situations. Because of the illusion that the stroboscopic effect can give to moving machinery, it is advised that single-phase lighting be avoided. For example, a factory that is lit from a single-phase supply with basic lighting will have a flicker of 100 or 120 Hz (depending on country, 50 Hz x 2 in Europe, 60 Hz x 2 in US, double the nominal frequency), thus any machinery rotating at multiples of 50 or 60 Hz (3000 or 3600 rpm, respectively) may appear not to be turning, increasing the risk of injury to an operator. Solutions include deploying the lighting over a full 3-phase supply, using high-frequency controllers that drive the lights at safer frequencies, or using direct-current lighting.
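The flash-rate arithmetic used in the Explanation section and in the workplace example above can be summarised in a few lines. The sketch below treats the modulated light as an ideal strobe that samples the rotation; it ignores duty cycle, persistence of vision and the visibility thresholds discussed earlier, and the helper function is purely illustrative.

def apparent_rate(true_rev_per_s, flashes_per_s, symmetry=1):
    # Apparent rotation rate (rev/s) of an object with `symmetry`-fold rotational
    # symmetry (e.g. 12 identical spokes) viewed under short flashes: aliasing folds
    # the spoke-passing rate into the range [-flashes/2, +flashes/2).
    passing_rate = true_rev_per_s * symmetry
    aliased = (passing_rate + flashes_per_s / 2) % flashes_per_s - flashes_per_s / 2
    return aliased / symmetry

print(apparent_rate(60, 60))       #  0.0 -> appears stationary
print(apparent_rate(60, 61))       # -1.0 -> appears to rotate backwards once per second
print(apparent_rate(60, 59))       #  1.0 -> appears to rotate forwards once per second
print(apparent_rate(2, 24, 12))    #  0.0 -> 12-spoke wagon wheel filmed at 24 frames per second
print(apparent_rate(50, 100, 2))   #  0.0 -> a 3000 rpm part with 2-fold symmetry under 100 Hz modulation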
[ { "math_id": 0, "text": "T(f) = 2.865 \\times 10^{-5} \\times f^{1.543} + 0.225" }, { "math_id": 1, "text": "SVM=\\sqrt[3.7]{\\textstyle \\sum_{m=1}^\\infty \\displaystyle\\left ( \\frac{C_m}{T_m} \\right )^{3.7}}," } ]
https://en.wikipedia.org/wiki?curid=1135199
1135311
Matched filter
Filters used in signal processing that are optimal in some sense. In signal processing, the output of the matched filter is given by correlating a known delayed signal, or "template", with an unknown signal to detect the presence of the template in the unknown signal. This is equivalent to convolving the unknown signal with a conjugated time-reversed version of the template. The matched filter is the optimal linear filter for maximizing the signal-to-noise ratio (SNR) in the presence of additive stochastic noise. Matched filters are commonly used in radar, in which a known signal is sent out, and the reflected signal is examined for common elements of the out-going signal. Pulse compression is an example of matched filtering. It is so called because the impulse response is matched to input pulse signals. Two-dimensional matched filters are commonly used in image processing, e.g., to improve the SNR of X-ray observations. Additional applications of note are in seismology and gravitational-wave astronomy. Matched filtering is a demodulation technique with LTI (linear time invariant) filters to maximize SNR. It was originally also known as a "North filter". Derivation. Derivation via matrix algebra. The following section derives the matched filter for a discrete-time system. The derivation for a continuous-time system is similar, with summations replaced with integrals. The matched filter is the linear filter, formula_0, that maximizes the output signal-to-noise ratio. formula_1 where formula_2 is the input as a function of the independent variable formula_3, and formula_4 is the filtered output. Though we most often express filters as the impulse response of convolution systems, as above (see LTI system theory), it is easiest to think of the matched filter in the context of the inner product, which we will see shortly. We can derive the linear filter that maximizes output signal-to-noise ratio by invoking a geometric argument. The intuition behind the matched filter relies on correlating the received signal (a vector) with a filter (another vector) that is parallel with the signal, maximizing the inner product. This enhances the signal. When we consider the additive stochastic noise, we have the additional challenge of minimizing the output due to noise by choosing a filter that is orthogonal to the noise. Let us formally define the problem. We seek a filter, formula_0, such that we maximize the output signal-to-noise ratio, where the output is the inner product of the filter and the observed signal formula_5. Our observed signal consists of the desirable signal formula_6 and additive noise formula_7: formula_8 Let us define the auto-correlation matrix of the noise, reminding ourselves that this matrix has Hermitian symmetry, a property that will become useful in the derivation: formula_9 where formula_10 denotes the conjugate transpose of formula_7, and formula_11 denotes expectation (note that in case the noise formula_7 has zero-mean, its auto-correlation matrix formula_12 is equal to its covariance matrix). Let us call our output, formula_13, the inner product of our filter and the observed signal such that formula_14 We now define the signal-to-noise ratio, which is our objective function, to be the ratio of the power of the output due to the desired signal to the power of the output due to the noise: formula_15 We rewrite the above: formula_16 We wish to maximize this quantity by choosing formula_0. 
Expanding the denominator of our objective function, we have formula_17 Now, our formula_18 becomes formula_19 We will rewrite this expression with some matrix manipulation. The reason for this seemingly counterproductive measure will become evident shortly. Exploiting the Hermitian symmetry of the auto-correlation matrix formula_12, we can write formula_20 We would like to find an upper bound on this expression. To do so, we first recognize a form of the Cauchy–Schwarz inequality: formula_21 which is to say that the square of the inner product of two vectors can only be as large as the product of the individual inner products of the vectors. This concept returns to the intuition behind the matched filter: this upper bound is achieved when the two vectors formula_22 and formula_23 are parallel. We resume our derivation by expressing the upper bound on our formula_18 in light of the geometric inequality above: formula_24 Our valiant matrix manipulation has now paid off. We see that the expression for our upper bound can be greatly simplified: formula_25 We can achieve this upper bound if we choose, formula_26 where formula_27 is an arbitrary real number. To verify this, we plug into our expression for the output formula_18: formula_28 Thus, our optimal matched filter is formula_29 We often choose to normalize the expected value of the power of the filter output due to the noise to unity. That is, we constrain formula_30 This constraint implies a value of formula_27, for which we can solve: formula_31 yielding formula_32 giving us our normalized filter, formula_33 If we care to write the impulse response formula_0 of the filter for the convolution system, it is simply the complex conjugate time reversal of the input formula_6. Though we have derived the matched filter in discrete time, we can extend the concept to continuous-time systems if we replace formula_12 with the continuous-time autocorrelation function of the noise, assuming a continuous signal formula_34, continuous noise formula_35, and a continuous filter formula_36. Derivation via Lagrangian. Alternatively, we may solve for the matched filter by solving our maximization problem with a Lagrangian. Again, the matched filter endeavors to maximize the output signal-to-noise ratio (formula_18) of a filtered deterministic signal in stochastic additive noise. The observed sequence, again, is formula_37 with the noise auto-correlation matrix, formula_38 The signal-to-noise ratio is formula_39 where formula_40 and formula_41. Evaluating the expression in the numerator, we have formula_42 and in the denominator, formula_43 The signal-to-noise ratio becomes formula_44 If we now constrain the denominator to be 1, the problem of maximizing formula_18 is reduced to maximizing the numerator. We can then formulate the problem using a Lagrange multiplier: formula_45 formula_46 formula_47 formula_48 which we recognize as a "generalized eigenvalue problem" formula_49 Since formula_50 is of unit rank, it has only one nonzero eigenvalue. It can be shown that this eigenvalue equals formula_51 yielding the following optimal matched filter formula_52 This is the same result found in the previous subsection. Interpretation as a least-squares estimator. Derivation. Matched filtering can also be interpreted as a least-squares estimator for the optimal location and scaling of a given model or template. Once again, let the observed sequence be defined as formula_53 where formula_54 is uncorrelated zero mean noise. 
The signal formula_55 is assumed to be a scaled and shifted version of a known model sequence formula_56: formula_57 We want to find optimal estimates formula_58 and formula_59 for the unknown shift formula_60 and scaling formula_61 by minimizing the least-squares residual between the observed sequence formula_62 and a "probing sequence" formula_63: formula_64 The appropriate formula_63 will later turn out to be the matched filter, but is as yet unspecified. Expanding formula_62 and the square within the sum yields formula_65 The first term in brackets is a constant (since the observed signal is given) and has no influence on the optimal solution. The last term has constant expected value because the noise is uncorrelated and has zero mean. We can therefore drop both terms from the optimization. After reversing the sign, we obtain the equivalent optimization problem formula_66 Setting the derivative w.r.t. formula_67 to zero gives an analytic solution for formula_59: formula_68 Inserting this into our objective function yields a reduced maximization problem for just formula_58: formula_69 The numerator can be upper-bounded by means of the Cauchy–Schwarz inequality: formula_70 The optimization problem assumes its maximum when equality holds in this expression. According to the properties of the Cauchy–Schwarz inequality, this is only possible when formula_71 for arbitrary non-zero constants formula_72 or formula_73, and the optimal solution is obtained at formula_74 as desired. Thus, our "probing sequence" formula_63 must be proportional to the signal model formula_75, and the convenient choice formula_76 yields the matched filter formula_77 Note that the filter is the mirrored signal model. This ensures that the operation formula_78 to be applied in order to find the optimum is indeed the convolution between the observed sequence formula_62 and the matched filter formula_79. The filtered sequence assumes its maximum at the position where the observed sequence formula_62 best matches (in a least-squares sense) the signal model formula_56. Implications. The matched filter may be derived in a variety of ways, but as a special case of a least-squares procedure it may also be interpreted as a maximum likelihood method in the context of a (coloured) Gaussian noise model and the associated Whittle likelihood. If the transmitted signal possessed "no" unknown parameters (like time-of-arrival, amplitude...), then the matched filter would, according to the Neyman–Pearson lemma, minimize the error probability. However, since the exact signal generally is determined by unknown parameters that effectively are estimated (or "fitted") in the filtering process, the matched filter constitutes a "generalized maximum likelihood" (test-) statistic. The filtered time series may then be interpreted as (proportional to) the profile likelihood, the maximized conditional likelihood as a function of the time parameter. This implies in particular that the error probability (in the sense of Neyman and Pearson, i.e., concerning maximization of the detection probability for a given false-alarm probability) is not necessarily optimal. What is commonly referred to as the "Signal-to-noise ratio (SNR)", which is supposed to be maximized by a matched filter, in this context corresponds to formula_80, where formula_81 is the (conditionally) maximized likelihood ratio. The construction of the matched filter is based on a "known" noise spectrum. 
In reality, however, the noise spectrum is usually estimated from data and hence only known up to a limited precision. For the case of an uncertain spectrum, the matched filter may be generalized to a more robust iterative procedure with favourable properties also in non-Gaussian noise. Frequency-domain interpretation. When viewed in the frequency domain, it is evident that the matched filter applies the greatest weighting to spectral components exhibiting the greatest signal-to-noise ratio (i.e., large weight where noise is relatively low, and vice versa). In general this requires a non-flat frequency response, but the associated "distortion" is no cause for concern in situations such as radar and digital communications, where the original waveform is known and the objective is the detection of this signal against the background noise. On the technical side, the matched filter is a "weighted least-squares" method based on the (heteroscedastic) frequency-domain data (where the "weights" are determined via the noise spectrum, see also previous section), or equivalently, a "least-squares" method applied to the whitened data. Examples. Radar and sonar. Matched filters are often used in signal detection. As an example, suppose that we wish to judge the distance of an object by reflecting a signal off it. We may choose to transmit a pure-tone sinusoid at 1 Hz. We assume that our received signal is an attenuated and phase-shifted form of the transmitted signal with added noise. To judge the distance of the object, we correlate the received signal with a matched filter, which, in the case of white (uncorrelated) noise, is another pure-tone 1-Hz sinusoid. When the output of the matched filter system exceeds a certain threshold, we conclude with high probability that the received signal has been reflected off the object. Using the speed of propagation and the time that we first observe the reflected signal, we can estimate the distance of the object. If we change the shape of the pulse in a specially-designed way, the signal-to-noise ratio and the distance resolution can be even improved after matched filtering: this is a technique known as pulse compression. Additionally, matched filters can be used in parameter estimation problems (see estimation theory). To return to our previous example, we may desire to estimate the speed of the object, in addition to its position. To exploit the Doppler effect, we would like to estimate the frequency of the received signal. To do so, we may correlate the received signal with several matched filters of sinusoids at varying frequencies. The matched filter with the highest output will reveal, with high probability, the frequency of the reflected signal and help us determine the radial velocity of the object, i.e. the relative speed either directly towards or away from the observer. This method is, in fact, a simple version of the discrete Fourier transform (DFT). The DFT takes an formula_82-valued complex input and correlates it with formula_82 matched filters, corresponding to complex exponentials at formula_82 different frequencies, to yield formula_82 complex-valued numbers corresponding to the relative amplitudes and phases of the sinusoidal components (see Moving target indication). Digital communications. The matched filter is also used in communications. 
In the context of a communication system that sends binary messages from the transmitter to the receiver across a noisy channel, a matched filter can be used to detect the transmitted pulses in the noisy received signal. Imagine we want to send the sequence "0101100100" coded in polar non-return-to-zero (NRZ) through a certain channel. Mathematically, a sequence in NRZ code can be described as a sequence of unit pulses or shifted rect functions, each pulse being weighted by +1 if the bit is "1" and by -1 if the bit is "0". Formally, the scaling factor for the formula_83 bit is formula_84 We can represent our message, formula_85, as the sum of shifted unit pulses: formula_86 where formula_87 is the time length of one bit and formula_88 is the rectangular function. Thus, the signal to be sent by the transmitter is this sum of weighted, shifted unit pulses. If we model our noisy channel as an AWGN channel, white Gaussian noise is added to the signal. At the receiver end, for a signal-to-noise ratio of 3 dB, a first glance at the received signal will not reveal the original transmitted sequence. There is a high power of noise relative to the power of the desired signal (i.e., there is a low signal-to-noise ratio). If the receiver were to sample this signal at the correct moments, the resulting binary message could be incorrect. To increase our signal-to-noise ratio, we pass the received signal through a matched filter. In this case, the filter should be matched to an NRZ pulse (equivalent to a "1" coded in NRZ code). Precisely, the impulse response of the ideal matched filter, assuming white (uncorrelated) noise, should be a time-reversed complex-conjugated scaled version of the signal that we are seeking. We choose formula_89 In this case, due to symmetry, the time-reversed complex conjugate of formula_36 is in fact formula_36, allowing us to call formula_36 the impulse response of our matched filter convolution system. After convolving with the correct matched filter, the resulting signal, formula_90, is formula_91 where formula_92 denotes convolution. The filtered signal can now be safely sampled by the receiver at the correct sampling instants and compared to an appropriate threshold, resulting in a correct interpretation of the binary message. Gravitational-wave astronomy. Matched filters play a central role in gravitational-wave astronomy. The first observation of gravitational waves was based on large-scale filtering of each detector's output for signals resembling the expected shape, followed by subsequent screening for coincident and coherent triggers between both instruments. False-alarm rates, and with that the statistical significance of the detection, were then assessed using resampling methods. Inference on the astrophysical source parameters was completed using Bayesian methods based on parameterized theoretical models for the signal waveform and (again) on the Whittle likelihood. Seismology. Matched filters find use in seismology to detect similar earthquake or other seismic signals, often using multicomponent and/or multichannel empirically determined templates. Matched filtering applications in seismology include the generation of large event catalogues to study earthquake seismicity and volcanic activity, and the global detection of nuclear explosions. Biology. Animals living in relatively static environments would have relatively fixed features of the environment to perceive. This allows the evolution of filters that match the expected signal with the highest signal-to-noise ratio, the matched filter.
Sensing the world "through such a 'matched filter' severely limits the amount of information the brain can pick up from the outside world, but it frees the brain from the need to perform more intricate computations to extract the information finally needed for fulfilling a particular task."
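The digital-communications walkthrough above can be reproduced numerically. The following sketch is illustrative rather than a reference implementation: it builds the polar NRZ waveform for the sequence "0101100100", adds white Gaussian noise (the noise level and oversampling factor are arbitrary choices), applies the rectangular matched filter, which for white noise is simply the time-reversed pulse itself, and samples the filter output at the end of each bit interval.

import numpy as np

rng = np.random.default_rng(0)

bits = np.array([0, 1, 0, 1, 1, 0, 0, 1, 0, 0])   # the sequence "0101100100" used above
samples_per_bit = 50                               # illustrative oversampling factor

# Polar NRZ waveform: +1 for a "1", -1 for a "0", held for one bit period.
tx = np.repeat(2 * bits - 1, samples_per_bit).astype(float)

# AWGN channel: add white Gaussian noise (level chosen for illustration).
rx = tx + rng.normal(scale=1.5, size=tx.size)

# Matched filter for white noise: the time-reversed conjugate of one NRZ pulse,
# which for a real rectangular pulse is just the pulse itself.
h = np.ones(samples_per_bit)
filtered = np.convolve(rx, h, mode="full")

# Sample the filter output at the end of each bit interval and threshold at zero.
sample_points = samples_per_bit * np.arange(1, bits.size + 1) - 1
decisions = (filtered[sample_points] > 0).astype(int)
print(decisions)   # with this noise realisation: [0 1 0 1 1 0 0 1 0 0]

Sampling the unfiltered received signal at the same instants would rest each decision on a single noisy sample instead of the integrated bit energy, and would give a much higher error rate.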
[ { "math_id": 0, "text": "h" }, { "math_id": 1, "text": "\\ y[n] = \\sum_{k=-\\infty}^{\\infty} h[n-k] x[k]," }, { "math_id": 2, "text": "x[k]" }, { "math_id": 3, "text": "k" }, { "math_id": 4, "text": "y[n]" }, { "math_id": 5, "text": "x" }, { "math_id": 6, "text": "s" }, { "math_id": 7, "text": "v" }, { "math_id": 8, "text": "\\ x=s+v.\\," }, { "math_id": 9, "text": "\\ R_v=E\\{ vv^\\mathrm{H} \\}\\," }, { "math_id": 10, "text": "v^\\mathrm{H}" }, { "math_id": 11, "text": "E" }, { "math_id": 12, "text": "R_v" }, { "math_id": 13, "text": "y" }, { "math_id": 14, "text": "\\ y = \\sum_{k=-\\infty}^{\\infty} h^*[k] x[k] = h^\\mathrm{H}x = h^\\mathrm{H}s + h^\\mathrm{H}v = y_s + y_v." }, { "math_id": 15, "text": "\\mathrm{SNR} = \\frac{|y_s|^2}{E\\{|y_v|^2\\}}." }, { "math_id": 16, "text": "\\mathrm{SNR} = \\frac{|h^\\mathrm{H}s|^2}{E\\{|h^\\mathrm{H}v|^2\\}}." }, { "math_id": 17, "text": "\\ E\\{ |h^\\mathrm{H}v|^2 \\} = E\\{ (h^\\mathrm{H}v){(h^\\mathrm{H}v)}^\\mathrm{H} \\} = h^\\mathrm{H} E\\{vv^\\mathrm{H}\\} h = h^\\mathrm{H}R_vh.\\," }, { "math_id": 18, "text": "\\mathrm{SNR}" }, { "math_id": 19, "text": "\\mathrm{SNR} = \\frac{ |h^\\mathrm{H}s|^2 }{ h^\\mathrm{H}R_vh }." }, { "math_id": 20, "text": "\\mathrm{SNR} = \\frac{ | {(R_v^{1/2}h)}^\\mathrm{H} (R_v^{-1/2}s) |^2 }\n { {(R_v^{1/2}h)}^\\mathrm{H} (R_v^{1/2}h) }," }, { "math_id": 21, "text": "\\ |a^\\mathrm{H}b|^2 \\leq (a^\\mathrm{H}a)(b^\\mathrm{H}b),\\, " }, { "math_id": 22, "text": "a" }, { "math_id": 23, "text": "b" }, { "math_id": 24, "text": "\\mathrm{SNR} = \\frac{ | {(R_v^{1/2}h)}^\\mathrm{H} (R_v^{-1/2}s) |^2 }\n { {(R_v^{1/2}h)}^\\mathrm{H} (R_v^{1/2}h) }\n \\leq\n \\frac{ \\left[\n \t\t\t{(R_v^{1/2}h)}^\\mathrm{H} (R_v^{1/2}h)\n \t\t\\right]\n \t\t\\left[\n \t\t\t{(R_v^{-1/2}s)}^\\mathrm{H} (R_v^{-1/2}s)\n \t\t\\right] }\n { {(R_v^{1/2}h)}^\\mathrm{H} (R_v^{1/2}h) }.\n " }, { "math_id": 25, "text": "\\mathrm{SNR} = \\frac{ | {(R_v^{1/2}h)}^\\mathrm{H} (R_v^{-1/2}s) |^2 }\n { {(R_v^{1/2}h)}^\\mathrm{H} (R_v^{1/2}h) }\n \\leq s^\\mathrm{H} R_v^{-1} s.\n " }, { "math_id": 26, "text": "\\ R_v^{1/2}h = \\alpha R_v^{-1/2}s" }, { "math_id": 27, "text": "\\alpha" }, { "math_id": 28, "text": "\\mathrm{SNR} = \\frac{ | {(R_v^{1/2}h)}^\\mathrm{H} (R_v^{-1/2}s) |^2 }\n { {(R_v^{1/2}h)}^\\mathrm{H} (R_v^{1/2}h) }\n = \\frac{ \\alpha^2 | {(R_v^{-1/2}s)}^\\mathrm{H} (R_v^{-1/2}s) |^2 }\n { \\alpha^2 {(R_v^{-1/2}s)}^\\mathrm{H} (R_v^{-1/2}s) }\n = \\frac{ | s^\\mathrm{H} R_v^{-1} s |^2 }\n { s^\\mathrm{H} R_v^{-1} s }\n = s^\\mathrm{H} R_v^{-1} s.\n " }, { "math_id": 29, "text": "\\ h = \\alpha R_v^{-1}s." }, { "math_id": 30, "text": "\\ E\\{ |y_v|^2 \\} = 1.\\," }, { "math_id": 31, "text": "\\ E\\{ |y_v|^2 \\} = \\alpha^2 s^\\mathrm{H} R_v^{-1} s = 1," }, { "math_id": 32, "text": "\\ \\alpha = \\frac{1}{\\sqrt{s^\\mathrm{H} R_v^{-1} s}}," }, { "math_id": 33, "text": "\\ h = \\frac{1}{\\sqrt{s^\\mathrm{H} R_v^{-1} s}} R_v^{-1}s." 
}, { "math_id": 34, "text": "s(t)" }, { "math_id": 35, "text": "v(t)" }, { "math_id": 36, "text": "h(t)" }, { "math_id": 37, "text": "\\ x = s + v,\\," }, { "math_id": 38, "text": "\\ R_v = E\\{vv^\\mathrm{H}\\}.\\, " }, { "math_id": 39, "text": "\\mathrm{SNR} = \\frac{|y_s|^2}{ E\\{|y_v|^2\\} }," }, { "math_id": 40, "text": "y_s = h^\\mathrm{H} s" }, { "math_id": 41, "text": "y_v = h^\\mathrm{H} v" }, { "math_id": 42, "text": "\\ |y_s|^2 = {y_s}^\\mathrm{H} y_s = h^\\mathrm{H} s s^\\mathrm{H} h.\\, " }, { "math_id": 43, "text": "\\ E\\{|y_v|^2\\} = E\\{ {y_v}^\\mathrm{H} y_v \\} = E\\{ h^\\mathrm{H} v v^\\mathrm{H} h \\} = h^\\mathrm{H} R_v h.\\," }, { "math_id": 44, "text": "\\mathrm{SNR} = \\frac{h^\\mathrm{H} s s^\\mathrm{H} h}{ h^\\mathrm{H} R_v h }." }, { "math_id": 45, "text": "\\ h^\\mathrm{H} R_v h = 1 " }, { "math_id": 46, "text": "\\ \\mathcal{L} = h^\\mathrm{H} s s^\\mathrm{H} h + \\lambda (1 - h^\\mathrm{H} R_v h ) " }, { "math_id": 47, "text": "\\ \\nabla_{h^*} \\mathcal{L} = s s^\\mathrm{H} h - \\lambda R_v h = 0 " }, { "math_id": 48, "text": "\\ (s s^\\mathrm{H}) h = \\lambda R_v h " }, { "math_id": 49, "text": "\\ h^\\mathrm{H} (s s^\\mathrm{H}) h = \\lambda h^\\mathrm{H} R_v h. " }, { "math_id": 50, "text": "s s^\\mathrm{H}" }, { "math_id": 51, "text": "\\ \\lambda_{\\max} = s^\\mathrm{H} R_v^{-1} s, " }, { "math_id": 52, "text": "\\ h = \\frac{1}{\\sqrt{s^\\mathrm{H} R_v^{-1} s}} R_v^{-1} s. " }, { "math_id": 53, "text": "\\ x_k = s_k + v_k,\\," }, { "math_id": 54, "text": "v_k" }, { "math_id": 55, "text": "s_k" }, { "math_id": 56, "text": "f_k" }, { "math_id": 57, "text": "\\ s_k = \\mu_0\\cdot f_{k-j_0}" }, { "math_id": 58, "text": "j^*" }, { "math_id": 59, "text": "\\mu^*" }, { "math_id": 60, "text": "j_0" }, { "math_id": 61, "text": "\\mu_0" }, { "math_id": 62, "text": "x_k" }, { "math_id": 63, "text": "h_{j-k}" }, { "math_id": 64, "text": "\\ j^*,\\mu^* = \\arg\\min_{j,\\mu} \\sum_k \\left(x_k - \\mu\\cdot h_{j-k}\\right)^2" }, { "math_id": 65, "text": "\\ j^*,\\mu^* = \\arg\\min_{j,\\mu}\\left[ \\sum_k (s_k+v_k)^2 + \\mu^2\\sum_k h_{j-k}^2 - 2\\mu\\sum_k s_k h_{j-k} - 2\\mu\\sum_k v_k h_{j-k}\\right]. " }, { "math_id": 66, "text": "\\ j^*,\\mu^* = \\arg\\max_{j,\\mu}\\left[ 2\\mu\\sum_k s_k h_{j-k} - \\mu^2\\sum_k h_{j-k}^2\\right]. " }, { "math_id": 67, "text": "\\mu" }, { "math_id": 68, "text": "\\ \\mu^* = \\frac{\\sum_k s_k h_{j-k}}{\\sum_k h_{j-k}^2}." }, { "math_id": 69, "text": "\\ j^* = \\arg\\max_j\\frac{\\left(\\sum_k s_k h_{j-k}\\right)^2}{\\sum_k h_{j-k}^2}. " }, { "math_id": 70, "text": "\\ \\frac{\\left(\\sum_k s_k h_{j-k}\\right)^2}{\\sum_k h_{j-k}^2} \\le \\frac{\\sum_k s_k^2 \\cdot \\sum_k h_{j-k}^2}{\\sum_k h_{j-k}^2} = \\sum_k s_k^2 = \\text{constant}. " }, { "math_id": 71, "text": "\\ h_{j-k}=\\nu \\cdot s_k = \\kappa\\cdot f_{k-j_0}." }, { "math_id": 72, "text": "\\nu" }, { "math_id": 73, "text": "\\kappa" }, { "math_id": 74, "text": "j^*=j_0" }, { "math_id": 75, "text": "f_{k-j_0}" }, { "math_id": 76, "text": "\\kappa=1" }, { "math_id": 77, "text": "\\ h_{k}=f_{-k}." 
}, { "math_id": 78, "text": "\\sum_k x_k h_{j-k}" }, { "math_id": 79, "text": "h_k" }, { "math_id": 80, "text": "\\sqrt{2\\log(\\mathcal{L})}" }, { "math_id": 81, "text": "\\mathcal{L}" }, { "math_id": 82, "text": "N" }, { "math_id": 83, "text": "k^\\mathrm{th}" }, { "math_id": 84, "text": "\\ a_k =\n\t\t\\begin{cases}\n\t\t\t+1, & \\text{if bit } k \\text{ is } 1, \\\\\n\t\t\t-1, & \\text{if bit } k \\text{ is } 0.\n\t \\end{cases}\n" }, { "math_id": 85, "text": "M(t)" }, { "math_id": 86, "text": "\\ M(t) = \\sum_{k=-\\infty}^\\infty a_k \\times \\Pi \\left( \\frac{t-kT}{T} \\right). " }, { "math_id": 87, "text": "T" }, { "math_id": 88, "text": "\\Pi(x)" }, { "math_id": 89, "text": "\\ h(t) = \\Pi\\left( \\frac{t}{T} \\right)." }, { "math_id": 90, "text": "M_\\mathrm{filtered}(t)" }, { "math_id": 91, "text": "\\ M_\\mathrm{filtered}(t) = (M * h)(t)" }, { "math_id": 92, "text": "*" } ]
https://en.wikipedia.org/wiki?curid=1135311
1135324
Conjugate variables
Variables that are Fourier transform duals Conjugate variables are pairs of variables mathematically defined in such a way that they become Fourier transform duals, or more generally are related through Pontryagin duality. The duality relations lead naturally to an uncertainty relation—in physics called the Heisenberg uncertainty principle—between them. In mathematical terms, conjugate variables are part of a symplectic basis, and the uncertainty relation corresponds to the symplectic form. Also, conjugate variables are related by Noether's theorem, which states that if the laws of physics are invariant with respect to a change in one of the conjugate variables, then the other conjugate variable will not change with time (i.e. it will be conserved). Examples. There are many types of conjugate variables, depending on the type of work a certain system is doing (or is being subjected to). Examples of canonically conjugate variables include the pairs discussed in the following subsections. Derivatives of action. In classical physics, the derivatives of action are conjugate variables to the quantity with respect to which one is differentiating. In quantum mechanics, these same pairs of variables are related by the Heisenberg uncertainty principle. Quantum theory. In quantum mechanics, conjugate variables are realized as pairs of observables whose operators do not commute. In conventional terminology, they are said to be "incompatible observables". Consider, as an example, the measurable quantities given by position formula_3 and momentum formula_4. In the quantum-mechanical formalism, the two observables formula_5 and formula_6 correspond to operators formula_7 and formula_8, which necessarily satisfy the canonical commutation relation: formula_9 For every non-zero commutator of two operators, there exists an "uncertainty principle", which in our present example may be expressed in the form: formula_10 In this ill-defined notation, formula_11 and formula_12 denote "uncertainty" in the simultaneous specification of formula_5 and formula_6. A more precise, and statistically complete, statement involving the standard deviation formula_13 reads: formula_14 More generally, for any two observables formula_15 and formula_16 corresponding to operators formula_17 and formula_18, the generalized uncertainty principle is given by: formula_19 Now suppose we were to explicitly define two particular operators, assigning each a "specific" mathematical form, such that the pair satisfies the aforementioned commutation relation. It is important to remember that our particular "choice" of operators would merely reflect one of many equivalent, or isomorphic, representations of the general algebraic structure that fundamentally characterizes quantum mechanics. The generalization is provided formally by the Heisenberg Lie algebra formula_20, with a corresponding group called the Heisenberg group formula_21. Fluid mechanics. In Hamiltonian fluid mechanics and quantum hydrodynamics, the "action" itself (or "velocity potential") is the conjugate variable of the "density" (or "probability density").
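The canonical commutation relation and the resulting lower bound can be checked symbolically. The SymPy sketch below is illustrative only: it applies the usual position-space representation (multiplication by x and -iħ d/dx, one particular choice of representation as noted above) to an arbitrary wavefunction, and then verifies that a Gaussian wave packet saturates the bound σx σp = ħ/2.

import sympy as sp

x = sp.symbols('x', real=True)
hbar, a = sp.symbols('hbar a', positive=True)
psi = sp.Function('psi')(x)

x_op = lambda f: x * f                            # position operator
p_op = lambda f: -sp.I * hbar * sp.diff(f, x)     # momentum operator in the position representation

commutator = x_op(p_op(psi)) - p_op(x_op(psi))
print(sp.simplify(commutator))                    # I*hbar*psi(x): the canonical commutation relation

# Normalised Gaussian wave packet; it has <x> = <p> = 0, so the second moments below are the variances.
g = (a / sp.pi) ** sp.Rational(1, 4) * sp.exp(-a * x**2 / 2)
sigma_x = sp.sqrt(sp.integrate(x**2 * g**2, (x, -sp.oo, sp.oo)))
sigma_p = sp.sqrt(sp.integrate(g * (-hbar**2) * sp.diff(g, x, 2), (x, -sp.oo, sp.oo)))
print(sp.simplify(sigma_x * sigma_p))             # hbar/2: the Gaussian saturates the uncertainty bound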
[ { "math_id": 0, "text": " \\Delta E \\times \\Delta t " }, { "math_id": 1, "text": " m^2 s^{-1}" }, { "math_id": 2, "text": "\\mathbf{N}=t\\mathbf{p}-E\\mathbf{r}" }, { "math_id": 3, "text": " \\left (x \\right) " }, { "math_id": 4, "text": " \\left (p \\right) " }, { "math_id": 5, "text": " x " }, { "math_id": 6, "text": " p " }, { "math_id": 7, "text": " \\widehat{x} " }, { "math_id": 8, "text": " \\widehat{p\\,} " }, { "math_id": 9, "text": "[\\widehat{x},\\widehat{p\\,}]=\\widehat{x}\\widehat{p\\,}-\\widehat{p\\,}\\widehat{x}=i \\hbar" }, { "math_id": 10, "text": " \\Delta x \\, \\Delta p \\geq \\hbar/2 " }, { "math_id": 11, "text": " \\Delta x " }, { "math_id": 12, "text": " \\Delta p " }, { "math_id": 13, "text": " \\sigma " }, { "math_id": 14, "text": " \\sigma_x \\sigma_p \\geq \\hbar/2 " }, { "math_id": 15, "text": " A " }, { "math_id": 16, "text": " B " }, { "math_id": 17, "text": " \\widehat{A} " }, { "math_id": 18, "text": " \\widehat{B} " }, { "math_id": 19, "text": " {\\sigma_A}^2 {\\sigma_B}^2 \\geq \\left (\\frac{1}{2i} \\left \\langle \\left [ \\widehat{A},\\widehat{B} \\right ] \\right \\rangle \\right)^2 " }, { "math_id": 20, "text": "\\mathfrak h_3" }, { "math_id": 21, "text": " H_3 " } ]
https://en.wikipedia.org/wiki?curid=1135324
1135333
Ambiguity function
In pulsed radar and sonar signal processing, an ambiguity function is a two-dimensional function of propagation delay formula_0 and Doppler frequency formula_1, formula_2. It represents the distortion of a returned pulse due to the receiver matched filter (commonly, but not exclusively, used in pulse compression radar) of the return from a moving target. The ambiguity function is defined by the properties of the pulse and of the filter, and not any particular target scenario. Many definitions of the ambiguity function exist; some are restricted to narrowband signals and others are suitable to describe the delay and Doppler relationship of wideband signals. Often the definition of the ambiguity function is given as the magnitude squared of other definitions (Weiss). For a given complex baseband pulse formula_3, the narrowband ambiguity function is given by formula_4 where formula_5 denotes the complex conjugate and formula_6 is the imaginary unit. Note that for zero Doppler shift (formula_7), this reduces to the autocorrelation of formula_3. A more concise way of representing the ambiguity function consists of examining the one-dimensional zero-delay and zero-Doppler "cuts"; that is, formula_8 and formula_9, respectively. The matched filter output as a function of time (the signal one would observe in a radar system) is a Doppler cut, with the constant frequency given by the target's Doppler shift: formula_10. Background and motivation. Pulse-Doppler radar equipment sends out a series of radio frequency pulses. Each pulse has a certain shape (waveform)—how long the pulse is, what its frequency is, whether the frequency changes during the pulse, and so on. If the waves reflect off a single object, the detector will see a signal which, in the simplest case, is a copy of the original pulse but delayed by a certain time formula_0—related to the object's distance—and shifted by a certain frequency formula_1—related to the object's velocity (Doppler shift). If the original emitted pulse waveform is formula_3, then the detected signal (neglecting noise, attenuation, and distortion, and wideband corrections) will be: formula_11 The detected signal will never be "exactly" equal to any formula_12 because of noise. Nevertheless, if the detected signal has a high correlation with formula_12, for a certain delay and Doppler shift formula_13, then that suggests that there is an object with formula_13. Unfortunately, this procedure may yield false positives, i.e. wrong values formula_14 which are nevertheless highly correlated with the detected signal. In this sense, the detected signal may be "ambiguous". The ambiguity occurs specifically when there is a high correlation between formula_12 and formula_15 for formula_16. This motivates the "ambiguity function" formula_17. The defining property of formula_17 is that the correlation between formula_12 and formula_15 is equal to formula_18. Different pulse shapes (waveforms) formula_3 have different ambiguity functions, and the ambiguity function is relevant when choosing what pulse to use. The function formula_17 is complex-valued; the degree of "ambiguity" is related to its magnitude formula_19. Relationship to time–frequency distributions. The ambiguity function plays a key role in the field of time–frequency signal processing, as it is related to the Wigner–Ville distribution by a 2-dimensional Fourier transform. 
This relationship is fundamental to the formulation of other time–frequency distributions: the bilinear time–frequency distributions are obtained by a 2-dimensional filtering in the ambiguity domain (that is, the ambiguity function of the signal). This class of distribution may be better adapted to the signals considered. Moreover, the ambiguity distribution can be seen as the short-time Fourier transform of a signal using the signal itself as the window function. This remark has been used to define an ambiguity distribution over the time-scale domain instead of the time-frequency domain. Wideband ambiguity function. The wideband ambiguity function of formula_20 is: formula_21 where "formula_22" is a time scale factor of the received signal relative to the transmitted signal given by: formula_23 for a target moving with constant radial velocity "v". The reflection of the signal is represented with compression (or expansion) in time by the factor "formula_24", which is equivalent to a compression by the factor "formula_25" in the frequency domain (with an amplitude scaling). When the wave speed in the medium is sufficiently faster than the target speed, as is common with radar, this compression in frequency is closely approximated by a shift in frequency Δf = fc*v/c (known as the doppler shift). For a narrow band signal, this approximation results in the narrowband ambiguity function given above, which can be computed efficiently by making use of the FFT algorithm. Ideal ambiguity function. An ambiguity function of interest is a 2-dimensional Dirac delta function or "thumbtack" function; that is, a function which is infinite at (0,0) and zero elsewhere. formula_26 An ambiguity function of this kind would be somewhat of a misnomer; it would have no ambiguities at all, and both the zero-delay and zero-Doppler cuts would be an impulse. This is not usually desirable (if a target has any Doppler shift from an unknown velocity it will disappear from the radar picture), but if Doppler processing is independently performed, knowledge of the precise Doppler frequency allows ranging without interference from any other targets which are not also moving at exactly the same velocity. This type of ambiguity function is produced by ideal white noise (infinite in duration and infinite in bandwidth). However, this would require infinite power and is not physically realizable. There is no pulse formula_3 that will produce formula_27 from the definition of the ambiguity function. Approximations exist, however, and noise-like signals such as binary phase-shift keyed waveforms using maximal-length sequences are the best known performers in this regard. Properties. (1) Maximum value formula_28 (2) Symmetry about the origin formula_29 (3) Volume invariance formula_30 (4) Modulation by a linear FM signal formula_31 (5) Frequency energy spectrum formula_32 (6) Upper bounds for formula_33 and lower bounds for formula_34 exist for the formula_35 power integrals formula_36. These bounds are sharp and are achieved if and only if formula_37 is a Gaussian function. Square pulse. Consider a simple square pulse of duration formula_0 and amplitude formula_38: formula_39 where formula_40 is the Heaviside step function. The matched filter output is given by the autocorrelation of the pulse, which is a triangular pulse of height formula_41 and duration formula_42 (the zero-Doppler cut). However, if the measured pulse has a frequency offset due to Doppler shift, the matched filter output is distorted into a sinc function. 
The greater the Doppler shift, the smaller the peak of the resulting sinc, and the more difficult it is to detect the target. In general, the square pulse is not a desirable waveform from a pulse compression standpoint, because the autocorrelation function is too short in amplitude, making it difficult to detect targets in noise, and too wide in time, making it difficult to discern multiple overlapping targets. LFM pulse. A commonly used radar or sonar pulse is the linear frequency modulated (LFM) pulse (or "chirp"). It has the advantage of greater bandwidth while keeping the pulse duration short and envelope constant. A constant envelope LFM pulse has an ambiguity function similar to that of the square pulse, except that it is skewed in the delay-Doppler plane. Slight Doppler mismatches for the LFM pulse do not change the general shape of the pulse and reduce the amplitude very little, but they do appear to shift the pulse in time. Thus, an uncompensated Doppler shift changes the target's apparent range; this phenomenon is called range-Doppler coupling. Multistatic ambiguity functions. The ambiguity function can be extended to multistatic radars, which comprise multiple non-colocated transmitters and/or receivers (and can include bistatic radar as a special case). For these types of radar, the simple linear relationship between time and range that exists in the monostatic case no longer applies, and is instead dependent on the specific geometry – i.e. the relative location of transmitter(s), receiver(s) and target. Therefore, the multistatic ambiguity function is mostly usefully defined as a function of two- or three-dimensional position and velocity vectors for a given multistatic geometry and transmitted waveform. Just as the monostatic ambiguity function is naturally derived from the matched filter, the multistatic ambiguity function is derived from the corresponding optimal "multistatic" detector – i.e. that which maximizes the probability of detection given a fixed probability of false alarm through joint processing of the signals at all receivers. The nature of this detection algorithm depends on whether or not the target fluctuations observed by each bistatic pair within the multistatic system are mutually correlated. If so, the optimal detector performs phase coherent summation of received signals which can result in very high target location accuracy. If not, the optimal detector performs incoherent summation of received signals which gives diversity gain. Such systems are sometimes described as "MIMO radars" due to the information theoretic similarities to MIMO communication systems. Ambiguity function plane. An ambiguity function plane can be viewed as a combination of an infinite number of radial lines. Each radial line can be viewed as the fractional Fourier transform of a stationary random process. Example. The Ambiguity function (AF) is the operators that are related to the WDF.&lt;br&gt; formula_43 (1)If formula_44&lt;br&gt; formula_45 formula_46 formula_47 formula_48 formula_49 &lt;br&gt; WDF and AF for the signal with only 1 term (2) If formula_50 formula_45 formula_51 + formula_52 + formula_53 + formula_54 formula_55 &lt;br&gt; formula_56 formula_57 &lt;br&gt; When formula_58 formula_59 where When formula_67 ≠ formula_68 formula_69 WDF and AF for the signal with 2 terms&lt;br&gt; &lt;br&gt; For the ambiguity function: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
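The single-term Gaussian result in the Example section (formula_49) can also be checked numerically. The sketch below is purely illustrative: it assumes the special case of formula_44 with t0 = 0, f0 = 0 and α = 1, and the integration grid and the test values of the delay and Doppler variables are arbitrary.

```python
import numpy as np

alpha, tau, eta = 1.0, 0.3, 0.4               # arbitrary test values (t0 = 0, f0 = 0)
t = np.linspace(-8.0, 8.0, 80001)             # fine grid; the integrand decays very fast
dt = t[1] - t[0]

x = lambda u: np.exp(-alpha * np.pi * u ** 2)
integrand = x(t + tau / 2) * np.conj(x(t - tau / 2)) * np.exp(-2j * np.pi * t * eta)
numeric = np.sum(integrand) * dt              # direct evaluation of the defining integral

closed = np.sqrt(1 / (2 * alpha)) * np.exp(-np.pi * (alpha * tau ** 2 / 2 + eta ** 2 / (2 * alpha)))
print(abs(numeric - closed))                  # prints a value near zero (discretization error only)
```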
[ { "math_id": 0, "text": "\\tau" }, { "math_id": 1, "text": "f" }, { "math_id": 2, "text": "\\chi(\\tau,f)" }, { "math_id": 3, "text": "s(t)" }, { "math_id": 4, "text": "\\chi(\\tau,f)=\\int_{-\\infty}^\\infty s(t)s^*(t-\\tau) e^{i 2 \\pi f t} \\, dt" }, { "math_id": 5, "text": "^*" }, { "math_id": 6, "text": "i" }, { "math_id": 7, "text": "f=0" }, { "math_id": 8, "text": "\\chi(0,f)" }, { "math_id": 9, "text": "\\chi(\\tau,0)" }, { "math_id": 10, "text": "\\chi(\\tau,f_D)" }, { "math_id": 11, "text": "s_{\\tau,f}(t) \\equiv s(t-\\tau)e^{i 2\\pi f t}." }, { "math_id": 12, "text": "s_{\\tau,f}" }, { "math_id": 13, "text": "(\\tau,f)" }, { "math_id": 14, "text": "(\\tau',f')" }, { "math_id": 15, "text": "s_{\\tau',f'}" }, { "math_id": 16, "text": "(\\tau,f) \\neq (\\tau',f')" }, { "math_id": 17, "text": "\\chi" }, { "math_id": 18, "text": "\\chi(\\tau-\\tau', f-f')" }, { "math_id": 19, "text": "|\\chi(\\tau,f)|^2" }, { "math_id": 20, "text": "s \\in L^2(R)" }, { "math_id": 21, "text": "WB_{ss}(\\tau,\\alpha)=\\sqrt{|{\\alpha}|}\\int_{-\\infty}^\\infty s(t)s^*(\\alpha (t-\\tau)) \\, dt" }, { "math_id": 22, "text": "{\\alpha}" }, { "math_id": 23, "text": "\\alpha = \\frac{c+v}{c-v}" }, { "math_id": 24, "text": " \\alpha " }, { "math_id": 25, "text": "\\alpha^{-1}" }, { "math_id": 26, "text": "\\chi(\\tau,f) = \\delta(\\tau) \\delta(f) \\, " }, { "math_id": 27, "text": "\\delta(\\tau) \\delta(f)" }, { "math_id": 28, "text": "|\\chi(\\tau,f)|^2 \\le |\\chi(0,0)|^2" }, { "math_id": 29, "text": "\\chi(\\tau,f) = \\exp[j2\\pi \\tau f]\\chi^{*}(-\\tau,-f) \\, " }, { "math_id": 30, "text": "\\int_{-\\infty}^\\infty \\int_{-\\infty}^\\infty |\\chi(\\tau,f)|^2 \\, d\\tau \\,df=|\\chi(0,0)|^2 = E^2" }, { "math_id": 31, "text": "\\text{If } s(t) \\rightarrow |\\chi(\\tau,f)| \\text{ then }s(t) \\exp[j\\pi kt^2] {\\rightarrow} |\\chi(\\tau,f+k\\tau)| \\, " }, { "math_id": 32, "text": "S(f)S^*(f) = \\int_{-\\infty}^\\infty \\chi(\\tau,0) e^{-j2\\pi\\tau f} \\, d\\tau " }, { "math_id": 33, "text": " p>2 " }, { "math_id": 34, "text": " p<2 " }, { "math_id": 35, "text": " p^{th} " }, { "math_id": 36, "text": "\\int_{-\\infty}^\\infty \\int_{-\\infty}^\\infty |\\chi(\\tau,f)|^p \\, d\\tau \\,df " }, { "math_id": 37, "text": " s(t) " }, { "math_id": 38, "text": "A" }, { "math_id": 39, "text": "A (u(t)-u(t-\\tau)) \\, " }, { "math_id": 40, "text": "u(t)" }, { "math_id": 41, "text": "\\tau^2 A^2" }, { "math_id": 42, "text": "2 \\tau" }, { "math_id": 43, "text": "A_{x}(\\tau,n) = \\int^\\infty_{-\\infty}x(t+\\frac{\\tau}{2}) x^{*}(t-\\frac{\\tau}{2}) e^{-j 2 \\pi tn} dt" }, { "math_id": 44, "text": "x(t) = exp[-\\alpha\\pi{(t-t_{0})^2} + j2\\pi f_{0}t]" }, { "math_id": 45, "text": "A_{x}(\\tau,n)" }, { "math_id": 46, "text": "= \\int^\\infty_{-\\infty}e^{-\\alpha\\pi (t+\\tau/2-t_{0})^{2}+j2\\pi f_{0}(t+\\tau/2)}+e^{-\\alpha\\pi (t-\\tau/2-t_{0})^{2}-j2\\pi f_{0}(t-\\tau/2)}e^{-j2\\pi tn}dt" }, { "math_id": 47, "text": "= \\int^\\infty_{-\\infty}e^{-\\alpha\\pi [2(t-t_{0})^{2}+\\tau^{2}/2]+j2\\pi f_{0}\\tau}e^{-j2\\pi tn}dt" }, { "math_id": 48, "text": "= \\int^\\infty_{-\\infty}e^{-\\alpha\\pi [2t^{2}-\\tau^{2}/2]+j2\\pi f_{0}\\tau}e^{-j2\\pi tn}e^{-j2\\pi t_{0}n}dt" }, { "math_id": 49, "text": "A_{x}(\\tau,n) = \\sqrt\\frac{1}{2\\alpha}exp[-\\pi (\\frac{\\alpha\\tau^{2}}{2}+\\frac{n^{2}}{2\\alpha})]exp[j2\\pi (f_{0}\\tau-t_{0}n)]" }, { "math_id": 50, "text": "x(t) = exp[-\\alpha_{1}\\pi (t-t_{1})^{2}+j2\\pi f_{1}t] + exp[-\\alpha_{2}\\pi (t-t_{2})^{2}+j2\\pi f_{2}t]" }, { "math_id": 51, "text": "= 
\\int^\\infty_{-\\infty}x_{1}(t+\\tau/2)x_{1}^{*}(t-\\tau/2)e^{-j2\\pi tn}dt" }, { "math_id": 52, "text": "\\int^\\infty_{-\\infty}x_{2}(t+\\tau/2)x_{2}^{*}(t-\\tau/2)e^{-j2\\pi tn}dt" }, { "math_id": 53, "text": "\\int^\\infty_{-\\infty}x_{1}(t+\\tau/2)x_{2}^{*}(t-\\tau/2)e^{-j2\\pi tn}dt" }, { "math_id": 54, "text": " \\int^\\infty_{-\\infty}x_{2}(t+\\tau/2)x_{1}^{*}(t-\\tau/2)e^{-j2\\pi tn}dt" }, { "math_id": 55, "text": "A_{x}(\\tau,n) = A_{x1}(\\tau,n) + A_{x2}(\\tau,n) + A_{x1x2}(\\tau,n) + A_{x2x1}(\\tau,n)" }, { "math_id": 56, "text": "A_{x}(\\tau,n) = \\sqrt\\frac{1}{2\\alpha_{1}}exp[-\\pi (\\frac{\\alpha_{1}\\tau^{2}}{2}+\\frac{n^{2}}{2\\alpha_{1}})]exp[j2\\pi (f_{1}\\tau-t_{1}n)]" }, { "math_id": 57, "text": "A_{x}(\\tau,n) = \\sqrt\\frac{1}{2\\alpha_{2}}exp[-\\pi (\\frac{\\alpha_{2}\\tau^{2}}{2}+\\frac{n^{2}}{2\\alpha_{1}})]exp[j2\\pi (f_{2}\\tau-t_{2}n)]" }, { "math_id": 58, "text": "\\alpha_{1} = \\alpha_{2}" }, { "math_id": 59, "text": "A_{x1x2}(\\tau,n) = \\sqrt\\frac{1}{2\\alpha_{u}}exp[-\\pi (\\alpha_{u}\\frac{(\\tau -t_{d})^{2}}{2}+\\frac{(n-f_{d})^{2}}{2\\alpha_{u}})]exp[j2\\pi (f_{u}\\tau-t_{u}n+f_{d}t_{u})]" }, { "math_id": 60, "text": "t_{u} = (t_{1}+t_{2}/2)" }, { "math_id": 61, "text": "f_{u} = (f_{1}+f_{2})/2" }, { "math_id": 62, "text": "\\alpha_{u} = (\\alpha_{1}+\\alpha_{2})/2" }, { "math_id": 63, "text": "t_{d} = t_{1}+t_{2}" }, { "math_id": 64, "text": "f_{d} = f_{1}-f_{2}" }, { "math_id": 65, "text": "\\alpha_{d} = \\alpha_{1}-\\alpha_{2}" }, { "math_id": 66, "text": "A_{x2x1}(\\tau,n) = A_{x1x2}^{*}(-\\tau,-n)" }, { "math_id": 67, "text": "\\alpha_{1}" }, { "math_id": 68, "text": "\\alpha_{2}" }, { "math_id": 69, "text": "A_{x1x2}(\\tau,n) = \\sqrt\\frac{1}{2\\alpha_{u}}exp[-\\pi \\frac{[(n-f_{d})+j(\\alpha_{1}t_{1}+\\alpha_{2}t_{2})-j\\alpha_{d}\\tau /2]^{2}}{2\\alpha_{u}}exp[-\\pi(\\alpha_{1}(t_{1}-\\frac{\\tau}{2})^{2})+\\alpha_{2}(t_{2}-\\frac{\\tau}{2})^{2})]exp[j2\\pi \n f_{u}\\tau]" } ]
https://en.wikipedia.org/wiki?curid=1135333
113564
General linear group
Group of "n" × "n" invertible matrices In mathematics, the general linear group of degree "n" is the set of "n"×"n" invertible matrices, together with the operation of ordinary matrix multiplication. This forms a group, because the product of two invertible matrices is again invertible, and the inverse of an invertible matrix is invertible, with the identity matrix as the identity element of the group. The group is so named because the columns (and also the rows) of an invertible matrix are linearly independent, hence the vectors/points they define are in general linear position, and matrices in the general linear group take points in general linear position to points in general linear position. To be more precise, it is necessary to specify what kind of objects may appear in the entries of the matrix. For example, the general linear group over R (the set of real numbers) is the group of "n"×"n" invertible matrices of real numbers, and is denoted by GL"n"(R) or GL("n", R). More generally, the general linear group of degree "n" over any field "F" (such as the complex numbers), or a ring "R" (such as the ring of integers), is the set of "n"×"n" invertible matrices with entries from "F" (or "R"), again with matrix multiplication as the group operation. Typical notation is GL"n"("F") or GL("n", "F"), or simply GL("n") if the field is understood. More generally still, the general linear group of a vector space GL("V") is the automorphism group, not necessarily written as matrices. The special linear group, written SL("n", "F") or SL"n"("F"), is the subgroup of GL("n", "F") consisting of matrices with a determinant of 1. The group GL("n", "F") and its subgroups are often called linear groups or matrix groups (the automorphism group GL("V") is a linear group but not a matrix group). These groups are important in the theory of group representations, and also arise in the study of spatial symmetries and symmetries of vector spaces in general, as well as the study of polynomials. The modular group may be realised as a quotient of the special linear group SL(2, Z). If "n" ≥ 2, then the group GL("n", "F") is not abelian. General linear group of a vector space. If "V" is a vector space over the field "F", the general linear group of "V", written GL("V") or Aut("V"), is the group of all automorphisms of "V", i.e. the set of all bijective linear transformations "V" → "V", together with functional composition as group operation. If "V" has finite dimension "n", then GL("V") and GL("n", "F") are isomorphic. The isomorphism is not canonical; it depends on a choice of basis in "V". Given a basis ("e"1, ..., "e""n") of "V" and an automorphism "T" in GL("V"), we have then for every basis vector "e""i" that formula_0 for some constants "a""ij" in "F"; the matrix corresponding to "T" is then just the matrix with entries given by the "a""ji". In a similar way, for a commutative ring "R" the group GL("n", "R") may be interpreted as the group of automorphisms of a "free" "R"-module "M" of rank "n". One can also define GL("M") for any "R"-module, but in general this is not isomorphic to GL("n", "R") (for any "n"). In terms of determinants. Over a field "F", a matrix is invertible if and only if its determinant is nonzero. Therefore, an alternative definition of GL("n", "F") is as the group of matrices with nonzero determinant. Over a commutative ring "R", more care is needed: a matrix over "R" is invertible if and only if its determinant is a unit in "R", that is, if its determinant is invertible in "R". 
Therefore, GL("n", "R") may be defined as the group of matrices whose determinants are units. Over a non-commutative ring "R", determinants are not at all well behaved. In this case, GL("n", "R") may be defined as the unit group of the matrix ring M("n", "R"). As a Lie group. Real case. The general linear group GL("n", R) over the field of real numbers is a real Lie group of dimension "n"2. To see this, note that the set of all "n"×"n" real matrices, M"n"(R), forms a real vector space of dimension "n"2. The subset GL("n", R) consists of those matrices whose determinant is non-zero. The determinant is a polynomial map, and hence GL("n", R) is an open affine subvariety of M"n"(R) (a non-empty open subset of M"n"(R) in the Zariski topology), and therefore a smooth manifold of the same dimension. The Lie algebra of GL("n", R), denoted formula_1 consists of all "n"×"n" real matrices with the commutator serving as the Lie bracket. As a manifold, GL("n", R) is not connected but rather has two connected components: the matrices with positive determinant and the ones with negative determinant. The identity component, denoted by GL+("n", R), consists of the real "n"×"n" matrices with positive determinant. This is also a Lie group of dimension "n"2; it has the same Lie algebra as GL("n", R). The polar decomposition, which is unique for invertible matrices, shows that there is a homeomorphism between GL("n", R) and the Cartesian product of O("n") with the set of positive-definite symmetric matrices. Similarly, it shows that there is a homeomorphism between GL+("n", R) and the Cartesian product of SO("n") with the set of positive-definite symmetric matrices. Because the latter is contractible, the fundamental group of GL+("n", R) is isomorphic to that of SO("n"). The homeomorphism also shows that the group GL("n", R) is noncompact. “The” maximal compact subgroup of GL("n", R) is the orthogonal group O("n"), while "the" maximal compact subgroup of GL+("n", R) is the special orthogonal group SO("n"). As for SO("n"), the group GL+("n", R) is not simply connected (except when "n" = 1), but rather has a fundamental group isomorphic to Z for "n" = 2 or Z2 for "n" &gt; 2. Complex case. The general linear group over the field of complex numbers, GL("n", C), is a "complex" Lie group of complex dimension "n"2. As a real Lie group (through realification) it has dimension 2"n"2. The set of all real matrices forms a real Lie subgroup. These correspond to the inclusions GL("n", R) &lt; GL("n", C) &lt; GL("2n", R), which have real dimensions "n"2, 2"n"2, and 4"n"2 = (2"n")2. Complex "n"-dimensional matrices can be characterized as real 2"n"-dimensional matrices that preserve a linear complex structure — concretely, that commute with a matrix "J" such that "J"2 = −"I", where "J" corresponds to multiplying by the imaginary unit "i". The Lie algebra corresponding to GL("n", C) consists of all "n"×"n" complex matrices with the commutator serving as the Lie bracket. Unlike the real case, GL("n", C) is connected. This follows, in part, since the multiplicative group of complex numbers C∗ is connected. The group manifold GL("n", C) is not compact; rather its maximal compact subgroup is the unitary group U("n"). As for U("n"), the group manifold GL("n", C) is not simply connected but has a fundamental group isomorphic to Z. Over finite fields. If "F" is a finite field with "q" elements, then we sometimes write GL("n", "q") instead of GL("n", "F"). 
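For very small "n" and a prime number of field elements, the size of these groups can be checked by exhaustive enumeration. The following sketch is a hypothetical illustration (the function names are not standard ones): it counts the "n"×"n" matrices over the field with "p" elements that have nonzero determinant, and its output can be compared with the closed-form order given just below.

```python
from itertools import product

def det_mod_p(m, p):
    """Determinant of a square matrix (tuple of row tuples), reduced mod the prime p."""
    n = len(m)
    if n == 1:
        return m[0][0] % p
    total = 0
    for j in range(n):                      # cofactor expansion along the first row
        minor = tuple(row[:j] + row[j + 1:] for row in m[1:])
        total += (-1) ** j * m[0][j] * det_mod_p(minor, p)
    return total % p

def order_gl(n, p):
    """Order of GL(n, p) by brute force; only feasible for very small n and p."""
    entries = product(range(p), repeat=n * n)
    mats = (tuple(tuple(e[i * n:(i + 1) * n]) for i in range(n)) for e in entries)
    return sum(1 for m in mats if det_mod_p(m, p) != 0)

print(order_gl(2, 2), order_gl(2, 3), order_gl(3, 2))   # 6 48 168
```

The printed counts 6, 48 and 168 agree with the product formula that follows.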
When "p" is prime, GL("n", "p") is the outer automorphism group of the group Z"p""n", and also the automorphism group, because Z"p""n" is abelian, so the inner automorphism group is trivial. The order of GL("n", "q") is: formula_2 This can be shown by counting the possible columns of the matrix: the first column can be anything but the zero vector; the second column can be anything but the multiples of the first column; and in general, the "k"th column can be any vector not in the linear span of the first "k" − 1 columns. In "q"-analog notation, this is formula_3. For example, GL(3, 2) has order (8 − 1)(8 − 2)(8 − 4) = 168. It is the automorphism group of the Fano plane and of the group Z23. This group is also isomorphic to PSL(2, 7). More generally, one can count points of Grassmannian over "F": in other words the number of subspaces of a given dimension "k". This requires only finding the order of the stabilizer subgroup of one such subspace and dividing into the formula just given, by the orbit-stabilizer theorem. These formulas are connected to the Schubert decomposition of the Grassmannian, and are "q"-analogs of the Betti numbers of complex Grassmannians. This was one of the clues leading to the Weil conjectures. Note that in the limit "q" ↦ 1 the order of GL("n", "q") goes to 0! – but under the correct procedure (dividing by ("q" − 1)"n") we see that it is the order of the symmetric group (See Lorscheid's article) – in the philosophy of the field with one element, one thus interprets the symmetric group as the general linear group over the field with one element: "S"n ≅ GL("n", 1). History. The general linear group over a prime field, GL("ν", "p"), was constructed and its order computed by Évariste Galois in 1832, in his last letter (to Chevalier) and second (of three) attached manuscripts, which he used in the context of studying the Galois group of the general equation of order "p""ν". Special linear group. The special linear group, SL("n", "F"), is the group of all matrices with determinant 1. They are special in that they lie on a subvariety – they satisfy a polynomial equation (as the determinant is a polynomial in the entries). Matrices of this type form a group as the determinant of the product of two matrices is the product of the determinants of each matrix. SL("n", "F") is a normal subgroup of GL("n", "F"). If we write "F"× for the multiplicative group of "F" (excluding 0), then the determinant is a group homomorphism det: GL("n", "F") → "F"×. that is surjective and its kernel is the special linear group. Therefore, by the first isomorphism theorem, GL("n", "F")/SL("n", "F") is isomorphic to "F"×. In fact, GL("n", "F") can be written as a semidirect product: GL("n", "F") = SL("n", "F") ⋊ "F"× The special linear group is also the derived group (also known as commutator subgroup) of the GL("n", "F") (for a field or a division ring "F") provided that formula_4 or "k" is not the field with two elements. When "F" is R or C, SL("n", "F") is a Lie subgroup of GL("n", "F") of dimension "n"2 − 1. The Lie algebra of SL("n", "F") consists of all "n"×"n" matrices over "F" with vanishing trace. The Lie bracket is given by the commutator. The special linear group SL("n", R) can be characterized as the group of "volume and orientation-preserving" linear transformations of R"n". The group SL("n", C) is simply connected, while SL("n", R) is not. SL("n", R) has the same fundamental group as GL+("n", R), that is, Z for "n" = 2 and Z2 for "n" &gt; 2. Other subgroups. Diagonal subgroups. 
The set of all invertible diagonal matrices forms a subgroup of GL("n", "F") isomorphic to ("F"×)"n". In fields like R and C, these correspond to rescaling the space; the so-called dilations and contractions. A scalar matrix is a diagonal matrix which is a constant times the identity matrix. The set of all nonzero scalar matrices forms a subgroup of GL("n", "F") isomorphic to "F"×. This group is the center of GL("n", "F"). In particular, it is a normal, abelian subgroup. The center of SL("n", "F") is simply the set of all scalar matrices with unit determinant, and is isomorphic to the group of "n"th roots of unity in the field "F". Classical groups. The so-called classical groups are subgroups of GL("V") which preserve some sort of bilinear form on a vector space "V". These include the orthogonal, symplectic, and unitary groups. These groups provide important examples of Lie groups. Related groups and monoids. Projective linear group. The projective linear group PGL("n", "F") and the projective special linear group PSL("n", "F") are the quotients of GL("n", "F") and SL("n", "F") by their centers (which consist of the multiples of the identity matrix therein); they describe the induced action on the associated projective space. Affine group. The affine group Aff("n", "F") is an extension of GL("n", "F") by the group of translations in "F""n". It can be written as a semidirect product: Aff("n", "F") = GL("n", "F") ⋉ "F""n" where GL("n", "F") acts on "F""n" in the natural manner. The affine group can be viewed as the group of all affine transformations of the affine space underlying the vector space "F""n". One has analogous constructions for other subgroups of the general linear group: for instance, the special affine group is the subgroup defined by the semidirect product, SL("n", "F") ⋉ "F""n", and the Poincaré group is the affine group associated to the Lorentz group, O(1, 3, "F") ⋉ "F""n". General semilinear group. The general semilinear group ΓL("n", "F") is the group of all invertible semilinear transformations, and contains GL. A semilinear transformation is a transformation which is linear “up to a twist”, meaning “up to a field automorphism under scalar multiplication”. It can be written as a semidirect product: ΓL("n", "F") = Gal("F") ⋉ GL("n", "F") where Gal("F") is the Galois group of "F" (over its prime field), which acts on GL("n", "F") by the Galois action on the entries. The main interest of ΓL("n", "F") is that the associated projective semilinear group PΓL("n", "F") (which contains PGL("n", "F")) is the collineation group of projective space, for "n" &gt; 2, and thus semilinear maps are of interest in projective geometry. Full linear monoid. If one removes the restriction of the determinant being non-zero, the resulting algebraic structure is a monoid, usually called the full linear monoid, but occasionally also "full linear semigroup", "general linear monoid" etc. It is actually a regular semigroup. Infinite general linear group. The infinite general linear group or stable general linear group is the direct limit of the inclusions GL("n", "F") → GL("n" + 1, "F") as the upper left block matrix.
It is denoted by either GL("F") or GL(∞, "F"), and can also be interpreted as invertible infinite matrices which differ from the identity matrix in only finitely many places. It is used in algebraic K-theory to define K1, and over the reals has a well-understood topology, thanks to Bott periodicity. It should not be confused with the space of (bounded) invertible operators on a Hilbert space, which is a larger group, and topologically much simpler, namely contractible – see Kuiper's theorem. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "T(e_i) = \\sum_{j=1}^n a_{ji} e_j" }, { "math_id": 1, "text": "\\mathfrak{gl}_n," }, { "math_id": 2, "text": "\\prod_{k=0}^{n-1}(q^n-q^k)=(q^n - 1)(q^n - q)(q^n - q^2)\\ \\cdots\\ (q^n - q^{n-1})." }, { "math_id": 3, "text": "[n]_q!(q-1)^n q^{n \\choose 2}" }, { "math_id": 4, "text": "n \\ne 2" } ]
https://en.wikipedia.org/wiki?curid=113564
11357006
Woods–Saxon potential
The Woods–Saxon potential is a mean field potential for the nucleons (protons and neutrons) inside the atomic nucleus, which is used to describe approximately the forces acting on each nucleon in the nuclear shell model for the structure of the nucleus. The potential is named after Roger D. Woods and David S. Saxon. The form of the potential, in terms of the distance "r" from the center of the nucleus, is: formula_0 where "V"0 (having dimension of energy) represents the potential well depth, "a" is a length representing the "surface thickness" of the nucleus, and formula_1 is the nuclear radius, where "r"0 ≈ 1.25 fm and "A" is the mass number. Typical values for the parameters are: "V"0 ≈ 50 MeV, "a" ≈ 0.5 fm. For large atomic number "A" this potential is similar to a potential well. It has the following desired properties: it is monotonically increasing with distance, i.e. everywhere attractive; for large "A" it is approximately flat (close to −"V"0) in the nuclear interior; nucleons near the nuclear surface, i.e. with "r" ≈ "R", experience a large force toward the center; and it approaches zero rapidly as "r" goes to infinity, reflecting the short range of the strong nuclear force. The Schrödinger equation of this potential can be solved analytically, by transforming it into a hypergeometric differential equation. The radial part of the wavefunction solution is given by formula_2 where formula_3, formula_4, formula_5, formula_6 and formula_7. Here formula_8 is the hypergeometric function. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
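As a minimal numerical sketch, the potential can be evaluated directly from the expression above. The choice of nucleus (here "A" = 208) and the sample radii are arbitrary, and the parameter values are simply the typical ones quoted in the text.

```python
import numpy as np

V0 = 50.0                    # well depth in MeV (typical value)
a  = 0.5                     # surface thickness in fm (typical value)
r0 = 1.25                    # radius parameter in fm
A  = 208                     # illustrative mass number (lead-208)
R  = r0 * A ** (1 / 3)       # nuclear radius in fm

def woods_saxon(r):
    """Woods-Saxon potential V(r) in MeV at radial distance r in fm."""
    return -V0 / (1.0 + np.exp((r - R) / a))

for r in (0.0, 0.5 * R, R, R + 2.0, R + 5.0):
    print(f"r = {r:5.2f} fm   V = {woods_saxon(r):8.3f} MeV")
```

The printed values are close to the full depth well inside the nucleus, exactly half the depth at "r" = "R", and fall toward zero within a few surface thicknesses beyond the radius.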
[ { "math_id": 0, "text": "V(r) = -\\frac{V_0}{1+\\exp({r-R\\over a})}" }, { "math_id": 1, "text": "R = r_0 A^{1/3}" }, { "math_id": 2, "text": "u(r)= \\frac 1r y^\\nu (1-y)^\\mu {}_2F_1 (\\mu+\\nu, \\mu+\\nu+1; 2\\nu+1; y)" }, { "math_id": 3, "text": "y = \\dfrac{1}{1+\\exp\\left(\\frac {r-R}{a}\\right)}" }, { "math_id": 4, "text": "\\mu = i\\sqrt{\\gamma^2-\\nu^2}" }, { "math_id": 5, "text": "\\dfrac{2mE}{\\hbar^2}=-\\nu^2 " }, { "math_id": 6, "text": "\\nu <0" }, { "math_id": 7, "text": "\\dfrac{2mV_0}{\\hbar^2}a^2=\\gamma^2" }, { "math_id": 8, "text": "{}_2F_1(a,b;c;z) = \\sum_{n=0}^\\infty \\frac{(a)_n (b)_n}{(c)_n} \\frac{z^n}{n!}" } ]
https://en.wikipedia.org/wiki?curid=11357006
1136348
Hoeffding's inequality
Probabilistic inequality applying on sum of bounded random variables In probability theory, Hoeffding's inequality provides an upper bound on the probability that the sum of bounded independent random variables deviates from its expected value by more than a certain amount. Hoeffding's inequality was proven by Wassily Hoeffding in 1963. Hoeffding's inequality is a special case of the Azuma–Hoeffding inequality and McDiarmid's inequality. It is similar to the Chernoff bound, but tends to be less sharp, in particular when the variance of the random variables is small. It is similar to, but incomparable with, one of Bernstein's inequalities. Statement. Let "X"1, ..., "Xn" be independent random variables such that formula_0 almost surely. Consider the sum of these random variables, formula_1 Then Hoeffding's theorem states that, for all "t" &gt; 0, formula_2 Here E["S"n] is the expected value of Sn. Note that the inequalities also hold when the Xi have been obtained using sampling without replacement; in this case the random variables are not independent anymore. A proof of this statement can be found in Hoeffding's paper. For slightly better bounds in the case of sampling without replacement, see for instance the paper by . Generalization. Let formula_3 be independent observations such that formula_4 and formula_5. Let formula_6. Then, for any formula_7,formula_8 Special Case: Bernoulli RVs. Suppose formula_9 and formula_10 for all "i". This can occur when "Xi" are independent Bernoulli random variables, though they need not be identically distributed. Then we get the inequality formula_11 or equivalently, formula_12 for all formula_13. This is a version of the additive Chernoff bound which is more general, since it allows for random variables that take values between zero and one, but also weaker, since the Chernoff bound gives a better tail bound when the random variables have small variance. General case of bounded from above random variables. Hoeffding's inequality can be extended to the case of bounded from above random variables. Let "X"1, ..., "Xn" be independent random variables such that formula_14 and formula_15 almost surely. Denote by formula_16 Hoeffding's inequality for bounded from aboved random variables states that for all formula_13, formula_17 In particular, if formula_18 for all formula_19, then for all formula_13, formula_20 General case of sub-Gaussian random variables. The proof of Hoeffding's inequality can be generalized to any sub-Gaussian distribution. Recall that a random variable "X" is called sub-Gaussian, if formula_21 for some formula_22. For any bounded variable "X", formula_23 for formula_24 for some sufficiently large "T". Then formula_25 for all formula_26 so taking formula_27 yields formula_28 for formula_26. So every bounded variable is sub-Gaussian. For a random variable "X", the following norm is finite if and only if "X" is sub-Gaussian: formula_29 Then let "X"1, ..., "Xn" be zero-mean independent sub-Gaussian random variables, the general version of the Hoeffding's inequality states that: formula_30 where "c" &gt; 0 is an absolute constant. Proof. The proof of Hoeffding's inequality follows similarly to concentration inequalities like Chernoff bounds. The main difference is the use of Hoeffding's Lemma: Suppose X is a real random variable such that formula_31 almost surely. Then formula_32 Using this lemma, we can prove Hoeffding's inequality. 
As in the theorem statement, suppose "X"1, ..., "Xn" are n independent random variables such that formula_33 almost surely for all "i", and let formula_34. Then for "s", "t" &gt; 0, Markov's inequality and the independence of Xi implies: formula_35 This upper bound is the best for the value of s minimizing the value inside the exponential. This can be done easily by optimizing a quadratic, giving formula_36 Writing the above bound for this value of s, we get the desired bound: formula_37 Usage. Confidence intervals. Hoeffding's inequality can be used to derive confidence intervals. We consider a coin that shows heads with probability p and tails with probability 1 − "p". We toss the coin n times, generating n samples formula_38 (which are i.i.d Bernoulli random variables). The expected number of times the coin comes up heads is "pn". Furthermore, the probability that the coin comes up heads at least k times can be exactly quantified by the following expression: formula_39 where "H"("n") is the number of heads in n coin tosses. When "k" = ("p" + "ε")"n" for some "ε" &gt; 0, Hoeffding's inequality bounds this probability by a term that is exponentially small in "ε"2"n": formula_40 Since this bound holds on both sides of the mean, Hoeffding's inequality implies that the number of heads that we see is concentrated around its mean, with exponentially small tail. formula_41 Thinking of formula_42 as the "observed" mean, this probability can be interpreted as the level of significance formula_43 (probability of making an error) for a confidence interval around formula_44 of size 2ɛ: formula_45 Finding n for opposite inequality sign in the above, i.e. n that violates inequality but not equality above, gives us: formula_46 Therefore, we require at least formula_47 samples to acquire a formula_48-confidence interval formula_49. Hence, the cost of acquiring the confidence interval is sublinear in terms of confidence level and quadratic in terms of precision. Note that there are more efficient methods of estimating a confidence interval. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
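The sample-size requirement derived in this section is straightforward to evaluate and to test by simulation. The sketch below is a minimal illustration under arbitrary choices (ε = 0.05, α = 0.05, a coin with "p" = 0.3 and 2000 repetitions); the helper function is a hypothetical one that simply implements the bound from this section.

```python
import math
import random

def hoeffding_sample_size(eps, alpha):
    """Smallest n with 2*exp(-2*n*eps**2) <= alpha, i.e. n >= log(2/alpha) / (2*eps**2)."""
    return math.ceil(math.log(2 / alpha) / (2 * eps ** 2))

eps, alpha = 0.05, 0.05
n = hoeffding_sample_size(eps, alpha)
print(n)                                        # 738 for these illustrative values

random.seed(0)
p, trials, misses = 0.3, 2000, 0
for _ in range(trials):
    mean = sum(random.random() < p for _ in range(n)) / n
    if abs(mean - p) > eps:                     # interval of half-width eps missed the true p
        misses += 1
print(misses / trials)                          # empirical failure rate, well below alpha
```

Because the inequality makes no use of the variance of the summands, the observed failure rate is typically far smaller than α, which is one reason the more efficient interval constructions mentioned above exist.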
[ { "math_id": 0, "text": "a_i \\leq X_i \\leq b_i" }, { "math_id": 1, "text": "S_n = X_1 + \\cdots + X_n." }, { "math_id": 2, "text": "\\begin{align}\n\\operatorname{P} \\left(S_n - \\mathrm{E}\\left [S_n \\right] \\geq t \\right) &\\leq \\exp \\left(-\\frac{2t^2}{\\sum_{i=1}^n (b_i - a_i)^2} \\right) \\\\\n\\operatorname{P} \\left(\\left |S_n - \\mathrm{E}\\left [S_n \\right] \\right | \\geq t \\right) &\\leq 2\\exp \\left(-\\frac{2t^2}{\\sum_{i=1}^n(b_i - a_i)^2} \\right)\n\\end{align}" }, { "math_id": 3, "text": "Y_1, \\dots, Y_n " }, { "math_id": 4, "text": "\\operatorname{E} (Y_i) = 0" }, { "math_id": 5, "text": "a_i \\le Y_i \\le b_i " }, { "math_id": 6, "text": "\\epsilon > 0" }, { "math_id": 7, "text": "t > 0" }, { "math_id": 8, "text": "P \\left( \\sum_{i=1}^n Y_i \\ge \\epsilon \\right) \\le \\exp \\left( - t \\epsilon + \\sum_{i=1}^n t^2 (b_i - a_i)^2 / 8 \\right)" }, { "math_id": 9, "text": "a_i = 0" }, { "math_id": 10, "text": "b_i = 1" }, { "math_id": 11, "text": "\\begin{align}\n\\operatorname{P}\\left(S_n - \\mathrm{E}\\left[S_n\\right] \\geq t\\right) &\\leq \\exp(-2t^2/n)\\\\\n\\operatorname{P}\\left(\\left|S_n - \\mathrm{E}\\left[S_n\\right]\\right| \\geq t\\right) &\\leq 2\\exp(-2t^2/n)\n\\end{align}" }, { "math_id": 12, "text": "\\begin{align}\n\\operatorname{P}\\left((S_n - \\mathrm{E}\\left[S_n\\right])/n \\geq t\\right) &\\leq \\exp(-2 n t^2 )\\\\\n\\operatorname{P}\\left(\\left|(S_n - \\mathrm{E}\\left[S_n\\right])/n\\right| \\geq t\\right) &\\leq 2\\exp(-2 n t^2)\n\\end{align}" }, { "math_id": 13, "text": "t \\geq 0" }, { "math_id": 14, "text": " \\mathrm{E}X_{i}=0" }, { "math_id": 15, "text": " X_i \\leq b_i" }, { "math_id": 16, "text": "\\begin{align}\nC_i^2= \\left\\{ \\begin{array}{ll}\n\\mathrm{E}X_{i}^2 , & \\mathrm{if}\\ \\mathrm{E}X_{i}^2 \\geq b^2_{i} , \\\\\n\\displaystyle\\frac{1}{4}\\left( b_{i} + \\frac{\\mathrm{E}X_{i}^2 }{ b_{i} }\\right)^2, & \\textrm{otherwise}.\n\\end{array} \\right.\n\\end{align}" }, { "math_id": 17, "text": " \n\\mathrm{P}\\left( \\left| \\sum_{i=1}^n X_i \\right| \\geq t \\right) \\leq 2\\exp\\left( -\\frac{ t^2}{2\\sum_{i=1}^nC_i^2 } \\right). \n" }, { "math_id": 18, "text": "\\mathrm{E}X_{i}^2 \\geq b^2_{i}" }, { "math_id": 19, "text": "i" }, { "math_id": 20, "text": "\n\\mathrm{P}\\left( \\left| \\sum_{i=1}^n X_i \\right| \\geq t \\right) \\leq 2\\exp\\left( -\\frac{ t^2}{2\\sum_{i=1}^n \\mathrm{E}X_{i}^2 } \\right). \n" }, { "math_id": 21, "text": "\\mathrm{P}(|X|\\geq t)\\leq 2e^{-ct^2}," }, { "math_id": 22, "text": "c>0" }, { "math_id": 23, "text": "\\mathrm{P}(|X|\\geq t) = 0 \\leq 2e^{-ct^2}" }, { "math_id": 24, "text": "t > T" }, { "math_id": 25, "text": "2e^{-cT^2} \\leq 2e^{-ct^2}" }, { "math_id": 26, "text": "t \\leq T" }, { "math_id": 27, "text": "c = \\log(2) / T^2" }, { "math_id": 28, "text": "\\mathrm{P}(|X|\\geq t)\\leq 1 \\leq 2e^{-cT^2} \\leq 2e^{-ct^2}," }, { "math_id": 29, "text": "\\Vert X \\Vert_{\\psi_2} := \\inf\\left\\{c\\geq 0: \\mathrm{E} \\left( e^{X^2/c^2} \\right) \\leq 2\\right\\}." }, { "math_id": 30, "text": "\n\\mathrm{P}\\left( \\left| \\sum_{i=1}^n X_i \\right| \\geq t \\right) \\leq 2\\exp\\left( -\\frac{ct^2}{\\sum_{i=1}^n \\Vert X_i \\Vert^2_{\\psi_2}} \\right),\n" }, { "math_id": 31, "text": "X\\in\\left[a,b\\right]" }, { "math_id": 32, "text": "\\mathrm{E} \\left [e^{s\\left (X-\\mathrm{E}\\left [X \\right ]\\right )} \\right ]\\leq \\exp\\left(\\tfrac{1}{8} s^2 (b-a)^2\\right)." 
}, { "math_id": 33, "text": "X_i\\in [a_i,b_i]" }, { "math_id": 34, "text": "S_n = X_1 + \\cdots + X_n" }, { "math_id": 35, "text": "\\begin{align}\n\\operatorname{P}\\left(S_n-\\mathrm{E}\\left [S_n \\right ]\\geq t \\right) &= \\operatorname{P} \\left (\\exp(s(S_n-\\mathrm{E}\\left [S_n \\right ])) \\geq \\exp(st) \\right)\\\\\n&\\leq \\exp(-st)\\mathrm{E} \\left [\\exp(s(S_n-\\mathrm{E}\\left [S_n \\right ])) \\right ]\\\\\n&= \\exp(-st) \\prod_{i=1}^n\\mathrm{E} \\left [\\exp(s(X_i-\\mathrm{E}\\left [X_i\\right])) \\right ]\\\\\n&\\leq \\exp(-st) \\prod_{i=1}^n \\exp\\Big(\\frac{s^2 (b_i-a_i)^2}{8}\\Big)\\\\\n&= \\exp\\left(-st+\\tfrac{1}{8} s^2 \\sum_{i=1}^n(b_i-a_i)^2\\right)\n\\end{align}" }, { "math_id": 36, "text": "s=\\frac{4t}{\\sum_{i=1}^n(b_i-a_i)^2}." }, { "math_id": 37, "text": "\\operatorname{P} \\left(S_n-\\mathrm{E}\\left [S_n \\right ]\\geq t \\right)\\leq \\exp\\left(-\\frac{2t^2}{\\sum_{i=1}^n(b_i-a_i)^2}\\right)." }, { "math_id": 38, "text": "X_1,\\ldots,X_n" }, { "math_id": 39, "text": "\\operatorname{P}( H(n) \\geq k)= \\sum_{i=k}^n \\binom{n}{i} p^i (1-p)^{n-i}," }, { "math_id": 40, "text": "\\operatorname{P}( H(n) - pn >\\varepsilon n)\\leq\\exp\\left(-2\\varepsilon^2 n\\right)." }, { "math_id": 41, "text": "\\operatorname{P}\\left(|H(n) - pn| >\\varepsilon n\\right)\\leq 2\\exp\\left(-2\\varepsilon^2 n\\right)." }, { "math_id": 42, "text": "\\overline{X} = \\frac{1}{n}H(n)" }, { "math_id": 43, "text": "\\alpha" }, { "math_id": 44, "text": "p" }, { "math_id": 45, "text": "\\alpha=\\operatorname{P}(\\ \\overline{X} \\notin [p-\\varepsilon, p+\\varepsilon]) \\leq 2e^{-2\\varepsilon^2n}" }, { "math_id": 46, "text": "n\\geq \\frac{\\log(2/\\alpha)}{2\\varepsilon^2}" }, { "math_id": 47, "text": "\\textstyle \\frac{\\log(2/\\alpha)}{2\\varepsilon^2}" }, { "math_id": 48, "text": "\\textstyle (1-\\alpha)" }, { "math_id": 49, "text": "\\textstyle p \\pm \\varepsilon" } ]
https://en.wikipedia.org/wiki?curid=1136348
1136363
1,000,000,000
Natural number 1,000,000,000 (one billion, short scale; one thousand million or one milliard, one yard, long scale) is the natural number following 999,999,999 and preceding 1,000,000,001. With a number, "billion" can be abbreviated as b, bil or bn. In standard form, it is written as 1 × 109. The metric prefix giga indicates 1,000,000,000 times the base unit. Its symbol is G. One billion years may be called an "eon" in astronomy or geology. Previously in British English (but not in American English), the word "billion" referred exclusively to a million millions (1,000,000,000,000). However, this is not common anymore, and the word has been used to mean one thousand million (1,000,000,000) for several decades. The term milliard could also be used to refer to 1,000,000,000; whereas "milliard" is rarely used in English, variations on this name often appear in other languages. In the Indian numbering system, it is known as 100 crore or 1 arab. 1,000,000,000 is also the cube of 1000. Sense of scale. The facts below give a sense of how large 1,000,000,000 (109) is in the context of time according to current scientific evidence: Count. A is a cube; B consists of 1000 cubes the size of cube "A", C consists of 1000 cubes the size of cube "B"; and D consists of 1000 cubes the size of cube "C". Thus there are 1 million "A"-sized cubes in "C"; and 1,000,000,000 "A"-sized cubes in "D". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "C(19) = \\frac{\\binom{2 \\times 19}{19}}{19+1} = \\frac{(2 \\times 19)!}{19! \\times (19+1)!}" }, { "math_id": 1, "text": "\\sum_{d|34} \\binom{34}{d}" }, { "math_id": 2, "text": "F_0" }, { "math_id": 3, "text": "F_4" }, { "math_id": 4, "text": "F_5" }, { "math_id": 5, "text": "C(20) = \\frac{\\binom{2 \\times 20}{20}}{20+1} = \\frac{(2 \\times 20)!}{20! \\times (20+1)!}" }, { "math_id": 6, "text": "C(n)" }, { "math_id": 7, "text": "n" } ]
https://en.wikipedia.org/wiki?curid=1136363
1136447
Microcanonical ensemble
Ensemble of states with an exactly specified total energy In statistical mechanics, the microcanonical ensemble is a statistical ensemble that represents the possible states of a mechanical system whose total energy is exactly specified. The system is assumed to be isolated in the sense that it cannot exchange energy or particles with its environment, so that (by conservation of energy) the energy of the system does not change with time. The primary macroscopic variables of the microcanonical ensemble are the total number of particles in the system (symbol: "N"), the system's volume (symbol: "V"), as well as the total energy in the system (symbol: "E"). Each of these is assumed to be constant in the ensemble. For this reason, the microcanonical ensemble is sometimes called the "NVE" ensemble. In simple terms, the microcanonical ensemble is defined by assigning an equal probability to every microstate whose energy falls within a range centered at "E". All other microstates are given a probability of zero. Since the probabilities must add up to 1, the probability "P" is the inverse of the number of microstates "W" within the range of energy, formula_0 The range of energy is then reduced in width until it is infinitesimally narrow, still centered at "E". In the limit of this process, the microcanonical ensemble is obtained. Applicability. Because of its connection with the elementary assumptions of equilibrium statistical mechanics (particularly the postulate of a priori equal probabilities), the microcanonical ensemble is an important conceptual building block in the theory. It is sometimes considered to be the fundamental distribution of equilibrium statistical mechanics. It is also useful in some numerical applications, such as molecular dynamics. On the other hand, most nontrivial systems are mathematically cumbersome to describe in the microcanonical ensemble, and there are also ambiguities regarding the definitions of entropy and temperature. For these reasons, other ensembles are often preferred for theoretical calculations. The applicability of the microcanonical ensemble to real-world systems depends on the importance of energy fluctuations, which may result from interactions between the system and its environment as well as uncontrolled factors in preparing the system. Generally, fluctuations are negligible if a system is macroscopically large, or if it is manufactured with precisely known energy and thereafter maintained in near isolation from its environment. In such cases the microcanonical ensemble is applicable. Otherwise, different ensembles are more appropriate – such as the canonical ensemble (fluctuating energy) or the grand canonical ensemble (fluctuating energy and particle number). Properties. Thermodynamic quantities. The fundamental thermodynamic potential of the microcanonical ensemble is entropy. There are at least three possible definitions, each given in terms of the phase volume function "v"("E"). In classical mechanics "v"("E") this is the volume of the region of phase space where the energy is less than "E". In quantum mechanics "v"("E") is roughly the number of energy eigenstates with energy less than "E"; however this must be smoothed so that we can take its derivative (see the Precise expressions section for details on how this is done). The definitions of microcanonical entropy are: In the microcanonical ensemble, the temperature is a derived quantity rather than an external control parameter. 
It is defined as the derivative of the chosen entropy with respect to energy. For example, one can define the "temperatures" "Tv" and "Ts" as follows: formula_2 formula_3 Like entropy, there are multiple ways to understand temperature in the microcanonical ensemble. More generally, the correspondence between these ensemble-based definitions and their thermodynamic counterparts is not perfect, particularly for finite systems. The microcanonical pressure and chemical potential are given by: formula_4 Phase transitions. Under their strict definition, phase transitions correspond to nonanalytic behavior in the thermodynamic potential or its derivatives. Using this definition, phase transitions in the microcanonical ensemble can occur in systems of any size. This contrasts with the canonical and grand canonical ensembles, for which phase transitions can occur only in the thermodynamic limit – i.e., in systems with infinitely many degrees of freedom. Roughly speaking, the reservoirs defining the canonical or grand canonical ensembles introduce fluctuations that "smooth out" any nonanalytic behavior in the free energy of finite systems. This smoothing effect is usually negligible in macroscopic systems, which are sufficiently large that the free energy can approximate nonanalytic behavior exceedingly well. However, the technical difference in ensembles may be important in the theoretical analysis of small systems. Information entropy. For a given mechanical system (fixed "N", "V") and a given range of energy, the uniform distribution of probability "P" over microstates (as in the microcanonical ensemble) maximizes the ensemble average . Thermodynamic analogies. Early work in statistical mechanics by Ludwig Boltzmann led to his eponymous entropy equation for a system of a given total energy, "S" "k" log "W", where "W" is the number of distinct states accessible by the system at that energy. Boltzmann did not elaborate too deeply on what exactly constitutes the set of distinct states of a system, besides the special case of an ideal gas. This topic was investigated to completion by Josiah Willard Gibbs who developed the generalized statistical mechanics for arbitrary mechanical systems, and defined the microcanonical ensemble described in this article. Gibbs investigated carefully the analogies between the microcanonical ensemble and thermodynamics, especially how they break down in the case of systems of few degrees of freedom. He introduced two further definitions of microcanonical entropy that do not depend on "ω" – the volume and surface entropy described above. (Note that the surface entropy differs from the Boltzmann entropy only by an "ω"-dependent offset.) The volume entropy formula_5 and associated temperature formula_6 are closely analogous to thermodynamic entropy and temperature. It is possible to show exactly that formula_7 ( is the ensemble average pressure) as expected for the first law of thermodynamics. A similar equation can be found for the surface entropy formula_8 (or Boltzmann entropy formula_1) and its associated temperature "T"s, however the "pressure" in this equation is a complicated quantity unrelated to the average pressure. The microcanonical temperatures formula_6 and formula_9 are not entirely satisfactory in their analogy to temperature as defined using a canonical ensemble. Outside of the thermodynamic limit, a number of artefacts occur. The preferred solution to these problems is avoid use of the microcanonical ensemble. 
In many realistic cases a system is thermostatted to a heat bath so that the energy is not precisely known. Then, a more accurate description is the canonical ensemble or grand canonical ensemble, both of which have complete correspondence to thermodynamics. Precise expressions for the ensemble. The precise mathematical expression for a statistical ensemble depends on the kind of mechanics under consideration – quantum or classical – since the notion of a "microstate" is considerably different in these two cases. In quantum mechanics, diagonalization provides a discrete set of microstates with specific energies. The classical mechanical case involves instead an integral over canonical phase space, and the size of microstates in phase space can be chosen somewhat arbitrarily. To construct the microcanonical ensemble, it is necessary in both types of mechanics to first specify a range of energy. In the expressions below the function formula_10 (a function of "H", peaking at "E" with width "ω") will be used to represent the range of energy in which to include states. An example of this function would be formula_11 or, more smoothly, formula_12 Quantum mechanical. A statistical ensemble in quantum mechanics is represented by a density matrix, denoted by formula_13. The microcanonical ensemble can be written using bra–ket notation, in terms of the system's energy eigenstates and energy eigenvalues. Given a complete basis of energy eigenstates , indexed by "i", the microcanonical ensemble is formula_14 where the "H""i" are the energy eigenvalues determined by formula_15 (here "Ĥ" is the system's total energy operator, i. e., Hamiltonian operator). The value of "W" is determined by demanding that formula_13 is a normalized density matrix, and so formula_16 The state volume function (used to calculate entropy) is given by formula_17 The microcanonical ensemble is defined by taking the limit of the density matrix as the energy width goes to zero, however a problematic situation occurs once the energy width becomes smaller than the spacing between energy levels. For very small energy width, the ensemble does not exist at all for most values of "E", since no states fall within the range. When the ensemble does exist, it typically only contains one (or two) states, since in a complex system the energy levels are only ever equal by accident (see random matrix theory for more discussion on this point). Moreover, the state-volume function also increases only in discrete increments, and so its derivative is only ever infinite or zero, making it difficult to define the density of states. This problem can be solved by not taking the energy range completely to zero and smoothing the state-volume function, however this makes the definition of the ensemble more complicated, since it becomes then necessary to specify the energy range in addition to other variables (together, an "NVEω" ensemble). Classical mechanical. In classical mechanics, an ensemble is represented by a joint probability density function "ρ"("p"1, ... "p""n", "q"1, ... "q""n") defined over the system's phase space. The phase space has "n" generalized coordinates called "q"1, ... "q""n", and "n" associated canonical momenta called "p"1, ... "p""n". The probability density function for the microcanonical ensemble is: formula_18 where Again, the value of "W" is determined by demanding that "ρ" is a normalized probability density function: formula_19 This integral is taken over the entire phase space. 
The state volume function (used to calculate entropy) is defined by formula_20 As the energy width "ω" is taken to zero, the value of "W" decreases in proportion to "ω" as "W" = "ω" ("dv"/"dE"). Based on the above definition, the microcanonical ensemble can be visualized as an infinitesimally thin shell in phase space, centered on a constant-energy surface. Although the microcanonical ensemble is confined to this surface, it is not necessarily uniformly distributed over that surface: if the gradient of energy in phase space varies, then the microcanonical ensemble is "thicker" (more concentrated) in some parts of the surface than others. This feature is an unavoidable consequence of requiring that the microcanonical ensemble is a steady-state ensemble. Examples. Ideal gas. The fundamental quantity in the microcanonical ensemble is formula_21, which is equal to the phase space volume compatible with given formula_22. From formula_23, all thermodynamic quantities can be calculated. For an ideal gas, the energy is independent of the particle positions, which therefore contribute a factor of formula_24 to formula_23. The momenta, by contrast, are constrained to a formula_25-dimensional (hyper-)spherical shell of radius formula_26; their contribution is equal to the surface volume of this shell. The resulting expression for formula_23 is: formula_27 where formula_28 is the gamma function, and the factor formula_29 has been included to account for the indistinguishability of particles (see Gibbs paradox). In the large formula_30 limit, the Boltzmann entropy formula_31 is formula_32 This is also known as the Sackur–Tetrode equation. The temperature is given by formula_33 which agrees with analogous result from the kinetic theory of gases. Calculating the pressure gives the ideal gas law: formula_34 Finally, the chemical potential formula_35 is formula_36 Ideal gas in a uniform gravitational field. The microcanonical phase volume can also be calculated explicitly for an ideal gas in a uniform gravitational field. The results are stated below for a 3-dimensional ideal gas of formula_30 particles, each with mass formula_37, confined in a thermally isolated container that is infinitely long in the "z"-direction and has constant cross-sectional area formula_38. The gravitational field is assumed to act in the minus "z" direction with strength formula_39. The phase volume formula_40 is formula_41 where formula_42 is the total energy, kinetic plus gravitational. The gas density formula_43 as a function of height formula_44 can be obtained by integrating over the phase volume coordinates. The result is: formula_45 Similarly, the distribution of the velocity magnitude formula_46 (averaged over all heights) is formula_47 The analogues of these equations in the canonical ensemble are the barometric formula and the Maxwell–Boltzmann distribution, respectively. In the limit formula_48, the microcanonical and canonical expressions coincide; however, they differ for finite formula_30. In particular, in the microcanonical ensemble, the positions and velocities are not statistically independent. As a result, the kinetic temperature, defined as the average kinetic energy in a given volume formula_49, is nonuniform throughout the container: formula_50 By contrast, the temperature is uniform in the canonical ensemble, for any formula_30. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
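The ideal-gas relations quoted above can be checked numerically from the entropy formula alone. The sketch below is only an illustration: the particle mass, "N", "V" and "E" are arbitrary, SI units are used together with the Boltzmann constant, and any additive constants (such as the Planck-constant terms appearing in the conventional Sackur–Tetrode form) are irrelevant here because only derivatives with respect to "E" and "V" are tested.

```python
import numpy as np

kB = 1.380649e-23        # Boltzmann constant, J/K
m  = 6.6e-27             # particle mass in kg (roughly a helium atom; illustrative)
N  = 1.0e22              # number of particles (arbitrary)
V  = 1.0e-3              # volume in m^3 (arbitrary)
E  = 50.0                # total energy in J (arbitrary)

def S(E, V, N):
    """Entropy as in the Sackur-Tetrode-type formula above (additive constants dropped)."""
    return kB * N * np.log((V / N) * (4 * np.pi * m * E / (3 * N)) ** 1.5) + 2.5 * kB * N

h = 1e-6 * E                                            # finite-difference steps
hV = 1e-6 * V

invT = (S(E + h, V, N) - S(E - h, V, N)) / (2 * h)      # 1/T = dS/dE
print(1 / invT, 2 * E / (3 * N * kB))                   # the two temperatures agree

p_over_T = (S(E, V + hV, N) - S(E, V - hV, N)) / (2 * hV)   # p/T = dS/dV
print(p_over_T / invT, N * kB / (V * invT))                 # both equal the pressure p
```

The first line of output reproduces the relation between "E" and temperature, and the second reproduces the ideal gas law.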
[ { "math_id": 0, "text": "P = 1/W," }, { "math_id": 1, "text": "S_\\text{B}" }, { "math_id": 2, "text": "1/T_v = dS_v/dE," }, { "math_id": 3, "text": "1/T_s = dS_s/dE = dS_\\text{B}/dE." }, { "math_id": 4, "text": " \\frac{p}{T}=\\frac{\\partial S}{\\partial V}; \\qquad \\frac{\\mu}{T}=-\\frac{\\partial S}{\\partial N}" }, { "math_id": 5, "text": "S_v" }, { "math_id": 6, "text": "T_v" }, { "math_id": 7, "text": "dE = T_v dS_v - \\langle P\\rangle dV," }, { "math_id": 8, "text": "S_s" }, { "math_id": 9, "text": "T_s" }, { "math_id": 10, "text": "f\\left(\\tfrac{H - E}{\\omega}\\right)" }, { "math_id": 11, "text": "f(x) = \\begin{cases} 1, & \\mathrm{if}~|x| < \\tfrac 12, \\\\ 0, & \\mathrm{otherwise.} \\end{cases}" }, { "math_id": 12, "text": "f(x) = e^{-\\pi x^2}." }, { "math_id": 13, "text": "\\hat\\rho" }, { "math_id": 14, "text": "\\hat\\rho = \\frac{1}{W} \\sum_i f\\left(\\tfrac{H_i - E}{\\omega}\\right) |\\psi_i\\rangle \\langle \\psi_i |," }, { "math_id": 15, "text": "\\hat H |\\psi_i\\rangle = H_i |\\psi_i\\rangle" }, { "math_id": 16, "text": "W = \\sum_i f\\left(\\tfrac{H_i - E}{\\omega}\\right)." }, { "math_id": 17, "text": "v(E) = \\sum_{H_i < E} 1." }, { "math_id": 18, "text": "\\rho = \\frac{1}{h^n C} \\frac{1}{W} f\\left(\\tfrac{H-E}{\\omega}\\right)," }, { "math_id": 19, "text": "W = \\int \\ldots \\int \\frac{1}{h^n C} f\\left(\\tfrac{H-E}{\\omega}\\right) \\, dp_1 \\ldots dq_n " }, { "math_id": 20, "text": "v(E) = \\int \\ldots \\int_{H < E} \\frac{1}{h^n C} \\, dp_1 \\ldots dq_n ." }, { "math_id": 21, "text": "W(E, V, N)" }, { "math_id": 22, "text": "(E, V, N)" }, { "math_id": 23, "text": "W" }, { "math_id": 24, "text": "V^N" }, { "math_id": 25, "text": "3N" }, { "math_id": 26, "text": "\\sqrt{2mE}" }, { "math_id": 27, "text": " \nW = \\frac{V^N}{N!} \\frac{2\\pi^{3N/2}}{\\Gamma(3N/2)}\\left(2mE\\right)^{(3N-1)/2}\n" }, { "math_id": 28, "text": " \\Gamma(.) " }, { "math_id": 29, "text": "N!" 
}, { "math_id": 30, "text": "N" }, { "math_id": 31, "text": "S = k_{\\mathrm{B}} \\log W" }, { "math_id": 32, "text": "\nS = k_{\\rm B} N \\log \\left[ \\frac VN \\left(\\frac{4\\pi m}{3}\\frac EN\\right)^{3/2}\\right]+{\\frac 52} k_{\\rm B} N + O\\left( \\log N \\right)\n" }, { "math_id": 33, "text": "\n\\frac{1}{T} \\equiv \\frac{\\partial S}{\\partial E} = \\frac{3}{2} \\frac{N k_{\\mathrm{B}}}{E}\n" }, { "math_id": 34, "text": "\n\\frac{p}{T} \\equiv \\frac{\\partial S}{\\partial V} = \\frac{N k_{\\mathrm{B}}}{V} \\quad \\rightarrow \\quad pV = N k_{\\mathrm{B}} T\n" }, { "math_id": 35, "text": "\\mu" }, { "math_id": 36, "text": "\n\\mu \\equiv -T \\frac{\\partial S}{\\partial N} = k_{\\rm B} T \\log \\left[\\frac{V}{N} \\, \\left(\\frac{4 \\pi m E}{3N} \\right)^{3/2} \\right]\n" }, { "math_id": 37, "text": "m" }, { "math_id": 38, "text": "A" }, { "math_id": 39, "text": "g" }, { "math_id": 40, "text": "W(E, N)" }, { "math_id": 41, "text": "\nW(E, N) = \\frac{(2 \\pi)^{3N/2}A^{N}m^{N/2}}{g^N \\Gamma(5N/2)} E^{\\frac{5N}{2}-1}\n" }, { "math_id": 42, "text": "E" }, { "math_id": 43, "text": "\\rho(z)" }, { "math_id": 44, "text": "z" }, { "math_id": 45, "text": "\n\\rho(z) = \\left(\\frac{5N}{2} - 1 \\right) \\frac{mg}{E}\\left(1 - \\frac{mgz}{E} \\right)^{\\frac{5N}{2}-2}\n" }, { "math_id": 46, "text": "|\\vec{v}|" }, { "math_id": 47, "text": "\nf(|\\vec{v}|) = \\frac{\\Gamma(5N/2)}{\\Gamma(3/2)\\Gamma(5N/2-3/2)} \\times \\frac{m^{3/2}|\\vec{v}|^{2}}{2^{1/2}E^{3/2}} \\times \\left(1 - \\frac{m |\\vec{v}|^{2}}{2 E} \\right)^{\\frac{5(N-1)}{2}}\n" }, { "math_id": 48, "text": "N \\rightarrow \\infty" }, { "math_id": 49, "text": "A \\, dz" }, { "math_id": 50, "text": "\nT_{\\mathrm{kinetic}} = \\frac{3 E}{5 N - 2}\\left(1 - \\frac{mgz}{E} \\right)\n" } ]
https://en.wikipedia.org/wiki?curid=1136447
11365496
Brascamp–Lieb inequality
In mathematics, the Brascamp–Lieb inequality is either of two inequalities. The first is a result in geometry concerning integrable functions on "n"-dimensional Euclidean space formula_0. It generalizes the Loomis–Whitney inequality and Hölder's inequality. The second is a result of probability theory which gives a concentration inequality for log-concave probability distributions. Both are named after Herm Jan Brascamp and Elliott H. Lieb. The geometric inequality. Fix natural numbers "m" and "n". For 1 ≤ "i" ≤ "m", let "n""i" ∈ N and let "c""i" &gt; 0 so that formula_1 Choose non-negative, integrable functions formula_2 and surjective linear maps formula_3 Then the following inequality holds: formula_4 where "D" is given by formula_5 Another way to state this is that the constant "D" is what one would obtain by restricting attention to the case in which each formula_6 is a centered Gaussian function, namely formula_7. Alternative forms. Consider a probability density function formula_8. This probability density function formula_9 is said to be a log-concave measure if the formula_10 function is convex. Such probability density functions have tails which decay exponentially fast, so most of the probability mass resides in a small region around the mode of formula_11. The Brascamp–Lieb inequality gives another characterization of the compactness of formula_11 by bounding the mean of any statistic formula_12. Formally, let formula_13 be any derivable function. The Brascamp–Lieb inequality reads: formula_14 where H is the Hessian and formula_15 is the Nabla symbol. BCCT inequality. The inequality is generalized in 2008 to account for both continuous and discrete cases, and for all linear maps, with precise estimates on the constant. Definition: the Brascamp-Lieb datum (BL datum) For any formula_22 with formula_23, defineformula_24 Now define the Brascamp-Lieb constant for the BL datum:formula_25 &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem — (BCCT, 2007) formula_26 is finite iff formula_27, and for all subspace formula_28 of formula_29, formula_30 formula_26 is reached by gaussians: formula_33 Discrete case. Setup: With this setup, we have (Theorem 2.4, Theorem 3.12 ) &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem — If there exists some formula_38 such that formula_39 Then for all formula_40, formula_41 and in particular, formula_42 Note that the constant formula_43 is not always tight. BL polytope. Given BL datum formula_44, the conditions for formula_45 are Thus, the subset of formula_46 that satisfies the above two conditions is a closed convex polytope defined by linear inequalities. This is the BL polytope. Note that while there are infinitely many possible choices of subspace formula_28 of formula_29, there are only finitely many possible equations of formula_47, so the subset is a closed convex polytope. Similarly we can define the BL polytope for the discrete case. Relationships to other inequalities. The geometric Brascamp–Lieb inequality. The case of the Brascamp–Lieb inequality in which all the "n""i" are equal to 1 was proved earlier than the general case. In 1989, Keith Ball introduced a "geometric form" of this inequality. Suppose that formula_48 are unit vectors in formula_49 and formula_50 are positive numbers satisfying formula_51 for all formula_52, and that formula_53 are positive measurable functions on formula_54. 
Then formula_55 Thus, when the vectors formula_56 resolve the identity in this way, the inequality has a particularly simple form: the constant is equal to 1 and the extremal Gaussian densities are identical. Ball used this inequality to estimate volume ratios and isoperimetric quotients for convex sets. There is also a geometric version of the more general inequality in which the maps formula_57 are orthogonal projections and formula_58 where formula_59 is the identity operator on formula_49. Hölder's inequality. Take "n""i" = "n" and "B""i" = id, the identity map on formula_0, apply the inequality to the functions "f""i" raised to the power "p""i", and let "c""i" = 1 / "p""i" for 1 ≤ "i" ≤ "m". Then formula_60 and the log-concavity of the determinant of a positive definite matrix implies that "D" = 1. This yields Hölder's inequality in formula_0: formula_61 Poincaré inequality. The Brascamp–Lieb inequality is an extension of the Poincaré inequality which only concerns Gaussian probability distributions. Cramér–Rao bound. The Brascamp–Lieb inequality is also related to the Cramér–Rao bound. While Brascamp–Lieb gives an upper bound, the Cramér–Rao bound lower-bounds the variance of formula_62. The Cramér–Rao bound states formula_63, which is very similar to the Brascamp–Lieb inequality in the alternative form shown above. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
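As an illustration of the constant "D" defined above, it can be estimated numerically in the rank-one case ("n""i" = 1), where each matrix "A""i" reduces to a positive scalar. The following Python sketch is a hedged example rather than part of the article's sources: the particular unit vectors, weights and the use of SciPy's Nelder–Mead optimizer are assumptions chosen so that Ball's geometric condition holds, in which case the infimum should be 1.

```python
# Hedged numerical sketch (the vectors u_i, weights c_i and the optimizer are
# illustrative assumptions, not taken from the article).
# For rank-one maps B_i(x) = <x, u_i>, the positive definite matrices A_i in
# the definition of D reduce to scalars a_i > 0, and
#     D = inf_{a_i > 0} det( sum_i c_i a_i u_i u_i^T ) / prod_i a_i^{c_i}.
import numpy as np
from scipy.optimize import minimize

u = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [np.sqrt(0.5),  np.sqrt(0.5)],
              [np.sqrt(0.5), -np.sqrt(0.5)]])  # unit vectors u_i in R^2
c = np.array([0.5, 0.5, 0.5, 0.5])             # weights, sum_i c_i n_i = 2 = n

# Ball's geometric condition: sum_i c_i <x, u_i> u_i = x for all x.
assert np.allclose(sum(ci * np.outer(ui, ui) for ci, ui in zip(c, u)), np.eye(2))

def objective(log_a):
    a = np.exp(log_a)  # enforce a_i > 0 via a log parametrisation
    M = sum(ci * ai * np.outer(ui, ui) for ci, ai, ui in zip(c, a, u))
    return np.linalg.det(M) / np.prod(a ** c)

res = minimize(objective, x0=np.array([0.5, -0.5, 0.25, -0.25]), method="Nelder-Mead")
print("estimated D:", res.fun)  # expected to be close to 1 for the geometric case
```

Since the constant appearing in the inequality is the inverse square root of "D", a value of "D" close to 1 reproduces the constant 1 of the geometric form.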
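The alternative (variance) form stated earlier can likewise be sanity-checked numerically. The sketch below is also a hedged example with assumed choices (a one-dimensional Gaussian density and the statistic "S"("x") = "x" squared); for this density the Hessian of the potential is the constant 1/σ², so the bound can be compared directly against a Monte Carlo estimate of the variance.

```python
# Hedged Monte Carlo check (the Gaussian density, the statistic S(x) = x^2 and
# the sample size are illustrative assumptions, not taken from the article).
# Here p(x) = exp(-phi(x)) with phi(x) = x^2 / (2 sigma^2) up to a constant,
# so the Hessian of phi is 1/sigma^2 and the inequality reads
#     var_p(S) <= E_p[ S'(x)^2 * sigma^2 ].
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.5
x = rng.normal(0.0, sigma, size=1_000_000)

S = x ** 2
lhs = S.var()                             # var_p(S), roughly 2 * sigma**4
rhs = ((2 * x) ** 2 * sigma ** 2).mean()  # E_p[S'(x)^2 / phi''(x)], roughly 4 * sigma**4
print(lhs, "<=", rhs)
```

Here the left side is about 2·σ⁴ ≈ 10.1 and the right side about 4·σ⁴ ≈ 20.3, so the inequality holds, though it is not tight for this statistic.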
[ { "math_id": 0, "text": "\\mathbb{R}^{n}" }, { "math_id": 1, "text": "\\sum_{i = 1}^m c_i n_i = n." }, { "math_id": 2, "text": "f_i \\in L^1 \\left( \\mathbb{R}^{n_i} ; [0, + \\infty] \\right)" }, { "math_id": 3, "text": "B_i : \\mathbb{R}^n \\to \\mathbb{R}^{n_i}." }, { "math_id": 4, "text": "\\int_{\\mathbb{R}^n} \\prod_{i = 1}^m f_i \\left( B_i x \\right)^{c_i} \\, \\mathrm{d} x \\leq D^{- 1/2} \\prod_{i = 1}^m \\left( \\int_{\\mathbb{R}^{n_i}} f_i (y) \\, \\mathrm{d} y \\right)^{c_i}," }, { "math_id": 5, "text": "D = \\inf \\left\\{ \\left. \\frac{\\det \\left( \\sum_{i = 1}^m c_i B_i^{*} A_i B_i \\right)}{\\prod_{i = 1}^m ( \\det A_i )^{c_i}} \\right| A_i \\text{ is a positive-definite } n_i \\times n_i \\text{ matrix} \\right\\}." }, { "math_id": 6, "text": "f_{i}" }, { "math_id": 7, "text": "f_{i}(y) = \\exp \\{-(y,\\, A_{i}\\, y)\\}" }, { "math_id": 8, "text": "p(x)=\\exp(-\\phi(x))" }, { "math_id": 9, "text": "p(x)" }, { "math_id": 10, "text": " \\phi(x) " }, { "math_id": 11, "text": " p(x) " }, { "math_id": 12, "text": " S(x)" }, { "math_id": 13, "text": " S(x) " }, { "math_id": 14, "text": " \\operatorname{var}_p (S(x)) \\leq E_p (\\nabla^T S(x) [H \\phi(x)]^{-1} \\nabla S(x)) " }, { "math_id": 15, "text": "\\nabla" }, { "math_id": 16, "text": "d, n\\geq 1" }, { "math_id": 17, "text": "d_1, ..., d_n \\in \\{1, 2, ..., d\\}" }, { "math_id": 18, "text": "p_1, ..., p_n \\in [0, \\infty)" }, { "math_id": 19, "text": "B_i: \\R^d \\to \\R^{d_i}" }, { "math_id": 20, "text": "\\cap_i ker(B_i) = \\{0\\}" }, { "math_id": 21, "text": "(B, p) = (B_1, ..., B_n, p_1, ..., p_n)" }, { "math_id": 22, "text": "f_i \\in L^1(R^{d_i})" }, { "math_id": 23, "text": "f_i \\geq 0" }, { "math_id": 24, "text": "BL(B, p, f) := \\frac{\\int_H \\prod_{j=1}^m\\left(f_j \\circ B_j\\right)^{p_j}}{\\prod_{j=1}^m\\left(\\int_{H_j} f_j\\right)^{p_j}}" }, { "math_id": 25, "text": "BL(B, p) = \\max_{f }BL(B, p, f)" }, { "math_id": 26, "text": "BL(B, p)" }, { "math_id": 27, "text": "d = \\sum_i p_i d_i" }, { "math_id": 28, "text": "V" }, { "math_id": 29, "text": "\\R^d" }, { "math_id": 30, "text": "dim(V) \\leq\\sum_i p_i dim(B_i(V)) " }, { "math_id": 31, "text": "A_i : \\R^{d_i} \\to \\R^{d_i}" }, { "math_id": 32, "text": "f_i = e^{-\\langle A_i x, x\\rangle}" }, { "math_id": 33, "text": "\n\\frac{\\int_H \\prod_{j=1}^m\\left(f_j \\circ B_j\\right)^{p_j}}{\\prod_{j=1}^m\\left(\\int_{H_j} f_j\\right)^{p_j}} \\to \\infty\n" }, { "math_id": 34, "text": "G, G_1, ..., G_n" }, { "math_id": 35, "text": "\\phi_j : G \\to G_j" }, { "math_id": 36, "text": "(G, G_1, ..., G_n, \\phi_1, ... 
\\phi_n)" }, { "math_id": 37, "text": "T(G)" }, { "math_id": 38, "text": "s_1, ..., s_n \\in [0, 1]" }, { "math_id": 39, "text": "rank(H) \\leq \\sum_j s_j rank(\\phi_j(H))\n\t \\quad \\forall H \\leq G" }, { "math_id": 40, "text": "0 \\geq f_j \\in \\ell^{1/s_j}(G_j)" }, { "math_id": 41, "text": "\\left\\|\\prod_j f_j \\circ \\phi_j\\right\\|_1 \\leq |T(G)| \\prod_j \\|f_j \\|_{1/s_j}" }, { "math_id": 42, "text": "|E| \\leq |T(G)| \\prod_j |\\phi_j(E)|^{s_j} \n\t \\quad \\forall E \\subset G" }, { "math_id": 43, "text": "|T(G)|" }, { "math_id": 44, "text": "(B, p)" }, { "math_id": 45, "text": "BL(B, p) < \\infty" }, { "math_id": 46, "text": "p\\in [0, \\infty)^n" }, { "math_id": 47, "text": "dim(V) \\leq\\sum_i p_i dim(B_i(V)) " }, { "math_id": 48, "text": "(u_i)_{i = 1}^m" }, { "math_id": 49, "text": "\\mathbb{R}^n" }, { "math_id": 50, "text": "(c_i)_{i = 1}^m" }, { "math_id": 51, "text": "\\sum_{i = 1}^{m} c_i \\langle x , u_i \\rangle u_i = x" }, { "math_id": 52, "text": "x \\in \\mathbb{R}^n" }, { "math_id": 53, "text": "(f_i)_{i = 1}^m" }, { "math_id": 54, "text": "\\mathbb{R}" }, { "math_id": 55, "text": "\\int_{\\mathbb{R}^n} \\prod_{i = 1}^{m} f_i(\\langle x,u_i \\rangle)^{c_i} \\, \\mathrm{d} x \\leq \\prod_{i = 1}^{m} \\left( \\int_{\\mathbb{R}} f_i(t) \\, \\mathrm{d} t \\right)^{c_i}." }, { "math_id": 56, "text": "(u_i)" }, { "math_id": 57, "text": "B_i" }, { "math_id": 58, "text": "\\sum_{i = 1}^{m} c_i B_i = I" }, { "math_id": 59, "text": "I" }, { "math_id": 60, "text": "\\sum_{i = 1}^m \\frac{1}{p_i} = 1" }, { "math_id": 61, "text": "\\int_{\\mathbb{R}^n} \\prod_{i = 1}^m f_{i} (x) \\, \\mathrm{d} x \\leq \\prod_{i = 1}^{m} \\| f_i \\|_{p_i}." }, { "math_id": 62, "text": "\\operatorname{var}_p (S(x))" }, { "math_id": 63, "text": " \\operatorname{var}_p (S(x)) \\geq E_p (\\nabla^T S(x) ) [ E_p( H \\phi(x) )]^{-1} E_p( \\nabla S(x) )\\!" } ]
https://en.wikipedia.org/wiki?curid=11365496
1136614
Laguerre's method
Polynomial root-finding algorithm In numerical analysis, Laguerre's method is a root-finding algorithm tailored to polynomials. In other words, Laguerre's method can be used to numerically solve the equation "p"("x") = 0 for a given polynomial "p"("x"). One of the most useful properties of this method is that it is, from extensive empirical study, very close to being a "sure-fire" method, meaning that it is almost guaranteed to always converge to "some" root of the polynomial, no matter what initial guess is chosen. However, for computer computation, more efficient methods are known, with which it is guaranteed to find all roots or all real roots (see Real-root isolation). This method is named in honour of the French mathematician Edmond Laguerre. Definition. The algorithm of the Laguerre method to find one root of a polynomial "p"("x") of degree n is as follows: choose an initial guess "x"0 and, for "k" = 0, 1, 2, ..., repeat the following steps. If formula_0 is very small, exit the loop. Otherwise calculate formula_1 and formula_2 and then the correction formula_3 where the sign in the denominator is chosen to give it the larger absolute value, in order to avoid loss of significance. Finally, set formula_4 and continue until the correction is small enough or a maximum number of iterations has been reached. If a root has been found, the corresponding linear factor can be removed from "p". This deflation step reduces the degree of the polynomial by one, so that eventually, approximations for all roots of "p" can be found. Note however that deflation can lead to approximate factors that differ significantly from the corresponding exact factors. This error is least if the roots are found in the order of increasing magnitude. Derivation. The fundamental theorem of algebra states that every nth degree polynomial formula_5 can be written in the form formula_6 so that formula_7 are the roots of the polynomial. If we take the natural logarithm of both sides, we find that formula_8 Denote the logarithmic derivative by formula_9 and the negated second derivative of the logarithm by formula_10 We then make what has been called a "drastic set of assumptions": that the root we are looking for, say, formula_11 is a short distance, formula_12 away from our guess formula_13 and that all the other roots are clustered together at some further distance formula_14 If we denote these distances by formula_15 and formula_16 or, exactly, formula_17, then our equation for formula_18 may be written as formula_19 and the expression for formula_20 becomes formula_21 Solving these equations for formula_12 we find that formula_22 where in this case, the square root of the (possibly) complex number is chosen to produce the largest absolute value of the denominator and to make formula_23 as small as possible; equivalently, it satisfies: formula_24 where formula_25 denotes the real part of a complex number, and formula_26 is the complex conjugate of formula_27 or formula_28 where the square root of a complex number is chosen to have a non-negative real part. For small values of formula_29 this formula differs from the offset of the third-order Halley's method by an error of formula_30 so convergence close to a root will be cubic as well. Fallback. Even if the "drastic set of assumptions" does not work well for some particular polynomial "p"("x"), then "p"("x") can be transformed into a related polynomial r for which the assumptions are viable; e.g. by first shifting the origin towards a suitable complex number w, giving a second polynomial "q"("x") = "p"("x" − "w"), chosen to give distinct roots clearly distinct magnitudes, if necessary (which it will be if some roots are complex conjugates). After that, a third polynomial r can be obtained from "q"("x") by repeatedly applying the root-squaring transformation from Graeffe's method, enough times to make the smaller roots significantly smaller than the largest root (and so clustered comparatively nearer to zero).
The approximate root from Graeffe's method can then be used to start the new iteration for Laguerre's method on r. An approximate root for "p"("x") may then be obtained straightforwardly from that for r. If we make the even more extreme assumption that the terms in formula_18 corresponding to the roots formula_31 are negligibly small compared to the term for the root formula_32 this leads to Newton's method. Properties. If x is a simple root of the polynomial formula_33 then Laguerre's method converges cubically whenever the initial guess, formula_34 is close enough to the root formula_35 On the other hand, when formula_11 is a multiple root, convergence is merely linear, with the penalty of calculating values for the polynomial and its first and second derivatives at each stage of the iteration. A major advantage of Laguerre's method is that it is almost guaranteed to converge to "some" root of the polynomial "no matter where the initial approximation is chosen". This is in contrast to other methods such as the Newton–Raphson method and Steffensen's method, which notoriously fail to converge for poorly chosen initial guesses. Laguerre's method may even converge to a complex root of the polynomial, because the radicand of the square root in the formula for the correction, formula_12 given above may be negative – manageable so long as complex numbers can be conveniently accommodated in the calculation. This may be considered an advantage or a liability depending on the application to which the method is being used. Empirical evidence shows that convergence failure is extremely rare, making this a good candidate for a general-purpose polynomial root-finding algorithm. However, given the fairly limited theoretical understanding of the algorithm, many numerical analysts are hesitant to use it as a default, and prefer better understood methods such as the Jenkins–Traub algorithm, for which more solid theory has been developed and whose limits are known. The algorithm is fairly simple to use, compared to other "sure-fire" methods, and simple enough for hand calculation, aided by a pocket calculator, if a computer is not available. The speed at which the method converges means that one is only very rarely required to compute more than a few iterations to get high accuracy. References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
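The iteration described in the Definition section above is short enough to implement directly. The following Python sketch is a hedged illustration rather than a reference implementation: the example polynomial, the tolerances and the use of NumPy's poly1d helper are assumptions made for the demonstration.

```python
# Hedged sketch of Laguerre's method (the polynomial, tolerances and helper
# functions are illustrative assumptions, not taken from the article).
import numpy as np

def laguerre(coeffs, x0, tol=1e-12, max_iter=100):
    """Return an approximation to one root of the polynomial with the given
    coefficients (highest-degree coefficient first)."""
    p = np.poly1d(coeffs)
    dp, d2p = p.deriv(), p.deriv(2)
    n = p.order
    x = complex(x0)                      # work in complex arithmetic throughout
    for _ in range(max_iter):
        px = p(x)
        if abs(px) < tol:                # p(x_k) is very small: accept x_k
            break
        G = dp(x) / px
        H = G * G - d2p(x) / px
        root = np.sqrt((n - 1) * (n * H - G * G))
        # pick the sign giving the denominator the larger absolute value
        denom = G + root if abs(G + root) >= abs(G - root) else G - root
        a = n / denom
        x -= a
        if abs(a) < tol:                 # correction negligible: converged
            break
    return x

# example: x^3 - 2x^2 - 5x + 6 has roots 1, -2 and 3
print(laguerre([1, -2, -5, 6], x0=0.1))
```

Deflation (dividing out the linear factor of each root found, as described above) can then be applied to recover the remaining roots one at a time.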
[ { "math_id": 0, "text": "p(x_k)" }, { "math_id": 1, "text": " G = \\frac{p'(x_k)}{p(x_k)}" }, { "math_id": 2, "text": " H = G^2 - \\frac{p''(x_k)}{p(x_k)}" }, { "math_id": 3, "text": " a = \\frac{n}{G \\plusmn \\sqrt{(n-1)(nH - G^2)}} " }, { "math_id": 4, "text": " x_{k+1} = x_k - a " }, { "math_id": 5, "text": "\\ p\\ " }, { "math_id": 6, "text": "\\ p(x) = C \\left( x - x_1 \\right) \\left( x - x_2 \\right)\\ \\cdots\\ \\left(x - x_n \\right)\\ ," }, { "math_id": 7, "text": "\\ x_1, x_2,\\ \\ldots,\\ x_n\\ ," }, { "math_id": 8, "text": "\\ln \\bigl|\\ p(x)\\ \\bigr| ~=~ \\ln \\bigl|\\ C\\ \\bigr|\\ +\\ \\ln \\bigl|\\ x - x_1\\ \\bigr|\\ +\\ \\ln \\bigl|\\ x - x_2\\ \\bigr|\\ +\\ \\cdots\\ +\\ \\ln \\bigl|\\ x - x_n\\ \\bigr| ~." }, { "math_id": 9, "text": "\\begin{align}\n\\ G &~=~ \\frac{ \\operatorname{d} }{ \\operatorname{d} x } \\ln \\Bigl|\\ p(x)\\ \\Bigr| ~=~ \\frac{ 1 }{\\ x - x_1\\ } + \\frac{ 1 }{\\ x - x_2\\ } +\\ \\cdots\\ + \\frac{ 1 }{\\ x - x_n\\ } \\\\[0.3 em]\n &~=~ \\frac{\\ p'(x)\\ }{\\ \\bigl|\\ p(x)\\ \\bigr|\\ }\\ ,\n\\end{align}" }, { "math_id": 10, "text": "\\begin{align}\n\\ H &~=~ -\\frac{ \\operatorname{d}^2 }{ \\operatorname{d} x^2 } \\ln \\Bigl|\\ p(x)\\ \\Bigr| ~=~ \\frac{ 1 }{~~ (x - x_1)^2\\ } + \\frac{ 1 }{~~ (x - x_2)^2\\ } +\\ \\cdots\\ + \\frac{ 1 }{~~ (x - x_n)^2\\ } \\\\[0.3 em]\n &~=~ -\\frac{\\ p''(x)\\ }{\\ \\bigl|\\ p(x)\\ \\bigr|\\ }\\ +\\ \\left( \\frac{\\ p'(x)\\ }{\\ p(x)\\ } \\right)^2 \\cdot\\ \\sgn\\!\\Bigl(\\ p(x)\\ \\Bigr) \n~.\\end{align}" }, { "math_id": 11, "text": "\\ x_1\\ " }, { "math_id": 12, "text": "\\ a\\ ," }, { "math_id": 13, "text": "\\ x\\ ," }, { "math_id": 14, "text": "\\ b ~." }, { "math_id": 15, "text": "\\ a\\ \\equiv\\ x - x_1\\ " }, { "math_id": 16, "text": "\\ b\\ \\approx\\ x - x_2\\ \\approx\\ x - x_3\\ \\approx\\ \\ldots\\ \\approx\\ x - x_n\\ ," }, { "math_id": 17, "text": "\\ b\\ \\equiv\\ \\operatorname\\mathsf{harmonic\\ mean}\\Bigl\\{\\ x - x_2,\\ x - x_3,\\ \\ldots\\ x - x_n\\ \\Bigr\\}\\ " }, { "math_id": 18, "text": "\\ G\\ " }, { "math_id": 19, "text": " G = \\frac{\\ 1\\ }{ a } + \\frac{\\ n - 1\\ }{ b } " }, { "math_id": 20, "text": "\\ H\\ " }, { "math_id": 21, "text": " H = \\frac{ 1 }{~ a^2\\ } + \\frac{\\ n - 1\\ }{~ b^2\\ } ~." }, { "math_id": 22, "text": " a = \\frac{ n }{\\ G \\plusmn \\sqrt{\\bigl( n - 1 \\bigr)\\bigl( n\\ H - G^2 \\bigr)\\ }\\ }\\ ," }, { "math_id": 23, "text": "\\ a\\ " }, { "math_id": 24, "text": " \\operatorname\\mathcal{R_e} \\biggl\\{\\ \\overline{G} \\sqrt{ \\left( n - 1 \\right) \\left( n\\ H - G^2\\right)\\ }\\ \\biggr\\} > 0\\ ," }, { "math_id": 25, "text": "\\ \\mathcal{R_e}\\ " }, { "math_id": 26, "text": "\\ \\overline{G}\\ " }, { "math_id": 27, "text": "\\ G\\ ;" }, { "math_id": 28, "text": " a = \\frac{\\ p(x)\\ }{ p'(x) } \\cdot \\Biggl\\{\\ \\frac{\\ 1\\ }{ n } + \\frac{\\ n - 1\\ }{ n }\\ \\sqrt{ 1 - \\frac{ n }{\\ n-1\\ }\\ \\frac{\\ p(x)\\ p''(x)\\ }{~ p'(x)^2\\ }\\ }\\ \\Biggr\\}^{-1}\\ ," }, { "math_id": 29, "text": "\\ p(x)\\ " }, { "math_id": 30, "text": "\\ \\operatorname\\mathcal{O}\\bigl\\{\\ (p(x))^3\\ \\bigr\\}\\ ," }, { "math_id": 31, "text": "\\ x_2,\\ x_3,\\ \\ldots,\\ x_n\\ " }, { "math_id": 32, "text": "\\ x_1\\ ," }, { "math_id": 33, "text": "\\ p(x)\\ ," }, { "math_id": 34, "text": "\\ x^{(0)}\\ ," }, { "math_id": 35, "text": "\\ x_1 ~." } ]
https://en.wikipedia.org/wiki?curid=1136614
1136647
Merck Manual of Diagnosis and Therapy
Medical textbook The Merck Manual of Diagnosis and Therapy, referred to as "The Merck Manual", is the world's best-selling medical textbook and the oldest continuously published English-language medical textbook. First published in 1899, the book is now in its 20th print edition, published in 2018. In 2014, Merck decided to move "The Merck Manual" to digital-only, online publication, available in both professional and consumer versions; this decision was reversed in 2017, with the publication of the 20th edition the following year. "The Merck Manual of Diagnosis and Therapy" is one of several medical textbooks, collectively known as "The Merck Manuals", which are published by Merck Publishing, a subsidiary of the pharmaceutical company Merck &amp; Co., Inc. in the United States and Canada, and MSD (as "The MSD Manuals") in other countries in the world. Merck also formerly published "The Merck Index", "An Encyclopedia of Chemicals, Drugs, and Biologicals." History and editions. The first edition of "The Merck Manual" was published in 1899 by Merck &amp; Co., Inc. for physicians and pharmacists and was titled "Merck's Manual of the Materia Medica". The 192-page book, which sold for US $1.00, was divided into three sections: Part I ("Materia Medica") was an alphabetical listing of all known compounds thought to be of therapeutic value with uses and doses; Part II ("Therapeutic Indications") was an alphabetical compendium of symptoms, signs, and diseases with a list of all known treatments; and Part III ("Classification of Medicaments "(sic)" According to their Physiologic Actions") was a listing of therapeutic agents according to their method of action or drug classification. Many of the terms used are now considered archaic, such as abasia, astasia, errhines and rubefacients - sternutatories, and many of the agents listed are no longer considered to be standard therapeutic agents but were considered useful at the time, including poisonous compounds such as mercury, lead, strychnine and arsenic. There were 108 remedies listed for indigestion (dyspepsia), including alcohol, arsenic, cocaine, gold chloride, mercury, morphine, nux vomica, opium, silver nitrate, strychnine, and "Turkish baths (for malaise after dining out)". Bismuth, calcium, and magnesium salts were also on the list; these are ingredients found in many modern gastrointestinal treatments available today. Arsenic was recommended for over 100 illnesses including anemia, diarrhea, hydrophobia, elephantiasis, and impotence. The formulas include "aletris cordial", a "uterine tonic and restorative", which contained "aletris farinosa or True Unicorn combined with aromatics". The manufacturer, Rio Chemicals of St. Louis, was careful to differentiate the inclusion of true unicorn rather than false unicorn in its preparation. The earliest versions did contain drugs that are still in use today for the same purposes, for example digitalis for heart failure, salicylates for headache, rheumatism and fever, nitroglycerin for cardiac angina pectoris, and bismuth salicylate for diarrhea. Merck also began publishing "Merck's Archives of the Materia Medica", a monthly journal consisting of papers related to drugs and uses, which was available for an annual subscription of US $1.00. The second edition of "The Merck Manual", published in 1901, was expanded to 282 pages and included new sections on poisons and antidotes, tables and conversion charts, and a detailed explanation of the metric system.
The 5th edition, published in 1923, was delayed due to paper shortages caused by World War I, and the release of the 6th edition was delayed until 1934 due to the Stock Market Crash. The editor of that edition, Dr. M. R. Dinkelspiel, had overseen the growth and reorganization of the Manual to discuss specific diseases, diagnosis and treatment options, and external specialists reviewed each section. The 8th edition of the Manual was delayed by World War II until 1950. The 13th edition, released in 1977, was the first to be produced using magnetic tape and IBM punch cards, the previous version having been typed on a manual typewriter. The Centennial (17th) Edition, published in 1999, included a separate facsimile version of the 1899 1st edition. It is reported that Admiral Richard E. Byrd took the book with him on his expedition to the South Pole in 1929 and that Albert Schweitzer had a copy of The Merck Manual with him at his hospital mission in Africa in 1913. The recommended doses given in Part 1 of the 1901 edition of "The Manual" were for adults when given by mouth. It included the following dose adjustment recommendations: &lt;templatestyles src="Template:Quote_box/styles.css" /&gt; "The DOSES, unless otherwise stated, are for adults and per os. To determine the dose for children, add 12 to the age, and divide by the age; 1 divided by the result represents the fraction of the adult dose suitable for the child. For example, a child three years old will require" formula_0 formula_1 formula_2 "of the adult dose. Of powerful narcotics, children will require scarcely more than one-half of this proportion. Children bear opiates poorly; while they stand comparatively large doses of arsenic, belladonna, ipecac, mercurials, pilocarpine, rhubarb and some other purgatives, and squill. For hypodermic injection the dose is ordinarily about one-half of that given." "Merck's Manual of the Materia Medica.", 1901 Content. "The Merck Manual" is organized, like many internal medicine textbooks, into organ systems (see List of Medical Topics below) which discuss the major diseases of that system, covering diagnosis (signs, symptoms, tests), prognosis and treatment. It provides a comprehensive yet concise compendium of medical knowledge in about 3500 pages, emphasizing practical information of use to a practicing physician. In addition to 24 sections covering medical topics, it includes a pharmacology section listing drugs by generic and brand name, a list of drug interactions and a pill identifier, a News and Commentary section, videos on procedures and examination techniques, quizzes and case histories, clinical calculators, conversion tables and other resources. The text is characterized by the combination of conciseness, completeness, and being up-to-date. It is updated continuously by an independent editorial board and over 300 peer reviewers who contribute to the textbook, which goes through an average of 10 revisions by both internal and external reviewers before publication. The internal editorial staff consists of four physician reviewers, one executive editor and four non-medical lay editors. The latest version has been translated into 17 languages. In addition to the online version, "The Merck Manual Professional Edition" is also available as a mobile app on both iOS and Android platforms, produced by Unbound Medicine, Inc. Medical topic sections (online edition). 1. Cardiovascular Disorders 2. Clinical Pharmacology 3. Critical Care Medicine 4. Dental Disorders 5.
Dermatological Disorders 6. Ear, Nose, and Throat Disorders 7. Endocrine and Metabolic Disorders 8. Eye Disorders 9. Gastrointestinal Disorders 10. Genitourinary Disorders 11. Geriatrics 12. Gynecology and Obstetrics 13. Hematology and Oncology 14. Hepatic and Biliary Disorders 15. Immunology; Allergic Disorders 16. Infectious Diseases 17. Injuries; Poisoning 18. Musculoskeletal and Connective Tissue Disorders 19. Neurologic Disorders 20. Nutritional Disorders 21. Pediatrics 22. Psychiatric Disorders 23. Pulmonary Disorders 24. Special Subjects Awards and recognition. "The Merck Manual" was listed in the 2003 Brandon Hill "Selected List of Books and Journals for the Small Medical Library" as a recommended medical textbook for diagnosis, geriatrics, and patient education. "The Merck Manuals" were awarded five 2015 eHealthcare Leadership Awards, including a Gold Award for Best Healthcare Content for Professionals and a Distinction Award: Best Overall Healthcare Site, Consumer, at the nineteenth annual Healthcare Internet Conference held in November 2015 in Orlando, Florida. Merck Publishing offers resources for "The Merck Manual Award", provided annually to outstanding medical students. The qualifications for the award are determined by each medical school. Medical schools that give this award include the University of North Carolina School of Medicine, the University of Central Florida School of Medicine and the University of Illinois School of Medicine. Other Merck manuals. "The Merck Manual of Geriatrics". First published in 1990, this volume collected sections of "The Merck Manual" dealing with diseases and the management of illnesses in the elderly. It has gone through three print editions, the last version published in 2000. Since the transition of The Merck Manual to a web-only version in 2015, the Manual of Geriatrics is accessible through the Professional and Consumer portals of the online text. A search engine on the Merck Manual site allows searches limited to the contents of "The Merck Manual of Geriatrics". "The Merck Manual of Patient Symptoms". "The Merck Manual of Patient Symptoms" is a concise, pocket-sized reference guide intended for medical students and allied health care professionals in training. It covers symptoms, diagnosis and treatment. Consumer editions. "The Merck Manual of Medical Information – Home Edition". "The Merck Manual of Medical Information – Home Edition" was published in 1997 as a re-edited version of the Professional edition, using less technical language and intended for patients, caregivers and people interested in medical topics without training in health fields. This edition sold over 2 million copies. The "Second Home Edition" was released in 2003, and the third edition was published in 2009 as "The Merck Manual Home Health Handbook", and sold over 4 million copies. Since 2015, the Consumer version content has been available only via the online Merck Manual website. A condensed consumer-oriented version was published as "The Merck Manual Go-To Home Guide for Symptoms" in 2013. "The Merck Manual of Women's and Men's Health". In 2014, "The Second Home Edition" was extracted from the Professional version of "The Manual" and published as "The Merck Manual of Women's and Men's Health" "The Merck Manual of Health &amp; Aging".
A consumer version of "The Merck Manual of Geriatrics" was released in print in 2004 as "The Merck Manual of Health &amp; Aging", which included information on aging and the care of older people in non-technical language for the public. The content was incorporated into the Consumer version of the online Merck Manual in 2015. Veterinary medicine. "The Merck Veterinary Manual". "The Merck Veterinary Manual" has been published since 1955 for professional veterinarians and other professionals in the veterinary field. It is the most widely used veterinary medicine textbook. It is still published in a print version, and the 11th edition is scheduled for release on July 12, 2016. The "Merck Veterinary Manual" has been translated into seven languages, including Croatian, French, Italian, Japanese, Portuguese, Romanian and Spanish. It is also available as a mobile app on both iOS and Android platforms, as well as in an online version. "Merck/Merial Manual for Pet Health (Home Edition)". A consumer version written in non-technical language, a joint publication between Merck and Merial released as the "Merck/Merial Manual for Pet Health (Home Edition)", was first published in 2007. A consumer-oriented version of the Merck Veterinary Manual is available online as the "Pet Health Edition". Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\text{pediatric dose:} = \\frac{1}{\\left ( \\frac{(age)+12}{(age)}\\right )}" }, { "math_id": 1, "text": "\\text{e.g.:} \\tfrac {3+12}{3}= 5" }, { "math_id": 2, "text": "\\tfrac{1}{5} = 0.2" } ]
https://en.wikipedia.org/wiki?curid=1136647